ORIGINAL ARTICLE
Transparency of Intentions Decreases Privacy Concerns in Ubiquitous Surveillance
Antti Oulasvirta, PhD,1,2,3 Tiia Suomalainen, MSc,1 Juho Hamari, MSc,1,4,5 Airi Lampinen, PhD,1 and Kristiina Karvonen, PhD1

1 Helsinki Institute for Information Technology HIIT, Aalto University, Helsinki, Finland.
2 Max Planck Institute for Informatics, Saarbruecken, Germany.
3 Cluster of Excellence on Multimodal Computing and Interaction, Saarland University, Saarbruecken, Germany.
4 Department of Information and Service Economy, Aalto University School of Business, Helsinki, Finland.
5 Game Research Lab, School of Information Sciences, University of Tampere, Tampere, Finland.

Cyberpsychology, Behavior, and Social Networking, Volume 17, Number 10, 2014. © Mary Ann Liebert, Inc. DOI: 10.1089/cyber.2013.0585
Abstract
An online experiment (n = 1,897) was carried out to understand how data disclosure practices in ubiquitous surveillance affect users' privacy concerns. Information about the identity and intentions of a data collector was manipulated in hypothetical surveillance scenarios. Privacy concerns were found to differ across the scenarios and to be moderated by knowledge about the collector's identity and intentions. Knowledge about intentions exhibited a stronger effect. When no information about intentions was disclosed, the respondents postulated negative intentions. A positive effect was found for disclosing neutral intentions of an organization or unknown data collector, but not for a private data collector. The findings underline the importance of disclosing intentions of data use to users in an easily understandable manner.
Introduction
Ubiquitous surveillance refers to the increasing penetration of computerized data capture in everyday life.[1–4] Credit card companies track consumer behavior, security companies access live video feeds from homes, employers read employees' e-mails, governments tap communications, sports companies store exercise data, media companies store digital TV usage, health records are distributed among healthcare service providers, public spaces are under CCTV surveillance, and social media companies capitalize on data on their customers' social behavior. The recent uproar triggered by the National Security Agency whistleblower Edward Snowden suggests that a key contributor to the negative perception of ubiquitous surveillance is the lack of transparency.

In this paper, "transparency" denotes any piece of information available to the person being surveilled that concerns the identity, purposes, or practices of the involved data collectors. Often such pieces of information, like terms of service (ToS) and control settings, are too complex, and users do not understand or use them.[5–8] These developments call for research on privacy concerns under uncertain and incomplete information.
Positive effects of transparency have been previously reported in many domains of the information society: online marketing,[9,10] job applications,[11] work performance monitoring,[12] monitoring by government,[13] and Internet use and online shopping.[14–17] Disclosing the uses of personal data also reduces concerns about personality tests[18] and employment applications.[19] In online behavior, transparency increases willingness to disclose personal information for marketing as well as trust in companies that gather personal information.[9,17] In contrast, opaqueness decreases customer retention in Web services.[20] Oftentimes, transparency is only partial, and users base their decisions on limited signals of the service provider.[21] However, the questions remain of whether the effects of transparency apply across different domains of ubiquitous surveillance and how different levels of transparency affect privacy concerns.
To understand transparency in ubiquitous surveillance better, the online experiment reported here (n = 1,897) investigates nine ubiquitous surveillance scenarios within a single study. The scenarios cover many recently debated privacy issues, ranging from health records to sports tracking and smartphone logging. We focus on the asymmetric surveillance case where data are not voluntarily disclosed by the person being surveilled but captured by a data collector—here, an organization or an individual. There are forms of surveillance (e.g., Internet services and loyalty cards) where disclosure is a part of using a service.
Our primary goal is to shed light on a pragmatically important question concerning transparency: what should the data collector disclose and bring to the attention of the users? Based on previous work, we hypothesize that any information disclosed to people being surveilled has the potential to indicate a risk of a harmful outcome for the individual.[10,22,23] Therefore, particularly useful cues should be those that decrease the expected level of harm. Such cues are many, but we chose to focus on two universally applicable cues that could also be easily disclosed in marketing, ToS, and user interfaces: the identity and intention of the data collector.
To understand these phenomena, the disclosure of the data collector's identity and intentions is controlled, enabling us to gauge the overall effect, as well as provide a breakdown per cue type. We further add a blind (hidden) condition where such information is not disclosed. This emulates nondisclosure of, or inattention by users to, these pieces of information.
Our study follows a 9 × 3 × 3 (scenario × intention × identity) design. It is an online experiment in which the content of a data collection scenario is manipulated, and respondents rate their concern for privacy. Two variables are manipulated in each scenario: (a) identity, or what is said about the identity of the data-collecting actor in the scenario, and (b) intention, or what is said of the intention that underlies data collecting. The experiment examines transparency by breaking down both variables into three levels. For identity, these are private, organization, and hidden; for intention, these are negative, neutral, and hidden. To study the effect of uncertainty, we "sandwich" the hidden condition between negative and neutral intentions, which allows us to learn whether people by default assume that intentions are closer to neutral or negative. Positive intentions were not included in the study because they are of secondary importance in the study of privacy concerns related to ubiquitous surveillance and because previous work has overwhelmingly shown the benefit of disclosing positive intentions.
This design allows us to address the following research questions related to the effect of transparency:

RQ1: Is there an effect of transparency on privacy concerns, and is it universal across domains of ubiquitous surveillance?

Based on previous work, we expect that transparency should generally support users' ability to draw factually correct inferences about the collection and use of personal data and, by that, help them to regulate their behavior and alleviate their concerns.[22,24]
However, this may not occur universally across the diverse scenarios we include in the study. There are at least three theoretically motivated predictions for a null effect. First, it has been argued that people underestimate the impact of privacy loss because it arises only later, in an abstract future, while the utility of the service might manifest itself instantly.[22,25,26] Second, the cost of reasoning is relatively high in comparison to the gains and losses that are believed to be related to the decision of divulging information—a phenomenon also known as rational ignorance.[27,28] Finally, the gratifications associated with the received benefits from different services may be perceived as greater than any privacy threat.[29] We thus predict that large differences can be expected among scenarios of different kinds.
RQ2: Is the effect of transparency on privacy concerns dependent on the type of information disclosed about the data collector? In particular, do knowledge of identity and knowledge of intentions affect privacy concerns differentially?

Previous findings support the conjecture that both identity and intention should affect privacy concerns. First, employees have been found to be concerned not so much about the nature of the data collected on them, but about the individuals or groups who can access these data (identity).[30] Second, trust toward online vendors is affected by the perception that the vendor has nothing to gain by cheating (intention).[15] However, previous work provides no clues on which of the factors has a larger effect size and whether these effects generalize.
We hypothesize that the data collector's intention is a direct cue suggesting the probability of privacy loss, and identity is an indirect cue suggesting the entity's capacity to realize those intentions. An organization should, other things being equal, be perceived as more capable of realizing potential threats than an individual. The government is a very powerful actor, but its capacity is of little concern unless it harbors negative intentions. This effect could interact with scenario because capacity can be expected to be case specific.
RQ3: Is uncertainty over the data collector's intentions worse for privacy concerns than knowing that the intentions are neutral?

Our two transparency variables—identity and intention—both include a condition where this information is not disclosed. In real-world contexts of use, ambiguity can lead to rather different outcomes. One possibility is that the perceived high risks make people exhibit what is known as risk aversion. Furthermore, not knowing identity and intentions can create a situation where the decision maker is unable to assess accurately the probabilities of different possible outcomes and might thus exhibit what is known as ambiguity aversion.[31–33] Another possibility is that people construct imaginary threat scenarios wherein their details are accessed and used by others, in which case privacy concerns amid uncertainty should be closer to the level of concern arising when data collectors' intentions are known to be negative. Such an effect could be bolstered by the negative reporting of ubiquitous surveillance in the media. Due to case-specific differences, this effect could also differ among scenarios.
Understanding the effect of uncertainty on privacy concerns is also relevant to existing models of privacy concerns. For example, the concern for information privacy (CFIP) model[9,34] predicts privacy concerns as a function of four beliefs: collection of data, errors in the data, secondary use of data, and improper access. A recent review models privacy concerns as the consequence of privacy experiences, privacy awareness, personality differences, demographic differences, and culture.[35] Concerns are also linked to regulation, trust, risks/costs, and behavioral reactions. Trust is modeled as affected by institution-based structural assurances, calculation, institution-based normality, and knowledge-based familiarity.[15] To our knowledge, none of the models specifies uncertainty as a factor.
RQ4: Are there large individual differences?

Given that individuals differ in their attitudes toward and prior knowledge about the different scenarios, we expect to find large individual differences. To learn if they can be attributed to more stable individual factors, we administered the Behavioral Inhibition/Approach Scale (BIS/BAS),[36] hypothesizing that individuals high in BIS will respond more negatively to potential privacy loss than respondents lower in BIS. According to the theory, BAS regulates motives in which the goal is to move toward something desired, such as the benefits of the services in a ubiquitous surveillance scenario. The BIS system, on the other hand, regulates aversive motives in which the goal is to move away from something unpleasant, such as privacy loss.
Method

An online questionnaire with nine scenarios was designed for the study (Table 1).
Participants
In all, 1,911 Finnish-speaking respondents filled in the online questionnaire. The sample was a nonrandom convenience sample. Sampling was carried out by means of advertisements in Finnish student organizations' e-mail mailing lists, online mailing lists, Web sites of professional organizations, and discussion forums of hobbyists targeting different age groups. The respondents were predominantly female (67.8%). The mean age was 36.35 years (SD = 13.96 years; range 14–80 years). Eighteen percent had a university-level education. Compared against the demographics of the Finnish population (Statistics Finland 2014), our sample was somewhat younger (population average 42 years), better educated (population average 9% university education), and it included more females (population average 51%).

Five respondents were randomly chosen and awarded a small prize (two movie tickets worth €20 in total), but no other participation fees were paid.

Fourteen blank answer sheets caused by a technical error in the service were excluded from the final sample.
Design
The experimental design was a 9 × 3 × 3 design (scenario × identity: private individual vs. organization vs. hidden × intention: negative vs. neutral vs. hidden). Intention and identity refer to what information was disclosed about the entity accessing or using the data collected, with "hidden" denoting that information was not provided.

With nine scenarios and nine experimental conditions, 81 sets were created. The order of scenarios was counterbalanced by rotation. Each respondent was randomly assigned to a set by a server-side script. Every participant completed nine scenarios.
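To make the set construction concrete, here is a hedged sketch in Python. It assumes, beyond what the text states, that each set pairs the rotated scenario order with a rotated ordering of the nine identity × intention conditions; the names and the assignment function are ours, not the authors' actual server-side script.

```python
import itertools
import random

SCENARIOS = list(range(1, 10))  # the nine data collection scenarios of Table 1

# The nine identity x intention conditions.
CONDITIONS = list(itertools.product(
    ["private", "organization", "hidden"],   # identity
    ["negative", "neutral", "hidden"],       # intention
))

def make_set(rotation, condition_offset):
    """Build one of the 81 questionnaire sets: scenario order is
    counterbalanced by rotation, and each scenario in the set is
    paired with one of the nine conditions (a rotated crossing)."""
    order = SCENARIOS[rotation:] + SCENARIOS[:rotation]
    conds = CONDITIONS[condition_offset:] + CONDITIONS[:condition_offset]
    return list(zip(order, conds))

ALL_SETS = [make_set(r, c) for r in range(9) for c in range(9)]
assert len(ALL_SETS) == 81

def assign_respondent():
    """Emulate the server-side script's random assignment to a set."""
    return random.choice(ALL_SETS)
```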
Table 1. Summaries of the Data Collection Scenarios with Identities and Intentions

1. Your only computer has a device that logs everything typed on the computer's keyboard
   Organization: employer. Neutral: logging only, not examining. Negative: investigate if guilty of espionage.
   Private: roommate. Neutral: logging only, not examining. Negative: seeing if you are hiding something.
2. You wear a body sensor measuring pulse, body temperature, EKG trace, and respiration
   Organization: employer. Neutral: checking your medical health. Negative: sending you home if sick.
   Private: a relative. Neutral: reacting quickly in case of emergency. Negative: checking if you are sick.
3. You have a Facebook account
   Organization: insurance company. Neutral: collects data but does not use it. Negative: inspecting insurance fraud.
   Private: acquaintance. Neutral: watching your profile. Negative: identity fraud.
4. Every room in your house is subject to recording by video cameras around the clock
   Organization: security company. Neutral: security monitoring. Negative: watching clips on coffee breaks.
   Private: neighbor. Neutral: no watching, just recording. Negative: blackmail with video clips.
5. All of your health records are in one central register
   Organization: hospital. Neutral: diagnosis and treatment. Negative: check if you have been lying.
   Private: spouse. Neutral: use in case of accident. Negative: check if you have been lying.
6. Your phone records all communication (calls and text messaging)
   Organization: telecom company. Neutral: stored; used only in emergency. Negative: finding interesting materials.
   Private: unknown individual. Neutral: not interested in you. Negative: finding interesting materials.
7. Your credit card transactions are recorded in a database
   Organization: bank. Neutral: survey of card use in your district. Negative: following your money use.
   Private: acquaintance. Neutral: report if credit card data stolen. Negative: following your money use.
8. Your phone determines your location to an accuracy of 50 meters
   Organization: state. Neutral: warning in case of natural disaster. Negative: check if your self-report holds.
   Private: good friend. Neutral: call you up if close by. Negative: check if you are where you said you would be.
9. Cameras record people in public places
   Organization: researchers at a university. Neutral: study crowd behavior. Negative: report if you have secret goings-on.
   Private: work colleague. Neutral: follow interesting events in the city. Negative: see how you behave outside work.
Materials

Table 1 presents the nine data collection scenarios. Each scenario was a textual description of a sensor that collects data, the identity of the data collector with access to the data, and the believed collector's intention. To make the scenarios realistic, we perused newspaper articles dealing with surveillance and also interviewed our colleagues informally. The following scenario for a PC describes a public institution with a negative intention:

Your only computer has a device that logs everything typed on the computer's keyboard to a file. You know it is possible for your employer [IDENTITY] to view the file. You find it likely that the employer wants to monitor whether you are sharing confidential information regarding your job with non-employees [INTENTION].

In the case where identity and/or intention were hidden, the corresponding sentences marked in brackets above were not displayed.
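As a minimal sketch of this assembly logic, assuming the bracketed sentences are simply omitted in the hidden conditions (the variable and function names below are ours, not from the study materials):

```python
# The paper's PC scenario, split at the bracketed [IDENTITY] and
# [INTENTION] sentences.
BASE = ("Your only computer has a device that logs everything typed "
        "on the computer's keyboard to a file.")
IDENTITY = "You know it is possible for your employer to view the file."
INTENTION = ("You find it likely that the employer wants to monitor "
             "whether you are sharing confidential information "
             "regarding your job with non-employees.")

def render(identity_shown: bool, intention_shown: bool) -> str:
    """Assemble the scenario text; hidden conditions drop a sentence."""
    parts = [BASE]
    if identity_shown:
        parts.append(IDENTITY)
    if intention_shown:
        parts.append(INTENTION)
    return " ".join(parts)

# Example: identity disclosed, intention hidden.
print(render(identity_shown=True, intention_shown=False))
```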
The questionnaire was implemented via e-Lomake, a service used for creating, launching, and administering Web-format questionnaire-based studies. Radio buttons (HTML) were provided as a means for entering ratings. All content was in Finnish.
Measurements

We constructed a questionnaire with 12 Likert-scale items measuring privacy concerns. It was administered after every scenario. The 12 items center on privacy concerns, and they have some overlap with items in CFIP.[9,34] In particular, the items cover (a) a general concern for privacy, (b) concern for the particular data exposed, (c) experiential states (frustration, anxiety), and (d) behavioral responses (inhibition of regular behavior). Two negatively worded items were included. The aggregate score was used for analysis.

A post-treatment check of the negatively worded items exposed no significant response bias. A two-tailed one-sample t test was not significant, t(1886) = 1.127, p = 0.26.
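A hedged sketch of the scoring follows, assuming 0–4 coding per item (so 12 items yield the 0–48 range seen in Table 3), reverse-coding of the two negatively worded items at hypothetical indices, and one plausible form for the response-bias check; the paper does not report these implementation details.

```python
import numpy as np
from scipy import stats

N_ITEMS, SCALE_MAX = 12, 4      # 12 items; aggregate ranges 0..48
NEGATIVE_ITEMS = [10, 11]       # hypothetical indices of the two reversed items

def aggregate_score(responses):
    """responses: array of shape (n_respondents, 12), items coded 0..4.
    Reverse-codes the negatively worded items, then sums."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, NEGATIVE_ITEMS] = SCALE_MAX - r[:, NEGATIVE_ITEMS]
    return r.sum(axis=1)        # 0..48, comparable to Table 3

def bias_check(responses):
    """One plausible response-bias check: a two-tailed one-sample t test
    of the per-respondent gap between the (reversed) negative items and
    the mean of the positive items against zero."""
    r = np.asarray(responses, dtype=float)
    neg = SCALE_MAX - r[:, NEGATIVE_ITEMS].mean(axis=1)
    pos = np.delete(r, NEGATIVE_ITEMS, axis=1).mean(axis=1)
    return stats.ttest_1samp(neg - pos, 0.0)   # two-tailed by default
```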
The BIS/BAS scale with five items was administered at the beginning of each questionnaire.
Procedure

After supplying basic background information (gender, age, etc.), the respondent was assigned to a condition within the set of 81 questionnaires. One scenario with the questionnaire was presented at a time. After pressing the "Continue" button, the next scenario was loaded. The questionnaire took about 15 minutes to complete.
Results

Because the appropriateness of responses could only be checked after the study and not during the random assignment phase, cell sizes were not equal across the nine sets created to counterbalance for the order of scenarios. However, thanks to the sample size, this inequality had no biasing effect. The smallest of the counterbalancing sets had n = 187 and the largest n = 236. There was no significant effect of the set on the dependent variable, F(8) = 0.66, p = 0.73. For statistical testing, we use an alpha value of 0.05 as a threshold.
Main factors and interactions
Table 2 presents results from statistical testing with a mixed-model analysis of variance. Table 2 shows that the three main factors—scenario, identity, and intention—and their interaction effects were statistically significant.
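For concreteness, a minimal sketch of such an analysis in Python with statsmodels follows. It assumes long-format data (one row per respondent per scenario) and approximates the repeated-measures structure with a per-respondent random intercept; the column names and the input file are hypothetical, as the paper does not report its analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent x scenario,
# with columns concern, intention, identity, scenario, respondent_id.
df = pd.read_csv("responses_long.csv")

model = smf.mixedlm(
    "concern ~ C(intention) * C(identity) * C(scenario)",
    data=df,
    groups=df["respondent_id"],   # random intercept per respondent
)
result = model.fit()
print(result.summary())
```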
The main effect of scenario was significant, as expected. Table 3 reports privacy concerns measured in each of the nine scenarios, ranked per severity of concern. Table 3 shows that privacy concerns were relatively high on average, with four scenarios scoring an average concern greater than 30 (out of 48) and all except one scoring more than 25. Domestic video surveillance was closest to the ceiling (M = 41.15) and centralized health records lowest. As Table 2 shows, scenario also had a significant interaction effect with both identity and intention.
Identity had a significant main effect. A post hoc test (Bonferroni) exposed that the mean privacy concern in scenarios involving a private data collector was higher than in those involving a public organization (p = 0.009). When information about identity was not provided (i.e., was hidden), privacy concern was lowest: the difference from the organization condition was not significant, but the difference from the private condition was (p < 0.001).

Intention also had a significant main effect. Post hoc tests (Bonferroni) showed that all differences among the levels were significant (p < 0.001).
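As an illustration of the post hoc procedure, the sketch below runs Bonferroni-corrected pairwise comparisons on simulated group scores. It is a simplified stand-in under stated assumptions: the paper's comparisons were made within the mixed-model analysis, not as independent-sample t tests, and the group means here are invented for the demo.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_pairwise(groups):
    """Bonferroni-corrected pairwise t tests; `groups` maps a condition
    label to an array of aggregate concern scores."""
    pairs = list(combinations(groups, 2))
    m = len(pairs)                           # number of comparisons
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        results[(a, b)] = min(p * m, 1.0)    # Bonferroni-adjusted p
    return results

# Example with simulated data for the three intention levels
# (means are hypothetical, for illustration only):
rng = np.random.default_rng(0)
demo = {lvl: rng.normal(mu, 10, 200)
        for lvl, mu in [("negative", 34), ("hidden", 31), ("neutral", 28)]}
print(bonferroni_pairwise(demo))
```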
Finally, the interaction effect of identity × intention was significant. Figure 1 presents an overview with the means from all 3 × 3 conditions, pointing out that intention had a greater and less variable effect than identity. A positive effect of transparency was found for disclosing neutral intentions of an organization or an unknown data collector.
Table 2. Results of Statistical Testing: Effects of Experimental Factors on Privacy Concerns

Factor                            F      df   MS        p
Intention                         72.43  2    33,376.0  <0.001*
Identity                          7.81   2    3,644.2   <0.001*
Scenario                          1.56   71   1,224.9   0.002*
Identity × intention              3.43   4    1,565.2   0.008*
Intention × scenario              3.096  142  1,426.7   <0.001*
Identity × scenario               3.263  142  1,522.4   <0.001*
Identity × intention × scenario   3.302  284  1,509.3   <0.001*

*Statistically significant at p < 0.05.
Table 3. The Nine Scenarios Ranked per Mean Privacy Concern (Maximum Score 48)

Data collection scenario            M      SD
Domestic video surveillance         41.15  6.24
Communications recording            36.41  9.49
Keylogging on a personal computer   34.07  10.64
Facebook monitoring                 30.83  11.13
Exercise data monitoring            30.41  11.58
Credit card logging                 29.19  11.27
Smartphone location tracking        28.97  11.39
Public place video surveillance     27.46  11.73
Centralized health records          22.73  11.93
Post hoc tests (Bonferroni) expose three patterns. First, there were no significant differences among the identity conditions within a given intention condition, except for the condition intention = neutral, where the sub-conditions identity = private and identity = hidden differed from each other (p = 0.03). Second, privacy concern in the condition intention = hidden and identity = private was not statistically different from the conditions where intention was negative (p > 0.99), and it was higher than in the conditions where intention = neutral and identity = organization/hidden. Third, the intention = negative condition was different from the other conditions (p < 0.004), except the condition where intention was hidden and identity private.
Individual differences

To understand the effect of BIS/BAS traits, statistical testing was carried out using a general linear model (GLM) with intention and identity as within-subjects factors and the BIS/BAS scores as continuous predictors (BAS Drive, BAS Fun Seeking, BAS Reward Responsiveness, and BIS).
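A sketch of this analysis under stated assumptions: the same hypothetical long-format data as above, a per-respondent random intercept standing in for the within-subjects structure, and illustrative column names (bas_drive, bas_reward, bas_fun, bis) for the subscale scores. This approximates, rather than reproduces, the reported GLM.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses_long.csv")   # hypothetical file, as above

# Within-subjects factors plus the four BIS/BAS subscale scores
# entered as continuous between-subjects predictors.
model = smf.mixedlm(
    "concern ~ C(intention) * C(identity)"
    " + bas_drive + bas_reward + bas_fun + bis",
    data=df,
    groups=df["respondent_id"],
)
print(model.fit().summary())
```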
Table 4 reports the effects of the BIS/BAS variables. BIS had a significant main effect on privacy concern. Not surprisingly, respondents scoring high in BIS had higher privacy concerns overall. Only one of the BAS variables (BAS-Fun) had a main effect.
Discussion
The high level of concern measured over the nine scenarios confirms that people perceive ubiquitous surveillance negatively, and this motivates the need for studying privacy concerns. To our knowledge, this is the first time the effect of transparency has been studied across a large set of surveillance scenarios.

Our findings indicate that transparency decreases the level of concern.[1,9,18,20,22] We found that the effect is moderated in an expectable way by the other variables manipulated in the experiment. First, we found that the nine scenarios differ in terms of the effect of transparency (RQ1). We found a significant main effect of scenario, as well as significant interactions with the transparency-related factors identity and intention.
Two other findings allow further insight into this effect. First, although knowledge about the data collector's identity moderates privacy concerns, knowledge of intentions has a far stronger effect on privacy concerns (RQ2). Not surprisingly, we found that neutral intentions of the data collector decrease the level of concern, and negative ones increase it. The highest level of concern is associated with scenarios with negative intentions, and knowledge about who is accessing data lowers concerns if the intentions are known not to be antagonistic.

Second, when the data collector's intention is unknown, privacy concerns are elevated (RQ3). People do not assume a priori that data collectors have neutral intentions; rather, they postulate negative intentions, although not as negative as the ones used in our scenarios.

Moreover, we found that respondents scoring higher in BIS are relatively more alarmed when they know that the data collector's intentions are against them (RQ4).
These findings call for more research to provide exact models that link disclosed information with privacy concerns. With regard to previous models that consider risks/gains in threat scenarios,[35] the believed intentions of the data collector seem to be critical because they indicate the probability of harm and thereby raise concerns. On the other hand, believed identity has a secondary effect as an indicator of the capacity to inflict actual harm. Both might be modeled in terms of expected harm. Moreover, we found evidence of pessimism: when it comes to surveillance, by default people tend to expect high harm in the face of uncertainty rather than ignore possible threats.
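One way to make the expected-harm reading concrete is the following illustrative decomposition; it is our hedged formalization, not a model proposed in the paper, with P denoting the probability of misuse signaled by the disclosed intention and C the capacity to inflict harm implied by the identity:

```latex
% Expected harm as the product of a direct cue (intention) and an
% indirect cue (identity); under hidden intentions, respondents appear
% to substitute a pessimistic prior for P.
\[
  \mathbb{E}[\mathrm{harm}]
    = P(\mathrm{misuse} \mid \mathrm{intention}) \times C(\mathrm{identity})
\]
```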
These findings also point toward the question of how to implement usable transparency. Recent work in human–computer interaction has started to examine usable "privacy nutrition labels," descriptions that employ a standard set of categories and symbols to communicate privacy practices and are used for many types of services.[8,17,37] Widespread adoption of such standardization could revolutionize the field similarly to how Creative Commons licensing has revolutionized copyright management on the Internet. The present findings underline that both the data collector's identity and intention should be disclosed in such privacy nutrition labels. Furthermore, while exposing the two factors (identity and intention) will be beneficial, directing the user's attention to the data collector's intention will have a stronger effect than would drawing attention to identity alone. We hope that the benefit conveyed by the present data will fuel serious efforts in this direction.
FIG. 1. Privacy concerns with a breakdown by the type of information provided about the data collector.
Table 4. Individual Differences: The Effects of BIS/BAS Components on Concern for Privacy

Factor       df   MS        F       p
BAS-Drive    1    1,193.7   1.503   0.220
BAS-Reward   1    361.1     0.455   0.500
BAS-Fun      1    1,354.7   1.706   0.001*
BIS          1    15,071.9  18.978  <0.001*

*Statistically significant at p < 0.05.
Author Disclosure Statement
No competing financial interests exist.
References
1. Agre PE. Surveillance and capture: two models of privacy. The Information Society 1994; 10:101–127.
2. Barnes SB. A privacy paradox: social networking in the United States. First Monday 2006; 11:11–15.
3. Brin D. (1998) The transparent society. Reading, MA: Addison-Wesley Longman.
4. Lyon D. Facing the future: seeking ethics for everyday surveillance. Ethics & Information Technology 2001; 3:171–181.
5. Christofides E, Muise A, Desmarais S. Information disclosure and control on Facebook: are they two sides of the same coin or two different processes? CyberPsychology & Behavior 2009; 12:341–345.
6. Jensen C, Potts C, Jensen C. Privacy practices of Internet users: self-reports versus observed behavior. International Journal of Human–Computer Studies 2004; 63:203–227.
7. Leon PG, Ur B, Balebako R, et al. (2011) Why Johnny can't opt out: a usability evaluation of tools to limit online behavioral advertising. Carnegie Mellon University CyLab Technical Report 11-017. www.cylab.cmu.edu/research/techreports/2011/tr_cylab11017.html (accessed April 4, 2014).
8. Kelley PG, Bresee J, Reeder RW, et al. (2009) Design of a privacy label. Symposium on Usable Privacy and Security (SOUPS), Mountain View, CA.
9. Malhotra NK, Kim SS, Agarwal J. Internet users' information privacy concerns (IUIPC): the construct, the scale, and a causal model. Information Systems Research 2004; 15:336–355.
10. Culnan M. "How did they get my name?": an exploratory investigation of consumer attitudes toward secondary information use. MIS Quarterly 1993; 17:341–363.
11. Bauer TN, Truxillo DM, Tucker JS, et al. Selection in the information age: the impact of privacy concerns and computer experience on applicant reactions. Journal of Management 2006; 32:601–621.
12. Chen J, Ross W. Individual differences and electronic monitoring at work. Information, Communication & Society 2007; 10:488–505.
13. Dinev T, Hart P, Mullen MR. Internet privacy concerns and beliefs about government surveillance—an empirical investigation. The Journal of Strategic Information Systems 2008; 17:214–233.
14. Dinev T, Hart P. Internet privacy concerns and their antecedents—measurement validity and a regression model. Behaviour & Information Technology 2004; 23:413–422.
15. Gefen D, Karahanna E, Straub D. Trust and TAM in online shopping: an integrated model. MIS Quarterly 2003; 27:51–90.
16. Xu H, Dinev T, Smith HJ, et al. (2008) Examining the formation of individual's privacy concerns: toward an integrative view. Proceedings of the 29th Annual International Conference on Information Systems (ICIS), Paris, France.
17. Hui K-L, Teo H, Lee S-Y. The value of privacy assurance: an exploratory field experiment. MIS Quarterly 2007; 31:19–33.
18. Fink AM, Butcher JN. Reducing objections to personality inventories with special instructions. Educational & Psychological Measurement 1972; 32:631–639.
19. Fusilier MR, Hoyer WD. Variables affecting perceptions of invasion of privacy in a personnel selection situation. Journal of Applied Psychology 1980; 65:623–626.
20. Riegelsberger J, Sasse MA, McCarthy J. The researcher's dilemma: evaluating trust in computer-mediated communication. International Journal of Human–Computer Studies 2003; 58:759–781.
21. Chen YH, Chien SH, Wu JJ, et al. Impact of signals and experience on trust and trusting behavior. Cyberpsychology, Behavior, & Social Networking 2010; 13:539–546.
22. Acquisti A, Grossklags J. (2005) Uncertainty, ambiguity and privacy. 4th Workshop on the Economics of Information Security (WEIS), Cambridge, MA.
23. Paradise A, Sullivan M. (In)visible threats? The third-person effect in perceptions of the influence of Facebook. Cyberpsychology, Behavior, & Social Networking 2012; 15:55–60.
24. Acquisti A, Gross R. Imagined communities: awareness, information sharing, and privacy on the Facebook. Privacy Enhancing Technologies 2006; 4258:36–58.
25. Ainslie G. Specious reward: a behavioral theory of impulsiveness and impulse control. Psychological Bulletin 1975; 82:463–496.
26. Thaler R. Some empirical evidence on dynamic inconsistency. Economics Letters 1981; 8:201–207.
27. Caplan B. Rational ignorance versus rational irrationality. Kyklos 2001; 54:3–26.
28. Downs A. (1957) An economic theory of democracy. New York: Harper.
29. Chen HT, Kim Y. Problematic use of social network sites: the interactive relationship between gratifications sought and privacy concerns. Cyberpsychology, Behavior, & Social Networking 2013; 16:806–812.
30. Taylor GS, Davis JS. Individual privacy and computer-based human resources systems. Journal of Business Ethics 1989; 8:569–576.
31. Camerer CF, Weber M. Recent developments in modelling preferences: uncertainty and ambiguity. Journal of Risk & Uncertainty 1992; 5:325–370.
32. Ellsberg D. Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics 1961; 75:643–699.
33. Fox CR, Tversky A. Ambiguity aversion and comparative ignorance. Quarterly Journal of Economics 1995; 110:585–603.
34. Smith HJ, Milberg SJ, Burke SJ. Information privacy: measuring individuals' concerns about organizational practices. MIS Quarterly 1996; 20:167–196.
35. Smith HJ, Dinev T, Xu H. Information privacy research: an interdisciplinary review. MIS Quarterly 2011; 35:989–1015.
36. Carver CS, White TL. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: the BIS/BAS scales. Journal of Personality & Social Psychology 1994; 67:319–333.
37. Tsai J, Egelman S, Cranor L, et al. (2007) The effect of online privacy information on purchasing behavior: an experimental study. 6th Workshop on the Economics of Information Security (WEIS), Pittsburgh, PA.
Address correspondence to:
Prof. Antti Oulasvirta
Department of Communications and Networking
School of Electrical Engineering
Aalto University
Otakaari 5
13000 Aalto University
Helsinki, Finland
E-mail: antti.oulasvirta@gmail.com