Social Networks 48 (2017) 36–45
Contents lists available at ScienceDirect: Social Networks
journal homepage: www.elsevier.com/locate/socnet
GENSI: A new graphical tool to collect ego-centered network data

Tobias H. Stark a,*, Jon A. Krosnick b

a Utrecht University/ICS, Padualaan 14, 3584 CH Utrecht, The Netherlands
b Stanford University, 450 Serra Mall, Stanford, CA 94305, United States
Article info
Keywords: Ego-centered networks; Software; Data collection; Graphical interface; Online surveys

Abstract
This study (1) tested the effectiveness of a new survey tool to collect ego-centered network data and (2) assessed the impact of giving people feedback about their network on subsequent responses. The new tool, GENSI (Graphical Ego-centered Network Survey Interface), allows respondents to describe all network contacts at once via a graphical representation of their networks. In an online experiment, 434 American adults were randomly assigned to answer traditional network questions or GENSI and were randomly assigned to receive feedback about their network or not. The traditional questionnaire and GENSI took the same amount of time to complete, and measurements of the racial composition of the network showed equivalent convergent validity in both survey tools. However, the new tool appears to solve what past researchers have considered to be a problem with online administration: exaggerated numbers of network connections. Moreover, respondents reported enjoying GENSI more than the traditional tool. Thus, using a graphical interface to collect ego-centered network data seems to be promising. However, telling respondents how their network compared to that of the average American reduced the convergent validity of measures administered after the feedback was provided, suggesting that such feedback should be avoided.

© 2016 Elsevier B.V. All rights reserved.
1. Introduction

Questions about people's social contacts have become increasingly popular in surveys, because hundreds of studies in many scientific fields have shown that social contacts influence people's behavior and attitudes (e.g., Berg, 2009; Borgatti and Foster, 2003; Burt et al., 2013). In typical ego-centered network surveys, respondents (egos) are first asked to list their social contacts (alters) in name generator questions (Hsieh, 2015). Subsequently, egos are asked to report attributes of their alters in name interpreter questions (Marsden, 2011). To determine the structure of the network (Burt, 1984), surveys typically continue by asking respondents to indicate, for every pair of alters, whether or how well the two people know each other (e.g., "Does Joe know Mary?").

Some researchers have raised questions about the quality of answers given in ego-centered network surveys because repeatedly answering the same questions for each alter or pair of alters may impose a cognitive burden on respondents (Hsieh, 2015; Tubaro et al., 2014). For each of a respondent's contacts, he or she must report that person's educational attainment, religious preference, number of children, and many more variables in typical studies. Moreover, the number of pairs of alters increases quadratically with the size of the network, so the number of questions about the network structure may be substantial (McCarty et al., 2007). This may reduce the quality of answers given, particularly in online surveys where no interviewer is present who can perhaps motivate respondents to answer such repetitive questions effortfully (Matzat and Snijders, 2010; Vehovar et al., 2008).

* Corresponding author. E-mail addresses: t.h.stark@uu.nl (T.H. Stark), krosnick@stanford.edu (J.A. Krosnick).
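To make the growth of the structure questions concrete (the arithmetic below is our own illustration, not a figure reported in the study): with k alters, a traditional questionnaire must ask about every pair of alters, that is,

\[
\binom{k}{2} = \frac{k(k-1)}{2},
\]

so 5 alters already imply 10 pair-wise questions, 10 alters imply 45, and 20 alters imply 190, in addition to the name interpreter questions asked separately for each alter.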
With the increasing use of computers in survey data collection (e.g., computer-assisted self-interviewing (CASI), web surveys), the availability of graphical interfaces allows changing the way questions in ego-centered network surveys are asked (Coromina et al., 2014). Some observers have speculated that the use of graphic displays may enhance respondents' enjoyment of the reporting process (Hogan et al., 2007), reduce cognitive burden (Matzat and Snijders, 2010; Tubaro et al., 2014; Vehovar et al., 2008), and increase data quality (Coromina and Coenders, 2006). A number of researchers have developed graphical interfaces to collect network information (e.g., McCarty and Govindaramanujam, 2005; Tubaro et al., 2014), but none of these tools has been tested against a traditional ego-centered network survey.
In this study, we explore whether a richly graphical presentation via a computer can be advantageous in the measurement of social network features. Building on insights from earlier research, we have developed a new, freely available software tool for ego-centered network surveys. We test whether the graphical data collection tool increases respondents' enjoyment of the survey, reduces administration time, reduces problems with affirmative answering, and increases the validity of indicators. Also, building on the new trend of "personal informatics" in human–computer interaction research and in medical research, we explored the impact of a promise to respondents that they will learn new information about themselves by participating in the study.
1.1. Existing software

Some existing software programs make use of graphic interactive features to collect ego-centered network data in online surveys. The programs EgoNet, EgoWeb 2.0, and C-IKNOW first ask questions about alters and their relationships with each other and then present a picture of the network (McCarty and Govindaramanujam, 2005). In contrast, the program EgoWeb displays and updates a network picture in real time as a respondent inputs information about the network (McCarty and Govindaramanujam, 2005). Although these programs make use of graphical features for the network generation process, they still follow the traditional approach of asking each name interpreter question separately for each alter or pair of alters.

More recent programs make use of Web 2.0 graphical features to allow answering name interpreter questions for all alters at once. Lackaff's (2012) survey tool PASN first mines a respondent's Facebook profile and then uses Facebook profile pictures of the respondent's friends to represent network members. Respondents can then drag and drop those pictures into answer categories to answer questions. To indicate relationships between alters, the software requires respondents to answer for each alter separately, "Which of these people does [alter] know?", by dragging and dropping the names of the contacts into answer areas. The software TellUsWho (Ricken et al., 2010) first mines a respondent's email account for names of potential contacts. These names are displayed on the computer screen, and respondents drag and drop the names to answer name generator and name interpreter questions. This program does not offer the possibility to ask for relationships between network contacts to assess the network structure; it only allows generating groups of alters.

In the program ANAMIA EGOCENTER, respondents draw a sociogram of their networks, reporting connections between individuals visually by clicking on alters that have a relationship with each other (Tubaro et al., 2014). This software thus makes use of graphical features to generate the network structure but follows the traditional approach of asking each name interpreter question separately for each alter. The software OpenEddi follows a similar approach by allowing respondents to indicate relationships by drawing lines between alters (Fagan and Eddens, 2015). Additionally, relationships can be indicated by sorting alters into piles or by the traditional approach of asking separately for each pair of alters whether a relationship exists. Name interpreter questions can be asked separately for each alter or by dragging and dropping names of alters into answer categories.

An innovative tool that combines the advantages of the existing software has recently been presented. netCanvas has been designed to handle large and complex networks by allowing respondents to interact with a visual representation of their network on a touch-screen device, and it allows answering name interpreter questions at once for all alters by dragging and dropping names into answer categories (Hogan et al., 2016).

Fig. 1. Graphical representation of an ego-centered network with five alters.
1.2. The new graphical tool

GENSI (Graphical Ego-centered Network Survey Interface), the new tool evaluated in this paper, combines the ease of interacting with a graphical representation of a network when reporting on the network structure with the possibility to ask name interpreter questions about all alters at once. The approach is similar to netCanvas and OpenEddi but aims at survey researchers who want to implement a short ego-centered network module in a larger questionnaire.

In GENSI, respondents type the names of their alters into a single box, one after the other (Vehovar et al., 2008).[1] After a name is typed, a little circle flies onto the screen displaying the name and is linked by a line to a circle with the label "You". Thus, a visual representation of a person's network is generated in real time (Fig. 1). Respondents answer subsequent questions about their network by interacting with this visual representation.

GENSI asks a single name interpreter question requesting an attribute of all network members (e.g., "Which of these people are women?"), rather than asking separate questions about individual network members, as is done in traditional ego-centered surveys (e.g., "Is Robert a man or a woman?"). Such name interpreter questions can be answered in either of two ways in GENSI. First, a dichotomous question about each network contact can be answered by clicking on the names of alters. This changes the color of the circle around the name (Fig. 2a). Respondents can inspect and correct their answers by clicking (again) on the circles (Fig. 2b). Second, to report categorical attributes of the alters, respondents can drag and drop the circles containing the names of each alter into a response box for each answer option (Fig. 3).[2] The approach implemented in GENSI, asking about one attribute of all alters before asking about another attribute of all alters, has been shown to reduce item nonresponse and break-offs (Vehovar et al., 2008) and to produce more reliable data (Coromina and Coenders, 2006) than does asking all questions about one alter before asking all questions about the next alter.

Relationships between the alters in a person's social network (the network structure) can be indicated by drawing lines between the names of two or more alters. Thus, instead of evaluating every pair of alters separately, as is done in traditional surveys (e.g., "Does Joe know Mary?"), respondents can click on the name of one person and then on the name of another person to create a line between the two circles (Fig. 4). Right-clicking on a line removes incorrectly positioned lines. This drawing of lines may reduce the burden for respondents, as only existing relationships have to be indicated, instead of having to explicitly report the presence or absence of a relation between every possible pair of alters (McCarty and Govindaramanujam, 2005). A video clip of GENSI is available in the online supplementary material.

[1] GENSI is designed to improve answers given to name interpreter questions in digital surveys. How to address the potential problem of underreporting of the network size in name generator questions in online surveys (Matzat and Snijders, 2010) has been discussed by Hsieh (2015).
[2] Categorical attributes also include Likert-type scales with separate response options.

Fig. 2. Screenshots of a dichotomous name interpreter question. The questions read, (a) "Which of these people are women? Please click on the names of the women in your network. Click 'Next' when you are done or if there are no women among your friends" and (b) "We added a color for the men in your network. Please click 'Next' when everything is correct. If you want to change the sex of a person, simply click on the name."

Fig. 4. Screenshot of a question asking for the network structure.
1.3. Hypotheses

Some researchers have argued that respondent motivation drops quickly while people answer similar questions about each alter in an ego-centered network study (Hogan et al., 2007; Matzat and Snijders, 2010). If this is true, this drop in motivation may compromise measurement accuracy, because less motivated people are generally less productive and effective (Csikszentmihalyi, 1990) and are less willing to invest mental effort when executing tasks (Capa et al., 2008). Furthermore, people who enjoy a task more make fewer mistakes when executing it (Puca and Schmalt, 1999). Because past research has found that interactive elements can increase respondents' task enjoyment (Venkatesh, 1999) and might thereby increase motivation, the graphical and interactive features of GENSI may create a more enjoyable survey experience that keeps respondents motivated. Moreover, motivation may not decline as quickly, because answering a single question about all alters at once may speed up the answering process.

H1. Respondents may enjoy answering GENSI more than answering a traditional ego-centered network questionnaire.

H2. GENSI may be completed more quickly than a traditional ego-centered network questionnaire.

The theory of survey satisficing (Krosnick, 1991, 1999) states that some respondents may not think deeply about their answers when they face a difficult reporting task. Instead, some respondents may satisfice and use a strategy to expedite the interview. They may say "don't know" if such an answer option is explicitly offered or use an answer strategy that makes it look as if a valid answer was chosen when, in fact, an answer was provided thoughtlessly (Krosnick et al., 2002). Such answer strategies increase measurement error, which in turn reduces convergent validity (i.e., the relations between correlated variables; e.g., Chang and Krosnick, 2009). Graphical elements in online surveys appear to reduce satisficing, as indicated by fewer "don't know" answers provided (Deutskens et al., 2004). If the graphical features of GENSI increase respondent motivation, respondents may also satisfice less, reducing measurement error and increasing convergent validity.

The visual aids given in GENSI may reduce measurement error for another reason as well. Specifically, changing colors of the display and seeing name circles placed next to each other in answer boxes may make it easier for respondents to notice and correct mistakes. This, again, may increase the convergent validity of answers given with GENSI.[3]

H3. The convergent validity of survey questions may be higher with GENSI than with a traditional ego-centered network questionnaire.

Fig. 3. Screenshot of a drag-and-drop question for a categorical name interpreter question. The question read, "How often do you talk to each person? Drag the circles with the names of each person into the box below that indicates how often you talk to each other."
Potentially especially problematic with regard to survey satisficing are questions about the structure of a person's social network, for which every pair of alters has to be evaluated. Matzat and Snijders (2010) found that online data collection yielded higher network density (meaning more network contacts know each other) than did face-to-face data collection. This was due to significantly more people saying that everybody in their network knew everyone else in the online survey (density = 1) than in the face-to-face survey. These investigators attributed that difference to more non-differentiation in online answers in the form of repeated affirmative answering (what the investigators called "mechanical clicking") in order to end the questionnaire more quickly. The ability to indicate relationships between alters in GENSI by drawing lines may reduce cognitive burden, as not every possible pair of alters must be explicitly described. As a consequence, there may be less affirmative answering and thus less exaggerated numbers of network connections.

H4. The density of ego-networks may be lower with GENSI than with a traditional ego-centered network questionnaire.

H5. Respondents may be less likely to indicate relationships between all network contacts with GENSI than with a traditional ego-centered network questionnaire.
1.4. Personal informatics

The study described in this paper also tested the impact of another manipulation: the provision of information about a respondent's social network. This idea builds on the notion of personal informatics, also called quantified self or personal analytics, which refers to giving people information about themselves (Choe et al., 2014). This approach has been used to motivate people to participate in burdensome research projects that ask them to track their amount of walking (Zulman et al., 2013) or take minute-by-minute photographs (for an overview, see Li et al., 2010). The idea behind personal informatics is similar to the very old approach of promising to give respondents results of a survey after they participate in it, as a way to increase willingness to participate (Levine and Gordon, 1958). Whereas the promise of survey results has not proven to increase response rates (Yu and Cooper, 1983), personal informatics may be more successful, because it offers immediate feedback that is tailored to the individual participant. In the old approach, respondents would receive a report of the results weeks or months after the survey took place, and such a report would contain only aggregate statistics. The approach we tested instead gives respondents feedback immediately after they answer questions and facilitates comparison between each respondent's answers and aggregate results. Accordingly, respondents may report more enjoyment after answering the questions and receiving feedback than after answering the questions alone.

H6. Respondents who anticipate receiving feedback about their network may enjoy participation in the survey more than those who do not anticipate receiving such feedback.

Moreover, personal informatics may increase respondent motivation to provide accurate answers, because respondents will learn about their similarity to others accurately only if they give correct answers about themselves and their networks. As a consequence, respondents who are promised feedback may be less likely to satisfice. With less satisficing, we expect to see higher convergent validity and less dense networks, because respondents are less likely to exaggerate the number of relationships through affirmative answering.

H7. The convergent validity of questions may be higher among respondents who anticipate receiving feedback about their networks than among people who do not anticipate receiving such feedback.

H8. The density of the networks may be lower among respondents who anticipate receiving feedback about their network than among those who do not anticipate receiving such feedback.

H9. Respondents may be less likely to indicate relationships between all network contacts if they anticipate receiving feedback about their network.

[3] Unfortunately, it was not recorded whether corrections were made during the answering process in the present study, which might explain higher convergent validity.
1.5. The present study

These hypotheses were tested in an online survey experiment in which respondents were randomly assigned to either answer a traditional ego-centered network questionnaire or the new graphical tool GENSI. Respondents were also randomly assigned to be told at the beginning of the questionnaire that they would receive the personal informatics or not, yielding a 2 × 2 design.

Convergent validity of the network questions (H3 and H7) was assessed by comparing the strength of the relation between white respondents' reports of having black network contacts and the respondents' attitudes toward blacks. Convergent validity refers to the extent to which measures of different constructs that theoretically should be related are indeed related. Numerous studies have shown that white people with more black contacts (for an overview see Pettigrew and Tropp, 2006) and with more white contacts who have black friends (extended contact; for an overview see Vezzali et al., 2014) have more positive attitudes toward blacks. Also, an ego-centered network study found that white people who have more ethnic minority members in their network have more positive attitudes toward these minorities (Berg, 2009). The correlation between contact with blacks and positive attitudes toward blacks is thus well established. Finding such a correlation would indicate convergent validity for the ego-centered network measurement. If GENSI improves measurement quality, the correlation should be stronger than with a traditional network measurement.
2. Materials and methods

2.1. Data

Data were collected in a non-probability sample of U.S. residents recruited through Amazon Mechanical Turk (MTurk). An invitation to participate in a survey about social relationships was published on MTurk on July 20, 2014, and 468 respondents completed the questionnaire within 3 h. Completing the questionnaire took 6.07 min on average, and respondents were paid $1 for their participation. Thirty-four respondents failed an attention check at the end of the questionnaire and were removed from the sample.[4] The remaining 434 respondents in the final sample were predominantly highly educated (12% had a high school degree or less, 39% had some college experience, and 49% had a 4-year college degree), male (56%), and young (21% aged 19–24, 45% aged 25–34, 19% aged 35–44, 15% aged >44). Descriptive statistics for all variables are shown in Table 1.

Table 1
Means and standard deviations of continuous variables.

Variable                                  Mean    SD      Range      Valid N
Enjoyment                                 3.62    1.02    1–5        434
Participate again                         4.34    0.85    1–5        434
Interesting                               3.90    0.95    1–5        434
Completion time (minutes)^a               6.07    6.56    1.62–126   434
Direct intergroup contact^b               0.27    0.57    0–5        316
Extended intergroup contact^b             1.83    1.39    0–5        316
Differential attitudes toward blacks^b    3.79    0.57    1–7        313
Network size                              4.39    1.08    1–5        434
Network density^c                         0.55    0.33    0–1        426
Age                                       33.66   11.57   19–74      434

a Mean completion time after removing 19 outliers was 5.32.
b These values are based on the subsample of white respondents (n = 316).
c These values are based on the subsample of respondents with at least two alters (n = 426).
The name generator question looked the same for the 217 respondents (50%) who completed a traditional ego-centered network questionnaire and the 217 respondents (50%) who used GENSI. This name generator asked, "Who are the people outside of your home that you feel closest to? These may be friends, co-workers, neighbors, relatives, or anyone else who does not live with you" (Emerson et al., 2010).[5] Respondents could enter up to five contacts. The limit of five was chosen to keep the survey at a reasonable length and to mirror many large population surveys such as the General Social Survey (GSS), the American National Election Study (ANES), or the Netherlands Longitudinal Lifecourse Study (NELLS). The benefit of going past five nominations seemed minimal, since 95.1% of respondents in the 2004 GSS[6] nominated five or fewer contacts, and in a recent representative U.S. online survey 88.3% of respondents nominated five or fewer network contacts even though nominations were not limited (Brashears, 2011).
In the traditional ego-centered network questionnaire, respondents were asked every name interpreter question for each of their network contacts on a separate page. For instance, the question about the contacts' gender read, "Is [name 1] male or female?" and offered the answer options "male" and "female." After answering the question, respondents had to click "Next" to receive the same question for the second network contact.
2.2. Personal informatics

All respondents saw the following message before seeing the name generator question: "In the next section, you will be asked questions about people outside of your home to whom you feel closest. These may be friends, co-workers, neighbors, or relatives. Together, these people make up your social network." About half of the respondents (N = 230, 53%) were randomly assigned to also see this text: "At the end of the questionnaire, you will receive a report that tells you how similar your social network is compared to the average American."

The feedback was provided after respondents described their social networks and before they reported their attitudes toward blacks and their enjoyment of the questionnaire. The feedback compared the size, density, and racial composition of the respondent's network with data from a representative U.S. face-to-face ego-centered network study (Emerson and Sikkink, 2006). The exact wording of the report depended on respondents' answers and can be seen in the Online Appendix.

[4] To be sure that only respondents who read the survey instructions were included, the attention check question asked respondents to click on the fourth response option. Every respondent who failed to do so was removed from the sample at the time of analysis. Experimental conditions were not correlated with failing the attention check.
[5] This wording of the name generator question is taken from the 2006 "Portraits of American Life Study" (Emerson and Sikkink, 2006), a nationally representative study that served as the basis of comparison in the personal informatics reports.
[6] The 2004 GSS allowed unlimited nominations of names even though name interpreter questions were only asked for the first five names.
2.3. Measures of enjoyment

Enjoyment. At the end of the questionnaire, all respondents were asked, "How much did you enjoy answering this survey?" Response options on a 5-point scale ranged from "not at all" to "a great deal." Higher values indicate more enjoyment.

Participate again. Respondents were asked, "How likely is it that you will participate in another survey like this about your social network?" Answers could be given on a 5-point scale ranging from "not at all likely" to "extremely likely." Higher values indicate a higher likelihood to participate again.

Interesting. The question "How interesting was this survey?" could be answered on a 5-point scale ranging from "not at all interesting" to "extremely interesting." Higher values indicate that respondents found the survey more interesting.

Completion time. Completion time was measured in milliseconds from the moment respondents clicked "Next" after answering the first question until they clicked "Next" after answering the last question in the survey, which asked how interesting the survey was.
2.4. Measures used to assess convergent validity

Direct intergroup contact. The traditional version of the questionnaire asked for each network contact, "To which racial/ethnic group does [name alter] belong?" Answer options were "White (Caucasian)," "Black," "American Indian, Alaska Native, or Native Hawaiian," "Asian," "Hispanic," and "Other," and respondents could check all that applied. In the graphical tool, the question read, "To which racial/ethnic group do these people belong? Drag the circles with the names of each person into the box below that indicates their racial/ethnic group." The answer categories were presented as boxes below the network picture, and respondents could drag and drop the names of their network contacts into the appropriate box. Answer options were the same as for the traditional questionnaire, but the last box read "Mixed/Other." For each name dragged into this box, a separate pop-up window appeared after the respondents clicked "Next," asking, "What is [name alter]'s race/ethnicity?" The same response categories as in the traditional version of the questionnaire were offered, and respondents could check each response option that applied. The number of black people (black only and black plus other race) in the respondent's network was treated as an assessment of direct intergroup contact, which could range from 0 to 5.

Extended intergroup contact. For each alter, all white respondents were asked, "Does [name alter] have one or more close friends who are black?" Extended contact is defined as the number of ingroup friends who have outgroup friends (Vezzali et al., 2014). Accordingly, it was measured by the number of white alters who had black friends. This indicator could range from 0 to 5.
Differential attitudes toward blacks. After all questions about the ego-centered network had been answered, all respondents were asked, "Do you feel warm, cold, or neither warm nor cold toward most white people?" and "Do you feel warm, cold, or neither warm nor cold toward most black people?" Answers were given on a 7-point scale ranging from "extremely warm" to "extremely cold." Attitudes toward blacks were measured as the difference between the black feeling thermometer rating and the white feeling thermometer rating. This measure was recoded to range from 1 to 7, with higher values indicating more positive attitudes toward blacks.
2.5. Constructed measures of network characteristics

Network size. Network size was the number of alters entered by the respondent, which empirically ranged from 1 to 5.

Network density. Network density was the number of network members who knew each other divided by the number of potential relationships between all network members (alter density; see Wasserman and Faust, 1994). In the traditional ego-centered questionnaire, respondents could answer yes or no to the question, "Does [name alter 1] know [name alter 2]?" This question was asked about each dyad in the network. In the graphical version of the questionnaire, respondents were asked, "Which of these people know each other? To indicate that two persons know each other, click on the name of the first person and then on the name of the second person. This will create a line between the two." Fig. 4 shows how respondents could indicate which alters knew each other.
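As a minimal sketch of this computation (our own illustration; the variable names are hypothetical and not taken from GENSI's output format), alter density can be derived from the reported ties as follows:

```python
def alter_density(alters, ties):
    """Alter density: reported ties divided by the number of possible
    alter pairs, k*(k-1)/2 (Wasserman and Faust, 1994).

    alters -- list of alter names, e.g. ["Joe", "Mary", "Sam"]
    ties   -- set of frozensets, one per pair reported to know each other
    """
    k = len(alters)
    if k < 2:
        return None  # density is undefined with fewer than two alters
    possible_pairs = k * (k - 1) // 2
    # Count only ties between listed alters, ignoring anything else.
    observed = sum(1 for pair in ties
                   if len(pair) == 2 and pair <= set(alters))
    return observed / possible_pairs


# Example: 3 of the 10 possible pairs among five alters know each other.
alters = ["Joe", "Mary", "Sam", "Ana", "Lee"]
ties = {frozenset(p) for p in [("Joe", "Mary"), ("Joe", "Sam"), ("Ana", "Lee")]}
print(alter_density(alters, ties))  # 0.3
```

The dummy variables described next simply flag the cases in which this ratio equals 1 or 0.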
Density of 1 and of 0. A dummy variable was coded 1 for respondents whose network contacts all knew each other (22%) and 0 otherwise. A second dummy variable was coded 1 for respondents who indicated that none of their network contacts knew each other (9%) and 0 otherwise.
2.6. Non-response probe

Because multiple studies have suggested that accidental or intentional item nonresponse may be a problem in online ego-centered network surveys (Matzat and Snijders, 2010; Vehovar et al., 2008), we implemented a nonresponse probe in all versions of the questionnaire. If a question in any part of the questionnaire was unanswered, a pop-up screen appeared on which respondents had to click "OK" to proceed.[7] If they really did not want to answer a question, they had to click "Next" a second time. Thus, not answering a question took one more click than selecting an answer and proceeding to the next question.
3. Results

3.1. Break-offs and item non-response

There were no break-offs. This means that neither the graphical tool nor the personal informatics report caused respondents to end the survey prematurely. Having no break-offs allows comparing answers given between experimental conditions, which could otherwise be confounded with the break-off rate. Three respondents did not answer the questions about their attitudes toward blacks. There was no item non-response on any of the other questions, suggesting that the non-response probe worked.
3.2. The graphical tool GENSI

GENSI led to more enjoyment of the questionnaire. People who used the new tool enjoyed the process significantly more than did the people who answered the traditional ego-centered network questionnaire (M_GENSI = 3.78 vs. M_traditional = 3.45; F(1, 430) = 11.62, p < .001, d = .33), thought the survey was significantly more interesting (M_GENSI = 4.09 vs. M_traditional = 3.72; F(1, 430) = 16.98, p < .001, d = .40), and said they were significantly more likely to participate in a similar survey in the future again (M_GENSI = 4.42 vs. M_traditional = 4.26; F(1, 430) = 3.94, p = .048, d = .19). This is in line with Hypothesis 1.

[7] The pop-up screen read, "We noticed that you didn't answer this question. It would be very helpful for our research if you did. Please feel free to either give an answer or to go to the next question by clicking 'Next' again."

The more positive experience of answering GENSI was not due to shorter completion times. Answering the questions took equally long with the new graphical interface and the traditional version. The average completion time was 5.39 min for GENSI and 5.25 min for the traditional ego-centered network questionnaire (F(1, 411) = .69, p = .406).[8] This refutes Hypothesis 2.

The two questionnaire forms were equivalent in terms of the convergent validity of various measures of intergroup contact when predicting attitudes toward blacks. Controlling for white respondents' sex, age, and education, differential attitudes toward blacks were significantly predicted by direct intergroup contact (b = .16, SE = .06, p = .004) and extended intergroup contact (b = .07, SE = .02, p = .004). However, insignificant interactions of the type of questionnaire answered with direct contact (b = .10, SE = .11, p = .358, Model 1 in Table 2) and with extended contact (b = −.02, SE = .05, p = .649, Model 2 in Table 2) indicated that the graphical survey tool did not affect the relations between these variables and attitudes toward blacks. This challenges Hypothesis 3.
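For readers who want to reproduce this kind of moderation test, a minimal sketch using statsmodels is given below. The data frame and column names are hypothetical (this is not the authors' code), and Model 1 from Table 2 is used as the template.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per white respondent.
# attitude  -- differential attitudes toward blacks (1-7)
# direct    -- number of black alters (0-5)
# extended  -- number of white alters with black friends (0-5)
# gensi     -- 1 = GENSI condition, 0 = traditional questionnaire
# male, age, education -- demographic controls
df = pd.read_csv("network_survey.csv")

# Model 1: does the questionnaire form moderate the direct-contact effect?
model1 = smf.ols(
    "attitude ~ male + age + C(education) + direct + extended"
    " + gensi + gensi:direct",
    data=df,
).fit()
print(model1.summary())  # a non-significant gensi:direct term mirrors Table 2
```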
The new graphical tool produced less dense ego-centered networks than the traditional questionnaire. An ANOVA indicated that network size did not differ between the conditions (M_GENSI = 4.47 vs. M_traditional = 4.31; F(1, 430) = 2.29, p = .131). However, of the respondents who mentioned at least two alters (N = 426), those who saw the graphical version reported a mean density of 0.52, whereas those who saw a traditional survey design reported a mean density of 0.61 (F(1, 422) = 8.49, p = .004, d = .28). This is in line with Hypothesis 4.

The difference was driven by more respondents indicating relationships between all network contacts in the traditional questionnaire. A significantly larger proportion of respondents in this condition reported a network density of 1 than in the condition with the new survey tool (traditional = 0.27 vs. GENSI = 0.17; χ²(1) = 5.42, p = .020, d = .23). This is in line with the idea that the graphical interface reduces the mechanical clicking that has been identified as a problem in online network surveys (Matzat and Snijders, 2010) and thus supports Hypothesis 5. The lower density in the GENSI condition was not due to a larger proportion of people reporting a density of zero with the graphical interface (GENSI = 0.11 vs. traditional = 0.07; χ²(1) = 1.25, p = .263).
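As an illustration of how such a proportion comparison can be run (a sketch with hypothetical column names, not the authors' code; the paper does not state whether a continuity correction was applied), the density = 1 test corresponds to a chi-square test on a 2 × 2 table:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical analysis file: one row per respondent with >= 2 alters.
# gensi       -- 1 = GENSI condition, 0 = traditional questionnaire
# density_one -- 1 if all reported alters know each other, else 0
df = pd.read_csv("network_survey.csv")

# 2 x 2 table of condition by "all alters know each other".
table = pd.crosstab(df["gensi"], df["density_one"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```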
3.3. Personal informatics

Receiving personal informatics reports did not increase respondents' enjoyment. People who got feedback about their networks did not enjoy the survey more (M_informatics = 3.63 vs. M_none = 3.60; F(1, 430) = .001, p = .96), did not think it was more interesting (M_informatics = 3.93 vs. M_none = 3.88; F(1, 430) = .06, p = .811), and did not say they would be more likely to participate in a similar survey (M_informatics = 4.33 vs. M_none = 4.35; F(1, 430) = .23, p = .633) than respondents who did not get the report.[9]
[8] Nineteen respondents in both conditions who took extremely long times to complete the questionnaire (3rd quartile + 2.2 IQR; Hoaglin and Iglewicz, 1987) were removed from this calculation.
[9] Not surprisingly, receiving the personal informatics report increased the completion time significantly (M = 5.48 vs. M = 5.13 after removing outliers; F(1, 411) = 4.56, p = .033, d = .21).

Table 2
Coefficients of OLS regressions predicting differential attitudes toward blacks among white respondents.

                                          Model 1        Model 2        Model 3        Model 4
Parameters                                b       SE     b       SE     b       SE     b       SE
Sex (male)                                .17**   .06    .17**   .06    .16*    .06    .16*    .06
Age                                       .002    .002   .002    .003   .002    .003   .002    .003
Education^a
  Some college                            .20*    .10    .20*    .10    .19*    .10    .19     .10
  4-year college degree                   .19*    .09    .20*    .09    .17     .09    .19*    .09
Direct contact                            .12     .07    .16**   .06    .32***  .09    .15**   .06
Extended contact                          .06**   .02    .07*    .03    .07**   .02    .11***  .03
GENSI^b                                   .09     .07    .15     .10
GENSI × direct contact                    .10     .11
GENSI × extended contact                                 −.02    .05
Personal informatics^c                                                  .14     .07    .21*    .10
Personal informatics × direct contact                                   −.27*   .11
Personal informatics × extended contact                                                −.08    .05
Intercept                                 3.56*** .15    3.54*** .15    3.54*** .15    3.53*** .15
Adj. R²                                   .07            .07            .08            .07

Note: N = 313 due to three cases with missing values on the dependent variable.
a Reference category is high school degree or less.
b Reference category is the traditional questionnaire.
c Reference category is not being promised (and not receiving) the personal informatics report.
*** p < .001; ** p < .01; * p < .05 (two-tailed tests).

In fact, the personal informatics report seemed to have undermined the positive effect of the new graphical survey tool on enjoyment of the experience. ANOVAs indicated significant interactions between the type of network reporting tool used and receipt of personal informatics when predicting respondents' likelihood to participate again (F(1, 430) = 5.73, p = .017, d = .23) and their judgments of how interesting the survey was (F(1, 430) = 4.21, p = .041, d = .20). There was no such significant interaction predicting enjoyment of the survey (F(1, 430) = .45, p = .504). Fig. 5 depicts the two significant interactions.
Using the new survey tool increased the likelihood to participate in the future again only among respondents who did not get the personal informatics report (Fig. 5a). The graphical survey tool increased respondents' reports of how interesting the survey was less among people who got the personal informatics report than among those who did not (Fig. 5b). This suggests that the personal informatics report reduced respondents' enjoyment of the new tool, thus undermining its positive impact on ultimate enjoyment of the entire survey experience.

This unexpected pattern of results may be due to the content of the information that some respondents received in their personal informatics reports. Among those who got a personal informatics report (N = 230), people who were told that their network was larger than that of most Americans were marginally significantly more likely to participate again in the future (M = 4.41 vs. M = 4.13; t(98) = 1.95, p = .054, d = .28) and said the survey was more interesting (M = 4.03 vs. M = 3.68; t(108) = 2.46, p = .015, d = .36) than respondents who were told that their network was the same size as or smaller than that of most Americans. ANOVAs indicated that the feedback that people's networks were loosely connected, closely connected, or very closely connected was related to differences in the enjoyment indicators (enjoyment: M = 3.69 vs. M = 3.87 vs. M = 3.17; F(2, 224) = 7.45, p < .001, d = .60; participate again: M = 4.29 vs. M = 4.56 vs. M = 4.08; F(2, 224) = 5.16, p = .006, d = .38; interesting: M = 4.00 vs. M = 4.10 vs. M = 3.58; F(2, 224) = 5.16, p = .006, d = .51). And respondents who were told that their network was racially more diverse than that of most Americans said they enjoyed the questionnaire more (M = 3.86 vs. M = 3.54; t(112) = 2.10, p = .038, d = .31) and said the survey was more interesting (M = 4.12 vs. M = 3.85; t(110) = 1.93, p = .056, d = .28) than respondents who were not told this information. These results suggest that certain feedback messages in a personal informatics report might be counterproductive for respondents' enjoyment of the survey.
The personal informatics report reduced the convergent validity of subsequent measures. The association of direct intergroup contact with blacks in the network with attitudes toward blacks was significantly weaker among white respondents in the personal informatics condition than among those who did not get the feedback about their network (b = −.27, SE = .11, p = .015, Model 3 in Table 2). The association of extended intergroup contact with attitudes toward blacks was marginally significantly weaker among respondents who anticipated receiving the personal informatics report (b = −.08, SE = .05, p = .078, Model 4 in Table 2). This disconfirms Hypothesis 7.
Network size (M_informatics = 4.37 vs. M_none = 4.42; F(1, 430) = .32, p = .571) and network density (M_informatics = 0.53 vs. M_none = 0.57; F(1, 422) = 1.58, p = .210) did not differ significantly between respondents who were promised personal informatics and those who were not. Likewise, the proportion of respondents who said that all network contacts knew each other did not vary between these groups of respondents (informatics = 0.21 vs. none = 0.23; χ²(1) = 0.25, p = .621). This disconfirms Hypotheses 8 and 9. Also, the proportion of respondents with a density of zero did not vary significantly (informatics = 0.10 vs. none = 0.07; χ²(1) = 2.22, p = .14). Among white respondents, the amount of direct contact with blacks in the network (M_informatics = 0.27 vs. M_none = 0.26; F(1, 312) = .05, p = .833) and the frequency of extended contact with blacks (M_informatics = 1.73 vs. M_none = 1.94; F(1, 312) = 1.87, p = .172) did not differ between people promised the feedback and those not promised the feedback. Use of the graphical interface vs. the traditional questions did not interact with the personal informatics condition in predicting any of the network characteristics.

Fig. 5. Mean values of enjoyment measures in each experimental condition (two panels: likelihood of participating in the future again and how interesting the survey was, each plotted by graphical tool condition, separately for respondents with and without the personal informatics report).
4. Discussion

4.1. Graphical survey tool

This study suggests that graphical features available in computer-based surveys allow a more effective way to measure ego-centered networks than traditional online questionnaires. Advocates of graphical elements have suggested that these features may increase respondents' enjoyment of a survey (Hogan et al., 2007), may reduce cognitive burden (Matzat and Snijders, 2010; Vehovar et al., 2008), and may also increase data quality (Coromina and Coenders, 2006). The study reported here constitutes, to our knowledge, the first test of such claims with a digital questionnaire and offers some support for them.

GENSI increased respondents' enjoyment of the questionnaire. Those who saw the graphical interface enjoyed answering the questionnaire significantly more, thought the questionnaire was more interesting, and were more likely to participate in a similar survey in the future again. This was not due to faster administration of the questionnaire with the graphical tool: average completion time did not differ between GENSI and the traditional design. However, the finding of more enjoyment is in line with expectations that graphical features in ego-centered network questionnaires might increase respondents' enjoyment of the survey (Hogan et al., 2007) and might increase respondent motivation to complete the questionnaire properly (Matzat and Snijders, 2010).

These results contrast with those of some pioneering research on graphical elements in online surveys, which did not indicate beneficial results (Couper et al., 2004). Other researchers drew more positive conclusions about the impact of graphical elements on answer quality (Coromina and Coenders, 2006; Deutskens et al., 2004). However, much of this research was conducted over a decade ago, when graphical features in web surveys were much more rudimentary and when people were less used to interacting with graphical features on the Internet. Our study found that the graphical tool positively affected respondents' evaluation of the questionnaire. It therefore seems worthwhile to explore the impact of newly available Web 2.0 graphic technologies in other online surveys (see also Dillman et al., 2009).
Matzat and Snijders (2010) found that ego-centered networks were denser in an online survey than in a face-to-face survey. Significantly more people in their online survey said that everyone in their network knew each other, which the authors took to be a measurement artifact. The lower network density produced by GENSI than by traditional measures administered online may therefore suggest superiority of the new tool over a traditional ego-centered network questionnaire without graphical features in online administration. In line with this idea, fewer respondents who used GENSI indicated relationships between all alters (density = 1) than did respondents who answered traditional questions, perhaps reducing the problem of "mechanical clicking" identified by Matzat and Snijders (2010).

Use of GENSI did not affect another indicator of data quality, the convergent validity of the network characteristics and subsequently asked measures. Network measures of interracial contact were significant predictors of interracial attitudes, just as in earlier research (Berg, 2009). However, these relations did not differ between the traditional questionnaire and GENSI. This suggests that the new graphical tool does not produce data of a lower quality than the existing approach.
4.2. Personal informatics

Telling respondents up front that they would receive feedback on how their network compared to the average American did not affect respondents' answers to the network questions. Researchers in the field of human-computer interaction have used the promise that people would learn something about themselves to recruit participants (Li et al., 2010; Zulman et al., 2013). However, the mere promise of feedback about their network in the present study did not affect or improve people's reports of the network size, density, or characteristics of the network contacts. This is in line with decades-old research, which found that people were no more likely to participate in a survey if they had been promised to later receive a report of the results of the survey (Yu and Cooper, 1983).

Providing the feedback that compared the respondent's network to the average American network had negative consequences for people's enjoyment of the questionnaire and for data quality. Specifically, receiving feedback about one's network countered the positive effect of the graphical survey tool on respondents' enjoyment of the survey. There were also weaker correlations between network characteristics and subsequent questions about known correlates when respondents saw the personal informatics report, indicating worse convergent validity. Thus, the provision of comparative feedback about respondents' networks at the end of the questionnaire was not an effective strategy to increase respondents' motivation. This is in line with research on feedback interventions more generally, which have very often been found to reduce rather than enhance performance (Kluger and DeNisi, 1996). It is also consistent with research on health communication that found that framing information in terms of comparisons can be ineffective. For instance, black people reacted with more positive emotions to information about cancer when the message emphasized progress than when the message emphasized a poorer cancer outcome for blacks compared to whites (Nicholson et al., 2008). This suggests that comparing respondents' answers to those of other people in a personal informatics report is not a promising tool to improve answer quality.
Exploratory analyses shed some light on this finding. Respondents' enjoyment of the survey differed depending upon the messages they received in the personal informatics report. People who were told that their network was larger than that of most Americans, who were told that their network was loosely connected or closely connected, and people who were told that their network was more diverse than that of most Americans enjoyed the survey more than respondents who received different messages. This suggests that some messages in a personal informatics report can be perceived as unpleasant and should be avoided. However, knowing in advance what will be unpleasant seems challenging. Even within the U.S., there are regional differences in the extent to which people derive well-being from being autonomous (high in the Mountain region, low in West North Central) or from having positive relations with others (high in New England, low in the East South Central; see Plaut et al., 2002). Thus, telling some people that they are very different from the average American may be uplifting, whereas this may be bad news to other people.
4.3. Limitations

The study reported in this paper was only implemented with MTurk participants, who are not representative of any population. It thus remains unclear how well the present results generalize to other potential respondents. Future studies could compare GENSI to traditional ego-centered questionnaires with representative samples.

The setup of the current study does not allow ruling out an alternative explanation for the lower density of networks when respondents used GENSI compared to a traditional questionnaire. Without being asked to evaluate each pair of alters separately in the graphical tool, some respondents may have forgotten to indicate some relationships. Other respondents were perhaps discouraged by the extra work involved in drawing each additional line. We could rule out that the lower network density was caused by more people indicating no ties at all in GENSI. However, the fact that there are no ties present in the default setting of GENSI reduces network density if relationships are forgotten. One direct solution to the problem of forgetting relationships would be to prompt respondents to indicate for every pair of alters whether a tie exists or not. However, this would mirror the traditional questionnaire approach, in which a separate question is asked for every alter-alter combination. This would counter our effort to reduce respondent burden through the graphical interface. Thus, more research is needed to test whether the default setting of GENSI is problematic. Future research could test this with a debriefing interview in which respondents are asked to reconsider each tie decision they made in the graphical tool to find out if they accidentally or purposefully overlooked relationships.
4.4. Practicality

GENSI can be easily implemented in existing large-scale online surveys. For instance, it has been successfully applied in a study of the nationally representative LISS panel in the Netherlands. It works on all commonly used Internet browsers (with the exception of very old versions of Internet Explorer) and can be completed on desktop computers and tablets. GENSI also works on smaller mobile devices, such as mobile phones, though the displays are mostly too small to show all details. Importantly, even though GENSI was designed for online surveys, it can also be applied during computer-assisted self-interviewing (CASI) in face-to-face surveys.

A practical limitation for future research with GENSI is that the tool is only suitable for small ego-centered networks. When the number of alters exceeds seven or eight, it gets visually challenging to see all circles in a network. A more complex tool such as netCanvas (Hogan et al., 2016), TellUsWho (Ricken et al., 2010), or ANAMIA EGOCENTER (Tubaro et al., 2014) may be better suited for research with big ego-centered networks. However, many researchers and many large population surveys (e.g., GSS, ANES, NELLS) limit respondents to five or fewer alters to keep the network questions to a feasible length within a larger survey. The new tool is well-suited for this purpose. It is available in JavaScript source code and as such is easy to implement in existing survey software. Responses are recorded in a CSV file that can be read into any statistical software package. Interested researchers can download GENSI from http://www.tobiasstark.nl/GENSI and use it free of charge.
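Because the output is a plain CSV file, post-processing can stay lightweight. A minimal sketch follows; the column names and tie encoding are hypothetical illustrations, not GENSI's documented layout, so consult the tool's documentation for the actual format.

```python
import pandas as pd

# Hypothetical layout: one row per respondent, with alter names, per-alter
# attributes, and reported ties stored as semicolon-separated name pairs.
df = pd.read_csv("gensi_export.csv")

def parse_ties(cell):
    """Turn a tie string such as "Joe-Mary;Ana-Lee" into a list of pairs."""
    if pd.isna(cell) or not cell:
        return []
    return [tuple(pair.split("-")) for pair in cell.split(";")]

df["tie_pairs"] = df["ties"].apply(parse_ties)
print(df[["respondent_id", "tie_pairs"]].head())
```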
5. Conclusion

Social network measurement has become hugely important in many lines of social science investigation. Previous research has suggested that collecting ego-centered network data in online surveys can be problematic (Matzat and Snijders, 2010). Even though data quality can also be jeopardized by interviewers who falsely report no or very few network contacts (Eagle and Proeschold-Bell, 2015; Paik and Sanchagrin, 2013), face-to-face interviews or telephone interviews in which interviewers can motivate respondents to answer repetitive name interpreter questions effortfully may still be the best way to gauge ego-centered network data (Marsden, 2011).

No matter which mode of data collection is chosen, the measurement of attributes of social networks can be time-consuming and cognitively demanding for survey respondents. Therefore, researchers have tremendous incentives to make the process as efficient and as enjoyable as possible. The study reported here yielded promising findings encouraging further pursuit of GENSI, which takes advantage of computer administration to reduce cognitive burden and increase respondent engagement. However, there are still open questions, such as the way in which the structure of a respondent's network should best be measured with a graphical tool. We look forward to future research exploring the potential value that GENSI or tools like it may bring to making social network measurement more efficient and effective.
Acknowledgements

Jon Krosnick is University Fellow at Resources for the Future. This work was supported by the European Commission (FP7-PEOPLE-2011-IOF, Grant Agreement Number 299939).
Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.socnet.2016.07.007.
References

Berg, J.A., 2009. Core networks and whites' attitudes toward immigrants and immigration policy. Public Opin. Q. 73, 7–31.
Borgatti, S.P., Foster, P.C., 2003. The network paradigm in organizational research: a review and typology. J. Manage. 29, 991–1013.
Brashears, M.E., 2011. Small networks and high isolation? A reexamination of American discussion networks. Soc. Netw. 33, 331–341.
Burt, R.S., 1984. Network items and the general social survey. Soc. Netw. 6, 293–339.
Burt, R.S., Kilduff, M., Tasselli, S., 2013. Social network analysis: foundations and frontiers on advantage. Annu. Rev. Psychol. 64, 527–547.
Capa, R.L., Audiffren, M., Ragot, S., 2008. The interactive effect of achievement motivation and task difficulty on mental effort. Int. J. Psychophysiol. 70, 144–150.
Chang, L., Krosnick, J.A., 2009. National surveys via RDD telephone interviewing versus the internet. Public Opin. Q. 73, 641–678.
Choe, E.K., Lee, N.B., Lee, B., Pratt, W., Kientz, J.A., 2014. Understanding quantified-selfers' practices in collecting and exploring personal data. Proc. CHI 2014.
Coromina, L., Coenders, G., 2006. Reliability and validity of egocentered network data collected via web: a meta-analysis of multilevel multitrait, multimethod studies. Soc. Netw. 28, 209–231.
Coromina, L., Hlebec, V., Kogovsek, T., Coenders, G., 2014. Network data collected via the web. In: Alhaij, R., Rokne, J. (Eds.), Encyclopedia of Social Network Analysis and Mining. Springer, New York, pp. 1069–1076.
Couper, M.P., Tourangeau, R., Kenyon, K., 2004. Picture this! Exploring visual effects in web surveys. Public Opin. Q. 68, 255–266.
Csikszentmihalyi, M., 1990. Flow: The Psychology of Optimal Experience. Harper, New York.
Deutskens, E., Ruyter, K., Wetzels, M., Oosterveld, P., 2004. Response rate and response quality of internet-based surveys: an experimental study. Market. Lett. 15.
Dillman, D.A., Smyth, J.D., Christian, L.M., 2009. Internet, Mail, and Mixed-mode Surveys: The Tailored Design Method. Wiley, Hoboken, NJ.
Eagle, D.E., Proeschold-Bell, R.J., 2015. Methodological considerations in the use of name generators and interpreters. Soc. Netw. 40, 75–83.
Emerson, M.O., Sikkink, D., 2006. Portraits of American Life Study. 1st Wave.
Emerson, M.O., Sikkink, D., James, A.D., 2010. The panel study on American religion and ethnicity: background, methods, and selected results. J. Sci. Stud. Relig. 49, 162–171.
Fagan, J., Eddens, K., 2015. OpenEddi. In: XXXV Sunbelt Conference, Brighton, UK.
Hoaglin, D.C., Iglewicz, B., 1987. Fine tuning some resistant rules for outlier labeling. J. Am. Stat. Assoc. 82, 1147–1149.
Hogan, B., Carrasco, J.A., Wellman, B., 2007. Visualizing personal networks: working with participant-aided sociograms. Field Methods 19, 116–144.
Hogan, B., Melville, J.R., Phillips II, G.L., Janulis, P., Contractor, N., Mustanski, B.S., Birkett, M., 2016. Evaluating the paper-to-screen translation of participant-aided sociograms with high-risk participants. In: Human Factors in Computing, San Jose, CA.
Hsieh, Y.P., 2015. Check the phone book: testing information and communication technology (ICT) recall aids for personal network surveys. Soc. Netw. 41, 101–112.
Kluger, A.N., DeNisi, A., 1996. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119, 254–284.
Krosnick, J.A., 1991. Response strategies for coping with the cognitive demands of attitude measures in surveys. Appl. Cogn. Psychol. 5, 213–236.
Krosnick, J.A., 1999. Survey research. Annu. Rev. Psychol. 50, 537–567.
Krosnick, J.A., Holbrook, A.L., Berent, M.K., Carson, R.T., Hanemann, W.M., Kopp, R.J., Mitchell, R.C., Presser, S., Ruud, P.A., Smith, V.K., Moody, W.R., Green, M.C., Conaway, M., 2002. The impact of "No Opinion" response options on data quality: non-attitude reduction or an invitation to satisfice? Public Opin. Q. 66, 371–403.
Lackaff, D., 2012. New opportunities in personal network data collection. In: Zacarias, M., De Oliveira, J.V. (Eds.), Human–Computer Interaction. Springer, Berlin.
Levine, S., Gordon, G., 1958. Maximizing returns on mail questionnaires. Public Opin. Q. 22, 568–575.
Li, I., Dey, A., Forlizzi, J., 2010. A stage-based model of personal informatics systems. Proc. CHI 2010.
Marsden, P.V., 2011. Survey methods for network data. In: Scott, J., Carrington, P.J. (Eds.), The SAGE Handbook of Social Network Analysis. Sage, London, pp. 370–388.
Matzat, U., Snijders, C., 2010. Does the online collection of ego-centered network data reduce data quality? An experimental comparison. Soc. Netw. 32, 105–111.
McCarty, C., Govindaramanujam, S., 2005. Modified elicitation of personal networks using dynamic visualization. Connections 26, 9–17.
McCarty, C., Killworth, P.D., Rennell, J., 2007. Impact of methods for reducing respondent burden on personal network structural measures. Soc. Netw. 29, 300–315.
Nicholson, R.A., Kreuter, M.W., Lapka, C., Wellborn, R., Clark, E.M., Sanders-Thompson, V., Jacobsen, H.M., Casey, C., 2008. Unintended effects of emphasizing disparities in cancer communication to African-Americans. Cancer Epidemiol. Biomark. Prev. 17, 2946–2953.
Paik, A., Sanchagrin, K., 2013. Social isolation in America: an artifact. Am. Sociol. Rev. 78, 339–360.
Pettigrew, T.F., Tropp, L.R., 2006. A meta-analytic test of intergroup contact theory. J. Pers. Soc. Psychol. 90, 751–783.
Plaut, V.C., Markus, H.R., Lachman, M.E., 2002. Place matters: consensual features and regional variation in American well-being and self. J. Pers. Soc. Psychol. 83, 160–184.
Puca, R.M., Schmalt, H.D., 1999. Task enjoyment: a mediator between achievement motives and performance. Motiv. Emot. 23, 15–29.
Ricken, S.T., Schuler, R.P., Grandhi, S.A., Jones, Q., 2010. TellUsWho: guided social network data collection. In: Proceedings of the 43rd Hawaii International Conference on System Sciences.
Tubaro, P., Casilli, A.A., Mounier, L., 2014. Eliciting personal network data in web surveys through participant-generated sociograms. Field Methods 26, 107–125.
Vehovar, V., Manfreda, K.L., Koren, G., Hlebec, V., 2008. Measuring ego-centered social networks on the web: questionnaire design issues. Soc. Netw. 30, 213–222.
Venkatesh, V., 1999. Creation of favorable user perceptions: exploring the role of intrinsic motivation. MIS Q. 23, 239–260.
Vezzali, L., Hewstone, M., Capozza, D., Giovannini, D., Wölfer, R., 2014. Improving intergroup relations with extended and vicarious forms of indirect contact. Eur. Rev. Soc. Psychol. 25, 314–389.
Wasserman, S., Faust, K., 1994. Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge.
Yu, J., Cooper, H., 1983. A quantitative review of research design effects on response rates to questionnaires. J. Market. Res. 20, 36–44.
Zulman, D.M., Damschroder, L.J., Smith, R.G., Resnick, P.J., Sen, A., Krupka, E.L., Richardson, C.R., 2013. Implementation and evaluation of an incentivized Internet-mediated walking program for obese adults. TBM 3, 357–369.