BotViz: Data Visualizations for Collaborations With Bots and Volunteers

Carlos Toxtli, CS & EE, West Virginia University
Needa Almani, Computer Science, University of California, Santa Barbara (UCSB)
Claudia Flores-Saviaga, Flor Aguilar, Alejandra Monroy, Juan Pablo Flores, Jeerel Herrejon, Norma Elva Chavez, Computer Engineering, Universidad Nacional Autonoma de Mexico (UNAM)
Shloka Desai, Computer Science, Stanford University
William Dai, Troy High School, Fullerton, California
Saiph Savage, CS & EE, West Virginia University
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. Copyright is held by the owner/author(s).
CSCW '16 Companion, February 27 - March 02, 2016, San Francisco, CA, USA
ACM 978-1-4503-3950-6/16/02.
http://dx.doi.org/10.1145/2818052.2869132
Abstract
Online bots are quickly becoming important collaborators with humans in tackling issues in healthcare, politics, and even activism. Recently, non-profits have used many bots in place of their human members to scaffold collaborations with citizens. However, this shift invites new challenges: it is difficult for outsiders to understand the joint effort that bots have initiated with humans, limiting the goals reached collectively. To help non-profits coordinate the volunteers recruited by online bots, we propose BotViz, a new online platform that uses data visualizations to give outsiders a clear understanding of the interactions between bots and volunteers. Our data visualizations offer two benefits over traditional interfaces: 1) Diversity, wherein people can understand the diversity of the volunteers, especially their unique strengths; 2) Stalling, wherein people who may be delaying the collective effort triggered by bots can be easily identified. Together, our data visualizations point to a future where humans and online bots can better collaborate to have large-scale impact.
Introduction
There is growing interest in designing autonomous agents that can act as important teammates for humans in overcoming societal problems and challenges. Several investigators have focused on creating systems that can empower better collaborations between automated agents and humans. Ramchurn et al. [3] studied techniques for integrating
humans and robots to provide more effective responses to disasters. Recently, we have also seen a proliferation of collaborations between humans and online social agents, i.e., bots. Companies, organizations, and even governments are using bots to disseminate their messages to larger audiences and influence behavior. Researchers have started to study community scaffolding between volunteers and automated agents [4].
The growing number of collaborations between humans and bots shows the need for interfaces that can ease communication between the two. However, given their novelty, most work focuses on simply initiating these collaborations or on detecting bots for spam filtering. As a result, outsiders cannot easily understand the collaboration, nor build upon the scaffolding that bots initiate. This also limits the goals reached by the collective effort, as leaders are obstructed from adopting what the bots started and thus directing the effort themselves toward success.
Figure 1: Overview of BotViz’s
interface. Non-profits input the bot
accounts whose collaborations with
humans they want to better
understand, and BotViz returns
data visualizations that show the
diverse types of volunteers that
bots recruited and the people who
could be stalling the effort.
Figure 2: Overview of BotViz’s
functionality: users input the cause
they want bots to scaffold
collaborations with humans, and
users can then see the type of
volunteers a specific bot recruited,
and the work produced by such
volunteers.
We hypothesize that data visualizations that profile volunteers and identify stalling can help to better direct a scaffold built by bots. We embody our vision with BotViz: an online platform which, via data visualizations, enables people to understand the characteristics of a volunteer workforce recruited by online bots, as well as the labor such volunteers have produced. Our vision is that online bots can be used to scaffold community efforts; data visualizations can give insights on how best to orchestrate the collaborations bots have initiated. We tested BotViz with online bots that recruit citizens for different collective efforts, such as fighting corruption or reducing the gender gap on Wikipedia (for more information on these bots, see [5, 4]). Our vision is that BotViz will become a platform that helps humans (in particular, non-profits) build on the collaborations that bots have initiated with human volunteers. Figure 1 shows an overview of BotViz's interface, and Figure 2 of its functionality.
BotViz
Our system has two parts: a data modeling component that infers the characteristics of the volunteers and the type of work they produce; and a data visualization component that presents the information back to the end-user. BotViz focuses on visualizing the information that can help non-profits better build off the collaborations bots have initiated.
BotViz presents two core benefits over traditional interfaces:
1) Diversity. The ease with which users can understand the different types of volunteers who were recruited. In particular, we emphasize the diverse types of expertise present among volunteers, as knowing this information can help non-profits dispatch tasks more efficiently [1]. For instance, a person specialized in law could help when there are possible violations against the non-profit's members.
We identify volunteers' areas of specialization based on what they tweet in their personal timelines. We consider that a volunteer's areas of specialization correspond to the topics she tweets about the most. For this purpose, we first link each volunteer to all of the tweets she has ever generated. We then input the tweets of all volunteers into a topic modeling algorithm (specifically, Latent Dirichlet Allocation) in order to discover volunteers' main areas of specialization. An area of specialization is defined as a set of words that frequently co-occur in the tweets of volunteers. For instance, an area of specialization on "Feminism" might be linked to the words or hashtags "female", "gender", or "#women'sRights". Once the system discovers the different areas of specialization of volunteers, it measures how much each volunteer has tweeted about the different areas in comparison to other volunteers. This helps end-users to compare and contrast volunteers.
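The specialization-inference step above can be sketched as follows. The library choice (scikit-learn), the toy data, and all parameters are our illustrative assumptions, not the authors' implementation:

```python
# Sketch: infer volunteers' areas of specialization from their tweets
# via topic modeling (Latent Dirichlet Allocation), as described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: one document per volunteer (all of her tweets concatenated).
volunteer_docs = {
    "alice": "gender equality women rights feminism gender women",
    "bob": "internet freedom technology code internet software",
}

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(volunteer_docs.values())

# Each topic is a set of frequently co-occurring words: an "area of
# specialization" in the paper's terms.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: volunteers, cols: topic weights

# A volunteer's specialization profile: how much she tweets about each area.
for name, weights in zip(volunteer_docs, doc_topics):
    print(name, weights.round(2))
```

The per-volunteer topic weights are exactly the vectors the next step clusters.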
After this data-modeling step, each volunteer is represented as a vector denoting how much she or he relates to each area of specialization. We then use the mean-shift algorithm to group similar vectors together and discover clusters of people. We opted for mean shift because it is based on nonparametric density estimation, so we do not need to know the number of clusters beforehand (unlike k-means); instead, we let mean shift discover the clusters from our data. The clusters represent volunteers with similar areas of specialization and also allow us to discover the diverse types of specialization present among volunteers. A cluster can, for instance, represent volunteers specialized in technology and feminism.
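A minimal sketch of this clustering step, assuming scikit-learn's MeanShift and an illustrative bandwidth (the paper does not specify these details):

```python
# Sketch: group volunteers with similar specialization vectors using
# mean shift, which (unlike k-means) needs no preset cluster count.
import numpy as np
from sklearn.cluster import MeanShift

# Each row: one volunteer's topic-weight vector from the modeling step.
vectors = np.array([
    [0.90, 0.10],  # mostly feminism
    [0.85, 0.15],
    [0.10, 0.90],  # mostly technology
    [0.05, 0.95],
])

# Bandwidth is the kernel radius for density estimation (assumed here).
ms = MeanShift(bandwidth=0.5)
labels = ms.fit_predict(vectors)

# Volunteers sharing a label form one cluster of similar specialists.
print(labels)
```

Volunteers with nearby vectors end up with the same label, and the number of distinct labels is discovered from the data rather than chosen in advance.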
Figure 3: Example of BotViz’s
diversity interface showing the
diverse types of volunteers two
different bots recruited.
Figure 4: Zoomed-in version of BotViz's diversity interface. It helps users understand the details of each volunteer's specialization by showing the related words or hashtags the person used the most. In this case, a volunteer had two main specialization areas; some words that she used frequently for one area were: Internet, InternetFreedom, Machitroll.
The information about the discovered clusters, along with the volunteers belonging to each cluster, is fed into BotViz's data visualization engine. The engine graphically represents the data to help users understand the diverse types of volunteers that bots recruited. The engine visualizes each discovered cluster as a circle, and inside each cluster it presents the volunteers categorized in that cluster. Each volunteer is represented as a circle containing a set of nested circles representing her different areas of specialization. For instance, a volunteer specialized in "Feminism" and "Technology" will be illustrated with two circles inside a main circle. Each area of specialization is denoted by a certain color, and its size represents how much the volunteer tweets about that area in comparison to others. We propose nested circles to save space while still helping users understand at a glance the diverse types of volunteers present. We use size to help users rapidly identify the people with the most expertise in a certain area. Figure 3 presents BotViz's diversity interface showing an overview of the volunteers recruited by different bots. Users can also zoom in and obtain a more detailed understanding of what each person's specialization entails, specifically the words or hashtags that the person uses the most for each of her specialization areas (see Figure 4). We also plan to explore interfaces where volunteers can correct the system and self-report their own expertise and uniqueness.
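The cluster-volunteer-specialization hierarchy behind this nested-circle view can be sketched as a function that emits the JSON shape a circle-packing layout (e.g., D3's pack layout) typically consumes. The function name and data are hypothetical:

```python
# Sketch: build the hierarchy that drives the nested-circle rendering.
# Each leaf's "value" (relative tweet volume per area) sets circle size.
def build_hierarchy(clusters):
    """clusters: {cluster_name: {volunteer_name: {area: weight}}}"""
    return {
        "name": "volunteers",
        "children": [
            {
                "name": cluster,
                "children": [
                    {
                        "name": volunteer,
                        # One nested circle per specialization area.
                        "children": [
                            {"name": area, "value": weight}
                            for area, weight in areas.items()
                        ],
                    }
                    for volunteer, areas in members.items()
                ],
            }
            for cluster, members in clusters.items()
        ],
    }

hierarchy = build_hierarchy({
    "tech & feminism": {
        "alice": {"Feminism": 0.7, "Technology": 0.3},
    },
})
```

A renderer then draws each cluster node as an outer circle and each area leaf as an inner circle whose radius derives from its value.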
2) Stalling. While the above helps users understand volunteers' traits, it is not optimized to detect who might be delaying the collective effort initiated by bots. We propose a visualization that helps non-profits identify who is making off-topic contributions and could be derailing or stalling an effort from its objectives. Visualizing stalling helps non-profits take action to get the effort back on course. To detect off-topic contributions, we take ideas from crowd markets that use gold standards to detect quality work [2]. In this case, non-profits provide examples of on-topic contributions (the gold standard), and we use these to detect when a volunteer is on track or derailing from the effort's purpose. For each volunteer we then calculate: the percentage of contributions that were on-topic and off-topic; the total number of contributions; and the list of volunteers with whom they collaborated (i.e., they had at least one tweet mentioning them).
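One way to realize this gold-standard comparison is text similarity against the provided examples. TF-IDF, cosine similarity, and the threshold below are our assumptions; the paper specifies only the gold-standard idea [2]:

```python
# Sketch: flag off-topic contributions by similarity to non-profit-supplied
# gold-standard examples of on-topic work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

gold = [  # example on-topic contributions provided by the non-profit
    "report corruption in local government",
    "denounce bribery by public officials",
]
contributions = [
    "officials took a bribe in my city",  # on-topic
    "check out my new music video",       # off-topic
]

vec = TfidfVectorizer().fit(gold + contributions)
sims = cosine_similarity(vec.transform(contributions), vec.transform(gold))

ON_TOPIC_THRESHOLD = 0.05  # illustrative cutoff, not from the paper
on_topic = sims.max(axis=1) >= ON_TOPIC_THRESHOLD

# Per-volunteer statistic (treating both contributions as one volunteer's
# work): percentage of contributions that were on-topic.
pct_on_topic = 100.0 * on_topic.mean()
print(on_topic.tolist(), pct_on_topic)
```

These per-volunteer percentages and contribution counts are exactly the inputs the stalling interface visualizes.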
The stalling interface uses this data to showcase the work of volunteers. It represents volunteers as circles and color-codes them based on the type of their contributions. We use a gradient scale to denote how much of people's contributions were on-topic or off-topic work. The size of each volunteer's circle shows the number of contributions made. The way we encode size and color helps identify the volunteers who deliver many quality contributions. This is important, as such volunteers have the potential to become long-term core members of a non-profit. The interface also shows links between volunteers who have collaborated. BotViz's stalling interface thus grants users the ability to detect not only the volunteers who could be derailing an effort, but also the people who might be directly affected by them. Figure 5 presents the stalling interface. Overall, BotViz can empower non-profits to better understand the scaffolding initiated by bots, take action, and better orchestrate the recruited human volunteers.
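The stalling view's size and color encoding can be sketched as a small mapping function; the linear scales and pixel range are illustrative assumptions, not the system's actual values:

```python
# Sketch: map a volunteer's stats to circle radius and an RGB color.
# Size scales with contribution count; color runs on a red-to-green
# gradient over the volunteer's on-topic fraction.
def encode_volunteer(n_contributions, on_topic_fraction, max_contributions):
    # Radius: illustrative 10-40 px range, linear in contribution volume.
    radius = 10 + 30 * (n_contributions / max_contributions)
    # Color: red for all off-topic, green for all on-topic.
    red = int(255 * (1 - on_topic_fraction))
    green = int(255 * on_topic_fraction)
    return {"radius": radius, "color": (red, green, 0)}

# A prolific, mostly on-topic volunteer: a large, mostly green circle.
print(encode_volunteer(40, 0.9, 40))
```

Under this encoding, large green circles mark the high-volume quality contributors, while large red circles flag prolific volunteers who may be stalling the effort.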
Figure 5: BotViz’s stalling
visualization helps to understand
the type of work contributed by the
volunteers recruited by the bots,
with a focus on highlighting the
volunteers who could be delaying
others by being off-topic.
Usability Inspection
We ran a series of in-depth cognitive walkthroughs with a small number of subjects in order to solicit feedback about BotViz's basic design, as well as to identify how effective this type of data visualization might be for: (1) understanding the diverse types of volunteers that a bot recruited; and (2) identifying the volunteers stalling the effort by making off-topic contributions. For this purpose, we first ran our bots to initiate collaborations with volunteers. Four of our bots focused on initiating scaffolding to fight corruption, and two on reducing the gender gap on Wikipedia. Our bots recruited 216 volunteers (41 for gender equality and 175 for corruption), who then produced 494 contributions. We collected volunteers' online interactions with the bots, as well as their personal timelines. We fed this data into BotViz to generate the data visualizations. We then had our subjects use BotViz to visualize the scaffoldings from each bot. All subjects were able to navigate the represented information within a few seconds and with only minimal instruction. Subjects had positive responses to the visualizations and noted that they aided them in knowing the type of volunteers recruited by each bot. Subjects also felt that the visualizations helped identify the volunteers who might be stalling the effort due to their off-topic contributions. However, some subjects noted that allowing a balance of off-topic and on-topic contributions might be important in order not to stress volunteers, and could actually help execute the effort faster in the long run.
Discussion
This paper introduced BotViz, an online platform that aims
to help humans better build off the scaffolds that bots have
initiated. BotViz presents data visualizations that highlight
the diversity of volunteers recruited by bots, as well as the
volunteers currently delaying a collective effort. A formal
user study of our system is forthcoming.
References
[1] E Gil Clary, Mark Snyder, and Robert Ridge. 1992.
Volunteers’ motivations: A functional strategy for the
recruitment, placement, and retention of volunteers.
Nonprofit Management and Leadership 2, 4 (1992),
333–350.
[2] David Oleson, Alexander Sorokin, Greg P Laughlin,
Vaughn Hester, John Le, and Lukas Biewald. 2011.
Programmatic Gold: Targeted and Scalable Quality
Assurance in Crowdsourcing. Human computation 11,
11 (2011).
[3] Sarvapali D. Ramchurn, Trung Dong Huynh, Yuki Ikuno, Jack Flann, Feng Wu, Luc Moreau, Nicholas R. Jennings, Joel E. Fischer, Wenchao Jiang, Tom Rodden, Edwin Simpson, Steven Reece, and Stephen J. Roberts. 2015. HAC-ER: A Disaster Response System Based on Human-Agent Collectives. In AAMAS '15.
[4] Saiph Savage, Andres Monroy-Hernandez, and Tobias
Hollerer. 2016. Botivist: Calling Volunteers to Action
using Online Bots. In CSCW’16. ACM.
[5] Claudia Saviaga, Saiph Savage, and Dario Taraborelli.
2016. LeadWise: Using Online Bots to Recruit and
Guide Expert Volunteers. In CSCW’16 Posters. ACM.