Studying Animation for Real-Time Visual Analytics: A Design Study of Social
Media Analytics in Emergency Management
Nadya A. Calderon
Interactive Arts and Technology
Simon Fraser University
ncaldero@sfu.ca
Richard Arias-Hernandez
Interactive Arts and Technology
Simon Fraser University
ariasher@sfu.ca
Brian Fisher
Interactive Arts and Technology
Simon Fraser University
bfisher@sfu.ca
Abstract
Domains such as emergency management have a
need for real-time change monitoring and pattern
analysis, but interface design principles for real-time
visual analysis situations are still under development.
In this paper, we present early results from a design
study in social media visual analytics for emergency
management. Our motivation is a central information
visualization challenge: the lack of clear design
principles, informed by research in human cognition,
for the use of animation in real-time streams. We discuss
three domain-specific challenges: (1) coping with the
high volume of social media data generated
during disaster response; (2) analysts’ need to quickly
extract relevant features for real-time sense-making;
and (3) the effective analysis of social media streams
even when some critical attributes are absent. This
paper presents preliminary results on a research-based
design principle for the use of animation in real-time
visual analytics, targeted to support the real-time
analysis of social media data in emergency
management.
1. Introduction
Social media has gained increasing attention in the
crisis and emergency management community during
the past six years, especially after its use by citizens
during the 2007 Virginia Tech shooting [37], the 2007
California wildfires [38], the 2010 Haiti earthquake
[19], the 2011 Norway attacks [33], the 2011
Fukushima nuclear disaster [30], and the 2012 strike of
Hurricane Sandy in the USA [26]. Surveys conducted
by the Red Cross in the USA in 2010 and in Canada in
2012 [9] have also increased practitioners' interest
in social media, since these surveys clarified
citizens' current positions, perceptions, and
expectations about the place of social media during
crisis response.
However, information visualization and visual
analytics researchers have only recently begun working in
this application domain [23, 32], and there are still many
challenges presented by crisis-related social media data
that these researchers need to attend to. One of these
challenges is the immediacy of the analysis of social
media data for emergency management. Computational
processing of social media data for crisis management
and its subsequent visual representation has mostly
focused on supporting retrospective visual analytics
rather than the ad-hoc, real-time visual analytics that is
required during crisis response. One of the obstacles that
has prevented these kinds of approaches is the lack of
consensus on what constitutes the best way to visually
represent continuous streams of data to support
analytical processes.
We report on qualitative and quantitative research
conducted with emergency management practitioners
involved in social media data analysis. The results of
this research characterize the domain problem, the target
users, and the analytical tasks that shape pragmatic
design requirements. We also report on the
development of a first prototype for the visual
encoding of streaming social media data in this
domain. We use our research results to justify design
decisions for the visual representation of real-time
data, such as the use of animation, choice of visual
encoding, and choice of colors (See Fig. 1).
The theoretical focus of this paper is on cognitively
informed design principles for real-time visualization
and analysis of textual streaming data, such as the data
provided by Twitter. In this paper, we use the term
cognitive research with the understanding of
perception as visual cognition. Our laboratory mostly
conducts work on higher-order reasoning, as presented
in [15]; this study, however, explores perceptual
saliency and lower-order cognitive aspects.
We still refer to it as cognitive research, as other works
investigating animation and visualization do [17].
Although results are still at an early stage, the two
main contributions of this paper are: (1) a theoretical
contribution: cognitively informed research on a design
principle to make animation work for real-time visual
analytics; and (2) an applied and domain-specific
contribution: the study of a visualization technique to
support the real-time analysis of social media data in
emergency management.
2. Related Work
2.1. Social media for emergency management
An in-depth content analysis of all of the papers
published in the proceedings of the International
Conference on Information Systems for Crisis
Response and Management from 2008 to 2012 showed
an increasing interest from researchers on current and
potential uses of social media data for emergency
response and management. The number of papers that
included social media for emergency management as a
research subject went from 4 papers in 2008 (4.2% of
the total) to 13 in 2012 (12.9% of the total). Most of
the research reported (60%) focused on Twitter-related
uses, given the open access provided by the Twitter
Search API and Twitter Stream API to harvest data for
design and analysis.
The kind of research reported on the subject is also
shifting. Early research tended toward descriptive
approaches, aimed at understanding what
citizens actually do when engaging with social media
during crises and disasters [37, 38]. More recently, the
tendency is towards prescriptive and design-oriented
approaches, which focus on what citizens, agencies,
and designers should do to make the most of social
media for effective crisis response and emergency
management. Examples of this latter kind of approach
include: proposed syntaxes for ease of management of
emergency Tweets [32], systems that automate the
harvesting and classification of social media data using
NLP and machine learning algorithms [34], and
applications that provide visual representations and
interactions with social media data for emergency
management practitioners [23]. This shift towards a
prescriptive and design-oriented approach is also the
result of the progressive understanding of the value
that social media has in this particular application
domain and the specific challenges that need to be
addressed, such as: large volumes of data, ill-structured
and incomplete data, source trustworthiness,
multiple/unstable ontologies used by social media
analysts in emergency management, and the need for
real-time processing and analysis of streaming social
media data.
2.2. Visual analytics of social media for
emergency management
A more specialized sub-field that relates to the
topic of this paper is the one found in the intersection
between visual analytics and social media for
emergency management. Both fields are relatively
recent and, so far, there are few studies and
applications of visual analytics for the analysis of
crisis-related social media data. However, the
challenges presented by the analysis of this particular
data and domain correspond naturally to what visual
analytics can offer to address them.
In a recent review of visual analysis of social media
data conducted by Schreck and Keim [31], the authors
highlighted the challenges this kind of data and
analytics present. Among them: large volume of data,
high frequency data streams, multimodality,
ambiguous content, rapidly changing content and
communication patterns, data types that are complex to
process, and analytic tasks that are highly context- and user-
dependent [31]. These challenges, according to the
authors, cannot be matched solely by fully automated
analysis methods. What is needed is a combination of
computer-driven data processing/pattern search with
human-driven reasoning mediated by interactions with
visual interfaces [31]. Recent studies have supported
this argument by demonstrating how visual analytics
can be put to work for social media analysis,
specifically in emergency management.
Some of the research reported in visual analytics
for social media is not explicit in its application to
emergency management, but its potential can be
inferred from uses in other domains. For example,
Dörk et al. [11] developed a visual analytics system to
get an overview of evolving and changing topics on
Twitter to increase the level of meaningful
participation during meetings and events. Their
strategy was to use “Visual Backchannel,” an
interactive interface with four main views: a
temporally adjustable ThemeRiver that visualizes
topics over time, a linked visualization of the most
active participants, a chronologically ordered list of
posts, and an image cloud representing the popularity
of event photos by size [11].
Kumar et al. [22] and MacEachren et al. [23]
provide some of the first explicit attempts to develop
visual analytics tools for the analysis of social media
data in this domain. In 2011, Kumar et al. [22]
introduced “TweetTracker,” a visual analytics tool
targeted to help humanitarian aid and disaster relief
(HADR) responders increase their situational
awareness and gain insights from microblogging data
[22]. TweetTracker monitors and extracts location and
keyword specific tweets with near real-time trending,
data reduction, historical review, and integrated data
mining tools. Interactions include filters to focus on
tweets of interest and playback of streaming data.
Linked heat maps and reTweet network graphs allow
analysts to drill down into the specifics of the data. The
tool was designed to support situational awareness,
planning, and coordination tasks and it was used by the
HADR organization Humanity Road when
Hurricane Sandy hit the Northeastern United States in
Oct. 2012. Also in 2011, MacEachren et al. introduced
SensePlace2 [23]. SensePlace2 is a map and web-
based, visual analytics tool designed to support sense-
making of crisis-relevant information harvested from
Twitter. Developed using a user-centered design
approach, SensePlace2’s main goal is to increase
emergency management practitioners' situational
awareness during disasters. In addition to visual
analytics features that support analysis, such as the
time filter that combines a range slider with a heat-bar
to help analysts choose a strategic range of tweets over
time, key innovations of SensePlace2 were: the
automated extraction of location information from
textual content to increase the number of tweets that
can be plotted on a map, visual distinctions between
tweets generated from the crisis location and about the
crisis location, and explicit support for reasoning
processes such as recall of past search parameters [23].
The current version of SensePlace2, however, uses pre-
harvested social media data sets and does not support
real-time analysis of streaming data. For our design
study, we follow Kumar et al.'s approach of visual
analytics of streaming social media data, but we also
build upon the user-centered design approach followed
by MacEachren et al.
More recently Chae et al. [8] have explicitly extended
work on social media event detection and topic
extraction in visual analytics to applications in
emergency management. Their focus was on increasing
situational awareness by detecting and exploring
abnormal topics and events in social media data that
are relevant to crisis management. Rather than ranking
events by volume [11] or topics by immediate novelty,
topics are ranked by their lack of correspondence with
global and seasonal trends, in other words, by their
anomalous character [8]. The analyst interacts with the
parameters of the topic extraction and seasonal
trending algorithms to determine levels of granularity
in topic extraction, thresholds for seasonal trends, and
selection of specific topics for cross-reference with
other social media datasets. The authors' workbench,
ScatterBlogs, extracts and pre-processes streaming
Twitter data. A key innovation of ScatterBlogs is the
combination of computer-driven semi-automated
processes and human-driven setting of parameters to
refine the selection of data. Similar to MacEachren et
al.’s, Chae et al.’s design study emphasizes novel
applications of algorithms and interactions for the
visual analysis of social media data in emergency
management; however, neither of them conducted
experimental studies to test the effectiveness of their
visualizations or interactions on leveraging human
cognition during their design process. We explicitly
address this aspect in our research. We also focus on
one kind of analysis that has recently emerged as
central in studies of visual analysis of social media data
for emergency management, namely that of real-time
affective content or sentiment analysis [34, 21, 12].
2.3. Real-time visual analytics
2.3.1. Visualizations. One of the main challenges of
exploring real-time series is to track changing data
streams without a priori search targets to identify
general structure and patterns. For this reason, using a
visual representation constitutes an advantage for
monitoring what is happening and allowing fast access
to the information [13].
Visualizing temporal datasets starts with the basic
principle of a time series chart. Depending on the
analysis requirements, time can be better represented
on spiral visualizations [40] or pixel-like visualizations
[20]. Weber argues that, beyond the representation itself,
the ability to parameterize scales, intervals,
and cycle lengths, as in the case of the spiral, leads to
better outcomes during data exploration [40].
However, most of these techniques require
computation over the whole dataset, which represents a
restriction when information is dynamic as is the case
with real-time streams. Additionally, Aigner et al. [1]
highlight that when dealing with time-oriented data,
the notion of time varies between problems, and more
consistency is required in order to aid designers and
developers in their endeavors. From different
taxonomies of time, Aigner et al. selected Frank’s [14]
taxonomy to categorize time along the following
dimensions: 1) linear vs. cyclic time; 2) time points vs.
time intervals; and 3) ordered time, branching time,
and time with multiple perspectives. Understanding
these differences is important for both the designer
and the analyst, since an effective visualization is the
result of appropriate combinations of visualization and
interaction techniques, consistent with the
characteristics of the time domain represented by data.
In streams of social media data generated during
emergency situations, time is linear and ordered.
However, especially during the response and recovery
stages, the analysis of aggregated intervals could be as
relevant as the analysis of specific time points.
Accordingly, Krstajic et al. [21] described two basic
requirements for processing real-time streaming data:
1) a time interval in which data is considered relevant
has to be defined, and 2) the size of the memory pool
that contains streamed data objects has to be
determined [21]. Considering this, they designed a
visualization for the analysis of news streams that
represents data as soon as it arrives without
aggregation or clustering. In a second view, they
represented visually the aggregation of feeds into
topic-threads according to pre-established categories,
time of creation, and relevance. They also supported
the adaptive change of the time interval in which the
analysis is performed, just as SensePlace2 does [23].
The value in computing and presenting aggregation is
the possibility to detect relationships between data
items, distinguish relevance, and optimize performance
for the visualization.
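To make these two requirements concrete, the following Python sketch (illustrative only, and not part of any of the systems cited above) keeps arriving stream items in a buffer bounded both by a relevance interval and by a maximum memory pool size; all names and parameter values are placeholders.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class StreamItem:
    timestamp: float   # arrival time in seconds
    payload: dict      # e.g. a tweet and its extracted attributes

class StreamWindow:
    """Bounded buffer that keeps only items inside the relevance interval."""

    def __init__(self, interval_seconds=1800, max_items=20000):
        self.interval = interval_seconds       # requirement 1: relevance interval
        self.items = deque(maxlen=max_items)   # requirement 2: memory pool size

    def push(self, item: StreamItem):
        self.items.append(item)
        self._evict(now=item.timestamp)

    def _evict(self, now):
        # Drop items that have fallen outside the relevance interval.
        while self.items and now - self.items[0].timestamp > self.interval:
            self.items.popleft()

    def snapshot(self):
        # Current window contents, e.g. handed to the visualization for redrawing.
        return list(self.items)
```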
Aigner et al. [1] referred to ThemeRiver to
illustrate the representation of an ordered-time
problem, and they considered that most of time-related
problems would fall into this class. Originally, the
system was introduced as a technique to analyze the
evolution of dynamic themes within the exploration of
documents [18]. Later work by Byron and Wattenberg
[7] built upon the ThemeRiver concept, explored the
same representational goal (multiple time series or
categories) and discussed the importance of legibility
of labels and aesthetic characteristics. The resultant
technique is known as a streamgraph [7]. We built
upon ThemeRiver for our visual encoding of streaming
social data, since ThemeRiver allows for the
simultaneous exploration of multiple dimensions of
data while depicting the temporal
behavior of each one. However, unlike
ThemeRiver, we switch from a solely categorical
perspective to a combined categorical-ordinal
perspective, supported by an ordered set of affective
content polarities. This allows us to take advantage of
the representation of volume of data and the ordered
position of representations of data among categories.
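As a minimal illustration of this combined categorical-ordinal encoding, the sketch below draws a streamgraph-like stacked plot of nine ordered sentiment bands with matplotlib; the synthetic counts and the generic diverging palette are placeholders for the real tweet volumes and the ColorBrewer scale used in our design.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
time_points = np.arange(120)              # e.g. 120 five-second bins
categories = list(range(-4, 5))           # 9 ordinal sentiment values

# Synthetic tweet counts per sentiment category per time bin.
counts = rng.poisson(lam=8, size=(len(categories), len(time_points)))

# Generic diverging palette (negative -> cool, positive -> warm).
cmap = plt.get_cmap("RdYlBu_r")
colors = [cmap(i / (len(categories) - 1)) for i in range(len(categories))]

fig, ax = plt.subplots(figsize=(10, 3))
ax.stackplot(time_points, counts, baseline="wiggle",
             colors=colors, labels=[str(c) for c in categories])
ax.set_xlabel("time (5-second bins)")
ax.set_ylabel("tweet volume")
ax.legend(title="sentiment", ncol=9, fontsize=7)
plt.show()
```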
2.3.2. Motion and Animation. Considering the
dynamic nature of streams of data, linear time
visualizations have relied on the use of animation
(whether interactive or not) to represent the time
update, especially when the information space is too
large. This has usually been done by animating time
from right to left on the X-axis of a 2D representation,
where the rightmost limit represents the present and
old data gets discarded from the view at the leftmost
side [1, 18].
The design challenge when using animation for
representing change in streaming data is to leverage
humans' pre-attentive visual processing in order to
reduce the cognitive workload that would otherwise be imposed by
attentive visual processing. All examples mentioned in
2.3.1 use animation, as the illusion of movement on
screen, to visualize updates in time streams. We are
interested in investigating if animation can be used
more actively to depict patterns rather than being used
only as the necessary illusion of time updating.
Visualization researchers have often used motion as
a visual variable, which means using the perceptual
properties of movement to map data attributes as much
as we would do with other visual attributes, such as
color. One of the theoretical foundations to argue such
an approach is the Gestalt principle of Common Fate
for perceptual grouping. As Ware [39] presents it,
Common Fate is a principle that states that elements
moving together are perceived pre-attentively as a
group. This idea has been used as a design guideline
when the purpose is to draw our attention towards a
specific group of objects from a larger set. An example
of a visual interface that draws on this principle is the
Trendalyzer software for animation of statistics [28].
The system uses color, size, 2D space and time
animation as visual variables to present comparisons of
multidimensional data in the form of an animated
bubble chart. The movement of groups of bubbles,
representing for example countries, depicts the
evolution in time of variables plotted in XY axes.
A comparative study of Trendalyzer versus small
multiples for the analysis of trends over time revealed
that, for analytical purposes, the use of static images in
the form of small multiples is more effective for the
discovery of patterns [27]. However, when information
is dynamic, as is the case with real-time updates and
social media data, using small multiples is impractical
given that the time interval is not fixed. In such a case,
animation of objects is still a promising solution.
Animation and its cognitive limitations and
advantages have been a subject of study for researchers
in diverse areas. Geographic information systems,
interactive map design, and research on the value of
animation for learning are a few examples
[36, 17]. Tversky et al. conducted a thorough survey on
the role of animations in teaching complex systems
(e.g., mechanical or biological ones). They critically argue
against many experiments conducted to evaluate the
effectiveness of animation when learning abstract
concepts and systems. Their conclusions, similarly to
Robertson et al.’s work [27], suggest that such studies
overstate the benefits of
animation and do not adequately compare it to static
images. However, such conclusions are restricted to
those situations. They briefly note that other uses of
animation, especially in computer interfaces “have
perhaps passed the test” and foresee that “the most
promising uses of animation seem to be to convey real-
time changes” [36], which is the very issue of the work
presented in this paper.
Along with this idea, Albrecht-Buehler [2]
proposed as a design guideline that if motion is to be
used in visualizations of data, it should be used to
represent change in data, and objects that are
semantically related should move similarly. Bartram
[5] also advocated for the use of animation as it can
provide insights when the patterns can only be seen as
visual change. She argued for the use of animation as a
strategy to cope with visual fragmentation and
perceptual inference, common challenges found when
visualizing large datasets of multivariate data in a
single screen [5]. Moreover, Sirisack & Grimvall [29]
insisted on the use of animated bubble charts (such as
Trendalyzer) by arguing that groups of objects moving
together draw attention when they move in the same
direction and highlight outliers moving in completely
different directions. However, neither Sirisack &
Grimvall's nor any other reported study has directly
evaluated the use of the Common Fate principle in
visualizations of streaming data or the use of animation
on real-time series plotted as 2D charts moving along
the X-axis.
3. Study Design
In this section, we present the results of the first
phase of our design study. We start by describing the
target problem and the design requirements that
resulted from six days of fieldwork with emergency
management practitioners from the cities of Richmond
and Vancouver in British Columbia. We then proceed
to present the visual encoding that we designed to
fulfill the requirements and the experimental results.
3.1. Social media data analysis in emergency
operations centers in Richmond and
Vancouver
A series of non-participant observations, surveys,
and interviews were conducted with emergency
management staff of the cities of Richmond and
Vancouver between May and December 2011. The
observations included: a total of five full days of
regular administrative operations in Richmond, a 1-day
session with Richmond staff operating an emergency
notification system, a 1-day tabletop interagency
exercise that included the activation of the Richmond
EOC, a 2-day multi-jurisdictional tabletop exercise
that activated Emergency Operation Centres (EOCs) in
Richmond and Vancouver, and 2 days of a real-life
activation at the Vancouver EOC. The main goal of
this fieldwork was to understand and model the
workflow and current information flows in order to
find leverage points for visual analytics support [3].
One of the identified leverage points was the analysis
of social media data for emergency response [3].
The observations conducted at the Richmond and
Vancouver EOCs characterized social media analysis during
crisis response as a side activity performed by the
communications officer in order to monitor and
identify the emotional state of the general population
with respect to ongoing crisis situations and with
respect to responses to official press releases.
Monitoring of social media content was restricted to
Twitter and it was manually conducted by reading
individual tweets aggregated by HootSuite. During the
observations, practitioners did not incorporate insights
from social media analysis into the workflow of other
EOC areas, such as planning, which manages
situational boards and representations. Interviews
conducted with the Richmond staff revealed their interest
in streamlining the monitoring of social media data rather
than manually having to go through the time-
consuming task of reading individual tweets. They
expressed their desire for a heat-map that could
quickly visualize the distribution of the emotional
states of citizens during crises. A greater operational
impact of social media analysis on increasing
situational awareness and integration of social media
analysis in operational activities was considered
impractical by practitioners, according to a survey
conducted [10], mainly due to the intense workload
associated with social media analysis, their lack of human
resources to dedicate to this task, and the lack of
mechanisms to verify the validity of social media
content [10].
At the Vancouver EOC, social media data analysis
was observed differently during one tabletop exercise
that tested Ushahidi [24], an application that provides a
visual representation of semi-structured social media
data. This pilot study tested how an automated tool and
a visual representation could help practitioners
overcome some of the obstacles identified in social
media analysis by emergency management
practitioners. During the exercise, which simulated an
earthquake in Metro Vancouver, student-volunteers on
the street used the Ushahidi app from their
smartphones to report on disaster impacts at different
points of the city. The Ushahidi format used required
users to specify the kind of event being reported using
a pre-established ontology. The interface provided a
basic geo-visualization of the posts, a detailed listing of
the posts, and basic filtering according to the pre-
established ontology.
Operationally, the City of Vancouver moved the
analysis from the communications officer to the
emergency line unit (311). There, dispatchers
monitored the Ushahidi website (while attending the
emergency line), read and analyzed individual posts,
wrote summaries of the situation on their internal
system and sent these messages to the EOC Director.
The main result of this pilot study was that automated
tools and visualizations of social media data proved
useful in moving social media analysis from public
communication management towards operational tasks
at the EOC. It was also demonstrated that social media
data could increase the situational awareness of
emergency managers by adding a citizen-created
channel of situation reports. However, the specific
implementation of the interactive visualization for the
pilot increased the cognitive workload of dispatchers,
who split their attention between responding to
emergency lines and manually reading posts on the
Ushahidi interface. The visualization did not speed up
the analysis of the data either, since dispatchers
resorted to manually reading individual feeds and
manually writing summaries of the posts on the
internal emergency management information system.
Considering the amount of data created during the
exercise, the manual tasks of reading posts/writing
summaries were doable; however, during a real
emergency, when thousands of posts can be created,
performing these tasks without automated support
becomes infeasible and impractical.
In order to address the gaps identified during the
fieldwork and in collaboration with the practitioners,
we defined the following domain-specific
requirements: 1) Automated support for affective
content analysis of social media data during disasters
to detect general patterns of sentiment and detailed and
extreme levels of distress to inform operational
activities; 2) Visualizations designed for quick
detection of extreme values, outliers, and patterns from
large volumes of streaming social media data during
disasters; and 3) provision of alternative dimensions of
analysis, given the incompleteness of geolocated
information in social media data.
3.2. Design and evaluation
The first phase of the design solution consisted of
exploring the representation of sentiment trends using a
streamgraph-like visual encoding. We investigated the
use of animation, informed by the Gestalt principle of
Common Fate, to enhance the perception of patterns
within ordinal categories of sentiment values.
By doing this we explicitly addressed the domain-
specific requirements for visualizing the stream of
social media data and the need to empirically test how
the animation of such a stream could best be integrated
into the visualization to support cognitive processes. Our
interest was to investigate if movement of perceptually
grouped objects could augment the perception of
patterns when visualizing series over time using
streamgraphs.
We designed and evaluated three versions of an
animated visualization of the sentiment values of the
tweet stream. The first version used the traditional
animation of ThemeRiver (Fig.1.a). The second
version used a group of colored circles (blobs) moving
along the Y-axis at the rightmost border of the stream
(Fig.1.b). The third version combined the first and the
second version (Fig.1.c). Each blob was placed at the
right border of each of the contours corresponding to
each of the values being represented at the present
time. Although inspired by the categories of themes
originally used by ThemeRiver, our visual encoding
represents the flow of the tweet stream categorized by
an ordinal value of sentiment content rather than by
categorical values or themes.
Figure 1. Screenshots from 3 animations used to
represent a stream of emergency-related tweets
classified by sentiment magnitude.
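A rough sketch of how the third condition could be reproduced is shown below: a scrolling stacked-area view with one colored blob per sentiment band drawn at its right border. It is only an illustration of the idea; the data, colors, and layout are placeholders and not the implementation evaluated here.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(1)
n_cat, window = 9, 60                       # 9 sentiment categories, 60 visible bins
stream = rng.poisson(8, size=(n_cat, 600))  # synthetic per-bin tweet counts
cmap = plt.get_cmap("RdYlBu_r")
colors = [cmap(i / (n_cat - 1)) for i in range(n_cat)]

fig, ax = plt.subplots(figsize=(10, 3))

def draw(frame):
    ax.clear()
    view = stream[:, frame:frame + window]   # scrolling window (condition a)
    x = np.arange(window)
    ax.stackplot(x, view, colors=colors, baseline="zero")
    # Blob overlay at the right border (conditions b and c): one marker per
    # category, placed at the vertical centre of each band's newest value.
    tops = np.cumsum(view[:, -1])
    centres = tops - view[:, -1] / 2.0
    ax.scatter(np.full(n_cat, window - 1), centres, s=60, c=colors,
               edgecolors="black", zorder=3)
    ax.set_xlim(0, window - 1)

anim = FuncAnimation(fig, draw, frames=stream.shape[1] - window, interval=200)
plt.show()
```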
Within sentiment analysis, the purpose is to extract
affective polarities (positive or negative) from
unstructured text. There are multiple algorithms for the
mining process but there are not many tools to explore
the sentiment values extracted as a result [16]. The
ordinal values of sentiment content indicate how positive
or negative the content of a tweet is. We selected
SentiStrength as the algorithm to calculate these values. It
assigns a positive and negative valence with a
corresponding magnitude to every tweet [35].
Magnitude ranges from 1 to 5, with 1 being weakly
charged and 5 being strongly charged. Thus, the sentiment
analysis of a tweet results in both a positive valence
magnitude (1-to-5) and a negative valence magnitude
(1-to-5). For example, for a person worried about her
family, who tweets the following text: “I’m worried
about my friends family on the east coast. Hopefully
Sandy will take it easy on them”, the sentiment values
assigned by the algorithm are (-4, 2), meaning that the
affective content is ranked with a magnitude of 4 in the
negative valence, and with a magnitude of 2 in the
positive valence. Instead of representing both values on
the screen, we calculated a single sentiment value as
the arithmetic sum of the negative and positive
magnitudes. Hence, we visualize 9 possible ordinal
sentiment values between -4 and +4. For example, for
the tweet presented before, the sentiment value would
be a total of -2 (-4+2).
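The combination of the two SentiStrength magnitudes into a single ordinal value can be expressed in a few lines; the function below is only a sketch of the arithmetic described above (the scoring of each tweet itself is performed by the external SentiStrength tool [35]).

```python
def combined_sentiment(positive: int, negative: int) -> int:
    """Collapse a (positive, negative) SentiStrength pair into one ordinal value.

    positive is in 1..5 and negative in -5..-1; their sum yields one of the
    9 ordinal sentiment values between -4 and +4.
    """
    assert 1 <= positive <= 5 and -5 <= negative <= -1
    return positive + negative

# Example from the tweet above, scored (-4, 2):
print(combined_sentiment(positive=2, negative=-4))   # -2
```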
Each category was color-coded using a value of
warm orange for positive and a value of cold blue to
represent negative. Even though the use of red in
emergency situations is associated with danger, for
the case of exploring streaming social media data,
practitioners described their interest in visualizing heat
maps of sentiment distress, relating the metaphor to
temperature. Therefore, we decided to use orange to
represent warm, positive values and blue to represent
cold, negative values. Difference in degree was
represented using a variation of hue. The scale of
colors was created with ColorBrewer [6]. Fig. 1.a
depicts a snippet of the stream of data, encoded as
sentiment categories and their color-coding.
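For illustration, an explicit mapping from the nine ordinal values to a blue-to-orange diverging scale could be built as follows; the hex endpoints are arbitrary stand-ins for the ColorBrewer scale actually used.

```python
from matplotlib.colors import LinearSegmentedColormap, to_hex

# Illustrative blue-to-orange diverging scale; endpoint colors are placeholders.
blue_orange = LinearSegmentedColormap.from_list(
    "blue_orange", ["#2166ac", "#f7f7f7", "#e66101"])

# Map each ordinal sentiment value (-4 .. +4) to a hex color.
palette = {v: to_hex(blue_orange((v + 4) / 8)) for v in range(-4, 5)}
print(palette)   # -4 -> deep blue, 0 -> near-white, +4 -> warm orange
```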
3.2.1. Evaluation. We conducted an experimental
evaluation to determine the effect of the three versions
of the animated visualization of the stream on the
accuracy of perception of a specific set of patterns.
Each animation version (Fig.1.a, 1.b, 1.c) corresponded
to an experimental condition and presented the same
dataset of sentiment-annotated tweets collected during
the Hurricane Sandy strike in New York City in
October 2012. The data sample covered 30 minutes of
tweets with an approximate volume rate of 450 tweets
per minute, collected on October 29, 2012. In total,
12,000 annotated tweets were represented for the
experimental trials. The X-axis of the graphs
represented time and each discrete data point
aggregated 5 seconds of data.
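The aggregation into discrete data points can be sketched as a simple time-binning step; the snippet below (with hypothetical field names) groups timestamped, sentiment-scored tweets into 5-second bins like those plotted on the X-axis.

```python
from collections import Counter
from datetime import timedelta

def bin_tweets(tweets, bin_seconds=5):
    """Aggregate (timestamp, sentiment_value) pairs into fixed-width time bins.

    tweets: iterable of (datetime, int) pairs, where the int is the -4..+4
    ordinal sentiment value. Returns {bin_start: Counter of sentiment values}.
    """
    bins = {}
    for ts, sentiment in tweets:
        # Floor the timestamp to the start of its 5-second bin.
        floored = ts - timedelta(seconds=ts.second % bin_seconds,
                                 microseconds=ts.microsecond)
        bins.setdefault(floored, Counter())[sentiment] += 1
    return bins

# At ~450 tweets per minute, each 5-second bin holds roughly 35-40 tweets.
```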
Considering the type of information and the target
analysts, we were interested in discovering 4 specific
patterns. Fig. 2 depicts each of them: a) the stream
of information turning positive, which appears as
a decrease of the blue areas and an increase of the
orange ones; similarly, b) the information turning
negative, which appears as a decrease of the orange areas
and an increase of the blue ones. The last two patterns
correspond to the polarities moving in the same
direction: either c) turning neutral, with both valences
converging towards zero, or d) polarizing, with both
sentiment valences diverging towards opposite values.
Twenty-one participants from a convenience sample
took part in the experiment. The group
consisted of 9 males and 12 females, undergraduate
and graduate students, from an art and technology
school. The experiment followed a 3 × 3 mixed
design with animation as a between-subjects factor and
trial as a repeated measure. Participants were randomly
assigned to one of the three conditions and each of
them completed 3 trials. Each trial consisted of 10
minutes of data (2 real minutes of interaction, as we
sped up the animation since the purpose was
to distinguish patterns as soon as they appeared).
Each individual was asked to press a specific keyboard
key, as quickly as possible, as soon as she noticed
one of the patterns. The
measurement under analysis was “accuracy score”
contrasting the participants’ answers with a target
vector (a truth dataset).
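The paper does not spell out the exact scoring rule; one plausible form of such an accuracy score, counting a key press as correct when the right pattern is reported close enough to its ground-truth onset, is sketched below, with the tolerance and data layout being assumptions made for illustration.

```python
def accuracy_score(responses, truth, tolerance=2):
    """Fraction of ground-truth pattern onsets matched by a correct key press.

    responses, truth: lists of (time_step, pattern_code) tuples; a truth entry
    counts as a hit if the same pattern code was reported within `tolerance`
    time steps of its onset. This is an illustrative rule, not the study's.
    """
    hits = 0
    for t_true, p_true in truth:
        if any(p == p_true and abs(t - t_true) <= tolerance for t, p in responses):
            hits += 1
    return hits / len(truth) if truth else 0.0

# Example: two of three target patterns detected close to their onsets (~0.67).
print(accuracy_score([(10, "a"), (42, "c")], [(9, "a"), (30, "b"), (41, "c")]))
```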
Figure 2. Patterns of interest for the evaluation.
We hypothesized that the group using the
combined representation of stream areas and animated
blobs would achieve higher scores on finding patterns.
We also expected that for all the groups, there would
be a learning effect, described as higher scores over
time.
3.2.2. Results. We conducted a 3 × 3 repeated measures
MANOVA on the accuracy scores for
recognizing the 4 patterns within the target dataset.
Mean accuracy scores were low (accuracy < 18% of
correct answers). The between-subjects (group) effect
was not significant (p > 0.05). Wilks' Lambda did
not show significance for the time-by-group
(animation) interaction (p > 0.05). Finally, the time effect
was found significant, F(2,17) = 4.94, p = 0.02. Further
post-hoc contrasts revealed a significant
difference between trial 2 (T2) and trial 3 (T3),
F(1,18) = 9.05, p = 0.007, as well as a borderline-
significant difference between trial 1 (T1) and trial 3
(T3), F(1,18) = 3.40, p = 0.08, which is consistent with our
assumption of learning over time. Figure 3 summarizes
the mean scores of the groups for each trial.
We also conducted qualitative analysis of the
perceived difficulty of the task and the animation
reported by participants. Emerging topics from a
thematic analysis resulted in three categories of themes
related to either our hypotheses or to the experimental
design: 1) issues with learning the keyboard controls,
2) self-perception of improved accuracy, and 3)
attention strategies to cope with the task.
Figure 3. Mean scores per trial for the 3 different
animation groups.
Regarding the controls, participants commented on
the difficulty of mapping what they were seeing to the
key they were pressing. We were aware of this issue
during initial pilots; hence, we labeled the keyboard
keys with colored arrows indicating the patterns, as in
Fig. 2. However, none of the participants looked back
at the keys during the execution of the trials.
More importantly, the perception of self-accuracy
suggested to us that participants were confident in
their performance and that the disparity with the measured
scores could be explained by technical difficulties with
the controls. Participants reported their experience with
the patterns as “easy to learn” and 11 out of 21 made a
reference to self-improvement over time. Some of the
comments also referred to the ability to anticipate.
Finally, participants also made comments about the
strategies they used to solve the task or to improve
their performance over time. Results from the group
assigned to the blobs-only condition (Fig. 1.b) suggest
that the moving blobs operated as attention targets
that easily pop out. In contrast, responses from the
group assigned to the condition of stacked areas only
(Fig. 1.a) included references to distraction and
difficulty focusing on the task: “I had a hard time to
focus on finding the pattern because my attention was
distracted by the whole wave”. From the group
assigned to the combined, third, condition (Fig. 1.c) we
found comments such as: “In the later trials, I began to
use the visualization following the dots to identify the
trend. For example, if the dots appeared to be moving
downwards, I would quickly look at the visualization to
see if there was indeed an increasing amount of blue”.
These reports of analysis strategies are, for us, a
positive indicator that we should continue studying the use
of perceptual grouping of moving objects for
representing patterns of change over time in real-time
streamgraphs.
4. Limitations and Next Design Iterations
We have presented the first phase of our design
solution as an approach to investigate the effect of
animation on the representation of social media data
streams for emergency response.
From the visualization principles, the literature
review, and the practitioners' requirements, we identified
two additional requirements for a real-time
visualization design: 1) the inclusion of interactive
capabilities to explore the streaming information space,
and 2) the ability to show more context of the
information space while depicting the stream as one of
the perspectives.
Interactivity is well known to support cognitive
performance. This work is well documented in
Tversky and Morrison's survey [36]. We have
determined that the next design phase includes the
development of details-on-demand interactions;
moreover, we are also aware of the importance of the
representation of geo-localized data when available
(critical to emergency managers), and the compiled
overview of events from the beginning of the
exploration in order to represent potential periodic
patterns. Even though our initial intention was to
compensate for the lack of explicit attributes such as
geo-localization, common in Twitter data, with the
representation of other attributes, such as affective
content, recent developments in visual analytics and
social media analysis are making the extraction of geo-
location from Twitter content more precise and easier to
integrate into real-time visual analytics [23, 24].
One of the limitations of this study is the absence of
an evaluation with subject matter experts. The design cycle
started with fieldwork to collect requirements and
current practices of emergency management
practitioners; hence, next-phase evaluations should
include a qualitative and quantitative evaluation of the
interactive visualization with subject matter experts
using our Pair Analytics method [4].
5. Conclusions
We began this study by investigating how to make
better use of motion on screen (animation), considering
it a common strategy for representing information over
time. For the specific case of streaming data, animation
is frequently used to keep the flow of information
updated, even though there are no specific cognitively
supported design guidelines for using animation of time
along the X-axis for the discovery of patterns in the
data. We designed a streamgraph of ordinal categories
that represent sentiment values of Twitter posts. In
order to evaluate the benefit of animation for the
discovery of patterns within such affective categories,
we conducted a controlled experiment investigating the
effect of implementing the Gestalt principle of
Common Fate on a group of blobs moving along the
contours of the streamgraph curves.
This study is the first step in our investigations.
Even though it presents preliminary results, we find it
important to continue investigating whether animation,
implemented as a group of objects moving along the
contours of the curves, acts as a pre-attentive processing
target that can be easily perceived and that can
increase accuracy in identifying patterns formed by
comparing the different sentiment categories
presented in the stream.
We highlight how a cognition-research approach
can be adopted for design research in visual analytics
of social media, something that we found lacking in
current developments. For example, Krstajic et al. [21]
have presented a promising set of requirements to
visualize real-time streams, but the transitions over time
and the decay functions used to update information have not
been evaluated for their effectiveness in supporting the
perception of changes in their stream of news categories.
Our design study addressed three domain-specific
requirements to support the analysis of social media
streams during emergency situations. First, our
information visualization, inspired by ThemeRiver,
addressed the information deluge challenge during
disaster response that prevents emergency practitioners
from manually analyzing social media streams. It targeted
an analytical need considered critical by the emergency
management practitioners, namely that of identifying
levels of distress and emotional state of citizens during
crises or disasters, by providing a more effective way
to skim and filter large amounts of tweets. Second, our
experimental approach tested three versions of
animated time series of streaming data to address
another domain-specific requirement: the practitioners'
need to quickly extract relevant features for sense-
making. Third, our design used the affective content
dimension in order to make sense of large amounts of
data that lack complete geo-location
attributes, a common occurrence in Twitter data. This
satisfies the requirement of supporting an effective
analysis of social media streams even when some
critical attributes are absent. Although the literature
revealed other approaches to cope with similar
situations of incompleteness [2], such work has been
developed using multiple validations and queries to the
information source after analyzing the raw data. This is
a constraint when dealing with real-time streams.
Future work aims to improve on the current version
of the visualization by introducing interactions, by
providing an overview of items for which location can be
extracted or approximated, and by supporting every
improvement with cognitively supported evaluations that
leverage human cognitive and reasoning strengths.
6. References
[1] W. Aigner, S. Miksch, W. Muller, H. Schumann, and C.
Tominski, “Visual Methods for Analyzing Time-Oriented
Data” IEEE Transactions on Visualization and Computer
Graphics, vol. 14, no. 1, pp. 47–60, 2008.
[2] C. Albrecht-Buehler, B.Watson, and D.A. Shamma,
“Visualizing live text streams using motion and temporal
pooling” IEEE Computer Graphics and Applications, vol. 25,
no. 3, pp. 52–59, Jun. 2005.
[3] R. Arias-Hernandez and B. Fisher, “A Qualitative
Methodology for the Design of Visual Analytic Tools for
Emergency Operation Centers” Proc. HICSS 46, Maui, HI,
pp. 126-135, 2013.
[4] R. Arias-Hernandez, L. T. Kaastra, and B. Fisher, “Joint
Action Theory and Pair Analytics: In-vivo Studies of
Cognition and Social Interaction in Collaborative Visual
Analytics”, In Proc. 33rd Annual Conference of the
Cognitive Science Society. Austin TX, 2011.
[5] L. Bartram. “Enhancing Visualizations with Motion” In
Hot Topics: Information Visualization 1998, May 1998.
[6] C.A. Brewer, “Color Brewer” in
http://www.ColorBrewer.org. 2012.
[7] L. Byron and M. Wattenberg, “Stacked Graphs –
Geometry & Aesthetics” IEEE Transactions on
Visualization and Computer Graphics, vol. 14, no. 6, pp.
1245–1252, Dec. 2008.
[8] J. Chae, D. Thom, H. Bosch, Y. Jang, R. Maciejewski, D.
S. Ebert, and T. Ertl, “Spatiotemporal social media analytics
for abnormal event detection and examination using
seasonal-trend decomposition,” in Visual Analytics Science
and Technology (VAST), 2012, pp. 143–152.
[9] Canadian Red Cross, “Canadians tapped into social
networks, expect emergency responders to use social media”
in: http://www.redcross.ca/article.asp?id=44311&tid=001,
2012
[10] P. Diaz, I. Aedo, R. Arias-Hernandez, and D. Diaz,
“Towards Emergency 2.0: Social Media and Civil
Engagement in Emergency Management” Proc. 10th
International Conference on the Design of Cooperative
Systems (May 30 - June 1, Marseille, France), 2012.
[11] M. Dörk, D. Gruen, C. Williamson, and S. Carpendale,
A “Visual Backchannel for Large-Scale Events” IEEE Trans.
on Visualization and Computer Graphics, vol. 16, no. 6,
pp.1129-1138, 2010.
[12] D. Duan, W. Qian, S. Pan, L. Shi, and C. Lin, “VISA: a
visual sentiment analysis system,” in Proceedings of the 5th
International Symposium on Visual Information
Communication and Interaction, NY, USA, 2012.
[13] J.D. Fekete, J.J. van Wijk, J.T. Stasko, and C. North,
“The Value of Information Visualization” Information
Visualization: Human-Centered Issues and Perspectives, A.
Kerren, J.T. Stasko, J.D. Fekete, and C. North, eds., Berlin:
Springer, pp.1-18, 2008.
[14] A.U. Frank, “Different Types of Times in GIS, Spatial
and Temporal Reasoning in Geographic Information
Systems” M.J. Egenhofer and R.G. Golledge, eds., Oxford
Univ. Press, 1998.
[15] T. Green, W. Ribarsky and B. Fisher. “Visual analytics
for complex concepts using a human cognition model.” In
Visual Analytics Science and Technology (VAST), October,
2008.
[16] M.L. Gregory, D. Payne, D. Mccolgin, N. Cramer, and
D. Love, “Visual Analysis of Weblog Content” Proc. of
ICWSM2007 (Boulder, Colorado, USA), 2007.
[17] M. Harrower, “The Cognitive Limits of Animated
Maps,” Cartographica: The International Journal for
Geographic Information and Geovisualization, vol. 42, no. 4,
pp. 349–357, Jan. 2007.
[18] S. Havre, B. Hetzler, and L. Nowell, “ThemeRiver:
Visualizing Theme Changes over Time” Proc. of the IEEE
Symposium on Information Visualization, Washington, DC,
USA, 2000.
[19] M.E. Keim and E. Noji, “Emergent use of social media:
A new age of opportunity for disaster resilience” American
Journal of Disaster Medicine, vol. 6, no. 1, pp. 47-54, 2011
[20] D.A. Keim, H.P. Kriegel, and M. Ankerst, “Recursive
pattern: a technique for visualizing very large amounts of
data” Proc. IEEE Conference on Visualization, 1995.
[21] M. Krstajić, E. Bertini, F. Mansmann, and D. A. Keim,
“Visual analysis of news streams with article threads” in
Proceedings of the First International Workshop on Novel
Data Stream Pattern Mining Techniques, New York, NY,
USA, 2010.
[22] S. Kumar, G. Barbier, M.A. Abbasi, and H. Liu,
“TweetTracker: An Analysis Tool for Humanitarian and
Disaster Relief” Proceedings of the 5th International AAAI
Conference on Weblogs and Social Media, 2012.
[23] A.M. MacEachren, A. Jaiswal, A.C. Robinson, S.
Pezanowski, A. Savelyev, P. Mitra, X. Zhang, and J.
Blanford, “SensePlace2: GeoTwitter analytics support for
situational awareness” In Visual Analytics Science and
Technology (VAST). Providence, RI, 2011.
[24] S. McClendon and A.C. Robinson, “Leveraging
Geospatially-Oriented Social Media Communications in
Disaster Response” Proc. 9th ISCRAM Conference. 2012.
[25] A. Nagy and J. Stamberger, “Crowd Sentiment
Detection during Disasters and Crises” Proceedings of the
9th ISCRAM Conference, Vancouver, Canada, 2012.
[26] Pew Research, “Hurricane Sandy and Twitter” in:
http://www.journalism.org/index_report/hurricane_sandy_an
d_twitter, 2012.
[27] G. Robertson, R. Fernandez, D. Fisher, B. Lee, and J.
Stasko, “Effectiveness of Animation in Trend Visualization”
IEEE Transactions on Visualization and Computer Graphics,
vol. 14, no. 6, pp. 1325–1332, Dec. 2008.
[28] H. Rosling, “The Gapminder” in
http://www.gapminder.org. 2012.
[29] S. Sirisack and A. Grimvall, “Visual Detection of
Change Points and Trends Using Animated Bubble Charts”
In Environmental Monitoring, E. Ekundayo, Ed. InTech,
2011.
[30] D.H. Slater, N. Keiko, and L. Kindstrand, “Social
Media in Disaster Japan” Natural Disaster and Nuclear Crisis
in Japan, J. Kingston, ed., New York: Routledge, pp. 94-108,
2012.
[31] T. Schreck, and D. Keim, “Visual Analytics of Social
Media Data” Computer, vol. 46, no. 5, pp. 68–75, May 2013.
[32] K. Starbird and J. Stamberger, “Tweak the Tweet:
Leveraging Microblogging Proliferation with a Prescriptive
Syntax to Support Citizen Reporting” Proceedings of the 7th
ISCRAM Conference, Seattle Washington USA, 2010.
[33] P. Sung-Yueh, M. Büscher, R. Halvorsrud, L. Wood, M.
Stiso, L. Ramirez, “Peripheral response: Microblogging
during the 22/7/2011 Norway attacks” Proceedings of the 9th
ISCRAM Conference, Vancouver, Canada. 2012.
[34] T. Terpstra, A. de Vries, R. Stronkman, and G. Paradies,
“Towards a realtime Twitter analysis during crises for
operational crisis management” Proceedings of the 9th
ISCRAM Conference, Vancouver, Canada. 2012.
[35] M. Thelwall, K. Buckley, and G. Paltoglou, “Sentiment
strength detection for the social web” Journal of the
American Society for Information Science and Technology,
vol. 63, no. 1, pp. 163–173, Jan. 2012.
[36] B. Tversky, J. B. Morrison, and M. Betrancourt,
“Animation: can it facilitate?,” International Journal of
Human-Computer Studies, vol. 57, no. 4, pp. 247–262, Oct.
2002.
[37] S. Vieweg, L. Palen, S.B. Liu, A. Hughes and J. Sutton,
“Collective Intelligence in Disaster: Examination of the
Phenomenon in the Aftermath of the 2007 Virginia Tech
Shooting” Proc. 5th ISCRAM Conference, Washington, DC,
USA. 2008.
[38] S.Vieweg, L.Palen, S.B.Liu, A.Hughes and J.Sutton,
“Backchannels on the Front Lines: Emergent Uses of Social
Media in the 2007 Southern California Wildfires” Proc. 5th
ISCRAM Conference, Washington, DC, USA. 2008.
[39] C. Ware, “Visual Thinking: for Design” 1st ed. Morgan
Kaufmann, 2008.
[40] M. Weber, M. Alexa, and W. Muller, “Visualizing
Time-Series on Spirals” Proc. IEEE Information
Visualization (Oct. 7-14, 2001), pp. 7-14, 2001
1373
... effectiveness, usability, etc.). Second, SP3 is quite complex and requires the use of multiple interactive components in close coordination with each other, which might make the system challenging to learn and master and cause an increase in the analysts' workload both known challenges for system adoption in geovisual analytics (Calderon, Arias-Hernandez, & Fisher, 2014;Thom & Ertl, 2015). The design of our proof-of-concept user study accommodates both of these properties, with specific methodological decisions informed by our methodological 'check list'. ...
... This focused review is based on the descriptions of 16 user studies of actual geovisual analytics tools for social media analysis collected across 14 papers (two of the papers contain two studies each). Thirteen of these studies involve human subjects and are summarized below (Brooks, Robinson, Torkildson, & Aragon, 2014;Calderon et al., 2014;Diakopoulos, De Choudhury, & Naaman, 2012;Diakopoulos, Naaman, Yazdani, & Kivran-Swaine, 2011;Francalanci & Hussain, 2015;Ghani, Kwon, Lee, Yi, & Elmqvist, 2013;Gomez, Guo, Ziemkiewicz, & Laidlaw, 2014;Krstajic, Najm-Araghi, Mansmann, & Keim, 2013;Lu et al., 2014;Malik et al., 2013;Thom & Ertl, 2015;Zhao, Gou, Wang, & Zhou, 2014). The remaining three are case studies (Chae et al., 2012;Kraft et al., 2013;Krstajic et al., 2013) that do not have much detail concerning user study design but are instead referred to later in the paper as part of discussion on user study data analysis methods. ...
... We found further supporting evidence for this approach in a study by Thom and Ertl (2015), who documented that study participants are likely to either limit themselves to working with a subset of all available tools or will simply forget how certain tools are meant to be used soon after the introduction is over, unless an alternative approach to participant instruction is implemented. Some studies take a proactive approach and attempt to actively design the system to be less complex in nature and more accessible to study participants without specialized training (Calderon et al., 2014;Diakopoulos et al., 2012;Ghani et al., 2013). It is, however, unclear if this option is feasible when dealing with complex or uncommon metaphors for data analysis and visualization. ...
Article
This paper advances the state-of-the-art in methodology design for empirical evaluation of (geo)visual analytics software. Specifically, we describe the process of design, development and application of a prototypical user study tailored to the evaluation of complex geovisual analytics tools that focus on social media analysis. We fist perform a synthesis of existing theory and best practices for software evaluation of comparable systems. We then demonstrate how the product of said synthesis – a methodological ‘check list’ – can be used to inform a proof-of-concept user study of an actual geovisual analytics software system. The resulting user study design accommodates for the use of real geographic social media datasets, the complexity of the intended analytical process, and for the learning challenges faced by the participants working with a fully-functional and mature geovisual analytics application, and is likely representative of a wide range of evaluation scenarios in (geo)visual analytics. A complete summary of all the study instruments is included to encourage their scrutiny, reuse and modification by others. Finally, we have discovered that participants’ curiosity and desire for autonomy played a noticeable role in the evaluation process – something not previously reported.
... Even though this framework was developed based on the identified gaps in the published previous research, it needs to be extended to handle more data types and provide a detailed analysis of patterns derived from the analysis of these data types. An important number of real-time visual analytics solutions were dedicated to text and social media which increased significantly in the recent years [41], [16], [76], [75] [31], [13]. Geospatial data comes with many applications of real time visual analytics [99], [47], [80], [23], [54]. ...
Article
This paper is focused on real-time visual analytics for home-based rehabilitation dedicated for brain stroke survivors. This research is at the intersection of three main domains: visual analytics for time-oriented data and dynamic visual analytics with specific focus on data analytics for rehabilitation systems. This study has emphasized the analysis of the most important research works in these domains. The studies included in this review are published between January 2008 and December 2020 that met eligibility criteria. From 243 papers retrieved from research including the Google Scholar database and manual research, 69 papers were finally included. This paper presents a classification of the reviewed research based on key features required by the visual analytics for real-time monitoring of patients. The findings suggested that real-time monitoring visual analytics for biodata captured during the rehabilitation sessions was not sufficiently addressed by previous research. To provide real-time monitoring visual analytics of biodata, the concept of a unified framework that combines the processing of batch and stream data in a distributed architecture is proposed. The system is currently under development; its validation will be carried out by an experimental study and the evaluation of the system performance.
... Leveraging developments in information and communication technology, information can be delivered through media in the form of animated explainer videos. The purpose of using animation media is to maximize the visual effect, increasing understanding of the information conveyed and making it possible to explain something complex through pictures and words [8]. ...
... Practitioners appreciate reporting and visualization mechanisms that present insights from social media data in a cohesive and understandable format tailored to the requirements of decision-making and subsequent actions [38]. Visual analytics, situation reports, and collaborative map displays are the preferred features to summarize findings from social media and other digital services in practice [12]. ...
Chapter
The benefits of using social media data as a source of information are recognized by both practice and research in crisis management. However, the existing understanding of the matter is fragmented: it oscillates between techno-determinism and socio-determinism, which does not provide a holistic picture. In this paper we argue that, to better adapt social media data use practices, an ecosystem perspective is needed. To that end, we conducted a systematic literature review and identified the various entities, and their interrelationships, that configure the practices of social media listening for crisis management. We then summarize our findings by proposing a conceptual ecosystem of practice. Finally, we suggest its implications for future research and practice.
... Social media data has been used to warn individuals of emergency information and to raise disaster relief funds for humanitarian organizations and governments in disasters including Hurricane Irene (Mandel et al. 2012), the Genoa flooding (Buscaldi and Hernández-Farias 2015), and the Ebola outbreak (Odlum and Yoon 2015). These studies successfully assisted humanitarian organizations in tracking, analyzing, and monitoring social media data related to disaster response (Calderon et al. 2014). In addition, Wang and Zhuang (2018) analyzed the rumor awareness and response behaviors of Twitter users during natural disasters. ...
Article
Full-text available
Social media has become an essential channel for posting disaster-related information, which provides governments and relief agencies with real-time data for better disaster management. However, research in this field has not received sufficient attention, and extracting useful information is still challenging. This paper aims to improve disaster relief efficiency by mining and analyzing social media data, such as public attitudes toward disaster response and public demands for targeted relief supplies, during different types of disasters. We focus on different natural disasters based on properties such as type, duration, and damage, covering a total of 41,993 tweets. Public perception is assessed qualitatively through manually classified tweets, which contain information such as the demand for targeted relief supplies, satisfaction with disaster response, and public fear. Public attitudes toward natural disasters are studied via a quantitative analysis using eight machine learning models. To better provide decision-makers with an appropriate model, the machine learning models are compared on computational time and prediction accuracy. We also study the change in public opinion during different natural disasters and the evolution of people’s use of social media for disaster relief, for the same type of natural disaster, as Twitter continues to evolve. The results demonstrate the feasibility and validity of the proposed research approach and provide relief agencies with insights for better disaster management.
... In addition, social media data has been used to warn individuals of emergency information, to update first responders on victims’ requirements, and to raise disaster relief funds for humanitarian organizations and governments in disasters including Hurricane Irene (Mandel et al. 2012), the Genoa flooding (Buscaldi and Hernández-Farias 2015), and the Ebola outbreak (Odlum and Yoon 2015). These studies successfully helped humanitarian organizations to track, analyze, and monitor social media data related to disaster response (Calderon et al. 2014). ...
Preprint
Full-text available
With the development of the Internet, social media has become an essential channel for posting disaster-related information. Analyzing the attitudes hidden in these texts, known as sentiment analysis, is crucial for governments and relief agencies seeking to improve disaster response efficiency, but it has not received sufficient attention. This paper aims to fill this gap by investigating public attitudes toward disaster response and analyzing targeted relief supplies during disaster relief. The research comprises four steps. First, this paper uses Python to collect Twitter data, and public perception is then assessed quantitatively from these opinionated texts, which contain information such as the demand for targeted relief supplies, satisfaction with disaster response, and public fear. A natural-disaster dataset with sentiment labels is created, containing 49,816 tweets about natural disasters in the United States. Second, this paper proposes eight machine learning models for sentiment prediction, drawn from the most popular models used in classification problems. Third, these models are compared via various metrics, and the paper also discusses how to optimize them with respect to model parameters and input data structures. Finally, a set of real-world instances is studied, analyzing changes in public opinion during different natural disasters and the relationship between the same hazard and its time series. Results demonstrate the feasibility and validity of the proposed research approach and provide relief agencies with insights into better disaster response.
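The abstract above describes comparing several machine learning models for tweet sentiment prediction by accuracy and computation time. The sketch below is a minimal, hypothetical illustration of that kind of comparison using scikit-learn; the tiny inline dataset, the three models chosen, and the split parameters are assumptions for illustration, not the paper’s actual data or model set.

```python
# A minimal sketch, assuming a labelled tweet dataset, of comparing several
# classifiers on sentiment prediction by accuracy and training time.
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Purely illustrative data; the study used tens of thousands of real tweets.
tweets = ["send water and food to the shelter now",
          "grateful for the fast response by rescue teams",
          "no power for three days, nobody is helping us",
          "donation drive for flood victims starts tomorrow"] * 10
labels = ["negative", "positive", "negative", "positive"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.25, random_state=0)

vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

models = {"naive_bayes": MultinomialNB(),
          "logistic_regression": LogisticRegression(max_iter=1000),
          "linear_svm": LinearSVC()}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_train_vec, y_train)              # training time is one comparison axis
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, model.predict(X_test_vec))   # accuracy is the other
    print(f"{name}: accuracy={acc:.2f}, train_time={elapsed:.4f}s")
```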
Book
This volume constitutes the proceedings of the 20th IFIP WG 6.11 Conference on e-Business, e-Services, and e-Society, I3E 2021, held in Galway, Ireland, in September 2021.* A total of 57 full and 8 short papers presented in these volumes were carefully reviewed and selected from 141 submissions. The papers are organized in the following topical sections: AI for Digital Transformation and Public Good; AI & Analytics Decision Making; AI Philosophy, Ethics & Governance; Privacy & Transparency in a Digitized Society; Digital Enabled Sustainable Organizations and Societies; Digital Technologies and Organizational Capabilities; Digitized Supply Chains; Customer Behavior and E-business; Blockchain; Information Systems Development; Social Media & Analytics; and Teaching & Learning. *The conference was held virtually due to the COVID-19 pandemic.
Article
Full-text available
From the outbreak of a novel COronaVIrus Disease (COVID-19) in Wuhan to the first COVID-19 case in the Philippines, Filipinos have been enthusiastically engaging on Twitter to convey their sentiments. This paper aims to identify the public opinion of Filipino Twitter users concerning COVID-19 across three different timelines. Toward this goal, a total of 65,396 tweets related to COVID-19 were analyzed using the R statistical software. Results show that “mask”, “health”, “lockdown”, “outbreak”, “test”, “kit”, “university”, “alcohol”, and “suspension” were some of the most frequently occurring words in the tweets. The study further investigates Filipinos’ emotions regarding COVID-19 by calculating the text polarity of the dataset. To date, this is the first paper to perform sentiment analysis on tweets pertaining to COVID-19, not only in the Filipino context but worldwide as well.
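The study above works in R; purely as a hedged illustration of the two analyses it reports (word frequencies and a simple text-polarity score), here is a small Python sketch using only the standard library. The stop-word list, the tiny polarity lexicon, and the sample tweets are invented for the example and do not reflect the study’s data or lexicon.

```python
# A minimal sketch of word-frequency counting and naive polarity scoring over
# tweets. The word lists below are illustrative assumptions only; the original
# study used R and a proper sentiment lexicon.
import re
from collections import Counter

tweets = [
    "Mask supply is low, health workers need more test kits",
    "Lockdown extended again, university classes suspended",
    "Grateful for the quick outbreak response in our city",
]

STOPWORDS = {"is", "the", "in", "for", "our", "more", "again"}
POLARITY = {"grateful": 1, "quick": 1, "low": -1, "suspended": -1}  # toy lexicon

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

# Word frequencies across all tweets, with stop words removed.
counts = Counter(w for t in tweets for w in tokenize(t) if w not in STOPWORDS)
print(counts.most_common(5))

# Naive polarity: sum of lexicon scores per tweet.
for t in tweets:
    score = sum(POLARITY.get(w, 0) for w in tokenize(t))
    print(f"{score:+d}  {t}")
```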
Article
Full-text available
The Emergency Management Information System (EMIS) field has an established tradition of user-centered methodological approaches for design and evaluation research. However, visual analytics, a new field that is starting to intersect with EMIS, is barely using such approaches. Thus an opportunity has emerged to expand these user-centered approaches from EMIS towards visual analytics via the design of visual analytics tools for emergency management. In this article, the authors present a qualitative methodology for design research that takes on this opportunity. This specific methodology is characterized by using non-participant observation and interviews as methods and by being theoretically informed by the multidisciplinary framework of visual analytics. The authors also include a detailed application of the methodology to the design of visual analytic tools for Emergency Operation Centers in Vancouver, Canada as well as the corresponding results: contextual knowledge for design, informed requirements for four design projects and evaluation criteria for these designs.
Article
Full-text available
Microblogs are an opportunity for scavenging critical information such as sentiment. This information can be used to rapidly detect the sentiment of the crowd towards crises or disasters. It can serve as an effective tool to inform humanitarian efforts and to improve the ways in which informative messages are crafted for the crowd regarding an event. The unique characteristics of microblog Tweets (lack of context, use of jargon, etc.) shared on a message-sharing social network during disaster response require special handling to identify sentiment. We present a systematic evaluation of approaches to accurately and precisely identify sentiment in these Tweets. This paper describes sentiment detection in 3,698 Tweets collected during the September 2010 San Bruno, California gas explosion and resulting fires. The collected data was manually coded to benchmark our techniques. We start by using a library of words with annotated sentiment, SentiWordNet 3.0, to detect the basic sentiment of each Tweet. We complemented that technique by adding a comprehensive list of emoticons, a sentiment-based dictionary, and a list of out-of-vocabulary words that are popular in brief online text communications, such as lol, wow, etc. Our technique performed 27% better than Bayesian networks alone, and the combination of Bayesian networks with annotated lists provided marginal improvements in sentiment detection over various combinations of lists.
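The abstract above combines SentiWordNet scores with emoticon and slang lists. Below is a minimal sketch of that general lexicon-combination idea in Python using NLTK’s SentiWordNet interface; the emoticon and slang scores, the simple whitespace tokenization, and the scoring rule are assumptions for illustration, not the authors’ actual technique or thresholds.

```python
# A minimal sketch of lexicon-based tweet sentiment scoring that combines
# SentiWordNet with small emoticon and slang lists (illustrative values only).
import nltk
nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)
from nltk.corpus import sentiwordnet as swn

EMOTICONS = {":)": 1.0, ":-)": 1.0, ":(": -1.0, ":-(": -1.0}   # assumed scores
SLANG = {"lol": 0.5, "wow": 0.5, "omg": -0.2}                  # assumed scores

def word_score(word: str) -> float:
    """Average positive-minus-negative SentiWordNet score over all synsets."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

def tweet_sentiment(text: str) -> str:
    tokens = text.lower().split()   # naive whitespace tokenization keeps emoticons intact
    score = sum(EMOTICONS.get(t, SLANG.get(t, word_score(t))) for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tweet_sentiment("huge fire near the station :( stay away"))
print(tweet_sentiment("firefighters did an amazing job thank you :)"))
```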
Article
Full-text available
Herbert H. Clark's Joint Action Theory (JAT) has been groundbreaking for understanding the social and cognitive mechanisms that allow people to effectively coordinate joint actions in conversational, face-to-face settings. Using a method we call "Pair Analytics," we have extended the application of JAT to the study of analytical reasoning in computer-mediated, human-to-human interactions. Pair analytics (PA) sets up a naturalistic scenario in which the social and cognitive roles of human-human and human-computer interactions can be studied. In this paper, we support the claim that coupling JAT and PA is an effective research strategy to capture and study three socio-cognitive phenomena in collaborative visual analytics: (1) structuring and navigation of joint analysis; (2) management of joint attention; and (3) signaling of cognitively demanding tasks.
Article
Full-text available
Geospatially-oriented social media communications have emerged as a common information resource to support crisis management. Our research compares the capabilities of two popular systems used to collect and visualize such information: Project EPIC's Tweak the Tweet (TtT) and Ushahidi. It uses geospatially-oriented social media gathered by both projects during recent disasters to compare and contrast the frequency, content, and location components of the information contributed to each system. We compare how data was gathered and filtered, how spatial information was extracted and mapped, and the mechanisms by which the resulting synthesized information was shared with response and recovery organizations. In addition, we categorize the degree to which each platform, in each disaster, led to actions by first responders and emergency managers. Based on the results of our comparisons, we identify key design considerations for future social media mapping tools to support crisis management.
Conference Paper
Full-text available
In this article, we present a qualitative methodology for design research in the domain of visual analytics for emergency management. This specific methodology is characterized by using non-participant observation and interviews as methods and by being theoretically informed by the multidisciplinary framework of visual analytics. The methodology collects data about: (1) workflow, (2) informational environment, and (3) information practices and flows. Our main argument is that this methodology addresses a current gap in visual analytics, which lacks user-centered methodological approaches for design and evaluation research.
Conference Paper
Full-text available
Recent advances in technology have enabled social media services to support space-time indexed data, and internet users from all over the world have created a large volume of time-stamped, geo-located data. Such spatiotemporal data has immense value for increasing situational awareness of local events, providing insights for investigations and understanding the extent of incidents, their severity, and consequences, as well as their time-evolving nature. In analyzing social media data, researchers have mainly focused on finding temporal trends according to volume-based importance. Hence, a relatively small volume of relevant messages may easily be obscured by a huge data set indicating normal situations. In this paper, we present a visual analytics approach that provides users with scalable and interactive social media data analysis and visualization including the exploration and examination of abnormal topics and events within various social media data sources, such as Twitter, Flickr and YouTube. In order to find and understand abnormal events, the analyst can first extract major topics from a set of selected messages and rank them probabilistically using Latent Dirichlet Allocation. He can then apply seasonal trend decomposition together with traditional control chart methods to find unusual peaks and outliers within topic time series. Our case studies show that situational awareness can be improved by incorporating the anomaly and trend examination techniques into a highly interactive visual analysis process.
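The abstract above outlines a concrete pipeline: extract topics with Latent Dirichlet Allocation, build topic time series, then apply seasonal-trend decomposition and control-chart rules to flag unusual peaks. The sketch below is a hedged, minimal Python approximation of that pipeline using scikit-learn and statsmodels; the toy messages, the six-hour binning, the seasonal period, and the 3-sigma rule are assumptions for illustration and do not reproduce the authors' system.

```python
# A minimal sketch of the pipeline described above: LDA topic extraction,
# per-topic time series, then seasonal-trend decomposition with a simple
# 3-sigma control-chart rule to flag unusual peaks. Toy data only.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from statsmodels.tsa.seasonal import seasonal_decompose

messages = ["road closed after the flood", "flood water rising downtown",
            "concert tickets on sale", "great show at the arena tonight",
            "shelter open for flood victims", "traffic jam near the bridge"] * 20

# 1. Topic extraction with LDA.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)            # per-message topic weights

# 2. Volume of one topic over time (hypothetical hourly timestamps, 6-hour bins).
timestamps = pd.date_range("2013-01-01", periods=len(messages), freq="h")
topic_series = pd.Series(doc_topics[:, 0], index=timestamps).resample("6h").sum()

# 3. Seasonal-trend decomposition plus a simple control-chart rule on residuals.
decomp = seasonal_decompose(topic_series, model="additive", period=4)
resid = decomp.resid.dropna()
threshold = 3 * resid.std()
anomalies = resid[resid.abs() > threshold]
print("Flagged time bins:", list(anomalies.index))
```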
Article
Sentiment plays a critical role in many information-centric business scenarios. The opinion mining methods proposed in the past decade have formed a solid foundation for sentiment analysis tasks, but they are often too complicated and scattered to serve the needs of real customers. In this paper we introduce the VISA system, which applies visualization technology to synthesize sentiment analysis results and present them to the end user in an interactive manner. VISA builds on a generic sentiment-tuple-based data model and consumes the different facets of sentiment data through coordinated multiple views, and is hence scalable enough to work with most existing sentiment analysis engines across application domains. We showcase the usage of VISA in a real-world example and demonstrate the system's effectiveness through a user trial of finding an appropriate hotel for a family trip.
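The VISA abstract refers to a generic sentiment-tuple data model that coordinated views can facet, but the schema is not given above. The sketch below therefore assumes a tuple along the lines of the common opinion-mining quintuple (entity, aspect, polarity, holder, time) and shows one possible facet, mean polarity per aspect; the field names, records, and helper function are hypothetical, not VISA's actual schema or API.

```python
# A hypothetical sketch of a sentiment-tuple data model and simple faceting,
# assuming a quintuple of (entity, aspect, polarity, holder, time).
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class SentimentTuple:
    entity: str      # e.g. a hotel being reviewed
    aspect: str      # e.g. "cleanliness", "location"
    polarity: float  # -1.0 (negative) .. +1.0 (positive)
    holder: str      # who expressed the opinion
    time: datetime

def facet_by_aspect(tuples: List[SentimentTuple]) -> Dict[str, float]:
    """One facet a coordinated view could show: mean polarity per aspect."""
    buckets: Dict[str, List[float]] = defaultdict(list)
    for t in tuples:
        buckets[t.aspect].append(t.polarity)
    return {aspect: sum(v) / len(v) for aspect, v in buckets.items()}

data = [
    SentimentTuple("Hotel A", "location", 0.8, "user1", datetime(2014, 6, 1)),
    SentimentTuple("Hotel A", "cleanliness", -0.4, "user2", datetime(2014, 6, 2)),
    SentimentTuple("Hotel A", "location", 0.5, "user3", datetime(2014, 6, 3)),
]
print(facet_by_aspect(data))   # {'location': 0.65, 'cleanliness': -0.4}
```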
Article
The application of visual analytics, which combines the advantages of computational knowledge discovery and interactive visualization, to social media data highlights the many benefits of this integrated approach. The Web extra at http://youtu.be/nhoq71gqyXE is a video demonstrating a prototype system for visual-interactive analysis of large georeferenced microblog datasets, describing the design of the system, and detailing its application to the VAST 2011 Challenge dataset. The dataset models an epidemic outbreak in a fictitious metropolitan area. The video shows how the system can detect the epidemic and analyze its development over time. The system was implemented by Juri Buchmueller, Fabian Maass, Stephan Sellien, Florian Stoffel, and Matthias Zieker at the University of Konstanz (they also produced this video). Further information on the system and the VAST challenge dataset can be found in E. Bertini et al., "Visual Analytics of Terrorist Activities Related to Epidemics," Proc. IEEE Conf. Visual Analytics Science and Technology (VAST 11), IEEE CS, pp. 329-330, 2011.