When David Meets Goliath: Combining Smartwatches with
a Large Vertical Display for Visual Data Exploration
Tom Horak,1 Sriram Karthik Badam,2 Niklas Elmqvist,2 and Raimund Dachselt1
1 Interactive Media Lab, Technische Universität Dresden, Germany
2 University of Maryland, College Park, MD, USA
horakt@acm.org, sbadam@umd.edu, elm@umd.edu, dachselt@acm.org
The first two authors contributed equally to this work.
Figure 1. Visual data analysis using large displays and smartwatches together. Cross-device interaction workflows discussed in our conceptual frame-
work allow for an interplay between these two types of devices. For instance, multiple analysts can extract data from views on a large display (left) to
their smartwatches (middle) and compare the data on other visualizations distributed over the large display by physical movements followed by direct
touch (right) or remote interaction. This pull/preview/push interaction metaphor can be extended to many visualization tasks. The watch enhances the
large display by acting as a user-specific storage, a mediator, and a remote control, and further aids multiple users working in concert or by themselves.
ABSTRACT
We explore the combination of smartwatches and a large in-
teractive display to support visual data analysis. These two
extremes of interactive surfaces are increasingly popular, but
feature different characteristics—display and input modalities,
personal/public use, performance, and portability. In this pa-
per, we first identify possible roles for both devices and the
interplay between them through an example scenario. We then
propose a conceptual framework to enable analysts to explore
data items, track interaction histories, and alter visualization
configurations through mechanisms using both devices in com-
bination. We validate an implementation of our framework
through a formative evaluation and a user study. The results
show that this device combination, compared to just a large
display, allows users to develop complex insights more fluidly
by leveraging the roles of the two devices. Finally, we report
on the interaction patterns and interplay between the devices
for visual exploration as observed during our study.
Author Keywords
Cross-device interaction; visual analysis; data exploration;
multi-display environment; large display; smartwatch.
ACM Classification Keywords
H.5.2. Information Interfaces and Presentation: UI.
INTRODUCTION
Large interactive displays are increasingly being used for data
exploration due to increased availability and exciting new
possibilities for both user interaction and information visu-
alization. Such displays can show more information than
traditional displays by enlarging, combining, or coordinating
multiple visualization views [1, 26], incorporating physical
navigation [3, 7, 34], and supporting multiple users working
at the same time [3, 34]. However, for all their advantages,
large displays also yield new challenges. Tools and menus
can clutter the interface and obscure important information
as well as be out of reach for a user, thus forcing physical movement that can lead to fatigue. Furthermore, parallel
exploration by multiple users requires personalized visualiza-
tions and interactions, while avoiding conflicts and supporting
coordination among users [52]. Finally, this is all exacerbated
by the complex nature of visual sensemaking tasks [13, 58].
Given these challenges, we propose to utilize personal de-
vices in combination with a large display to support the users’
tasks during sensemaking. While this general approach has
been studied in the past [19, 33, 51, 53], we focus here on
smartwatches because they feature multiple advantages over
traditional hand-held devices. Beyond being lightweight and
non-intrusive, their key advantage is that they are wearable.
This not only frees the user’s hands to interact with the large
display, it also provides anytime access without the need for
persistent hand-held usage while leveraging proprioception
for eyes-free, on-body interaction [2, 47]. This characteristic
also applies the other way around: established and familiar-
ized workflows on the large display are in no way affected;
instead the smartwatch offers the possibility to enhance these
workflows in an unobtrusive way. Given these advantages,
their combination with large displays is compelling, yet this
idea has so far not been explored in the literature.
In this paper, we combine smartwatches with large displays
to allow the watch to serve as a personalized analysis tool-
box. In this function, the watch supports the multivariate data
exploration on a large display interface containing multiple
views (cf. coordinated and multiple views [48]). The devices
represent two extremes—like David and Goliath—of interac-
tive surfaces in many ways (e.g., small vs. large, private vs.
public, mobile vs. stationary), which yields several fundamen-
tal design challenges for their combination. To tackle these,
we first derive the basic roles of the two devices by drawing
on the literature as well as an example data analysis scenario.
Based on these considerations, we propose a conceptual frame-
work defining the specific interplay between the smartwatch
and the large display for a single-user. Within this framework,
users can interact with the large display alone, and also benefit
from the watch as a container to store and preview content
of interest from the visualizations, and manipulate view con-
figurations (Figure 1). While collaboration is not explicitly
considered yet (cf. group awareness [22], communication [40],
and coordination [42]), the concepts allow for simultaneous
(parallel) work from multiple users during visual exploration.
We evaluated the prototype implementation of our conceptual
framework through, (1) a formative evaluation to guide the
design process, and (2) a follow-up user study to understand
the interaction patterns compared to a standalone large display
interface. Overall, our contributions include the following:
1. Generalized design considerations for combining two distinct device types—smartwatches and large displays—based on the literature and an example visual analysis scenario;
2. A conceptual framework and a web-based implementation incorporating smartwatches in visual analysis tasks with a large interactive display during visual sensemaking;
3. Feedback from a formative evaluation illustrating the utility of our concepts and guiding our interaction design; and
4. Results from a user study that reveal more fluid interaction patterns—flexibility and ease in developing complex insights—when using our specific device combination.
RELATED WORK
Our literature review spans (1) the use of large displays for
visual analysis and (2) research on smartwatches in general as
well as their use for visual data analysis.
Visualization on Large Interactive Displays
Large displays have long been of special interest to the visu-
alization and visual analytics community, presumably due to
their large screen real estate and the potential for collabora-
tive analysis [1]. The size of such displays allows for using
physical navigation to support the classic visual information
seeking mantra [50]: get an overview of the data from a dis-
tance, and move closer to the display to access more details [1,
10, 21, 26]. This general characteristic has motivated work
explicitly focusing on physical navigation and spatial memory:
Ball and North [6] as well as Ball et al. [7] showed that physi-
cal navigation is an efficient alternative to virtual navigation;
however, the effects depend on the actual setup, interface, and
tasks [28, 30, 39]. Especially in multi-user scenarios, prox-
emics [8, 23, 41] is used to provide personalized views or
lenses. The general design space of such lenses was explored
through BodyLenses by Kister et al. [34], whereas Badam et
al. [3] focused on the combined use of proxemics and mid-air
gestures to support multi-user lenses for visual exploration. In
general, large displays can be beneficial for co-located collab-
orative scenarios [27], especially as they can promote different
collaboration styles [29, 45] and benefit from physical naviga-
tion [29]. However, challenges regarding territoriality [12, 29],
coordination costs [45], and privacy [15] must be considered.
Some of these challenges can be tackled by adding additional
devices, thus creating multi-device environments (MDEs). Per-
sonal devices such as smartphones and tablets are well-suited
for these combinations. While these devices can also create
MDEs on their own [24, 35], the combination with a large
display makes it easier to separate shared and private information and enables users to switch between working in concert and working alone [42]. A key operation in an MDE is the abil-
ity to transfer content from one device to another; Langner et
al. [36] investigated this for a spatially-aware smartphone and
a large display, Chung et al. [19] presented concepts for using
a tablet as a document container for sensemaking tasks, and
Badam and Elmqvist [4] elicited interactions for transferring
visualizations between a large display and a hand-held device.
Focusing on general interaction with a wall-sized display, Cha-
puis et al. [16] proposed to use a tablet as storage for multiple
cursors and content items, while Liu et al. [38] investigated
collaborative touch gestures for content manipulation. Specif-
ically for data exploration, Spindler et al. [51] incorporated
hand-held displays above a tabletop as graspable views to pro-
vide altered perspectives onto visualizations. Recently, Kister
et al. [33] investigated how analysts can use spatially-aware
mobiles in front of a display wall as personal views onto a
graph visualization. While these approaches all successfully
address challenges with large displays, they require the user
to hold an additional device in their hand, which diminishes
some of the benefits of a large touch display.
Smartwatch meets Large Interactive Displays
Instead of using a hand-held device to establish contact to a
large display, von Zadow et al. [53] used an arm-mounted (mo-
bile) device to allow users to have their hands free and reduce
attention switches. In general, arm-mounted devices have al-
ready been investigated for some time; e.g., Rekimoto [47] and
Ashbrook et al. [2] explored their advantages as unobtrusive
on-body input devices. For smartwatches specifically, most
research focused on how to overcome the limitations of these
devices, i.e., the limited input and output possibilities. For the
latter, haptic feedback [44], mid-air visuals [56], and on-body
projections [37] have been proposed. The input to a smart-
watch can be expanded with physical controls (e.g., a rotatable
bezel [59]), mid-air gestures [32], and spatial movements [31].
Furthermore, the watch’s native inertial sensors can be used to
enrich touch input on other devices such as smartphones [17]
or tablets [57] with pressure and posture information.
The combination of smartwatches with large displays, espe-
cially for visual analysis, is underexplored. The CurationSpace
of Brudy et al. [14] already utilizes a smartwatch for selecting,
adjusting, and applying instruments as well as for providing
personalized feedback and content. However, because of the
different application case (content curation) and setup (table-
top instead of a vertical display), the presented interaction
techniques do not cover important aspects supported by our
framework (e.g., distant interaction) and also cannot be applied
generally to our domain (visual data analysis). In informa-
tion visualization, smartwatches are now beginning to be used
alone for personal analytics (e.g., tracking daily activity) [18].
However, in general, the lack of research on utilizing smart-
watches in MDEs for data exploration is noticeable. Our work
in this paper is, to our knowledge, the first of its kind that ex-
plores how to best integrate smartwatches with large displays
for data analysis, both for individuals and groups.
SCENARIO: ANALYZING CRIME DATA
To better understand the requirements of visual data explo-
ration, as well as to illustrate and validate our interaction
concepts, we consider an application scenario of a law enforce-
ment department planning patrol routes within a city. Thanks
to an open data initiative, we are able to build on a real dataset
of crimes in Baltimore.1 Here, we will describe the scenario
and its involved users, goals, the setup, and challenges.
Consider two police analysts trying to build a tentative plan
for patrol routes based on historical crime data within the city.
Their goal is to design routes that cover as much as possible of
the high-crime areas while still maintaining a police presence
throughout the entire city. The analysts meet in an office space
that has a large digital whiteboard featuring a high-resolution
display and multi-touch support, as seen in Figure 1a. Such
rooms are increasingly popular for visual sensemaking scenar-
ios since they enable analysts to work in concert or on their
own, view the data from a distance or up close, as well as
leave the room and continue their exploration later [4, 12, 34].
In this scenario, the analysts use standard visual analysis tech-
niques [48] to construct an interactive dashboard on the large
display capturing the attributes in crime data using different
visualizations (e.g., line charts, histograms, scatterplots).
To actually create the patrol plan, the analysts need to observe
the crime distributions in these different visualizations. Now,
to identify in-depth characteristics of the city’s crimes, analysts
need to investigate multiple hypotheses over different crime
patterns of interest. For instance, to evaluate effects of crime
prevention measures in certain districts they must visually
verify if downward tendencies are present. These tendencies
could exist in an overall trend, but also only for a few districts,
crime types, or certain time periods. This sensemaking task by
itself involves multiple visual exploration tasks [13, 50, 58]:
selecting data items (i.e., crimes) of interest, filtering them, ac-
cessing more details about these crimes (elaborate), encoding
them on visualizations for other attributes, connecting them
across visualizations, and comparing multiple collections of
crimes. This exemplifies how, similar to other visual analy-
sis scenarios, crime analysis is also centered around working
with data items—collections of crimes—of analyst’s interest.
1 https://data.baltimorecity.gov/
During sensemaking, multiple such collections have to be con-
sidered in parallel threads of visual analysis and by groups of
analysts in collaborative scenarios.
The large display can provide multiple views on the shared
large screen real estate to support multiple visual perspectives
and help users utilize the space. However, this is not enough;
analysts need to deal with two types of challenges. (1) Display space management: when interactively exploring the crime records on the large display, analysts need to develop spatial memories of visualized information when seeing or comparing multiple parts of the large display. Also, adding further views for comparison is not possible when the amount of space is fixed and already taken by other views. (2) Interaction management: at the same time, they also need to keep track of the visualizations for multiple crime collections over time to fully develop their insights. Beyond this, the users should be able to manage their personal focus (views of interest), view points of interest within the focus, and access interactions to explore these points, while not affecting other users. Further, these interactions should not be bound to the display; instead, they should be accessible from both close and distant proximity, e.g., to examine visualizations from an overview distance.
COMBINING DAVID AND GOLIATH: FUNDAMENTALS
To support the outlined scenario, we need a platform to view
data records, store them as separate groups, and compare
groups to each other. Further, the platform should support
modifying visualization properties to make comparisons more
effective. To answer these challenges, we use secondary de-
vices to augment visualization components, enhance user in-
teractions, and ease the visual exploration. For example, as
demonstrated by VisTiles [35], this can extend, reconfigure, ab-
stract/elaborate, and connect visualizations between devices—
smartphones and tablets in their case. By taking the advan-
tages of wearables into account [2, 47, 53], we address the
challenges in using a large interactive display by adding per-
sonal smartwatches to the environment. Here we explore the
design space of combining smartwatches and large displays to
allow for cross-device interaction in visual analysis.
Roles of the Devices
Each device in our cross-device setup—smartwatch and large
display—has a specific role during visual analysis:
Large Display: By virtue of its size and affordance, the large
display serves as the primary display that provides multiple
visualizations of a multivariate dataset. Consistent with ex-
isting work on touch interaction for visualization [9, 20, 49],
analysts are able to interact with these visualizations: data
elements can be selected by tapping them, the axes can be
used to offer additional functionality (e.g., to sort the data),
and layouts can be changed by dragging. Bimanual interaction
is also possible [49], e.g., to scale visual elements, to span a
selection area, and to trigger mode switches. Thanks to its
size, the large display can also be used by multiple analysts
in parallel, thus serving as a public and shared display that is
visible and accessible to everyone.
Smartwatch: In contrast, the smartwatch is a personal—and
significantly smaller—device only used by its owner. Con-
sequently, the watch is suitable as a secondary display, but
can take on different roles. Given the challenges of using the
large display in the crime analysis scenario, the secondary
device should keep track of the user’s interaction activities and
corresponding data items. The device can therefore act as a
user-specific storage—a container for points of interests or pa-
rameter settings—that can be easily accessed at any time. This
role can further be extended by allowing the user to manage
the stored content on the watch itself (e.g., combining, manip-
ulating, or deleting content items). In the interest of managing
the available display space while supporting multiple users,
the secondary device enhances the interaction capabilities to
support a wide range of exploration tasks. The smartwatch
can serve as a mediator (cf. Brudy et al. [14]), i.e., defining
or altering system reactions when interacting with the large
display. This mediation can happen in both an active and pas-
sive way: either the watch is used to switch modes, or it offers
additional functionalities based on the interaction context and
the user. Finally, to flexibly use the space in front of the large
display, the smartwatch can also take on the role of a remote
control by allowing the user to interact from a distance.
Elementary Interaction Principles on the Smartwatch
Generally, the smartwatch supports four types of input: simple
touch, touch gestures, physical controls, and spatial move-
ments. As the analysts mainly focus on the large display
during exploration, the input on the watch should be limited
to simple, clearly distinguishable interactions that can also be
performed eyes-free to reduce attention switches (cf. Pasquero
et al. [44], von Zadow et al. [53]). Therefore, we propose to
primarily use three interactions on the watch (see Figure 2a-c):
swiping horizontally (i.e., left or right), swiping vertically (i.e.,
upwards or downwards), and, if available, rotating a physical
control of the smartwatch [59] as, e.g., the rotatable bezel of
the Samsung Gear or the crown of the Apple Watch. For more
advanced functionality, long taps as well as simple menus and
widgets can be used. Finally, using the internal sensors of the
watch, the users’ arm movements or poses (Figure 2d) can be
used to support pointing or detect different states [31, 47, 57].
Figure 2. Primary smartwatch interactions: (a) swiping horizontally,
i.e., along the arm axis for transferring content; (b) swiping vertically or
(c) rotating a physical control for scrolling through stored content; and
(d) moving the arm for pointing interaction.
When the smartwatch takes the role of user-specific storage,
we assume that users have a mental model of two directions for
transferring content; towards the smartwatch or towards the
large display. Based on this, a specific axis of the smartwatch
can be derived: The proximodistal axis (i.e., along the arm) is
suitable for transferring content; swiping towards the shoul-
der (i.e., left or right depending on the arm on which the user
wears the watch; Figure 2a) can pull content from the large dis-
play onto the smartwatch. Vice versa, swiping from the wrist
towards the hand, i.e., towards the large display, can be used to
push content back to the visualizations. Additionally, the axial
axis (i.e. orthogonal to the arm) can be defined as a second
axis (cf. von Zadow et al. [53]). We suggest scrolling through
the stored content by either swiping vertically (Figure 2b) or
rotating the bezel or crown of the watch (Figure 2c).
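To make this mapping concrete, the following sketch dispatches the primary watch gestures to framework actions. It is a minimal illustration under our own naming; the gesture labels, the wrist_side parameter, and the returned action strings are assumptions, not taken from the prototype.

```python
# Minimal sketch: mapping primary smartwatch gestures to framework actions.
# Gesture names, the wrist_side parameter, and action labels are hypothetical.

def dispatch_gesture(gesture: str, wrist_side: str = "left") -> str:
    """Map a recognized watch gesture to a cross-device action.

    A swipe along the arm (proximodistal axis) transfers content: towards the
    shoulder pulls content onto the watch, towards the hand pushes it back to
    the large display. Vertical swipes and bezel rotation scroll through sets.
    """
    # On the left wrist, "towards the shoulder" is a swipe to the left;
    # on the right wrist it is a swipe to the right.
    towards_shoulder = "swipe_left" if wrist_side == "left" else "swipe_right"
    towards_hand = "swipe_right" if wrist_side == "left" else "swipe_left"

    if gesture == towards_shoulder:
        return "pull"          # store the current selection as a set
    if gesture == towards_hand:
        return "push"          # apply the focused set to the focused view
    if gesture in ("swipe_up", "bezel_clockwise"):
        return "scroll_next"   # preview the next stored set
    if gesture in ("swipe_down", "bezel_counterclockwise"):
        return "scroll_previous"
    return "ignore"

if __name__ == "__main__":
    print(dispatch_gesture("swipe_left"))        # -> pull
    print(dispatch_gesture("bezel_clockwise"))   # -> scroll_next
```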
Zones of Cross-Device Interaction
In general, the cross-device interaction can happen in three
zones: either at the large display using direct touch, in close
proximity to the display but without touching it, or from inter-
mediate and even far distance (Figure 3a). We expect analysts
to work directly at the large display most of the time, thus the
touch-based connection is primarily used. As the users’ in-
tended interaction goal is expressed in the touch position, i.e.,
defining on which visualization (part) the analyst is focusing,
the smartwatch—acting as a mediator—should incorporate
this knowledge to offer or apply functionalities. In contrast,
the remote interaction enables the analysts to work without
touching the display, possibly even from an overview distance
or while sitting. As the contextual information of the touch
on the large display is missing, the user has to perform an
additional step to select the view of interest (e.g., by pointing).
Figure 3. (a) Cross-device interaction can happen with direct touch, in
close proximity, or from intermediate or far distance; (b) the scope of
user interactions is limited to the views in focus.
As related work on physical navigation illustrates [3, 7, 29,
34], working from an overview distance, close proximity, or
directly at the large display is not an either-or decision. There
is always an interplay between the three: analysts interact in
front of the large display to focus on details, step back to orient
themselves, and again move closer to continue exploration.
Consequently, the cross-device interaction should bridge these
zones. For instance, an analyst may first work near the large
display and perform interactions incorporating the watch (e.g.,
store data selections). She then steps back to continue explo-
ration from a more convenient position to analyze other views
on the large display based on the stored data. This workflow
could be further enhanced using proxemic interaction [8, 41].
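As an illustration of how such proxemics-driven transitions could be detected if position tracking were added (the system described here does not track user distance), the sketch below classifies a tracked distance into the three zones; the threshold values are arbitrary assumptions.

```python
# Sketch: classifying a tracked user distance into the three interaction zones.
# Proxemic tracking is only suggested as an extension; the thresholds here
# (in meters) are arbitrary assumptions for illustration.

TOUCH_ZONE_M = 0.6      # roughly arm's reach: direct touch possible
CLOSE_ZONE_M = 2.0      # close proximity: remote interaction, details readable

def classify_zone(distance_m: float) -> str:
    if distance_m <= TOUCH_ZONE_M:
        return "direct_touch"
    if distance_m <= CLOSE_ZONE_M:
        return "close_proximity"
    return "far_distance"

if __name__ == "__main__":
    for d in (0.4, 1.5, 4.0):
        print(d, "->", classify_zone(d))
```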
Scope of Interactions in Multi-User Setups
In common coordinated multiple view (CMV) applica-
tions [48], changes in one visualization (e.g., selection, filter,
encoding) have global impact, i.e., they are applied to all dis-
played views. As discussed in our motivating scenario, this
behavior may lead to interference between analysts working
in parallel [42]. To avoid this issue, the effects of an interac-
tion should by default only be applied to the visualization(s)
currently in focus of the analyst (Figure 3b). Further, we also
propose to constrain the scope of an interaction mediated by
the smartwatch to a short time period. More specifically, on
touching a visualization to apply a selected adaptation from the
smartwatch, the resulting change is only visible for a few sec-
onds or as long as the touch interaction lasts. At the same time,
Figure 4. Our framework addresses a wide range of tasks, here illustrated by mapping two established task classifications [13, 58] onto interaction sequences that are enabled by our framework (general pipeline: focus CA, pull, manipulate, focus CA, preview, push; CA: Connective Area):
Connect [58]: select marks, pull them, focus other visualizations, preview/push the set to them.
Filter [13, 58]: select marks, pull them, apply a filter.
Select [13, 58]: select marks, pull them.
Navigate [13], Explore [58]: physical navigation, select marks, details-on-demand.
Record [13]: select origin, pull visualization.
Aggregate [13], Abstract/Elaborate [58]: configure and combine sets.
Change [13], Encode [58]: select origin, choose color scheme, push.
Arrange [13]: select origin, choose stored visualization, push.
Reconfigure [58]: select axis, choose axis dimension, push.
For some tasks, certain aspects are also still supported by the large display itself, e.g., zooming and panning from abstract/elaborate and explore [58]. Regarding the typology by Brehmer and Munzner [13], we focus on their how part. From this part, a few tasks (encode, annotate, import, derive) are not considered as they go beyond the scope of this paper. ([13]: Brehmer and Munzner, 2013, A Multi-Level Typology of Abstract Visualization Tasks; [58]: Yi et al., 2007, Toward a Deeper Understanding of the Role of Interaction in Information Visualization.)
there also exist situations where changes should be applied per-
manently, i.e., merged back into the shared visualization [42].
Therefore, it must be possible to push these adaptations to the
large display and keep the altered data visualization.
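One way to realize this distinction between ephemeral previews and permanent pushes is to give each preview an expiry time and merge changes into the shared view state only on an explicit push. The following sketch illustrates that idea with hypothetical names and an assumed expiry duration; it is not the prototype's code.

```python
# Sketch: ephemeral previews vs. permanent pushes for a single view.
# Class, method names, and the expiry duration are illustrative assumptions.
import time

PREVIEW_SECONDS = 3.0  # previews fade after a few seconds (or on touch release)

class ViewState:
    def __init__(self):
        self.applied_sets = []     # ids of permanently pushed sets
        self._preview = None       # id of the set currently previewed, if any
        self._preview_until = 0.0

    def preview(self, set_id):
        """Show a set temporarily; it expires without altering the shared view."""
        self._preview = set_id
        self._preview_until = time.time() + PREVIEW_SECONDS

    def push(self, set_id):
        """Merge a set permanently into the shared visualization."""
        if set_id not in self.applied_sets:
            self.applied_sets.append(set_id)
        self._preview = None       # pushing concludes the interaction

    def visible_sets(self):
        sets = list(self.applied_sets)
        if self._preview is not None and time.time() < self._preview_until:
            sets.append(self._preview)
        return sets

if __name__ == "__main__":
    view = ViewState()
    view.preview("set-1")
    print(view.visible_sets())   # ['set-1'] while the preview is active
    view.push("set-2")
    print(view.visible_sets())   # ['set-2']; the earlier preview was cleared
```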
CONCEPTUAL FRAMEWORK
By incorporating the different roles of the smartwatch and the
large display, our conceptual framework supports a multitude
of tasks during visual exploration [13, 58]. In the role of user-
specific storage, the smartwatch provides access to the data,
i.e., points of interest. Both the shared large display and the
smartwatch (as remote control) determine or define the context
of an interaction. Regarding the task topology from Brehmer
and Munzner [13], the combination of these two aspects—data
and context—represents the what of an interaction, and en-
ables the smartwatch to act as mediator defining the how. This
mediation enables the analyst to solve a given task coming
from questions raised in the scenario (why). Our framework
provides components that blend together into specific interac-
tion sequences and address the various task classes (Figure 4).
In the following, we will introduce these components and de-
scribe their interplay. We will also reference the matching
tasks from Figure 4 in small caps (EXAMPLE).
Item Sets & Connective Areas
The primary role of the smartwatch is to act as a personalized
storage of sets. We define sets as a generalized term for a
collection of multiple entities of a certain type. In our frame-
work, we currently consider two different set types: data items
and configuration properties (e.g., axis dimension, chart type).
These sets can also be predefined; for instance, for each exist-
ing axis dimension, a corresponding set is generated. On the
smartwatch, the stored sets are provided as a list. As shown in
Figure 5. Sets are represented by labels and a miniature: for sets with
data items, the miniature is based on the view where it was created (left);
for sets containing configuration items an iconic figure is shown (right).
Figure 5, each set is represented by a description, a miniature
view or icon, and further details (e.g., value range). Consistent
with the set notion, sets of the same type can be combined
using set operations (i.e., union, intersection, complement).
Finally, to allow managing sets over time, they are grouped
per session. Former sessions can be accessed using the watch.
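The set abstraction described above, including the set-algebra operations and per-session grouping, could be modeled as in the following sketch; the class and field names are our own illustration rather than the prototype's data model.

```python
# Sketch: data-item sets with set-algebra operations, grouped per session.
# Class and field names are illustrative, not taken from the prototype.
from dataclasses import dataclass
from itertools import count

_set_ids = count(1)

@dataclass(frozen=True)
class ItemSet:
    label: str            # shown on the watch next to a miniature of the view
    items: frozenset      # ids of the stored data items (e.g., crimes)
    source_view: str      # visualization the set was pulled from
    session: str          # sets are grouped per session for later access

    def union(self, other):
        return self._derive(other, self.items | other.items, "union")

    def intersection(self, other):
        return self._derive(other, self.items & other.items, "intersect")

    def complement(self, other):
        return self._derive(other, self.items - other.items, "minus")

    def _derive(self, other, items, op):
        label = f"Set #{next(_set_ids)} ({self.label} {op} {other.label})"
        return ItemSet(label, frozenset(items), self.source_view, self.session)

if __name__ == "__main__":
    a = ItemSet("assaults", frozenset({1, 2, 3}), "bar chart", "session-1")
    b = ItemSet("burglaries", frozenset({3, 4}), "bar chart", "session-1")
    print(sorted(a.union(b).items))         # [1, 2, 3, 4]
    print(sorted(a.intersection(b).items))  # [3]
```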
During the data exploration, the region that a user interacts
with can provide a valuable indication of the user’s intent. We
therefore define four zones for each visualization—called con-
nective areas (CA)—that will provide the context (what) of
an interaction: the marks, canvas, axes, as well as a special
element close to the origin. Connective areas define the set
type (Figure 6) and control the functionalities accessed on the
two devices. To focus on a CA, the interaction consists, in the simplest case, of tapping or circling marks (i.e., data points) for selection. For other CAs, users can set the focus in two ways: by performing a touch-and-hold (long touch), the focus is set onto the respective area underneath the touch point but stays active only for the duration of the hold; by performing a double tap, the focus is kept as long as it is not actively changed. Setting the focus activates suitable functionalities
for the specific connective area on the watch. On focus, stored
set content can also be previewed on the large display.
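Resolving which connective area a touch falls into amounts to hit testing against the view's layout regions, as the sketch below illustrates; the region geometry, margins, and function name are hypothetical.

```python
# Sketch: resolving a touch position within a view to a connective area (CA).
# The region geometry and margins are hypothetical; a real view would hit-test
# against the rendered positions of its marks, axes, and origin element.

def hit_test_connective_area(x, y, height, mark_positions,
                             margin=40.0, mark_radius=12.0):
    """Return 'marks', 'origin', 'axes', or 'canvas' for a touch at (x, y).

    Coordinates are local to the view, with (0, 0) at its top-left corner;
    mark_positions is a list of (x, y) centers of the rendered marks.
    """
    # Marks take precedence so that selections are easy to start.
    for mx, my in mark_positions:
        if (x - mx) ** 2 + (y - my) ** 2 <= mark_radius ** 2:
            return "marks"        # create or extend a selection
    # The origin element sits where the two axes meet (bottom-left corner).
    if x <= margin and y >= height - margin:
        return "origin"           # access chart-level properties
    # The axis strips run along the left and bottom edges of the view.
    if x <= margin or y >= height - margin:
        return "axes"             # access axis properties (dimension, scale)
    return "canvas"               # access stored sets of data items

if __name__ == "__main__":
    marks = [(120.0, 80.0), (200.0, 150.0)]
    print(hit_test_connective_area(121, 82, 300, marks))   # marks
    print(hit_test_connective_area(10, 280, 300, marks))   # origin
    print(hit_test_connective_area(10, 100, 300, marks))   # axes
    print(hit_test_connective_area(250, 100, 300, marks))  # canvas
```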
While we consider working in close proximity to the large
display as the primary mode of interaction, certain situations
exist where this is not appropriate or preferred. For instance,
a common behavior when working with large displays is to
physically step back to gain a better overview of the provided
content. To remotely switch the focus onto a different view
or connective area, the user can perform a double tap on the
smartwatch to enable distant interaction and enter a coarse
pointing mode. Similar to Katsuragawa et al. [31], the pointing
Figure 6. Connective Areas (CA) represent semantic components of a visualization that have a specific interaction context with respect to a secondary device (a smartwatch in our case): marks (create selections from a visualization), the canvas (access stored sets of data items), the axes (access available axis properties), and the origin (access available chart properties).
can be realized by detecting the movements of the watch using
its built-in accelerometer. Alternatively, it is also possible to
scroll through the visualizations instead of moving the arm. In
both cases, the current focus is represented as a colored border
around the corresponding view on the large display. After
confirming the focus, the analyst can select the desired connec-
tive area within the focused visualization in a second step and
then access and preview stored sets. This remote interaction
provides the same functionality as the direct touch interaction.
Users can explicitly switch between interaction based on di-
rect touch or on remote access from both close proximity and
far distance. This transition could also be extended by using
proxemics (cf. proxemic interaction [8, 41]).
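A minimal controller for the scrolling variant of remote focus selection (the variant the prototype supports; pointing would drive the same logic from accelerometer data) might look like the following sketch; the class and view names are illustrative, not from the prototype.

```python
# Sketch: remote focus selection by scrolling through the views on the large
# display; pointing would drive the same controller from accelerometer data.
# Class and view names are illustrative.

class RemoteFocus:
    def __init__(self, view_ids):
        self.view_ids = list(view_ids)
        self.index = None            # no view focused until remote mode starts

    def activate(self):
        """A double tap on the watch enters remote mode and focuses a view."""
        self.index = 0
        return self.current()

    def scroll(self, steps):
        """Bezel rotation or vertical swipes move the focus highlight."""
        if self.index is None:
            return self.activate()
        self.index = (self.index + steps) % len(self.view_ids)
        return self.current()

    def current(self):
        return self.view_ids[self.index]   # highlighted by a colored border

if __name__ == "__main__":
    focus = RemoteFocus(["map", "bar chart", "line chart", "scatterplot"])
    print(focus.activate())    # map
    print(focus.scroll(+2))    # line chart
    print(focus.scroll(-1))    # bar chart
```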
Creating & Managing Sets for Visual Exploration
To develop insights through visual exploration, the interac-
tions in our framework are focused on selecting, manipulating,
and previewing data points of interest, as well as applying the
previews permanently to a visualization. These interactions
are mediated by the smartwatch based on the context of the user. The concepts enabling these four functionalities also define the how of the analyst's task. To pull (i.e., create) a set, the
analyst first selects marks in the visualization on the large
display by tapping or lasso selection, and then swipes towards
herself on the watch (SELECT). The resulting set is stored on
the smartwatch. Now, by again switching the focus to another
view on the large display (i.e., by holding, double tapping, or
pointing), the set currently in focus on the watch gets instantly previewed on the target visualization. The preview is only
shown for a few seconds, or, in the case of holding, for the du-
ration of the hold. Depending on the visualization type and the
encoding strategy (aggregated vs. individual points), the items
are inserted as separate elements or highlighted (Figure 7a,b).
As the focus is set on a connective area, the smartwatch can
still be used for further exploration. For instance, by swiping
vertically on the watch or rotating its bezel, the user can switch
through the list of stored sets and preview others for compar-
ison. Again, the preview is shown only for a few seconds.
To permanently push the changes to the view on the large display, a horizontal swipe towards the large display, i.e., the visualization, can be performed on the watch (CONNECT). As
push is considered a concluding interaction, the system then
switches back to a neutral state by defocusing the view.
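At the system level, pull, preview, and push can be expressed as a small message protocol between the watch, the server, and the large display. The sketch below routes such messages; the message fields and handler names are our assumptions for illustration, not the published protocol.

```python
# Sketch: routing pull/preview/push messages between watch and large display.
# Message fields and handler names are assumptions, not the published protocol.

class SetStore:
    """Server-side storage of per-user sets, as pulled from visualizations."""
    def __init__(self):
        self.sets = {}         # set_id -> list of selected data-item ids
        self._next_id = 1

    def handle(self, message: dict) -> dict:
        kind = message["type"]
        if kind == "pull":                      # watch: swipe towards shoulder
            set_id = f"set-{self._next_id}"
            self._next_id += 1
            self.sets[set_id] = message["selection"]
            return {"type": "stored", "set_id": set_id}
        if kind == "preview":                   # focus a view, show temporarily
            return {"type": "preview", "view": message["view"],
                    "items": self.sets[message["set_id"]], "ephemeral": True}
        if kind == "push":                      # watch: swipe towards display
            return {"type": "apply", "view": message["view"],
                    "items": self.sets[message["set_id"]], "ephemeral": False}
        return {"type": "error", "reason": f"unknown message {kind!r}"}

if __name__ == "__main__":
    store = SetStore()
    stored = store.handle({"type": "pull", "selection": [4, 8, 15]})
    print(stored)
    print(store.handle({"type": "preview", "view": "line chart",
                        "set_id": stored["set_id"]}))
    print(store.handle({"type": "push", "view": "line chart",
                        "set_id": stored["set_id"]}))
```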
Besides data items, visualization properties can also be ac-
cessed and adapted. Based on the connective areas, we dis-
tinguish between axis properties (e.g., dimension, scale) and
chart properties (e.g., chart type, encoding). These configura-
tion sets are mostly predefined, as only a limited number of
Figure 7. Previewing stored sets results in (a+b) inserting or highlight-
ing the containing data points in the visualization, or (c) adapting the
visualization to the respective configuration item (here: axis dimension).
Figure 8. The smartwatch allows (a) applying filters to data item sets; (b)
deleting sets by wiping; and (c) displaying additional details-on-demand.
possible values/configurations exist. For instance, when tap-
ping on an axis, all dimensions as well as scales are offered as
individual configuration sets on the watch. As with data items,
scrolling through this list of sets results in instantly preview-
ing the sets, e.g., the marks would automatically rearrange
according to the changed dimension or scale (Figure 7c). By
performing a push gesture, this adaptation is permanently ap-
plied to the visualization on the large display (CHANGE, ENCODE,
RECONFIGURE). Naturally, more possibilities for visualization
configuration may exist; however, covering all of them is be-
yond the scope of this work. In addition to single configuration
properties, the origin can also provide access to the visualiza-
tion in its entirety, i.e., a set containing all active properties
at once. This allows storing a visualization for later use, or
moving it to another spot in the interface (ARRANGE, RECORD).
As an extension to storing sets, the smartwatch also offers the
possibility to manipulate and combine sets on the watch. By
performing a long tap on a set, these operations are shown
in a menu. For all set types, this involves the possibility to
combine sets based on a chosen set operation (e.g., union or
intersection), which results in a new set (AGGREGATE). For sets
containing data items, sets can also be bundled; previewing
or pushing such a bundle shows all the contained sets as sepa-
rate overlays at once, thus merging them on the view itself.
Furthermore, it is possible to create new filters and change the
set representation on the watch. The filter option allows the
analyst to select a property first and then to define the filter
condition (e.g., crime date in July 2015). For numeric filter
options, sliders are provided (Figure 8a). To delete a set on
the watch, a wipe gesture can be performed (Figure 8b).
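Applying such a filter to a data-item set boils down to evaluating a per-record predicate, e.g., a date or numeric range chosen via the watch widgets; a minimal sketch with made-up record fields and a function name of our own:

```python
# Sketch: deriving a new set by filtering the records of an existing set.
# The record fields and the filter examples are made up for illustration.
from datetime import date

def apply_filter(records, field, low, high):
    """Keep records whose `field` value lies in the inclusive range [low, high].

    Numeric ranges map naturally to the slider widget on the watch; dates or
    other ordered values work the same way.
    """
    return [r for r in records if low <= r[field] <= high]

if __name__ == "__main__":
    crimes = [
        {"id": 1, "date": date(2015, 7, 4),  "hour": 23},
        {"id": 2, "date": date(2015, 9, 12), "hour": 10},
        {"id": 3, "date": date(2015, 7, 30), "hour": 14},
    ]
    july_2015 = apply_filter(crimes, "date", date(2015, 7, 1), date(2015, 7, 31))
    daytime = apply_filter(july_2015, "hour", 9, 15)
    print([c["id"] for c in july_2015])  # [1, 3]
    print([c["id"] for c in daytime])    # [3]
```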
All in all, the set metaphor is ideal for visually comparing
multiple regions of interest on the large display because data
items can be extracted from the views, manipulated or com-
bined on the watch, and then previewed on multiple target
visualizations (CONNECT). The ephemeral nature of our pro-
posed preview techniques enables analysts to explore aspects
without worrying about reverting to the original state of a visu-
alization. In addition, the set storage further acts as a history
of user interactions, to undo, replay, or progressively refine the
interactions [50] (RECORD). During the exploration, the watch
can also be used for tasks not involving sets. For instance,
existing details-on-demand mechanisms on the large display
(e.g., displaying a specific value for a mark) can be extended
by displaying further details on the watch, e.g., an alternative
representation or related data items (Figure 8c; NAVIGATE).
Feedback Mechanisms
For cross-device setups, it is important to consider feedback
mechanisms in the context of the interplay between devices,
especially to avoid forced attention switches. In our setup,
we are able to use three different feedback channels: visual
feedback on the large display and on the smartwatch, as well
as haptic feedback via the watch. On the large display, the
feedback equals the system reaction on user interactions, e.g.,
previewing content. To further ease the exploration of different
sets, a small overlay on the large display indicates the set
currently in focus when scrolling through the list on the watch,
thus reducing gaze switches between the two devices. The
colored border around a view indicates if a connective area is
focused and thus the watch can act as a mediator.
We use haptic feedback, i.e., vibrations of the smartwatch, for
confirmation. When successfully performing an interaction,
e.g., pulling a set onto the watch or pushing it to a visualization,
the watch confirms this by vibrating. Alongside with the
small overlays described above, this behavior also supports
eyes-free interaction with the smartwatch. Further, the watch
also vibrates to indicate that additional information or tools
are available on the watch: While moving the finger over a
visualization, the watch briefly vibrates when a new element
is hit to indicate that details-on-demand or more functionality
are available. To some degree, this also enables users to “feel” the
visualization, e.g., through multiple vibrations when moving
across a cluster of data points in a scatterplot.
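This behavior can be implemented by triggering a short vibration whenever the dragged finger enters a new mark; in the sketch below, a generic vibrate callback stands in for whatever haptics API the watch platform provides, and the class name is our own.

```python
# Sketch: emitting a short vibration whenever the finger enters a new mark
# while dragging across a visualization. The vibrate callback stands in for
# whatever haptics API the watch platform provides.

class HapticHitFeedback:
    def __init__(self, vibrate):
        self.vibrate = vibrate       # callable invoked for each new hit
        self.last_mark = None

    def on_touch_move(self, mark_id):
        """Call with the id of the mark under the finger, or None for empty space."""
        if mark_id is not None and mark_id != self.last_mark:
            self.vibrate(duration_ms=30)   # brief pulse: details are available
        self.last_mark = mark_id

if __name__ == "__main__":
    pulses = []
    feedback = HapticHitFeedback(lambda duration_ms: pulses.append(duration_ms))
    for mark in [None, "m1", "m1", "m2", None, "m2"]:
        feedback.on_touch_move(mark)
    print(len(pulses))   # 3 pulses: entering m1, m2, and m2 again
```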
APPLYING CONCEPTS: ENHANCED CRIME ANALYSIS
In the following, we present an interaction walkthrough for
the motivating crime data scenario that illustrates the utility of
combining smartwatches and a large display.
Figure 9. (a) Pulling, (b) previewing, and (c) pushing of sets.
The first question that one of the police analysts has is whether
there are specific high-crime regions within the city over time.
She starts by selecting multiple bars representing different
types of assaults in a bar chart and saves them into her user-
specific storage on the watch by performing a swipe on the
watch towards herself (Figure 9a). The watch immediately
creates a set and represents it with a miniature of the original
bar chart and the selected bar. Further, she also selects the
corresponding bar of burglaries, and creates another set. As
she can carry the sets, she investigates how the assaults occur
in various districts. Triggered by double tapping on other
visualizations, the smartwatch mediates the interaction and
induces the large display to show a preview of the analyst’s
set in these views. By rotating the bezel of the watch back and
forth, she switches between the previews of the two stored sets
and compares their distribution on the large display (Figure 9b).
She notices that assaults have happened in neighborhoods
surrounding Downtown, while burglaries happened more often
in specific suburbs. In order to investigate patterns of assaults
during daytime, she taps on a line chart to focus on this view
and swipes towards the large display. As a result, the current
set on the watch is pushed to the focused chart (Figure 9c). She
continues this process for other crime types (e.g., robberies)
by identifying data items and previewing them on other views,
while tracking the multiple sets on her smartwatch.
A second analyst wants to evaluate the effects of measures
taken in a neighborhood. First, he restores a set of crimes
for this neighborhood from a former session via the watch
menu. By selecting the crimes for the neighborhood on the
large display and pulling them, he creates a set similar to the
restored one with current data. To compare them, he pushes
both sets onto a weapons histogram and recognizes a drop of
crimes with firearms but not for crimes with knives. By double
tapping the axis of the histogram, the smartwatch displays the
list of available dimensions, and the analyst switches from
weapons to crime types (cf. Figure 10a). This allows him to
quickly validate his assumption that the drop in firearms is
caused by a reduced number of assaults, while the number of
robberies is almost unchanged. He can now conclude that the
introduced measures only affected assaults.
Figure 10. (a) Changing the axis dimensions, and (b) remote control
from a distance to set the focus onto a specific visualization view.
Afterwards, both analysts start discussing their insights and
step back to get a better overview of all visualizations consid-
ered before. The first analyst pushes her stored set remotely
to the histogram used by the second analyst. She performs
a double tap on the smartwatch, moves the pointer onto the
visualization by moving her arm, confirms the focus by tap,
selects the canvas (connective area) on the watch, and applies
her set (push). They recognize that the patterns are opposed,
i.e., assaults dropped in one neighborhood but rose in
the other. With this insight, one analyst leaves to report their
observations while the other continues the exploration.
PROTOTYPE IMPLEMENTATION
We developed a web-based prototype to instantiate our con-
ceptual framework for demonstrating and evaluating our
ideas. For deployment, we used two different large display
setups in our respective universities (U1, U2)—an 84-inch
Promethean ActivePanel (U1) and a 55-inch Microsoft Per-
ceptive Pixel (U2). Both setups used the Samsung Gear S2
smartwatch. The watch features a rotatable bezel that can act
as an input modality. All devices connect to a Python server
that serves the front-end files, handles communication, and
performs data operations based on interaction. The server
also stores the created sets and manages the sessions. Visual-
izations are developed with D3.js [11]. The dataset contains
roughly 250,000 crimes in Baltimore, MD, USA between 2011
and 2016. Each crime within this dataset is characterized by
location, date, time, type, weapon, and geographical district.
In the current version, we focused on the interaction with data
points and sets to test the core principles of our framework.
The large display shows bar charts, line charts, scatterplots,
and a map to visualize different dataset attributes. In each
view, users can select marks by touch. On the smartwatch, it
is possible to pull a set from the large display onto the watch
as well as preview and push it onto other views on the large
display (Figure 9). Currently, it is only possible to push one set
to a view; pushing a second set replaces the first one. Both pull
and push are confirmed by vibration feedback on the watch.
Furthermore, the watch allows combining sets and remotely selecting views by scrolling through the displayed views on the large display. The current implementation cannot distinguish multiple users and does not yet support pointing or changing visualization configurations.
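For example, when a set is previewed on a view encoding a different attribute, the server has to re-aggregate the set's items for that view; a minimal sketch of such a data operation (the field names follow the dataset description above, the function itself is illustrative):

```python
# Sketch: server-side re-aggregation of a stored set for a target view.
# Field names follow the dataset description (type, district, weapon, ...);
# the function itself is an illustration, not the prototype's code.
from collections import Counter

def aggregate_for_view(crimes, set_item_ids, dimension):
    """Count the set's crimes per category of the target view's dimension,
    e.g., per district for a bar chart over districts."""
    selected = (c for c in crimes if c["id"] in set_item_ids)
    return dict(Counter(c[dimension] for c in selected))

if __name__ == "__main__":
    crimes = [
        {"id": 1, "type": "assault",  "district": "Southern", "weapon": "hands"},
        {"id": 2, "type": "burglary", "district": "Northern", "weapon": "other"},
        {"id": 3, "type": "assault",  "district": "Southern", "weapon": "knife"},
    ]
    assault_set = {1, 3}
    print(aggregate_for_view(crimes, assault_set, "district"))  # {'Southern': 2}
    print(aggregate_for_view(crimes, assault_set, "weapon"))    # {'hands': 1, 'knife': 1}
```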
FORMATIVE EVALUATION: DESIGN FEEDBACK
We conducted a formative evaluation to receive feedback to
validate the fundamental principles of our conceptual frame-
work and inform the design iteration of the final techniques.
Participants.
Five unpaid researchers (4 Ph.D. students, 1
post-doc; age 30-48; 1 female; 4 male) from an HCI lab (thus,
experts in interaction design) at one of our universities (U1)
participated. Three participants focus on visual data analysis
in their research, all are familiar with large interactive displays,
and one uses a smartwatch on a daily basis.
Apparatus and Dataset.
We used the setup and dataset as
described above. The prototype was an earlier version, thus
some of the interaction concepts differed from the framework
presented here. In this earlier version, the cross-device interactions required the users to persistently touch the large display (cf. Badam et al. [4]). For instance, to preview a set or to perform a pull/push interaction, the user had to touch-and-hold the visualization at the same time.
Procedure.
In each session, we first introduced the partici-
pants to our application scenario—setup, users, and their tasks
and goals. Then, we presented our framework and sequentially
explained the different techniques in the prototype. We asked
participants to try the techniques on their own while stating
their thoughts. Afterwards, we illustrated further concepts of
our framework with figures and discussed their implications.
Feedback and Iteration of Concepts
Overall, all participants (P1-5) liked the idea of our proposed
setup for visual analysis: for instance, they commented that the
watch is a multi-purpose device personalized for a single user
and—in many cases—available ubiquitously (P1), allowing
access to content in different setups, e.g., first at a desktop for
preparation, and then later at the large display (P4). It could
even be integrated further, for instance to authenticate a person
when accessing confidential data (P3). Two participants (P1,
P4) also noted the advantage of having their hands free for,
e.g., performing pen and touch interaction or taking notes.
The feedback also helped us to iterate our concepts. The main
concern of the participants was the interface complexity, es-
pecially regarding the handling of sets. For instance, they
suggested providing functionalities for grouping and sorting
of sets on the watch (P4), which we address now through
grouping sets by sessions. We also followed the recommen-
dation to provide an additional description instead of only
showing the miniature view for sets on the watch (P3). Re-
garding the reconfiguration of visualizations, one participant
stressed that the offered possibilities should be limited to a list
of presets (P2). Two participants (P3, P4) suggested keeping
menus for complex adaptations on the large display itself. In
general, participants cited our proposed mechanisms for adapt-
ing views as a good way to manage user-preferred settings (P1,
P3) and to support a dynamic view layout (P4).
Regarding the cross-device interactions, four participants (P1,
P2, P4, P5) positively commented that our techniques already
keep forced attention switches between the devices at a mini-
mum. Two of them also stressed the importance of interact-
ing from close proximity and their preference to avoid long
touches for the pull/preview/push interactions, as they felt
that it enables a more casual interaction (P1) and prevents
fatigue (P2). We considered these comments in our itera-
tion by easing and streamlining the transition between remote
and touch-based interaction. For the remote interaction, opin-
ions diverged whether pointing is adequate (P5) or scrolling
through the views with virtual controls is sufficient (P4), there-
fore we kept both options. One participant added (P3) that this
presumably depends on pointing precision and display size.
USER STUDY: INTERACTION PATTERNS
As illustrated in the interaction walkthrough, our conceptual
framework has the potential to ease visual exploration; however, the way the techniques are utilized during sensemaking—and how they affect the observations developed from the data—is not clear.
Therefore, we conducted a user study with our large display
and smartwatch combination (LD+SW) against an equivalent large-display-only interface (LD) for visual analysis tasks. This
allows us to investigate the interaction patterns during visual
exploration, and especially how the context-aware smartwatch
and the different roles it takes alter these patterns.
Experiment Conditions.
The study comprised two conditions: LD+SW and LD. The LD+SW interface allows participants to: (1) pull data from the large display to create sets (each set gets a unique color), (2) show a preview of sets on target visualizations, (3) push sets to the large display, (4) use the smartwatch as a remote control to focus views on the large display, and (5) combine sets on the smartwatch. Except for the last two, equivalent capabilities were created in the LD condition using a freely movable overlay menu with a scrollable set list that appears on long touch. All participants worked with both conditions; the order was counterbalanced.
Participants.
We recruited 10 participants (age 22-40; 5 fe-
male; 5 male) from our universities (U1: P1-P4, U2: P5-P10).
Participants were visualization literate with experience in us-
ing visualizations with tools such as Excel and Tableau; 4 of
them used visualizations for data analysis (for their course or
research work). Two of the participants had already taken part
in the formative evaluation (U1).
Apparatus and Dataset.
The study was conducted in two
setups as described in the Implementation section. They only
differed in the size of the large display (U1: 84-inch, U2:
55-inch); the smartwatch (Samsung Gear S2), the prototype
version, as well as dataset (Baltimore crime) were the same.
Tasks.
We used this dataset to develop user tasks that can be controlled for the study purposes. Tasks contained three question types: (QT1) finding specific values, (QT2) identifying extrema, and (QT3) comparison of visualization states [3]. In general, the complexity of a task results from the number of sets and the target visualizations to be considered to answer it. After pilot testing with two participants, we settled on a list of questions with different complexities: for QT1 and QT2 the number of targets was increased to create complex tasks, while for QT3 both the number of sets and the target visualizations were increased. Here are a few sample questions used:
1. How many auto thefts happened in the Southern district? (QT1)
2. What are the two most frequent crime types in Central? (QT2)
3. What are the differences between crimes in the Northern and the Southern districts in terms of weapons used? (QT3)
4. For the two crime types that use firearms the most, what are the differences in crime time, district, and months? (QT3)
The task list contained 9 questions overall. Two comparable
lists were developed for the two conditions to enable a within-
subject study design. These tasks can promote engagement in a cross-device workflow in LD+SW or effective use of the LD.
Procedure.
The experimenter first trained participants in the
assigned interface by demonstrating the visualizations and in-
teractions. The participants were then allowed to train on their
own on a set of training tasks. Following this, they worked
on the nine tasks, answering each question verbally. They
then moved on to the other condition and repeated the proce-
dure. Afterwards, they completed a survey on the perceived
usability of the two interface conditions, as well as on general
interaction design aspects. Sessions lasted one hour.
Data Collected.
Participants were asked to think aloud to
collect qualitative feedback. Their accuracy for the tasks was
noted along with the participant’s interactions and movement
patterns as well as hand postures by the experimenter in both
conditions. All sessions were video recorded and used to
review the verbal feedback as well as noted observations.
Results
After analyzing the data collected, we found three main results: (1) the LD+SW interface allows flexible visual analysis patterns; (2) set management tends to be easier in LD+SW due to fewer attention switches, thus simplifying comparison tasks; and (3) participants rated the interactions within our LD+SW prototype as seamless, intuitive, and more suited for the tasks.
Here, we explain these results in detail within their context.
Interaction patterns and observed workflows
As we expected, the interaction abilities of both devices in
LD+SW and the ability to work from any distance lead to
flexible workflows for visual analysis. Therefore, we focused
on observing when and how these workflows manifest in our
tasks. In simple QT1 and QT2 tasks, participants used the
basic touch interaction (long touch, double tap) to preview a
set on the target visualization (workflow F1). Eight partici-
pants used physical navigation to move from one part of the
display to the other to perform such tasks, while others did
this remotely with their watch. For most of them (7/10), the
long touch action was seen to be sufficient to quickly answer
these tasks when only a value or extrema must be determined.
For comparisons between two sets (QT3) on a target, eight
participants preferred to disconnect from the large display by
double tapping it and taking two or three steps back to gain
a full view of the target visualization (F2), while only two
remained close and used long touch. On the LD condition, it
was not possible to step back since participants had to stay
close to the display to switch between sets to compare them.
In more complex tasks where two or more targets were con-
sidered, participants in LD+SW further showed this need to
step back to get a better view of the large display. While eight
participants mostly performed these tasks by moving back-
and-forth in front of the display to collect sets and pick target
visualizations to make comparisons (F2), three participants (P7 did both) used remote controls to access target views to avoid this movement to an extent (F3). To track the sets on their
smartwatch, four participants held their hand up to view both
displays at the same time, while the majority (seven) differen-
tiated sets based on their assigned color. This set awareness
was weaker in LD condition; the participants often shifted
their focus between the sets menu and the visualizations repet-
itively to achieve the same. Finally, five participants used the
combine option when related sets were already created for
previous tasks, avoiding large display interaction (F4).
Overall, we observed that participants followed the pattern of interact, step back, and examine (F1, F2), as well as interact remotely from a distance (F3, F4). Further, they often interacted eyes-free with the watch, although the prototype could be further improved in that regard (e.g., by displaying set labels on the large display as more sets are being previewed). The rotatable bezel of the watch was used exclusively for switching sets and thus played an important role as a tangible control.
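To make this interplay concrete, the following sketch illustrates how a watch-side controller could map bezel ticks to set switching and send preview or push requests to the large display. This is a simplified sketch under our own assumptions, not the actual prototype code; the type names, message types, and the DisplayChannel transport are hypothetical.

```typescript
// Minimal sketch (not the prototype's actual code) of a watch-side controller:
// bezel rotation cycles through the user's stored sets; separate calls send
// preview or push requests for the active set to the large display.
type ItemSet = { id: string; label: string; color: string; itemIds: string[] };

interface DisplayChannel {
  // Assumed transport to the large display (e.g., a thin WebSocket wrapper).
  send(message: { type: string; payload: unknown }): void;
}

class WatchSetController {
  private activeIndex = 0;

  constructor(private sets: ItemSet[], private channel: DisplayChannel) {}

  // Called with +1/-1 ticks from the rotatable bezel; returns the set that
  // should be shown on the watch face.
  onBezelRotate(direction: 1 | -1): ItemSet {
    if (this.sets.length === 0) {
      throw new Error("no sets stored on the watch");
    }
    const n = this.sets.length;
    this.activeIndex = (this.activeIndex + direction + n) % n;
    return this.sets[this.activeIndex];
  }

  // Transiently preview the active set on a target visualization.
  previewOn(viewId: string): void {
    this.channel.send({
      type: "preview-set",
      payload: { viewId, set: this.sets[this.activeIndex] },
    });
  }

  // Apply (push) the active set to the view so it persists afterwards.
  pushTo(viewId: string): void {
    this.channel.send({
      type: "push-set",
      payload: { viewId, setId: this.sets[this.activeIndex].id },
    });
  }
}
```

The cyclic indexing in onBezelRotate mirrors the observed behavior of switching back and forth between sets with the bezel while keeping the eyes on the large display.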
Differences in Developed Insights
Workflows F1–F4 were observed for different tasks in the LD+SW condition. Given these observations, we were interested in how the task answers arising from these workflows differed from their LD counterparts. In the QT1 and QT2 tasks, participants answered accurately in both conditions. However, the LD condition was less preferred; for example, participant P1 stated that the interaction in LD was a little complicated and felt slower than with the watch. More nuanced patterns existed in participants' answers to the visual comparison of two or more sets in target visualizations: they made observations about specific values, trend differences in the target, and relative differences in specific data items. To begin with, all participants mentioned specific value-based differences between the sets in the target visualization. To observe trend and relative differences more effectively in LD+SW, participants (following workflows F2 and F3) made use of the possibility to step back from the large display and to switch back and forth between sets with the help of the rotatable bezel on the watch. In the LD condition, participants tried to switch back and forth by alternately tapping on the sets in the menu; however, this was more error-prone due to the missing physical guidance. As a result, this forced attention switches between set navigation
and visual comparison and required some participants to re-
peat the interaction multiple times to develop their answers.
For instance, one participant (P10, who worked with LD+SW first) answered a comparison task (QT3, three sets on two targets) by rotating the bezel between the sets twice for each target, while he switched between the sets five times per target to make a similar comparison in the LD condition.
Finally, in the two large display setups (84-inch vs. 55-inch),
the workflows differed slightly regarding the extent of physical
navigation (stepping back) and distant interaction (F2, F3),
while the answers given by the participants were similar.
Figure 11. Participant agreement (% of participants, from strongly disagree to strongly agree) per condition for four statements: sets suitable for visual exploration, sets manageable, pull/preview/push intuitive, and remote control intuitive. In LD+SW, sets were more suitable for exploration and more manageable; the interactions were also more intuitive in LD+SW.
Qualitative Feedback
After each session, participants rated the two conditions on a Likert scale from 1 to 5 for two groups of metrics: (1) the overall efficiency, ease of use, and utility, as well as (2) the suitability of the devices for set-based tasks and the intuitiveness of the specific interaction designs. Participants rated both conditions as similar in efficiency, ease of use, and utility for visual exploration. This was expected, as the LD condition supported operations equivalent to those of LD+SW. The one negative rating of LD+SW was due to the perceived increase in interaction cost with an additional device. For the remaining questions, participants found the LD+SW condition to be more suited for set creation and management, and the interactions in LD+SW to be more intuitive. In Figure 11, this pattern is visible with more participants strongly agreeing with these statements in the case of LD+SW. As P6 put it, "The interactions correspond to the [cognitive] actions: pull reads data in, and preview/push by activating a focus visualization gives data back."
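For transparency about how a chart like Figure 11 can be derived from such ratings, the following sketch aggregates per-participant Likert ratings into the percentage of participants per response category for each statement and condition. It is illustrative only, not the scripts we used; the Response structure and category encoding are assumptions.

```typescript
// Illustrative only: aggregate 1-5 Likert ratings into the percentage of
// participants per response category, per statement and condition, as plotted
// in a diverging stacked bar chart like Figure 11.
type Condition = "LD" | "LD+SW";
type Rating = 1 | 2 | 3 | 4 | 5; // 1 = strongly disagree ... 5 = strongly agree

interface Response {
  participant: string;
  condition: Condition;
  statement: string; // e.g., "Sets suitable for visual exploration?"
  rating: Rating;
}

function percentagesPerCategory(responses: Response[]): Map<string, number[]> {
  // Key: "<statement> | <condition>"; value: % of participants for ratings 1..5.
  const counts = new Map<string, number[]>();
  for (const r of responses) {
    const key = `${r.statement} | ${r.condition}`;
    const bins = counts.get(key) ?? [0, 0, 0, 0, 0];
    bins[r.rating - 1] += 1;
    counts.set(key, bins);
  }
  const result = new Map<string, number[]>();
  for (const [key, bins] of counts) {
    const total = bins.reduce((a, b) => a + b, 0);
    result.set(key, bins.map((c) => (100 * c) / total));
  }
  return result;
}
```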
DISCUSSION
Hand-held devices are commonly used as secondary devices [5, 19, 33]. Kister et al. [33] studied the combination of a large display and a mobile tablet, and found workflows where users either stayed at a certain distance or crisscrossed in front of the display wall. Their participants exhibited two distinct exploration styles: distributed between the combined devices, or focused on the mobile. This contrasts with our user study, where most participants focused on the large display while interacting eyes-free on the watch. This captures the main distinction between coupling with handhelds vs. wearables: the role of a wearable is to remain invisible [55] and seamlessly improve the user's primary task, whereas hand-held devices generally have more screen space and can show alternate visual perspectives to augment the large display. Neither is better than the other; rather, each has its specific roles and affordances during visual exploration. Our work contributes to this space by considering the novel combination of smartwatches and large displays.
Beyond the aspects mentioned in both evaluations, further challenges remain regarding our framework and its implementation. In multi-user scenarios, current interactive displays are generally not able to distinguish which user is interacting, so associating a touch point with a specific smartwatch is not directly possible. As this issue is relevant to many interactive spaces, experimental solutions exist, for instance, recognizing fingerprints via embedded high-resolution cameras [25] or tracking users with Kinect cameras [43, 54]. Utilizing such approaches, our prototype can naturally extend to multi-user scenarios due to the proposed local scope of interactions on the large display and the independent and ephemeral nature of pull/preview mechanisms with personal smartwatches.
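As an illustration of how such tracking-based association could work, a touch event on the display can be attributed to the tracked user whose hand is closest to the touch position, and thereby to that user's paired smartwatch. This is a sketch under our own assumptions, not a feature of our prototype or of any specific tracking SDK; the data structures, the distance threshold, and the tracking source are hypothetical.

```typescript
// Hypothetical association of a touch point with a tracked user's smartwatch.
// Assumes an external tracker (e.g., depth-camera based) reports hand positions
// projected onto the large display's 2D coordinate space.
interface TrackedUser {
  userId: string;
  watchId: string; // the smartwatch paired with this user
  handPositions: { x: number; y: number }[];
}

interface TouchEventOnWall {
  x: number;
  y: number;
}

const MAX_ASSOCIATION_DISTANCE = 300; // pixels; tuning parameter of this sketch

function watchForTouch(
  touch: TouchEventOnWall,
  users: TrackedUser[]
): string | null {
  let bestWatch: string | null = null;
  let bestDistance = Infinity;
  for (const user of users) {
    for (const hand of user.handPositions) {
      const d = Math.hypot(hand.x - touch.x, hand.y - touch.y);
      if (d < bestDistance) {
        bestDistance = d;
        bestWatch = user.watchId;
      }
    }
  }
  // Reject distant or ambiguous matches rather than guessing.
  return bestDistance <= MAX_ASSOCIATION_DISTANCE ? bestWatch : null;
}
```

Rejecting distant matches keeps mis-attribution errors local, which fits the proposed local scope of large-display interactions.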
Limitations and Future Work.
While our study provides
evidence of the utility of our device combination for specific
tasks, an in-depth study of open-ended visual exploration (cf.
Reda et al. [46]) would broaden this to a larger group of tasks
covered in our framework. As a preliminary hypothesis, we
expect an increased number and complexity of insights when
adding the smartwatch. These aspects can also be investigated
for parallel multi-user interaction. Furthermore, our framework should be extended with mechanisms that explicitly promote collaboration during visual exploration, e.g., by supporting concurrent tasks, group awareness, communication, and coordination. More questions remain to be answered: (1) which visual analysis tasks can be enhanced by handhelds vs. wearables, and (2) which visualizations and application scenarios benefit most from such device combinations. While currently outside the scope of this paper, these questions are part of our future work.
CONCLUSION
We presented a conceptual framework to support visual anal-
ysis tasks in a multi-device environment, combining two ex-
tremes of interactive surfaces: smartwatches and a large inter-
active display. In our framework, the devices fulfill different
roles based on their strengths: the large display provides a
multi-view interface, whereas the smartwatch augments and
mediates the functionalities by serving as a personalized tool-
box. In interplay with connective areas on the large display,
the smartwatch supports exploration based on sets of both
data items and visualization properties, which can be stored,
manipulated, previewed, as well as applied permanently. We
evaluated our prototype implementation to find interaction
patterns with increased movements as well as evidence of the
effectiveness of this specific device combination. With this
work, we provide a starting point for this promising new class
of multi-device environments, which we believe are strongly
beneficial for visual analysis tasks and also beyond.
ACKNOWLEDGMENTS
This work was supported by the DFG grant DA 1319/3-3 and
the U.S. NSF award IIS-1539534. Any opinions, findings, and
conclusions or recommendations expressed in this material are
those of the authors and do not necessarily reflect the views
of the funding agencies. We want to acknowledge Dagstuhl
seminar 16231 (Immersive Analytics) for inspiring this work.
REFERENCES
1. Christopher Andrews, Alex Endert, Beth Yost, and Chris
North. 2011. Information visualization on large,
high-resolution displays: Issues, challenges, and
opportunities. Information Visualization 10, 4 (2011),
341–355. DOI:
http://dx.doi.org/10.1177/1473871611415997
2. Daniel L. Ashbrook, James R. Clawson, Kent Lyons,
Thad E. Starner, and Nirmal Patel. 2008. Quickdraw: The
Impact of Mobility and On-body Placement on Device
Access Time. In Proceedings of the ACM Conference on
Human Factors in Computing Systems. ACM, 219–222.
DOI:http://dx.doi.org/10.1145/1357054.1357092
3. Sriram Karthik Badam, Fereshteh Amini, Niklas
Elmqvist, and Pourang Irani. 2016. Supporting Visual
Exploration for Multiple Users in Large Display
Environments. In Proceedings of the IEEE Conference on
Visual Analytics Science and Technology. IEEE, 1–10.
DOI:http://dx.doi.org/10.1109/vast.2016.7883506
4. Sriram Karthik Badam and Niklas Elmqvist. 2017. Visfer: Camera-based visual data transfer for cross-device visualization. Information Visualization (2017). In press. DOI: http://dx.doi.org/10.1177/1473871617725907
5. Sriram Karthik Badam, Eli Fisher, and Niklas Elmqvist.
2015. Munin: A Peer-to-Peer Middleware for Ubiquitous
Analytics and Visualization Spaces. IEEE Transactions
on Visualization and Computer Graphics 21, 2 (Feb
2015), 215–228. DOI:
http://dx.doi.org/10.1109/TVCG.2014.2337337
6. Robert Ball and Chris North. 2007. Realizing embodied
interaction for visual analytics through large displays.
Computers & Graphics 31, 3 (2007), 380–400. DOI:
http://dx.doi.org/10.1016/j.cag.2007.01.029
7. Robert Ball, Chris North, and Doug A. Bowman. 2007.
Move to Improve: Promoting Physical Navigation to
Increase User Performance with Large Displays. In
Proceedings of the ACM Conference on Human Factors
in Computing Systems. ACM, 191–200. DOI:
http://dx.doi.org/10.1145/1240624.1240656
8. Till Ballendat, Nicolai Marquardt, and Saul Greenberg.
2010. Proxemic interaction: designing for a proximity
and orientation-aware environment. In ACM International
Conference on Interactive Tabletops and Surfaces. ACM,
121–130. DOI:
http://dx.doi.org/10.1145/1936652.1936676
9. Dominikus Baur, Bongshin Lee, and Sheelagh
Carpendale. 2012. TouchWave: Kinetic Multi-touch
Manipulation for Hierarchical Stacked Graphs. In
Proceedings of the ACM Conference on Interactive
Tabletops and Surfaces. ACM, 255–264. DOI:
http://dx.doi.org/10.1145/2396636.2396675
10. Anastasia Bezerianos and Petra Isenberg. 2012.
Perception of Visual Variables on Tiled Wall-Sized
Displays for Information Visualization Applications.
IEEE Transactions on Visualization and Computer
Graphics 18, 12 (2012), 2516–2525. DOI:
http://dx.doi.org/10.1109/tvcg.2012.251
11. Michael Bostock, Vadim Ogievetsky, and Jeffrey Heer.
2011. D3: Data-Driven Documents. IEEE Transactions
on Visualization and Computer Graphics 17, 12 (2011),
2301–2309. DOI:
http://dx.doi.org/10.1109/TVCG.2011.185
12. Lauren Bradel, Alex Endert, Kristen Koch, Christopher
Andrews, and Chris North. 2013. Large high resolution
displays for co-located collaborative sensemaking:
Display usage and territoriality. International Journal of
Human-Computer Studies 71, 11 (2013), 1078–1088.
DOI:http://dx.doi.org/10.1016/j.ijhcs.2013.07.004
13. Matthew Brehmer and Tamara Munzner. 2013. A
multi-level typology of abstract visualization tasks. IEEE
Transactions on Visualization and Computer Graphics 19,
12 (2013), 2376–2385. DOI:
http://dx.doi.org/10.1109/TVCG.2013.124
14. Frederik Brudy, Steven Houben, Nicolai Marquardt, and
Yvonne Rogers. 2016. CurationSpace: Cross-Device
Content Curation Using Instrumental Interaction. In
Proceedings of the ACM Conference on Interactive
Surfaces and Spaces. ACM, 159–168. DOI:
http://dx.doi.org/10.1145/2992154.2992175
15. Frederik Brudy, David Ledo, Saul Greenberg, and
Andreas Butz. 2014. Is Anyone Looking? Mitigating
Shoulder Surfing on Public Displays Through Awareness
and Protection. In Proceedings of the International
Symposium on Pervasive Displays. ACM, 1:1–1:6. DOI:
http://dx.doi.org/10.1145/2611009.2611028
16. Olivier Chapuis, Anastasia Bezerianos, and Stelios
Frantzeskakis. 2014. Smarties: an input system for wall
display development. In Proceedings of the ACM
Conference on Human Factors in Computing Systems.
ACM, 2763–2772. DOI:
http://dx.doi.org/10.1145/2556288.2556956
17. Xiang Chen, Tovi Grossman, Daniel J. Wigdor, and
George Fitzmaurice. 2014. Duet: exploring joint
interactions on a smart phone and a smart watch. In
Proceedings of the ACM Conference on Human Factors
in Computing Systems. ACM, 159–168. DOI:
http://dx.doi.org/10.1145/2556288.2556955
18. Yang Chen. 2017. Visualizing Large Time-series Data on
Very Small Screens. In Short Paper Proceedings of the
IEEE VGTC/Eurographics Conference on Visualization.
The Eurographics Association, 37–41. DOI:
http://dx.doi.org/10.2312/eurovisshort.20171130
19. Haeyong Chung, Chris North, Jessica Zeitz Self,
Sharon Lynn Chu, and Francis K. H. Quek. 2014.
VisPorter: facilitating information sharing for
collaborative sensemaking on multiple displays. Personal
and Ubiquitous Computing 18, 5 (2014), 1169–1186.
DOI:http://dx.doi.org/10.1007/s00779-013-0727-2
20. Steven M. Drucker, Danyel Fisher, Ramik Sadana,
Jessica Herron, and m.c. schraefel. 2013. TouchViz: A
Case Study Comparing Two Interfaces for Data Analytics
on Tablets. In Proceedings of the ACM Conference on
Human Factors in Computing Systems. ACM, 2301–2310.
DOI:http://dx.doi.org/10.1145/2470654.2481318
21. Alex Endert, Christopher Andrews, Yueh Hua Lee, and
Chris North. 2011. Visual encodings that support physical
navigation on large displays. In Proceedings of Graphics
Interface. Canadian Human-Computer Communications
Society, 103–110.
22. Amir Hossein Hajizadeh, Melanie Tory, and Rock Leung. 2013. Supporting awareness through collaborative brushing and linking of tabular data. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013),
2189–2197. DOI:
http://dx.doi.org/10.1109/TVCG.2013.197
23. Edward T. Hall. 1966. The Hidden Dimension.
24. Peter Hamilton and Daniel J. Wigdor. 2014. Conductor:
enabling and understanding cross-device interaction. In
Proceedings of the ACM Conference on Human Factors
in Computing Systems. ACM, 2773–2782. DOI:
http://dx.doi.org/10.1145/2556288.2557170
25. Christian Holz and Patrick Baudisch. 2013. Fiberio: A
Touchscreen That Senses Fingerprints. In Proceedings of
the ACM Symposium on User Interface Software and
Technology. ACM, 41–50. DOI:
http://dx.doi.org/10.1145/2501988.2502021
26. Petra Isenberg, Pierre Dragicevic, Wesley Willett,
Anastasia Bezerianos, and Jean-Daniel Fekete. 2013.
Hybrid-Image Visualization for Large Viewing
Environments. IEEE Transactions on Visualization and
Computer Graphics 19, 12 (2013), 2346–2355. DOI:
http://dx.doi.org/10.1109/tvcg.2013.163
27. Petra Isenberg, Niklas Elmqvist, Jean Scholtz, Daniel
Cernea, Kwan-Liu Ma, and Hans Hagen. 2011.
Collaborative visualization: Definition, challenges, and
research agenda. Information Visualization 10, 4 (2011),
310–326. DOI:
http://dx.doi.org/10.1177/1473871611412817
28. Mikkel R. Jakobsen and Kasper Hornbæk. 2013.
Interactive Visualizations on Large and Small Displays:
The Interrelation of Display Size, Information Space, and
Scale. IEEE Transactions on Visualization and Computer
Graphics 19, 12 (2013), 2336–2345. DOI:
http://dx.doi.org/10.1109/tvcg.2013.170
29. Mikkel R. Jakobsen and Kasper Hornbæk. 2014. Up
close and personal. ACM Transactions on
Computer-Human Interaction 21, 2 (2014), 1–34. DOI:
http://dx.doi.org/10.1145/2576099
30. Mikkel R. Jakobsen and Kasper Hornbæk. 2015. Is
Moving Improving?: Some Effects of Locomotion in
Wall-Display Interaction. In Proceedings of the ACM
Conference on Human Factors in Computing Systems.
ACM, 4169–4178. DOI:
http://dx.doi.org/10.1145/2702123.2702312
31. Keiko Katsuragawa, Krzysztof Pietroszek, James R.
Wallace, and Edward Lank. 2016. Watchpoint: Freehand
Pointing with a Smartwatch in a Ubiquitous Display
Environment. In Proceedings of the ACM Conference on
Advanced Visual Interfaces. ACM, 128–135. DOI:
http://dx.doi.org/10.1145/2909132.2909263
32. Jungsoo Kim, Jiasheng He, Kent Lyons, and Thad Starner.
2007. The Gesture Watch: A Wireless Contact-free
Gesture based Wrist Interface. In Proceedings of the
IEEE Symposium on Wearable Computers. IEEE, 15–22.
DOI:http://dx.doi.org/10.1109/iswc.2007.4373770
33. Ulrike Kister, Konstantin Klamka, Christian Tominski,
and Raimund Dachselt. 2017. GraSp: Combining
Spatially-aware Mobile Devices and a Display Wall for
Graph Visualization and Interaction. Computer Graphics
Forum 36, 3 (2017). DOI:
http://dx.doi.org/10.1111/cgf.13206
34. Ulrike Kister, Patrick Reipschläger, Fabrice Matulic, and
Raimund Dachselt. 2015. BodyLenses: Embodied Magic
Lenses and Personal Territories for Wall Displays. In
Proceedings of the 2015 ACM International Conference
on Interactive Tabletops & Surfaces. ACM, 117–126.
DOI:http://dx.doi.org/10.1145/2817721.2817726
35. Ricardo Langner, Tom Horak, and Raimund Dachselt.
2017. VisTiles: Coordinating and Combining Co-located
Mobile Devices for Visual Data Exploration. IEEE
Transactions on Visualization and Computer Graphics 24,
1 (2017). DOI:
http://dx.doi.org/10.1109/tvcg.2017.2744019
36. Ricardo Langner, Ulrich von Zadow, Tom Horak, Annett
Mitschick, and Raimund Dachselt. 2016. Content
Sharing Between Spatially-Aware Mobile Phones and
Large Vertical Displays Supporting Collaborative Work.
Springer International Publishing, 75–96. DOI:
http://dx.doi.org/10.1007/978-3-319-45853-3_5
37. Gierad Laput, Robert Xiao, Xiang ’Anthony’ Chen,
Scott E. Hudson, and Chris Harrison. 2014. Skin Buttons:
Cheap, Small, Low-powered and Clickable Fixed-icon
Laser Projectors. In Proceedings of the ACM Symposium on User Interface Software and Technology. ACM, 389–394. DOI:
http://dx.doi.org/10.1145/2642918.2647356
38. Can Liu, Olivier Chapuis, Michel Beaudouin-Lafon, and
Eric Lecolinet. 2017. CoReach: Cooperative Gestures for
Data Manipulation on Wall-sized Displays. In
Proceedings of the ACM Conference on Human Factors
in Computing Systems. ACM, 6730–6741. DOI:
http://dx.doi.org/10.1145/3025453.3025594
39. Can Liu, Olivier Chapuis, Michel Beaudouin-Lafon, Eric
Lecolinet, and Wendy E. Mackay. 2014. Effects of
display size and navigation type on a classification task.
In Proceedings of the ACM Conference on Human
Factors in Computing Systems. ACM, 4147–4156. DOI:
http://dx.doi.org/10.1145/2556288.2557020
40. Narges Mahyar and Melanie Tory. 2014. Supporting
communication and coordination in collaborative
sensemaking. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 1633–1642. DOI:
http://dx.doi.org/10.1109/TVCG.2014.2346573
41. Nicolai Marquardt and Saul Greenberg. 2012. Informing
the Design of Proxemic Interactions. IEEE Pervasive
Computing 11, 2 (2012), 14–23. DOI:
http://dx.doi.org/10.1109/mprv.2012.15
42. Will McGrath, Brian Bowman, David McCallum,
Juan David Hincapié-Ramos, Niklas Elmqvist, and
Pourang Irani. 2012. Branch-explore-merge: Facilitating
Real-time Revision Control in Collaborative Visual
Exploration. In Proceedings of the ACM Conference on
Interactive Tabletops and Surfaces. ACM, 235–244.
DOI:
http://dx.doi.org/10.1145/2396636.2396673
43. Sundar Murugappan, Vinayak, Niklas Elmqvist, and
Karthik Ramani. 2012. Extended multitouch: recovering
touch posture and differentiating users using a depth
camera. In Proceedings of the ACM Symposium on User
Interface Software and Technology. ACM, 487–496.
DOI:
http://dx.doi.org/10.1145/2380116.2380177
44. Jerome Pasquero, Scott J. Stobbe, and Noel Stonehouse.
2011. A Haptic Wristwatch for Eyes-free Interactions. In
Proceedings of the ACM Conference on Human Factors
in Computing Systems. ACM, 3257–3266. DOI:
http://dx.doi.org/10.1145/1978942.1979425
45. Arnaud Prouzeau, Anastasia Bezerianos, and Olivier
Chapuis. 2017. Evaluating Multi-User Selection for
Exploring Graph Topology on Wall-Displays. IEEE
Transactions on Visualization and Computer Graphics 23,
8 (2017), 1936–1951. DOI:
http://dx.doi.org/10.1109/tvcg.2016.2592906
46. Khairi Reda, Andrew E. Johnson, Michael E. Papka, and
Jason Leigh. 2015. Effects of Display Size and Resolution
on User Behavior and Insight Acquisition in Visual
Exploration. In Proceedings of the ACM Conference on
Human Factors in Computing Systems. ACM, 2759–2768.
DOI:http://dx.doi.org/10.1145/2702123.2702406
47. Jun Rekimoto. 2001. GestureWrist and GesturePad:
unobtrusive wearable interaction devices. In Proceedings
of the IEEE International Symposium on Wearable
Computers. IEEE, 21–27. DOI:
http://dx.doi.org/10.1109/iswc.2001.962092
48. Jonathan C. Roberts. 2007. State of the Art: Coordinated
& Multiple Views in Exploratory Visualization. In
Proceedings of the International Conference on
Coordinated and Multiple Views in Exploratory
Visualization. IEEE, 61–71. DOI:
http://dx.doi.org/10.1109/cmv.2007.20
49. Ramik Sadana and John Stasko. 2016. Expanding
Selection for Information Visualization Systems on
Tablet Devices. In Proceedings of the ACM Conference
on Interactive Surfaces and Spaces. ACM, 149–158.
DOI:
http://dx.doi.org/10.1145/2992154.2992157
50. Ben Shneiderman. 1996. The Eyes Have It: A Task by
Data Type Taxonomy for Information Visualizations. In
Proceedings of the IEEE Symposium on Visual
Languages. IEEE, 336–343. DOI:
http://dx.doi.org/10.1016/b978-155860915-0/50046-9
51. Martin Spindler, Christian Tominski, Heidrun Schumann,
and Raimund Dachselt. 2010. Tangible views for
information visualization. In Proceedings of the ACM
Conference on Interactive Tabletops and Surfaces.
157–166. DOI:
http://dx.doi.org/10.1145/1936652.1936684
52. Matthew Tobiasz, Petra Isenberg, and Sheelagh
Carpendale. 2009. Lark: Coordinating Co-located
Collaboration with Information Visualization. IEEE
Transactions on Visualization and Computer Graphics 15,
6 (2009), 1065–1072. DOI:
http://dx.doi.org/10.1109/tvcg.2009.162
53. Ulrich von Zadow, Wolfgang Büschel, Ricardo Langner,
and Raimund Dachselt. 2014. SleeD: Using a Sleeve
Display to Interact with Touch-sensitive Display Walls. In
Proceedings of the ACM Conference on Interactive
Tabletops and Surfaces. ACM, 129–138. DOI:
http://dx.doi.org/10.1145/2669485.2669507
54. Ulrich von Zadow, Patrick Reipschläger, Daniel Bösel,
Anita Sellent, and Raimund Dachselt. 2016. YouTouch!
Low-Cost User Identification at an Interactive Display
Wall. In Proceedings of the ACM Conference on
Advanced Visual Interfaces. ACM, 144–151. DOI:
http://dx.doi.org/10.1145/2909132.2909258
55. Mark Weiser. 1991. The computer for the 21st Century.
Scientific American 265, 3 (1991), 94–104.
56. Dirk Wenig, Johannes Schöning, Alex Olwal, Mathias
Oben, and Rainer Malaka. 2017. WatchThru: Expanding
Smartwatch Displays with Mid-air Visuals and
Wrist-worn Augmented Reality. In Proceedings of the
ACM Conference on Human Factors in Computing
Systems. ACM, 716–721. DOI:
http://dx.doi.org/10.1145/3025453.3025852
57. Gerard Wilkinson, Ahmed Kharrufa, Jonathan Hook,
Bradley Pursglove, Gavin Wood, Hendrik Haeuser,
Nils Y. Hammerla, Steve Hodges, and Patrick Olivier.
2016. Expressy: Using a Wrist-worn Inertial
Measurement Unit to Add Expressiveness to Touch-based
Interactions. In Proceedings of the ACM Conference on
Human Factors in Computing Systems. ACM, 2832–2844.
DOI:http://dx.doi.org/10.1145/2858036.2858223
58. Ji Soo Yi, Youn ah Kang, and John Stasko. 2007. Toward
a Deeper Understanding of the Role of Interaction in
Information Visualization. IEEE Transactions on
Visualization and Computer Graphics 13, 6 (2007),
1224–1231. DOI:
http://dx.doi.org/10.1109/tvcg.2007.70515
59. Xin Yi, Chun Yu, Weijie Xu, Xiaojun Bi, and Yuanchun
Shi. 2017. COMPASS: Rotational Keyboard on
Non-Touch Smartwatches. In Proceedings of the ACM
Conference on Human Factors in Computing Systems.
ACM, 705–715. DOI:
http://dx.doi.org/10.1145/3025453.3025454