Figure 3
(a) Cross-device interaction can happen with direct touch, in close proximity, or from intermediate or far distance; (b) the scope of user interactions is limited to the views in focus. 

Source publication
Conference Paper
Full-text available
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics: display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for bo...

Contexts in source publication

Context 1
... general, the cross-device interaction can happen in three zones: either at the large display using direct touch, in close proximity to the display but without touching it, or from intermediate and even far distance (Figure 3a). We expect analysts to work directly at the large display most of the time, thus the touch-based connection is primarily used. As the user's intended interaction goal is expressed in the touch position, i.e., it defines which visualization (part) the analyst is focusing on, the smartwatch, acting as a mediator, should incorporate this knowledge to offer or apply functionalities. In contrast, remote interaction enables analysts to work without touching the display, possibly even from an overview distance or while sitting. As the contextual information of the touch on the large display is missing, the user has to perform an additional step to select the view of interest (e.g., by pointing). As related work on physical navigation illustrates [3,7,29,34], working from an overview distance, in close proximity, or directly at the large display is not an either-or decision. There is always an interplay between the three: analysts interact in front of the large display to focus on details, step back to orient themselves, and again move closer to continue exploration. Consequently, the cross-device interaction should bridge these zones. For instance, an analyst may first work near the large display and perform interactions incorporating the watch (e.g., store data selections). She then steps back to continue exploration from a more convenient position to analyze other views on the large display based on the stored data. This workflow could be further enhanced using proxemic interaction ...
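To make the zone model above concrete, the following minimal sketch (TypeScript, not part of the original paper) shows one way a cross-device framework might classify an analyst's position into the three zones; the AnalystState fields, the 1.5 m proximity threshold, and the helper names are illustrative assumptions rather than the authors' implementation.

// Hypothetical sketch: classify an analyst's position into the three interaction zones.
type Zone = "touch" | "proximity" | "distance";

interface AnalystState {
  distanceToDisplay: number; // meters, e.g., from a depth-camera tracker (assumed sensing source)
  isTouching: boolean;       // reported by the display's touch frame (assumed)
}

function classifyZone(state: AnalystState): Zone {
  if (state.isTouching) return "touch";
  if (state.distanceToDisplay < 1.5) return "proximity"; // threshold is illustrative, not from the paper
  return "distance";
}

// In the touch zone the touched view supplies the interaction context; from the other
// two zones the analyst must first select a view of interest, e.g., by pointing.
function requiresExplicitViewSelection(zone: Zone): boolean {
  return zone !== "touch";
}

Such a classification would let the framework bridge the zones: as the analyst physically moves, the mediating smartwatch can switch between touch-contextualized and explicitly selected views without interrupting the ongoing analysis.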
Context 2
... common coordinated multiple view (CMV) applications [48], changes in one visualization (e.g., selection, filter, encoding) have global impact, i.e., they are applied to all displayed views. As discussed in our motivating scenario, this behavior may lead to interference between analysts working in parallel [42]. To avoid this issue, the effects of an interaction should by default only be applied to the visualization(s) currently in focus of the analyst (Figure 3b). Further, we also propose to constrain the scope of an interaction mediated by the smartwatch to a short time period. More specifically, on touching a visualization to apply a selected adaptation from the smartwatch, the resulting change is only visible for a few seconds or as long as the touch interaction lasts. At the same time, ...

Figure 4. Our framework addresses a wide range of tasks, here illustrated by mapping two established task classifications [13,58] onto interaction sequences that are enabled by our framework (examples in italics). For some tasks, certain aspects are also still supported by the large display itself, e.g., zooming and panning from abstract/elaborate and explore [58]. Regarding the typology by Brehmer and Munzner [13], we focus on their how part. From this part, a few tasks (encode, annotate, import, derive) are not considered as they go beyond the scope of this paper. CA: Connective ...
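The per-view, time-limited scoping proposed in Context 2 could be realized along the lines of the following sketch (TypeScript). The View interface, the applyScopedAdaptation helper, and the three-second preview duration are hypothetical illustrations of the idea, not the paper's implementation.

// Hypothetical sketch: apply a smartwatch-mediated adaptation only to the touched view,
// and revert it after a short period or when the touch ends, to avoid interfering with
// other analysts' views.
interface View {
  applyAdaptation(adaptation: string): void;   // e.g., a stored selection or filter from the watch
  revertAdaptation(adaptation: string): void;
}

const PREVIEW_DURATION_MS = 3000; // "a few seconds"; the exact value is an assumption

function applyScopedAdaptation(
  view: View,
  adaptation: string,
  touchReleased: Promise<void>
): void {
  view.applyAdaptation(adaptation);

  // Revert when the touch ends or after the preview duration, whichever comes first.
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, PREVIEW_DURATION_MS));
  Promise.race([touchReleased, timeout]).then(() => view.revertAdaptation(adaptation));
}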
Context 3
... general, the cross-device interaction can happen in three zones: either at the large display using direct touch, in close proximity to the display but without touching it, or from intermediate and even far distance (Figure 3a). We expect analysts to work directly at the large display most of the time, thus the touch-based connection is primarily used. ...
Context 4
... discussed in our motivating scenario, this behavior may lead to interference between analysts working in parallel [42]. To avoid this issue, the effects of an interaction should by default only be applied to the visualization(s) currently in focus of the analyst (Figure 3b). Further, we also propose to constrain the scope of an interaction mediated by the smartwatch to a short time period. ...

Similar publications

Article
Full-text available
This article seeks to understand how an activist and political initiative operates in the visual work of David Wojnarowicz (1954-1992), identified as one of the visual artists involved in the struggle for the recognition of HIV and for assistance to the HIV-positive population, based on the relationship of this work with the materiality of space and the search for a...

Citations

... The idea of making sense of visual data on a large display by commanding a smartwatch exploits the dimensions of wearable devices, thereby enabling microinteractions [12] and proxemic interactions [13]. To our knowledge, the study of using the smartwatch as an input channel for visual analytics interactions is still in its infancy, though see the work of Horak et al. [14]. Unlike interactions with the smartwatch alone and cross-device interactions involving the watch, a smartwatch used for visual analytics interactions should be a miniature version of the large screen displaying the visualizations. ...
... The previous study closest to our research was carried out by Horak et al. [14], as mentioned above, which pioneered a new smartwatch-based interaction system for data exploration on the large display. The system is built on two basic design concepts: item sets and connective areas. ...
Article
Full-text available
The process of visual analytics is composed of visual data exploration tasks supporting analytical reasoning. When performing analytical tasks with interactive visual interfaces displayed on a large screen, physical discomfort such as the gorilla-arm effect can easily occur. To enrich the input space for analysts, there has been some research on cross-device analysis combining mobile devices with large displays. Although the effectiveness of expert-level designs has been demonstrated, little is known about ordinary users' preferences for using a mobile device to issue commands, especially small ones like smartwatches. We implemented a three-stage study to investigate and validate these preferences. A total of 181 distinct gestural inputs and 52 interface designs for 21 tasks were collected from analysts. Expert designers selected the best practices from these user-defined interactions. A performance test was subsequently developed to assess the selected interactions in terms of quantitative statistics and subjective ratings. Our work provides empirical support and proposes a set of design guidelines for optimizing watch-based interactions aimed at remote control of visual data on large displays. Through this research, we hope to advance the development of smartwatches as visual analytics tools and provide visual analysts with a better usage experience.
... Finally, interaction for collaborative data analysis is a hot research topic. Large high-resolution displays combined with small mobile displays appear to be particularly suited for collaboration (see Horak et al., 2018). Large displays naturally lend themselves to interactively exploring large time-oriented data. ...
Chapter
Full-text available
This chapter briefly summarizes the content of the book and describes practical concerns of visualizing time-oriented data in real-world data settings. Visual analytics is briefly outlined as a modern approach that combines visualization, interaction, and computational analysis more tightly to facilitate data analysis activities better. Finally, research opportunities for future work are discussed.
... While the display size and resolution facilitate sensemaking [25] and collaborative work [26], the extreme viewing angles up close can impact perception accuracy for certain data encodings [27], and users may have difficulty reaching some display areas [28]. Consequently, researchers have investigated multiple ways of interacting with these displays: direct manipulation through touch [28] or pen [29], gaze [30], using mobile devices, such as smartwatches [31], tablets [32], and augmented reality displays [33], through mid-air gestures [34] and body movements [3]. Other researchers, like Baudisch et al. [35], have proposed to apply focus and context techniques to visualize information at different resolution levels without the need for additional actions. ...
... Other researchers, like Baudisch et al. [35], have proposed to apply focus and context techniques to visualize information at different resolution levels without the need for additional actions. This diversity is reasonable as users often physically move in front of the screen [36]: they tend to stand far from it to get an overview and move closer to access the details [31]. As such, supporting interaction modalities that allow both close-up and distant interaction is crucial. ...
Article
Full-text available
We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh/?view_only=34bfd907d2ed43bbbe37027fdf46a3fa.
... It is particularly problematic when collaborating across the reality-virtuality continuum (RVC) [15], such as across desktop and virtual reality (VR) head-mounted devices. At the same time, engaging multiple device platforms in collaboration is often beneficial [27], especially when combining traditional 2D views with immersive 3D views [28,59]. With the rise of immersive analytics [38], virtual and mixed reality (MR) has seen increasing use as a platform for visualization and visual analytics, including for graph visualization [11], multidimensional data [10], and even economic analysis [4]. ...
... Cross-platform collaborative visualization utilizes several devices across the reality-virtuality continuum (RVC) [40] to conduct visual analytics synchronously or asynchronously with multiple users. Examples of existing literature exploring this area include work utilizing a mixture of mobile and large-scale displays [27], AR and tabletop displays [49], and VR and desktop displays [28]. Fröhler et al. [15] survey and categorize these works as cross-virtuality analytics (XVA)-or more generally cross-virtuality (XV) [2]. ...
... Core to the philosophy of XVA is the opportunity to provide the optimal techniques, encoding, interactions, and view of data using tailored visual metaphors with various devices depending on the task at hand [15,43]. In particular, combining different devices in XVA has been shown to enable complementary use, where devices mutually scaffold each other's weaknesses [27]. Fröhler et al. [15] further categorize XVA works into four categories: spatially agnostic (simultaneous use of devices), augmented (displays extended and spatially orientated), networked (collaborative), and transient (switching between realities). ...
Preprint
Full-text available
Many collaborative data analysis situations benefit from collaborators utilizing different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user’s 3D workspace. To address this, we propose the “eyes-and-shoes” principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results of this study indicate that the more visual metaphors and views of participants diverge, the greater the level of group awareness is needed.
... More output devices lead to the benefits in terms of display space and high-resolution described earlier. LHRDs can also be used with additional smaller displays to complement the large display space [KKTD17,HBED18]. In smart rooms, for example, people can bring their own devices, which are then integrated seamlessly into the environment [RTNS15]. ...
... LHRDs have been combined with other devices, e.g., stereoscopic glasses to visualize 3D objects, e.g., ball-and-stick molecular models [RFK * 13] (see Figure 6 right) or terrain models [JLMVK06,CNF13]. Handheld devices are also used as secondary displays to complement LHRDs, e.g., to visualize the details of a selected subset of data whereas the wall display shows the context [SvZP * 16, HBED18]. Yet having to switch attention between the wall display and a handheld personal display may hamper user performance [TC03]. ...
... Yet having to switch attention between the wall display and a handheld personal display may hamper user performance [TC03]. More research is needed to elicit workflows and tasks where a secondary personal display is helpful [HBED18]. ...
Preprint
Full-text available
In the past few years, large high-resolution displays (LHRDs) have attracted considerable attention from researchers, industries, and application areas that increasingly rely on data-driven decision-making. An up-to-date survey on the use of LHRDs for interactive data visualization seems warranted to summarize how new solutions meet the characteristics and requirements of LHRDs and take advantage of their unique benefits. In this survey, we start by defining LHRDs and outlining the consequence of LHRD environments on interactive visualizations in terms of more pixels, space, users, and devices. Then, we review related literature along the four axes of visualization, interaction, evaluation studies, and applications. With these four axes, our survey provides a unique perspective and covers a broad range of aspects being relevant when developing interactive visual data analysis solutions for LHRDs. We conclude this survey by reflecting on a number of opportunities for future research to help the community take up the still open challenges of interactive visualization on LHRDs.
... Multimodal interactions with visualizations have been actively explored with a focus on using touch and pen [WLJ * 12], body movement in front of a large display [AEYN11], gestures [BAEI16], and coordinating between large displays and smartwatches [HBED18]. However, none of these works considered natural language as an input modality. ...
Article
Full-text available
Information visualizations such as bar charts and line charts are very common for analyzing data and discovering critical insights. Often people analyze charts to answer questions that they have in mind. Answering such questions can be challenging as they often require a significant amount of perceptual and cognitive effort. Chart Question Answering (CQA) systems typically take a chart and a natural language question as input and automatically generate the answer to facilitate visual data analysis. Over the last few years, there has been a growing body of literature on the task of CQA. In this survey, we systematically review the current state‐of‐the‐art research focusing on the problem of chart question answering. We provide a taxonomy by identifying several important dimensions of the problem domain including possible inputs and outputs of the task and discuss the advantages and limitations of proposed solutions. We then summarize various evaluation techniques used in the surveyed papers. Finally, we outline the open challenges and future research opportunities related to chart question answering.
... Study [30] adopted a Fitbit bracelet to collect and visualise data doubles. Study [26] combined mobile applications and bracelets to enhance and explore users' experiences in collecting and visualising their data, and study [31] introduced a smartwatch as a controller for visualising data on a large screen. Study [32] presented various types of wearable devices such as a Samsung smartwatch, a Fitbit HR, a Spire device, and a Moodmetric ring. ...
... For example, a spatial map is utilised to show users' photos on a timeline and enable them to remember facts that happened on a specific day [3]. Only a limited number of tasks have been attempted for interacting with personal information, such as sharing information [16], [14], adjusting visualisations [14], filtering [1], [31], elaborating [14], selecting [14], and configuring data [14]. Furthermore, identifying and comparing trends [29] was adopted to compare the performance of two visualisation types on mobile devices. ...
Preprint
Full-text available
Personal data cover multiple aspects of our daily life and activities, including health, finance, social, Internet, etc. Personal data visualisations aim to improve the user experience when exploring these large amounts of personal data and potentially provide insights to assist individuals in their decision making and in achieving goals. People with different backgrounds, genders, and ages usually need to access their data on their mobile devices. Although there are many personal tracking apps, the user experience when using these apps and visualisations has not yet been evaluated. There are publications on personal data visualisation in the literature, but no systematic literature review has investigated the gaps in this area to assist in developing new personal data visualisation techniques focusing on user experience. In this systematic literature review, we considered studies published between 2010 and 2020 in three online databases. We screened 195 studies and identified 29 papers that met our inclusion criteria. Our key findings are that various types of personal data and users have been addressed well in the identified papers, including health, sport, diet, driving habits, lifelogging, productivity, etc. The user types range from naive users to expert users and developers, depending on the experiment's target. However, mobile device capabilities and limitations regarding data visualisation tasks have not been well addressed. There are no studies on the best practices of personal data visualisation on mobile devices, assessment frameworks for data visualisation, or design frameworks for personal data visualisations.
... These characteristics make different types of devices better suited to certain collections of tasks and interactions, and devices are often combined as an ecosystem for complex tasks and knowledge work [15]. Smaller devices offer better mobility, portability, and privacy [29], are often used at a personal scale, and can mediate interactions with large displays in multi-surface environments; larger displays, by contrast, allow simultaneous access and can present more information, often acting as a shared sensemaking space [5,12,33].
Article
Affinity diagramming is widely applied to analyze qualitative data such as interview transcripts. It involves multiple analytic processes and is often performed collaboratively. Drawing on interviews with three practitioners and upon our own experience, we show how practitioners combine multiple analytic processes and adopt different artifacts to help them analyze their data. Current tools, however, fail to adequately support mixing analytic processes, devices, and collaboration styles. We present a vision and prototype ADQDA, a cross-device, collaborative affinity diagramming tool for qualitative data analysis, implemented using distributed web technologies. We show how this approach enables analysts to appropriate available pertinent digital devices as they fluidly migrate between analytic phases or adopt different methods and representations, all while preserving consistent analysis artifacts. We validate this approach through a set of application scenarios that explore how it enables new ways of analyzing qualitative data that better align with identified analytic practices.