Citations

... Many participants said that the visual cues made driving harder, as they had to concentrate on the navigation instructions, which may explain the frustration. The results are in line with a study [28] that reported faster response times and lower frustration in target acquisition and robot navigation tasks with tactile versus visual displays. ...
Conference Paper
Navigation systems usually require visual or auditory attention. Providing the user with haptic cues could potentially decrease cognitive demand in navigation. This study investigates the use of haptic eyeglasses in navigation. We conducted an experiment comparing directional haptic cues to visual cueing in a car navigation task. Participants (N=12) drove the Lane Change Test simulator with visual text cues, haptic cues given by the eyeglasses, and haptic cues given by a car seat. The participants were asked to confirm the recognition of a directional cue (left or right) by pressing an arrow on a tablet screen and by navigating to the corresponding lane. Reaction times and errors were measured. The participants filled in the NASA-TLX questionnaire and were also interviewed about the different cues. The results showed that participants reacted significantly faster to the haptic cues than to the visual text cues. Haptic cueing was also evaluated as less frustrating than visual cueing. The haptic eyeglasses fared slightly, although not significantly, better than the haptic seat in both subjective and objective evaluations. The paper suggests that haptic eyeglasses can decrease cognitive demand in navigation and have many possible applications.
... The visual icon was also readily seen, always in view, very easily comprehended (i.e., a moving map), and was also associated with better performance. In contrast, [42] and [48] required the operator to look at multiple screens. This forced operators to divide attention, searching for incoming information on additional screens. ...
Article
Many studies have investigated the effect of vibrotactile cues on task performance, but a wide range of cue and task types have made findings difficult to interpret without a quantitative synthesis. This report addresses that need by reviewing the effectiveness of vibrotactile cues in a meta-analysis of 45 studies. When added to a baseline task or to existing visual cues, vibrotactile cues enhanced task performance. When vibrotactile cues replaced visual cues, however, some effects were attenuated and others were moderated by cue information complexity. To summarize such moderating effects, vibrotactile alerts are an effective replacement for visual alerts, but vibrotactile direction cues are not effective when replacing visual direction cues. This meta-analysis of vibrotactile applications underscores the benefits of vibrotactile and multimodal displays, highlights conditions in which vibrotactile cues are particularly effective, and identifies areas in need of further investigation.
... V vs. T studies compared tactile cues to visual cues presenting the same information for the same purpose. For studies in this review, visual and tactile alerts or direction cues have been compared for a variety of tasks, including simple reaction time or visual search tasks (e.g., [18]), simulated driving and/or targeting tasks ([12]; [19]; [38]; [39]; [47]; [48]), complex cockpit, UAV, or command simulations ([5]; [20]; [22]; [28]; [32]; [40]; [41]), communicating information [4], localization from dense multi-tactor displays [10], land navigation in the field [15, 16], and orienting in virtual environments [2]. ...
Conference Paper
The literature is replete with studies that investigated the effectiveness of vibrotactile displays; however, individual studies in this area often yield discrepant findings that are difficult to synthesize. In this paper, we provide an overview of a comprehensive review of the literature and meta-analyses that organized studies to enable comparisons of visual and tactile presentations of information, yielding information useful to researchers and designers. Over six hundred studies were initially reviewed and coded along numerous criteria that determined appropriateness for meta-analysis categories. Comparisons were made between conditions that compared (a) adding a tactile cue to a baseline condition, (b) a visual cue with a multimodal (visual and tactile) presentation, and (c) a visual cue with a tactile cue. In addition, we further categorized these comparisons by type of information, which ranged from simple alerts and single direction cues to more complex tactile patterns representing spatial orientation or short communications.
Book
This book focuses on contemporary human factors issues within the design of soldier systems and describes how they are currently being investigated and addressed by the U.S. Army to enhance soldier performance and effectiveness. Designing Soldier Systems approaches human factors issues from three main perspectives. In the first section, Chapters 1-5 focus on complexity introduced by technology, its impact on human performance, and how issues are being addressed to reduce cognitive workload. In the second section, Chapters 6-10 concentrate on obstacles imposed by operational and environmental conditions on the battlefield and how they are being mitigated through the use of technology. The third section, Chapters 11-21, is dedicated to system design and evaluation, including the tools, techniques, and technologies used by researchers who design soldier systems to overcome human physical and cognitive performance limitations as well as the obstacles imposed by environmental and operational conditions that are encountered by soldiers. The book will appeal to an international multidisciplinary audience interested in the design and development of systems for military use, including defense contractors, program management offices, human factors engineers, human system integrators, system engineers, and computer scientists. Relevant programs of study include those in human factors, cognitive science, neuroscience, neuroergonomics, psychology, training and education, and engineering. © Pamela Savage-Knepshield, John Martin, John Lockett III, Laurel Allender and the contributors 2012. All rights reserved.
Article
The expected air traffic growth will introduce new tasks and automation technologies. As a result, the amount of mostly visual cockpit information will increase significantly, leading to more interruptions and risk of data overload. One promising means of addressing this challenge is through the use of multimodal interfaces which distribute information across sensory channels. To inform the design of such interfaces, a meta-analysis was conducted on the effectiveness and performance effects of auditory versus tactile interruption signals. From the 23 studies, ratio scores were computed to compare performance between the two modalities. The impact of 6 moderator variables was also examined. Overall, this analysis shows faster responses to tactile interruptions. However, more complex and very urgent interruption signals are better presented via the auditory modality. The findings add to our knowledge base in multimodal information processing and can inform modality choices in display design for complex data-rich domains.
Article
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
Technical Report
The U.S. Army Research Laboratory's (ARL's) Human Research and Engineering Directorate conducts a broad-based program of scientific research and technology development directed into two focus areas: (1) enhancing the effectiveness of Soldier performance and Soldier-machine interactions in mission contexts and (2) providing the U.S. Army and ARL with human factors integration leadership to ensure that Soldier performance requirements are adequately considered in technology development and system design. This document provides an overview of the following thrust areas: human robot interaction, human system integration, neuroscience, and Soldier performance.
Article
This report attempts to fuse Army needs, specific to threat detection, with available evidence from academic and military sources. The report provides viable routes for short-term enhancement of threat detection training and long-term goals of a research program dedicated to improving the Army's understanding of threat detection. This review found two major avenues of research, visual attention and visual memory, that would benefit research and understanding of attention and threat detection for current and future operational environments. Based on the review, at least three sequential skills are discussed as necessary for understanding and improving threat detection: attentiveness, recognition, and action. These skills orient and guide the Soldier in operational settings, from the basic perceptual process at the attentiveness stage up through higher-order reasoning at the action stage. Training formats are explored, including still images and high-fidelity simulations, all of which could be scaffolded upon a deliberate practice framework.