Brenden Keyes's research while affiliated with MITRE and other places

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

ResearchGate generated this page automatically to record the author's body of work, in pursuit of our goal of maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (13)


Fig. 2. Screenshot and guide to gestures (left) allowed the user (right) to activate interface features and autonomy modes.
Improving Human-Robot Interaction through Interface Evolution
  • Chapter
  • Full-text available

February 2010 · 266 Reads · 42 Citations

Brenden Keyes · Mark Micire · [...] · Holly A. Yanco

Through our iterative design and testing process, we succeeded in providing a useful surroundings awareness panel that displays accurate data to the user in an easy-to-interpret manner. In the testing of Version 3, the current distance panel yielded faster run times with fewer collisions than the previous two versions. Our results support the usefulness of the guidelines we followed in creating the interface. For example, we fused sensor information to lower the cognitive load on the user: displaying the laser and sonar values in the same distance panel gave users a single interface through which to access distance information. Through an iterative process, we gradually improved the distance panel. The panel rotates when the operator pans the camera, allowing the user to line up an obstacle seen in the video with its representation in the distance panel, which helps reduce cognitive load. The distance panel also functions as a camera pan indicator. To provide redundant cueing in a location where operators naturally focus much of the time, crosshairs are overlaid on the video screen to show the current pan/tilt position of the main camera.
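The fused, rotating distance panel described in this abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not code from the paper: the sector layout, function names, and units are hypothetical. Laser and sonar readings are fused per bearing by keeping the nearer reading, and the panel is shifted so the sector under the camera's current pan angle is drawn front and center:

```python
def fuse_ranges(laser, sonar):
    """Fuse laser and sonar range readings (metres, one value per bearing
    sector) into a single distance panel by keeping the nearer reading."""
    return [min(l, s) for l, s in zip(laser, sonar)]

def rotate_panel(panel, pan_deg):
    """Shift the fused panel so the sector under the camera's current pan
    angle appears first, keeping video and panel obstacles lined up."""
    n = len(panel)
    shift = round(pan_deg / (360 / n)) % n
    return panel[shift:] + panel[:shift]

# Four 90-degree sectors around the robot (a simplifying assumption).
fused = fuse_ranges([1.2, 3.0, 0.8, 2.5], [1.5, 2.8, 0.9, 2.4])
# fused -> [1.2, 2.8, 0.8, 2.4]
front = rotate_panel(fused, 90)  # camera panned 90 degrees right
# front -> [2.8, 0.8, 2.4, 1.2]
```

Taking the minimum of the two sensors is one plausible fusion rule (conservative: the nearest obstacle wins); the paper does not specify its exact combination scheme.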


Figure 1. Interface overview with description of interaction gestures.
Multi-touch interaction for robot control

February 2009 · 452 Reads · 38 Citations

Recent developments in multi-touch technologies have exposed fertile ground for research in enriched human-robot interaction. Although multi-touch technologies have been used for virtual 3D applications, to the authors' knowledge, ours is the first study to explore the use of a multi-touch table with a physical robot agent. This baseline study explores the control of a single agent with a multi-touch table using an adapted, previously studied, joystick-based interface. We performed a detailed analysis of users' interaction styles with two complex functions of the multi-touch interface and isolated mismatches between user expectations and interaction functionality.


Evolving interface design for robot search tasks

August 2007 · 51 Reads · 37 Citations

Journal of Field Robotics

Holly A. Yanco · Brenden Keyes · [...]
This paper describes two steps in the evolution of human-robot interaction designs developed by the University of Massachusetts Lowell (UML) and the Idaho National Laboratory (INL) to support urban search and rescue tasks. We conducted usability tests to compare the two interfaces, one of which emphasized three-dimensional mapping while the other design emphasized the video feed. We found that participants desired a combination of the interface design approaches. As a result, we changed the UML system to augment its heavy emphasis on video with a map view of the area immediately around the robot. We tested the changes in a follow-on user study and the results from that experiment suggest that performance, as measured by the number of collisions with objects in the environment and time on task, is better with the new interaction techniques. Throughout the paper, we describe how we applied human-computer interaction principles and techniques to benefit the evolution of the human-robot interaction designs. While the design work is situated in the urban search and rescue domain, we feel the results can be generalized to domains that involve other search or monitoring tasks using remotely located robots.


Figure 1. System A’s Interface 
Figure 2. System B’s Interface 
LASSOing HRI: Analyzing situation awareness in map-centric and video-centric interfaces

March 2007 · 252 Reads · 62 Citations

Good situation awareness (SA) is especially necessary when robots and their operators are not collocated, such as in urban search and rescue (USAR). This paper compares how SA is attained in two systems: one that has an emphasis on video and another that has an emphasis on a three-dimensional map. We performed a within-subjects study with eight USAR domain experts. To analyze the utterances made by the participants, we developed an SA analysis technique, called LASSO, which includes five awareness categories: location, activities, surroundings, status, and overall mission. Using our analysis technique, we show that a map-centric interface is more effective in providing good location and status awareness while a video-centric interface is more effective in providing good surroundings and activities awareness.


Fig. 1. (a) iRobot Magellan Pro with forward and overhead cameras. (b) Operator station.  
Fig. 2. (a) Forward camera and (b) overhead camera view of the same scene.  
Fig. 3. (a) iRobot ATRV-JR with front and rear cameras. (b) Operator station.  
Fig. 4. (a) The full interface designed for the USAR system. (b) The simplified interface with a single camera view. The interface looked like this for both the single camera and switchable two camera experiments. (c) The simplified interface with two camera views. The camera displayed in the larger window can be switched with the camera displayed in the smaller window.
Fig. 5. (a) Map provided to operators. (b) Overview of maze used for testing.  
Camera Placement and Multi-Camera Fusion for Remote Robot Operation

January 2006 · 612 Reads · 44 Citations

This paper studies the impact of camera location and multi-camera fusion with real robots in an urban search and rescue task through two sets of experiments. In the first, we compared a camera with an overhead view to a traditional forward-looking camera. In the second, we compared the use of a single forward-looking camera to the use of two cameras, one on the front of the robot and one on the rear. Our experiments show that an overhead view that includes the robot chassis significantly increases the situation awareness of the operator, as measured by the number of collisions during a maze traversal. We also found that having two cameras, one forward-facing and one rear-facing, results in improved situation awareness. The addition of the rear-facing camera also eliminates many of the collisions that typically occur at the back of the robot when using a single camera.


Figure 1: The INL robot: an iRobot ATRV-Mini
Figure 3: The UML robot: an iRobot ATRV-JR 
Figure 4: The UML USAR interface 
Analysis of human-robot interaction for urban search and rescue

January 2006 · 417 Reads · 53 Citations

This paper describes two robot systems designed for urban search and rescue (USAR). Usability tests were conducted to compare the two interfaces developed for human-robot interaction (HRI) in this domain, one of which emphasized three-dimensional mapping while the other emphasized the video feed. We found that participants desired a combination of the interface design approaches; however, we also observed that the participants' preferences did not always correlate with improved performance. The paper concludes with recommendations from participants for a new interface to be used for urban search and rescue.


Improving Human-Robot Interaction for Remote Robot Operation

January 2005 · 25 Reads · 9 Citations

We have been investigating ways to improve human-robot interaction (HRI) and situation awareness (SA) in urban search and rescue (USAR). In this task, a human directs the navigation of a remotely located robot using an interface that provides controls and status information. In this paper, we discuss the many facets of our work aimed at improving HRI for remote robot operation.



Figure 1: The UML USAR interface (left) is shown with a participant using the joystick configuration. This interface allows the user to operate the iRobot ATRV (right) through the NIST USAR course.
Performance of Multi-Touch Table Interaction and Physically Situated Robot Agents

29 Reads · 2 Citations

Recent developments in multi-touch technologies have exposed fertile ground for research in enriched human-robot interaction. Although the technologies have been used for virtual 3D applications, to the authors' knowledge, ours is the first study to explore the use of a multi-touch table with a physical robot agent. This baseline study explores the control of a single agent with a multi-touch table using an adapted, previously studied, joystick-based interface. The field test shows that multi-touch interaction does not in any way impair the performance of the user in a navigation and search task. In fact, our results show an increase in learnability over the original design using joystick and keyboard-based control mechanisms. Further, we analyzed users' interaction styles with the multi-touch interface in detail to isolate mismatches between user expectations and interaction functionality.


Good Wheel Hunting: UMass Lowell's Scavenger Hunt Robot System

14 Reads · 5 Citations

This paper describes the UMass Lowell entry into the Scavenger Hunt Competition at the AAAI-2005 Robot Competition and Exhibition. The scavenger hunt entry was built on top of the system we have been developing for urban search and rescue (USAR) research. The system includes new behaviors and behavior sequencing, vision algorithms and sensor processing algorithms, all used to locate the objects in the scavenger hunt.


Citations (11)


... None of the articles actually defined usability, even if their research clearly approaches the topic. However, the authors often used the words efficiency and effectiveness to describe usability goals (e.g., References [12,28,59,71]) as well as terms such as ease of use and ease to understand (e.g., References [12,25,34,44]). Interestingly, only Reference [41] addresses the term accessibility, which is a concept usually associated with usability (i.e., "accessibility as usability for people with the widest range of capabilities" [7]). Very few authors clearly described design principles or goals for their visualization schemes. ...

Reference:

Systematic Literature Review on Usability of Firewall Configuration
Developing Multidimensional Firewall Configuration Visualizations

... Position and orientation are helpful to provide navigation reference to the human operator, but bandwidth limitations and communication delays could introduce critical mismatches in the frames of reference (Goodrich and Schultz, 2007). Based on key elements mentioned above, telepresence interfaces are often broken down in two main categories: map-centric and video-centric (Keyes, 2007). In a map-centric interface, the map represents the most important input for the operator to supervise the navigation. ...

EVOLUTION OF A TELEPRESENCE ROBOT INTERFACE

... This increases the mental load of the operator because attention has to be switched between views [2]. In practice, operators mainly focus on the primary camera and often miss changes in auxiliary views [3]. 3) A wide-spread alternative is a mechanical pan-tilt sensor head that points a mounted camera in the desired viewing direction. ...

Camera Placement and Multi-Camera Fusion for Remote Robot Operation

... The authors conclude that the scavenger hunt has had a positive impact on the retention of students, which was 4.6% higher than the previous five-year historical average. Finally, the American Association for Artificial Intelligence (AAAI) has incorporated a scavenger hunt in their annual conference [2,6,19]. In this scavenger hunt, participants create a robot that must be able to identify and record the location of items on a predefined list. ...

Good Wheel Hunting: UMass Lowell's Scavenger Hunt Robot System
  • Citing Article

... Few of these robots exhibit superior usability in terms of movement and camera operations. In emergencies, it is important to provide the operators of teleoperated robots with a human-machine interface (HMI) that filters input information effectively [1][2][3][4][5]. However, there are few reports of teleoperated robots that reflect this knowledge. ...

Analysis of human-robot interaction for urban search and rescue

... These visualization techniques have also been incorporated into our GUI, for example on how to develop and maintain awareness through a minimap. Dury [11] presents an iterative design process for developing a user interface that enhances operator situation awareness and control in remote robot operations. She discusses the evolution through multiple versions of the interface, highlighting improvements in design and functionality based on user feedback and evaluation results. ...

Improving Human-Robot Interaction through Interface Evolution

... There are many ways to control a robot remotely [59][60][61], although a widespread interaction approach is to control it using a graphical user interface, GUI. In addition, teleoperation user interfaces should provide features that can increase Content courtesy of Springer Nature, terms of use apply. ...

Multi-touch interaction for robot control

... The majority of remote robot interaction research focuses on either how to communicate with a robot in a remote environment (Mackey et al., 2020;Sierra et al., 2019) or teleoperationhow to control the robot from a remote location or by using augmented reality (AR) or other simulated environments (Yanco et al., 2005;Qian et al., 2013;Nagy et al., 2015;Chiou et al., 2016;Chivarov et al., 2019;Xue et al., 2020). Some studies have looked at how to improve HRI outcomes by using the common ground principle to improve communication between humans and robots. ...

Improving Human-Robot Interaction for Remote Robot Operation
  • Citing Conference Paper
  • January 2005

... Some of them even fused the 2D camera image into the 3D map [7,17]. In-depth levels of visualization in such interfaces are to include the sensing, perception, prediction, planning and execution information for showing the internal state and goals of the intelligent agents [2,10,15]. ...

Evolving interface design for robot search tasks
  • Citing Article
  • August 2007

Journal of Field Robotics