Figure 1
Telemedicine mobile robot with its intention displayed on the floor (A); manipulator with its intention displayed by a LASER projector (B).

Source publication
Article
Full-text available
This paper presents a novel exploration of how to enable a robot to express its intention so that humans and the robot can form a synergistic relationship. A systematic design approach is proposed to obtain a set of possible intentions for a given robot from three levels of intentions. A visual intention expression system approach is developed to visu...

Contexts in source publication

Context 1
... intention expression, the system is equipped with a Microvision SHOWWX+ 848 × 480 scanning LASER projector for visual expression. The projector is rigidly mounted on the frame and points at the floor to render the robot's intention (Figure 1A). ...
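A minimal sketch of how such a downward-facing projector could be driven, assuming a one-time calibration relating floor coordinates to projector pixels; the homography values and helper function below are illustrative, not the authors' implementation:

```python
import numpy as np

# Hypothetical calibration: a 3x3 homography H mapping floor points
# (x, y, 1) in the robot's frame (meters) to projector pixels (u, v, 1)
# on the 848 x 480 image. In practice H would be estimated once from a
# few known floor points and their projector-pixel correspondences.
H = np.array([[420.0,   0.0, 424.0],
              [  0.0, 420.0, 240.0],
              [  0.0,   0.0,   1.0]])

def floor_to_pixel(x, y):
    """Map a floor point in front of the robot to projector pixels."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example: place the tip of an intention arrow 0.8 m ahead of the robot.
u, v = floor_to_pixel(0.8, 0.0)
print(f"arrow tip at pixel ({u:.0f}, {v:.0f})")
```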
Context 2
... the robot moves, the long-term and mid-term intentions can change if the robot computes a new path and trajectory with new perception data. For our studies, we displayed the intention on the ground and aligned the direction of the displayed map with the real-world environment, which requires less mental computation from human coworkers than displaying it on a monitor (Figure 1A). ...
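One way to keep the projected map world-aligned as the robot moves is to re-express the planned path in the robot's frame on every display cycle. A hedged sketch, assuming world-frame waypoints and a pose estimate from localization (the function name is hypothetical):

```python
import numpy as np

def world_to_robot(points_w, robot_xy, robot_yaw):
    """Express world-frame waypoints in the robot's frame so the
    projected path stays aligned with the real environment.

    points_w  : (N, 2) waypoints in the world frame (meters)
    robot_xy  : robot position in the world frame
    robot_yaw : robot heading (radians)
    """
    c, s = np.cos(-robot_yaw), np.sin(-robot_yaw)
    R = np.array([[c, -s], [s, c]])  # rotation by -yaw
    return (points_w - robot_xy) @ R.T

# Example: re-express the plan each cycle so the displayed arrows
# keep pointing along the real-world route as the robot turns.
path_w = np.array([[1.0, 0.0], [2.0, 0.5], [3.0, 1.0]])
local = world_to_robot(path_w, robot_xy=np.array([0.5, 0.0]), robot_yaw=0.1)
```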
Context 3
... also have designed and implemented a visual expression system on our robotic manipulator, a 6-DOF Fanuc L200IC robotic arm. The same visual display element, a Microvision LASER projector, is attached to a stationary camera to form a camera-projector system (as shown in Figure 1B). The camera was originally set up for the manipulator as part of a hand-eye system to automate manipulation tasks. ...
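For a camera-projector pair over a (near-)planar workspace, a common way to relate the two devices is a homography estimated from matched points, e.g. projected markers observed by the camera. A sketch under that assumption, with placeholder correspondences rather than real calibration data:

```python
import numpy as np
import cv2

# Hypothetical correspondences between camera pixels and the projector
# pixels that illuminate the same spots on the work surface.
cam_pts  = np.float32([[100, 80], [700, 90], [690, 400], [120, 410]])
proj_pts = np.float32([[ 50, 40], [800, 50], [790, 430], [ 60, 440]])
H, _ = cv2.findHomography(cam_pts, proj_pts)

def cam_to_proj(u, v):
    """Map a camera detection to the projector pixel that lights it up."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```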
Context 4
... person supervising the task can easily judge whether the planned motion is likely to succeed by simply glancing at the display. Figure 1B shows one example of longer-term intention visual expression. ...
Context 5
... in the study interacted with the robot in real time. The robot was moving and projecting arrows on the ground in front of it (Figure 1A). Two types of tests were performed: congruent (the projected arrows reflected the robot's movements) and non-congruent (the projected arrows were random and did not reflect the robot's movement). ...
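A compact sketch of how the two conditions could be generated per trial; the function is illustrative, not the study's code:

```python
import random

def arrow_direction(planned_heading_deg, congruent):
    """Pick the heading of the projected arrow for one trial.

    In the congruent condition the arrow reflects the robot's actual
    planned motion; in the non-congruent condition it is random and
    unrelated to the movement, as in the study described above.
    """
    if congruent:
        return planned_heading_deg
    return random.uniform(0.0, 360.0)
```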

Similar publications

Conference Paper
Full-text available
This paper presents the results of the EDL (Entry, Descent, and Landing) and GNC (Guidance, Navigation, and Control) optimization of a network of three Small Mars Landers. This study was performed within the ESA MREP (Mars Robotic Exploration Preparation) Programme as a preliminary activity toward the ESA INSPIRE mission, which is planned to be launched in the 2022-2024 time frame.

Citations

... This approach allows the user to understand different movement directions for the actual control and the suggested DoF combinations. To streamline understanding of the control methods, one of our primary approaches is the use of arrows, a straightforward and common visualization technique to communicate motion intent [51,52,60]. ...
Preprint
Full-text available
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
... This allows the user to understand different movement directions for the actual control and the suggested DoF combinations. To simplify understanding, we use arrows, a straightforward and common visualization technique to communicate motion intent [9], [17], [18]. ...
Preprint
Full-text available
Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs) have been shown to decrease the necessary number of mode switches but have so far not been able to significantly reduce the perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of ADMC, allowing users to visually compare the current and the suggested mapping in real time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a Virtual Reality (VR) in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that, in combination with feed-forward, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between Continuous and Threshold reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance.
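A toy sketch contrasting the two feed-forward strategies described above, assuming DoF mappings are represented as vectors; the distance metric, threshold value, and function name are assumptions, not the authors' implementation:

```python
import numpy as np

def maybe_update_mapping(current, suggested, mode, threshold=0.5):
    """Illustrate the two recommendation strategies.

    current, suggested : DoF-mapping vectors
    mode               : "continuous" adopts the suggestion every cycle;
                         "threshold" switches only when the suggestion
                         differs enough from the current mapping.
    """
    if mode == "continuous":
        return suggested
    if np.linalg.norm(suggested - current) > threshold:
        return suggested
    return current
```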
... However, having this visualization does not significantly affect performance when executing tasks with the robot [23]. When visually representing robot motion intent, the most prominent solution is to show the robot's movement using arrows [24][25][26]. In addition, most of these approaches rely on Augmented Reality to overlay the visual representation on the user's real environment. ...
Article
Full-text available
Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
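The difference between classic mode switching and adaptive DoF mapping can be sketched as follows; the mode table, DoF names, and blending matrix are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

# Classic mode switching: a 2-DoF joystick controls two arm DoFs at a
# time, and the user cycles through modes manually.
MODES = [
    ("x", "y"),            # translate in the plane
    ("z", "roll"),         # lift / rotate
    ("pitch", "gripper"),  # orient / grasp
]

def classic_command(joystick, mode_index):
    """Map 2-DoF input to the two arm DoFs of the active mode."""
    a, b = MODES[mode_index]
    return {a: joystick[0], b: joystick[1]}

def adaptive_command(joystick, M, dof_names):
    """Adaptive control: each input axis drives a situation-dependent
    *combination* of DoFs, blended by the columns of M (n_dofs x 2)."""
    v = M @ np.asarray(joystick)
    return dict(zip(dof_names, v))
```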
... Some groups have investigated projecting a ray of light on the ground to express a robot's intended path to nearby people [39,40,57]. Other work has investigated using projectors to communicate the path that a mobile robot intends to take [12,38,39,56,71]. Projection mapping has been used to communicate a robot's internal states, such as detected objects in the robot's environment [20] and the object that the robot plans to grasp [20,68]. ...
... Projection mapping has been used to communicate a robot's internal states, such as detected objects in the robot's environment [20] and the object that the robot plans to grasp [20,68]. Projected visual cues have also been used to communicate information such as the trajectory of industrial robot arms [56], the robot's workspace [3], and goal- and task-related information [3]. Limitations of projectors include the need for flat projection surfaces and the fact that they can display only two-dimensional (2D) positions. ...
Article
Full-text available
Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods for providing information to a person about a robot's intent to move when working together in a shared workspace through signals provided by the robot. In this case, the workspace was the surface of a tabletop. Our study tested the effectiveness of three motion-based and three light-based intent signals as well as the overall level of comfort participants felt while working with the robot to sort colored blocks on the tabletop. Although not significant, our findings suggest that the light signal located closest to the workspace, an LED bracelet near the robot's end effector, was the most noticeable and least confusing to participants. These findings can be leveraged to support human–robot collaborations in shared spaces.
... Some experiments with LEDs on an industrial robot supporting eating by the physically challenged are also presented (Ikeura et al., 2000). Recently, several studies have focused on robots' attention and intention (Yamazaki et al., 2007; Mutlu et al., 2009; Kim et al., 2009; Jee et al., 2010; Park et al., 2010; Shindev et al., 2012; Hirota et al., 2012), but those mainly focus on the informational aspect of human-robot interaction. ...
Chapter
Full-text available
[http://www.iconceptpress.com/download/paper/12050414244751.pdf]
Conference Paper
Full-text available
Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs) have been shown to decrease the necessary number of mode switches but have so far not been able to significantly reduce the perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of ADMC, allowing users to visually compare the current and the suggested mapping in real time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a Virtual Reality (VR) in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that, in combination with feed-forward, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between Continuous and Threshold reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance.
Article
Full-text available
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
Article
Full-text available
Recent applications of autonomous agents and robots have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others, despite their great successes. Without symbolic interpretation capabilities, they are 'black boxes', which renders their choices or actions opaque, making it difficult to trust them in safety-critical applications. Recent work on the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are sparse at this point in time. This paper reviews approaches on explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intention, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a road map for the possible realization of effective goal-driven explainable agents and robots.
Article
As development of robots with the ability to self-assess their proficiency for accomplishing tasks continues to grow, metrics are needed to evaluate the characteristics and performance of these robot systems and their interactions with humans. This proficiency-based human-robot interaction (HRI) use case can occur before, during, or after the performance of a task. This paper presents a set of metrics for this use case, driven by a four stage cyclical interaction flow: 1) robot self-assessment of proficiency (RSA), 2) robot communication of proficiency to the human (RCP), 3) human understanding of proficiency (HUP), and 4) robot perception of the human’s intentions, values, and assessments (RPH). This effort leverages work from related fields including explainability, transparency, and introspection, by repurposing metrics under the context of proficiency self-assessment. Considerations for temporal level ( a priori , in situ , and post hoc ) on the metrics are reviewed, as are the connections between metrics within or across stages in the proficiency-based interaction flow. This paper provides a common framework and language for metrics to enhance the development and measurement of HRI in the field of proficiency self-assessment.