Conference Paper

On Trust-aware Assistance-seeking in Human-Supervised Autonomy

... This distribution is conditioned on factors such as human actions, robot performance, and other contextual information relevant to the task. A subclass of such models is POMDP-based models, where the dynamics of trust are defined by state transition functions conditioned on actions and contextual information, with observations treated as human actions [14]-[16], [19]-[22]. Such models have been applied for planning to minimize human interruption [14], [22]. ...
... A subclass of such models is POMDP-based models, where the dynamics of trust are defined by state transition functions conditioned on actions and contextual information, with observations treated as human actions [14]-[16], [19]-[22]. Such models have been applied for planning to minimize human interruption [14], [22]. These models have also been used in tasks where the delivery of information to humans must be optimized so that the collaboration remains efficient [16], [20], [21]. ...
... Similarly, the estimated values for contexts associated with a positive experience, e.g., successful collections or when the robot requests assistance, are positive. Interestingly, as found in our previous study [22], asking for assistance can help increase human trust, which is also consistent with the estimated B_T. In other words, the estimates suggest that successful collection increases and maintains trust; failed collections decrease it; and asking for assistance can help repair and increase trust. ...
Preprint
Using a dual-task paradigm, we explore how robot actions, performance, and the introduction of a secondary task influence human trust and engagement. In our study, a human supervisor simultaneously engages in a target-tracking task while supervising a mobile manipulator performing an object collection task. The robot can either autonomously collect the object or ask for human assistance. The human supervisor also has the choice to rely upon or interrupt the robot. Using data from initial experiments, we model the dynamics of human trust and engagement using a linear dynamical system (LDS). Furthermore, we develop a human action model to define the probability of human reliance on the robot. Our model suggests that participants are more likely to interrupt the robot when their trust and engagement are low during high-complexity collection tasks. Using Model Predictive Control (MPC), we design an optimal assistance-seeking policy. Evaluation experiments demonstrate the superior performance of the MPC policy over the baseline policy for most participants.
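As a rough illustration of the modeling pipeline described in this abstract, the sketch below steps a linear dynamical system for trust and engagement and evaluates a logistic reliance model. All matrices, weights, and the encoding of the context inputs are hypothetical placeholders, not the quantities identified in the study.

```python
import numpy as np

# Minimal sketch of a linear trust/engagement update and a logistic
# reliance model. All parameter values below are illustrative, not the
# values estimated in the paper.

A = np.array([[0.9, 0.05],
              [0.0, 0.85]])        # state transition for [trust, engagement]
B = np.array([[0.3, -0.4, 0.2],    # effect of context on trust
              [0.1,  0.0, 0.3]])   # effect of context on engagement

def lds_step(x, u):
    """One step of x_{k+1} = A x_k + B u_k.

    x: current [trust, engagement]; u: context indicators, e.g.
    [successful collection, failed collection, assistance requested].
    """
    return A @ x + B @ u

def reliance_probability(x, w=np.array([2.0, 1.0]), b=-0.5):
    """Logistic model of the probability that the human relies on
    (does not interrupt) the robot, increasing in trust and engagement."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.5])           # initial trust and engagement
u = np.array([0.0, 0.0, 1.0])      # robot asks for assistance
x = lds_step(x, u)
print(x, reliance_probability(x))
```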
Article
Full-text available
Trust modeling first gained interest in organizational studies and later in human factors research on automation. Thanks to recent advances in human-robot interaction (HRI) and human-autonomy teaming, human trust in robots has gained growing interest among researchers and practitioners. This article focuses on a survey of computational models of human-robot trust and their applications in robotics and robot controls. The motivation is to provide an overview of the state-of-the-art computational methods to quantify trust so as to provide feedback and situational awareness in HRI. Different from other existing survey papers on human-robot trust models, we seek to provide in-depth coverage of trust model categorization, formulation, and analysis, with a focus on their utilization in robotics and robot controls. The paper starts with a discussion of the difference between human-robot trust and general agent-agent trust, interpersonal trust, and human trust in automation and machines. A list of factors affecting human-robot trust, different trust measurement approaches, and their corresponding scales are summarized. We then review existing computational human-robot trust models and discuss the pros and cons of each category of models. These include performance-centric algebraic, time-series, Markov decision process (MDP)/Partially Observable MDP (POMDP)-based, Gaussian-based, and dynamic Bayesian network (DBN)-based trust models. Following the summary of each computational human-robot trust model, we examine its utilization in robot control applications, if any. We also enumerate the main limitations and open questions in this field and discuss potential future research directions.
Article
Full-text available
A human–robot hybrid cell is developed for flexible assembly in manufacturing through the collaboration between a human and a robot. The selected task is to assemble a few LEGO blocks (parts) into a final product following a specified sequence and instructions. The task is divided into several subtasks. A two-level feedforward optimization strategy is developed that determines optimum subtask allocation between the human and the robot before the assembly starts. The human's trust in the robot and the robot's trust in the human are considered: computational trust models are derived, and real-time trust measurement and display methods are developed. A feedback approach is integrated into the feedforward subtask allocation in the form of subtask re-allocation if trust levels drop below specified thresholds. It is hypothesized that subtask re-allocation may help regain trust and maintain satisfactory performance. Experimental results show that (i) the integrated (feedforward + feedback) optimum subtask allocation is effective in maintaining satisfactory trust levels for both human and robot, resulting in satisfactory human–robot interactions (HRI) and assembly performance, and (ii) considering two-way trust (human's trust in robot and robot's trust in human) produces better HRI and assembly performance than one-way trust (human's trust in robot) or no trust.
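A minimal sketch of the feedback re-allocation idea described above: remaining subtasks are re-assigned when either trust signal falls below a threshold, and otherwise the feedforward allocation stands. The threshold value, trust signals, and re-allocation rule are illustrative assumptions, not the paper's two-level optimization.

```python
# Sketch of threshold-triggered subtask re-allocation. The threshold and
# the re-allocation rule are illustrative placeholders.

TRUST_THRESHOLD = 0.4

def maybe_reallocate(remaining, human_trust_in_robot, robot_trust_in_human):
    """Re-assign the remaining subtasks if either trust signal drops
    below the threshold; otherwise keep the feedforward allocation."""
    if human_trust_in_robot < TRUST_THRESHOLD:
        # Low human trust in the robot: shift remaining robot subtasks to the human.
        return ["human" if a == "robot" else a for a in remaining]
    if robot_trust_in_human < TRUST_THRESHOLD:
        # Low robot trust in the human: shift remaining human subtasks to the robot.
        return ["robot" if a == "human" else a for a in remaining]
    return remaining

plan = ["robot", "human", "robot", "robot"]
print(maybe_reallocate(plan, human_trust_in_robot=0.35, robot_trust_in_human=0.8))
# -> ['human', 'human', 'human', 'human']
```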
Article
Full-text available
Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
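The core inference step of such a trust-POMDP can be illustrated with a discrete Bayes filter over latent trust, where the human's action is the observation. The trust levels, transition matrices, and observation likelihoods below are made-up numbers for illustration, not the distributions learned from the experiments.

```python
import numpy as np

# Sketch of a belief update over a latent discrete trust state (low,
# medium, high), with the human's action (comply vs. intervene) as the
# observation. All probabilities are illustrative placeholders.

# P(trust' | trust) after a robot success or failure.
T_success = np.array([[0.6, 0.4, 0.0],
                      [0.1, 0.6, 0.3],
                      [0.0, 0.2, 0.8]])
T_failure = np.array([[0.9, 0.1, 0.0],
                      [0.5, 0.4, 0.1],
                      [0.1, 0.5, 0.4]])

# P(human complies | trust): compliance is more likely at high trust.
p_comply = np.array([0.2, 0.6, 0.9])

def belief_update(belief, robot_succeeded, human_complied):
    """Predict with the transition model, then weight by the likelihood
    of the observed human action and renormalize."""
    T = T_success if robot_succeeded else T_failure
    predicted = belief @ T
    likelihood = p_comply if human_complied else 1.0 - p_comply
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])
belief = belief_update(belief, robot_succeeded=True, human_complied=True)
print(belief)   # belief mass shifts toward higher trust
```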
Conference Paper
Full-text available
In an increasingly automated world, trust between humans and autonomous systems is critical for successful integration of these systems into our daily lives. In particular, for autonomous systems to work cooperatively with humans, they must be able to sense and respond to the trust of the human. This inherently requires a control-oriented model of dynamic human trust behavior. In this paper, we describe a gray-box modeling approach for a linear third-order model that captures the dynamic variations of human trust in an obstacle detection sensor. The model is parameterized based on data collected from 581 human subjects, and the goodness of fit is approximately 80% for a general population. We also discuss the effect of demographics, such as national culture and gender, on trust behavior by re-parameterizing our model for subpopulations of data. These demographic-based models can be used to help autonomous systems further predict variations in human trust dynamics.
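For concreteness, a third-order linear trust model of this kind can be simulated in discrete time as below; the matrices and the input encoding of the operator's experience with the sensor are illustrative stand-ins, not the parameters fitted to the 581 subjects.

```python
import numpy as np

# Sketch of a discrete-time third-order linear trust model,
# x_{k+1} = A x_k + B u_k, trust_k = C x_k, where u_k encodes the
# operator's experience (+1 correct detection, -1 fault). Matrices are
# illustrative placeholders.

A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 0.7]])
B = np.array([[0.2], [0.1], [0.05]])
C = np.array([[1.0, 0.0, 0.0]])

x = np.zeros((3, 1))
trust_trace = []
for k in range(20):
    u = np.array([[1.0 if k < 10 else -1.0]])  # reliable first, then faulty
    x = A @ x + B @ u
    trust_trace.append((C @ x).item())

print(trust_trace[:3], trust_trace[-3:])  # trust builds, then degrades
```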
Conference Paper
Full-text available
Existing research assessing human operators' trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human's entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of "area under the trust curve" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the "cry wolf" effect -- wherein human operators begin to reject an automated system due to repeated false alarms.
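The "area under the trust curve" measure can be computed directly from real-time ratings; the sketch below applies the trapezoidal rule and compares the result with the single post-run rating. The sample ratings are fabricated for illustration.

```python
import numpy as np

# Sketch of "trust of entirety" as the time-averaged area under the
# trust curve, compared with the traditional post-run rating.

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])             # minutes
trust = np.array([50.0, 65.0, 70.0, 40.0, 55.0, 60.0])   # 0-100 scale ratings

# Trapezoidal rule, then divide by duration to get an average trust level.
area = np.sum((trust[1:] + trust[:-1]) / 2.0 * np.diff(t))
trust_of_entirety = area / (t[-1] - t[0])

post_run_rating = trust[-1]   # traditional end-of-experiment measure
print(trust_of_entirety, post_run_rating)
```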
Article
Full-text available
Mutual trust is a key factor in human–human collaboration. Inspired by this social interaction, we analyse human–agent mutual trust in the collaboration of human and (semi)autonomous multi-agent systems. Human–agent mutual trust should be bidirectional and determines the human’s acceptance and hence use of autonomous agents as well as agents’ willingness to take human’s command. It is especially important when a human collaborates with multiple agents concurrently. In this paper, we derive time-series human–agent mutual trust models based on results from human factors engineering. To avoid both ‘over-trust’ and ‘under-trust’, we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human multi-agent collaborative task. Our simulation results show that the proposed algorithm guarantees effective real-time scheduling of the human multi-agent collaboration system while ensuring a proper level of human–agent mutual trust.
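A hedged sketch of a discrete-time mutual trust update in this spirit is given below: each side's trust decays toward a baseline, rises with the partner's recent performance, and falls with faults. The coefficients and inputs are illustrative, not the models derived in the paper.

```python
# Sketch of a time-series mutual trust update. Coefficients, baseline,
# and fault term are illustrative placeholders.

def update_trust(trust, partner_performance, fault_rate,
                 decay=0.9, gain=0.2, fault_penalty=0.3, baseline=0.5):
    new_trust = (decay * trust
                 + (1 - decay) * baseline
                 + gain * partner_performance
                 - fault_penalty * fault_rate)
    return min(max(new_trust, 0.0), 1.0)   # keep trust in [0, 1]

human_trust_in_agent = 0.6
agent_trust_in_human = 0.7
for step in range(5):
    human_trust_in_agent = update_trust(human_trust_in_agent,
                                        partner_performance=0.8, fault_rate=0.05)
    agent_trust_in_human = update_trust(agent_trust_in_human,
                                        partner_performance=0.5, fault_rate=0.10)
print(human_trust_in_agent, agent_trust_in_human)
```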
Chapter
Full-text available
It is envisioned that a human operator is able to monitor and control one or more (semi)autonomous underwater robots simultaneously in future marine operations. To enable such operations, a human operator must trust the capability of a robot to perform tasks autonomously, and the robot must establish its trust in the human operator based on human performance and follow guidance accordingly. Therefore, we seek to (i) model the mutual trust between humans and robots (especially (semi)autonomous underwater robots in this chapter), and (ii) develop a set of trust-based algorithms to control the human-robot team so that the mutual trust level can be maintained at a desired level. We propose a time-series-based mutual trust model that takes into account robot performance, human performance, and overall human-robot system fault rates. The robot performance model captures the performance evolution of a robot under autonomous mode and teleoperated mode, respectively. Furthermore, we specialize the robot performance model of a YSI EcoMapper autonomous underwater robot based on its distance to a desired waypoint. The human performance model is inspired by the Yerkes-Dodson law in psychology, which describes the relationship between human arousal and performance. Based on the mutual trust model, we first study the simple case of one human operator controlling a single robot and propose a trust-triggered control strategy depending on the limit conditions of the desired trust region. The method is then extended to the case of one human operator controlling a swarm of robots. In this framework, a periodic trust-based control strategy with a highest-trust-first scheduling algorithm is proposed. Matlab simulation results are provided to validate the proposed model and control strategies, which guarantee effective real-time scheduling of teleoperated and autonomous controls in both the one-human/one-robot and one-human/multiple-robots cases.
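The two control ideas described here can be sketched as follows: a trust-triggered switch between autonomous and teleoperated modes at the limits of a desired trust region, and a highest-trust-first ordering when one operator supervises several robots. The region limits, trust values, and scheduling rule below are illustrative assumptions, not the paper's strategies.

```python
# Sketch of (i) trust-triggered mode switching and (ii) highest-trust-first
# ordering for multiple robots. Values are illustrative only.

TRUST_LOW, TRUST_HIGH = 0.3, 0.8   # desired trust region limits

def select_mode(trust, current_mode):
    """Switch to teleoperation when trust reaches the lower limit and
    back to autonomy when it recovers to the upper limit."""
    if trust <= TRUST_LOW:
        return "teleoperated"
    if trust >= TRUST_HIGH:
        return "autonomous"
    return current_mode            # hold the current mode inside the region

def highest_trust_first(robot_trust):
    """Order robots by current trust, highest first, as a scheduling priority."""
    return sorted(robot_trust, key=robot_trust.get, reverse=True)

print(select_mode(0.25, "autonomous"))                      # -> teleoperated
print(highest_trust_first({"r1": 0.6, "r2": 0.9, "r3": 0.4}))  # -> ['r2', 'r1', 'r3']
```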
Article
Full-text available
Objective: We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Background: Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators’ trust. Method: We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers containing 127 eligible studies were included in the review. Results: Our analysis revealed three layers of variability in human–automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Conclusion: Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a ‘trust transfer function’ is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
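The suggested allocation model can be paraphrased as a simple decision rule: choose automatic control when trust in the automation exceeds self-confidence in manual control. The sketch below adds an optional bias term; all names and values are illustrative, not the fitted model from the experiment.

```python
# Sketch of the trust-vs-self-confidence reliance rule. The bias term and
# example values are illustrative assumptions.

def choose_control(trust_in_automation, self_confidence, bias=0.0):
    """Return 'automatic' when trust outweighs self-confidence (plus bias)."""
    return "automatic" if trust_in_automation > self_confidence + bias else "manual"

print(choose_control(trust_in_automation=0.7, self_confidence=0.5))  # -> automatic
print(choose_control(trust_in_automation=0.4, self_confidence=0.6))  # -> manual
```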
Article
Full-text available
We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined with an especial evaluation of the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
Article
Full-text available
We consider problems of sequence processing and propose a solution based on a discrete-state model in order to represent past context. We introduce a recurrent connectionist architecture having a modular structure that associates a subnetwork to each state. The model has a statistical interpretation we call input-output hidden Markov model (IOHMM). It can be trained by the expectation-maximization (EM) or generalized EM (GEM) algorithms, considering state trajectories as missing data, which decouples temporal credit assignment and actual parameter estimation. The model presents similarities to hidden Markov models (HMMs), but allows us to map input sequences to output sequences, using the same processing style as recurrent neural networks. IOHMMs are trained using a more discriminant learning paradigm than HMMs, while potentially taking advantage of the EM algorithm. We demonstrate that IOHMMs are well suited for solving grammatical inference problems on a benchmark problem. Experimental results are presented for the seven Tomita grammars, showing that these adaptive models can attain excellent generalization.
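The defining feature of an IOHMM, transitions and emissions conditioned on the input sequence, can be illustrated with a scaled forward recursion over two states. The probability tables below are arbitrary, and a real IOHMM would parameterize these conditional distributions with per-state subnetworks trained by EM/GEM.

```python
import numpy as np

# Sketch of the IOHMM forward recursion: unlike a standard HMM, transition
# and emission probabilities depend on the input at each step. Two discrete
# inputs and binary outputs are used; all tables are illustrative.

# P(state' | state, input): one row-stochastic matrix per input symbol.
A = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]]),
     1: np.array([[0.5, 0.5],
                  [0.4, 0.6]])}

# P(output = 1 | state, input).
B = {0: np.array([0.1, 0.7]),
     1: np.array([0.3, 0.9])}

def forward_log_likelihood(inputs, outputs, init=np.array([0.5, 0.5])):
    """Compute log P(outputs | inputs) with a scaled forward pass."""
    alpha = init.copy()
    log_lik = 0.0
    for u, y in zip(inputs, outputs):
        p_y = B[u] if y == 1 else 1.0 - B[u]
        alpha = (alpha @ A[u]) * p_y       # input-conditioned predict, then weight
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha /= scale                      # renormalize for numerical stability
    return log_lik

print(forward_log_likelihood(inputs=[0, 1, 1, 0], outputs=[0, 1, 1, 0]))
```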
Article
Automation has become prevalent in the everyday lives of humans. However, despite significant technological advancements, human supervision and intervention are still necessary in almost all sectors of automation, ranging from manufacturing and transportation to disaster management and health care [1]. Therefore, it is expected that the future will be built around human–agent collectives [2] that will require efficient and successful interaction and coordination between humans and machines. It is well established that, to achieve this coordination, human trust in automation plays a central role [3]-[5]. For example, the benefits of automation are lost when humans override it due to a fundamental lack of trust [3], [5], and accidents may occur due to human mistrust in such systems [6]. Therefore, trust should be appropriately calibrated to avoid the disuse or misuse of automation [4].
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Conference Paper
Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
Article
The interaction between humans and robot teams is highly relevant in many application domains, for example in collaborative manufacturing, search and rescue, and logistics. It is well-known that humans and robots have complementary capabilities: Humans are excellent in reasoning and planning in unstructured environments, while robots are very good in performing tasks repetitively and precisely. In consequence, one of the key research questions is how to combine human and robot team decision making and task execution capabilities in order to exploit their complementary skills. From a controls perspective this question boils down to how control should be shared among them. This article surveys advances in human-robot team interaction with special attention devoted to control sharing methodologies. Additionally, aspects affecting the control sharing design, such as human behavior modeling, level of autonomy and human-machine interfaces are identified. Open problems and future research directions towards joint decision making and task execution in human-robot teams are discussed.
Article
This article focuses on the design of systems in which a human operator is responsible for overseeing autonomous agents and providing feedback based on sensor data. In the control systems community, the term human supervisory control (or simply supervisory control) is often used as a shorthand reference for systems with this type of architecture [5]-[7]. In a typical human supervisory control application, the operator does not directly manipulate autonomous agents but rather indirectly interacts with these components via a central data-processing station. As such, system designers have the opportunity to easily incorporate automated functionalities to control how information is presented to the operator and how the input provided by the operator is used by automated systems. The goal of these functionalities is to take advantage of the inherent robustness and adaptability of human operators, while mitigating adverse effects such as unpredictability and performance variability. In some contexts, to meet the goal of single-operator supervision of multiple automated sensor systems, such facilitating mechanisms are not only useful but necessary for practical use [8], [9]. A successful system design must carefully consider the goals of each part of the system as a whole and seamlessly stitch components together using facilitating functionalities.
Article
Robots are increasingly introduced to work in concert with people in high-intensity domains, such as manufacturing, space exploration and hazardous environments. Tasks in these domains are often well-defined, but involve complex coordination under constraints and are performed under time pressure. Although human teamwork and coordination in these settings have been studied extensively, very little prior work exists on applying these models to human-robot interaction. In this paper we propose a methodology for applying prior art in Shared Mental Models (SMMs) to promote effective human-robot teaming. SMMs are measurable models developed among team members prior to task execution and are strongly correlated with team performance.
Article
Two experiments are reported that investigate to what extent performance consequences of automated aids are dependent on the distribution of functions between human and automation and on the experience an operator has with an aid. In the first experiment, performance consequences of three automated aids for the support of a supervisory control task were compared. Aids differed in degree of automation (DOA). Compared with a manual control condition, primary and secondary task performance improved and subjective workload decreased with automation support, with effects dependent on DOA. Performance costs include return-to-manual performance issues that emerged for the most highly automated aid and effects of complacency and automation bias, respectively, which emerged independent of DOA. The second experiment specifically addresses how automation bias develops over time and how this development is affected by prior experience with the system. Results show that automation failures entail stronger effects than positive experience (reliably working aid). Furthermore, results suggest that commission errors in interaction with automated aids can depend on three sorts of automation bias effects: (a) withdrawal of attention in terms of incomplete cross-checking of information, (b) active discounting of contradictory system information, and (c) inattentive processing of contradictory information analogous to a “looking-but-not-seeing” effect.
Conference Paper
Prior work in human trust of autonomous robots suggests the timing of reliability drops impact trust and control allocation strategies. However, trust is traditionally measured post-run, thereby masking the real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known on how feedback of robot confidence interacts in real-time with trust and control allocation strategies. An experiment to examine these issues showed trust loss due to early reliability drops is masked in traditional post-run measures, trust demonstrates inertia, and feedback alters allocation strategies independent of trust. The implications of specific findings on development of trust models and robot design are also discussed.
Article
This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.
Article
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
Conference Paper
Simulators have played a critical role in robotics research as tools for quick and efficient testing of new concepts, strategies, and algorithms. To date, most simulators have been restricted to 2D worlds, and few have matured to the point where they are both highly capable and easily adaptable. Gazebo is designed to fill this niche by creating a 3D dynamic multi-robot environment capable of recreating the complex worlds that would be encountered by the next generation of mobile robots. Its open source status, fine grained control, and high fidelity place Gazebo in a unique position to become more than just a stepping stone between the drawing board and real hardware: data visualization, simulation of remote environments, and even reverse engineering of blackbox systems are all possible applications. Gazebo is developed in cooperation with the Player and Stage projects (Gerkey, B. P., et al., July 2003), (Gerkey, B. P., et al., May 2001), (Vaughan, R. T., et al., Oct. 2003), and is available from http://playerstage.sourceforge.net/gazebo/gazebo.html.
OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations
  • A Xu
  • G Dudek
ROS Robotics Projects: Build and Control Robots Powered by the Robot Operating System, Machine Learning, and Virtual Reality
  • G Ramkumar
  • J Lentin