Artificial Trust as a Tool in Human-AI Teams
Carolina Centeio Jorge
Intelligent Systems
Delft University of Technology
Delft, The Netherlands
C.Jorge@tudelft.nl
Myrthe L. Tielman
Intelligent Systems
Delft University of Technology
Delft, The Netherlands
M.L.Tielman@tudelft.nl
Catholijn M. Jonker
Intelligent Systems 1 & LIACS 2
1 Delft University of Technology
2 Leiden University
1 Delft & 2 Leiden, The Netherlands
C.M.Jonker@tudelft.nl
Abstract—Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it is still a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though these are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and promote appropriate trust from the humans, but also know when to trust a human teammate to perform a certain task. In this project, we study trust as a tool for artificial agents to achieve better teamwork. In particular, we want to build mental models of humans so that agents can understand human trustworthiness in the context of human-AI teamwork, taking into account factors such as the characteristics of human teammates, tasks, and the environment.
Index Terms—trust, trustworthiness, human-robot teams,
human-agent, human-AI, hybrid intelligence, HART
I. INTRODUCTION
As technology advances, the understanding that artificial agents should collaborate with humans, rather than ultimately replace them, becomes increasingly accepted and important. The idea that humans and Artificial Intelligence (AI) should work together stems from the understanding that both entities have strengths and limitations that can complement each other. Consequently, they can cover each other's weaker points and become stronger together. Ideally, humans and AI can work as teammates, interdependently, helping each other. For this to become possible, it is important to explore mechanisms that contribute to and enable effective teamwork and interdependence in human-AI teams. In particular, mutual trust is one key driver of effective teamwork in human teams [1].
In this project, we want to explore how artificial agents can use the notion of trust as a predictive tool when interacting with human teammates. If an agent could estimate trustworthiness, it would know what to expect from a teammate regarding a task. More specifically, the agent would be able to decide when to rely on someone (we see reliance as the behaviour resulting from a trust evaluation). We call the artificial agent's belief in trustworthiness (in particular, human trustworthiness) artificial trust [2].
In a dyadic relation between two cognitive agents [3] (artificial or human), trust involves two parties, the trustor and the trustee, and an action (entrusted by the trustor to the trustee) that affects a goal (of the trustor) [4]. Trust is dynamic and is affected by several factors, from individual properties (both the trustor's and the trustee's characteristics) to environmental properties (such as challenges and limitations). Trust can be seen as perceived trustworthiness, where trustworthiness is a property of the trustee. In several contexts, including human-AI teams, it is not only important that there is trust among teammates, but also that this trust is appropriate, i.e., that trust corresponds to actual trustworthiness (avoiding undertrust and overtrust) [5]. Trustworthiness is a complex concept, and following the literature it can consist of a set of dimensions that range from the trustee's competence to its intentions [6].
Models in slightly different settings propose that trust depends on how one perceives another's 1) Ability, Benevolence and Integrity [7] (in human organizations), 2) Willingness, Competence and Dependence [4] (in multi-agent systems), and 3) Performance, Process and Purpose [8] (when the human is the trustor and an artificial agent is the trustee). The way trustworthiness is perceived can also depend on the trustor's characteristics [7] and is usually influenced by external factors, i.e., contextual conditions determining the situation in which the task is executed [9], such as environmental configuration, emotional state, and workload. When studying trust in human-robot teams, we particularly need to take into account that a human's perception of a robot's trustworthiness may be influenced by its specific robotic characteristics, such as embodiment [10], which may also affect how the agent should trust the human. Moreover, trust is dynamic, and in these teams we also need to consider how trust develops. In particular, teammates may not have the time to deepen their knowledge of each other's trustworthiness dimensions, relying on swift trust [11], for example.
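As a purely illustrative reading of this paragraph (not a model from the cited works), the following Python sketch represents perceived trustworthiness as a set of dimension beliefs, here ABI [7], weighted by hypothetical task-dependent relevance weights, with reliance as the behaviour resulting from comparing the trust estimate to a threshold; all names, weights, and numbers are assumptions:

from dataclasses import dataclass

# Illustrative only: perceived trustworthiness as beliefs about ABI dimensions.
@dataclass
class TrustworthinessBelief:
    ability: float      # belief in the trustee's competence for the task, in [0, 1]
    benevolence: float  # belief in the trustee's goodwill towards the trustor, in [0, 1]
    integrity: float    # belief in the trustee's adherence to acceptable principles, in [0, 1]

# Hypothetical contextual weights: how much each dimension matters for this task/environment.
@dataclass
class TaskContext:
    w_ability: float = 0.6
    w_benevolence: float = 0.2
    w_integrity: float = 0.2

def trust(belief: TrustworthinessBelief, ctx: TaskContext) -> float:
    """Trust as perceived trustworthiness, contextualised by the task at hand."""
    return (ctx.w_ability * belief.ability
            + ctx.w_benevolence * belief.benevolence
            + ctx.w_integrity * belief.integrity)

def rely(belief: TrustworthinessBelief, ctx: TaskContext, threshold: float = 0.7) -> bool:
    """Reliance as the behaviour resulting from the trust evaluation."""
    return trust(belief, ctx) >= threshold

print(rely(TrustworthinessBelief(ability=0.9, benevolence=0.8, integrity=0.7), TaskContext()))  # True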
Trust has been extensively explored in several contexts in human teams (see e.g. [12]–[16]), and has recently started to be investigated for human-AI teams as well (see e.g. [5], [17]–[19]). Regarding the perspective of an artificial agent's trust towards other entities, the multi-agent systems community has addressed several important aspects, mostly when the other entity is also an artificial agent (see e.g. [20]–[25]). In particular, it is relevant for this work to take into account models that distinguish the internal qualities (krypta) of agents from their observable signs (manifesta) in order to estimate trustworthiness, as done in Falcone et al. [9]. Although there are several contributions on 1) how humans trust humans, 2) how agents can trust other agents, 3) how humans trust artificial agents (see e.g. [26], [27]), and 4) team trust (still recent but growing in human-AI contexts), there is little research on how an artificial agent should trust its human teammates. However, there is some work in this direction, for instance on how an artificial agent can detect that a situation requires trust [28], [29], and on how an artificial agent can detect whether a human is being trustworthy, based on episodic memory [30] and social cues [31]. Also, Azevedo-Sa et al. [2] have recently proposed a model for trusting tasks in human-robot teams, making a clear distinction between natural trust (when the trustor is a human) and artificial trust (when the trustor is an artificial agent). The focus of the authors' model is capabilities, whereas in this project we hypothesise that more dimensions should be taken into account when determining trust.
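To make the krypta/manifesta distinction concrete, here is a minimal sketch assuming a simple Beta-Bernoulli update, rather than the actual model of Falcone et al. [9]: a hidden quality (krypta), such as competence on a task category, is estimated from observable outcomes (manifesta):

# Sketch under stated assumptions: a Beta(alpha, beta) belief over the probability
# of a successful outcome stands in for the hidden quality of the trustee.
class KryptaEstimate:
    def __init__(self, prior_success: float = 1.0, prior_failure: float = 1.0):
        self.alpha = prior_success
        self.beta = prior_failure

    def observe(self, success: bool) -> None:
        """Update the hidden-quality estimate from one observable sign (manifesta)."""
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        """Point estimate of the trustee's hidden quality (krypta)."""
        return self.alpha / (self.alpha + self.beta)

# Example: after observing a teammate succeed twice and fail once on image
# identification, the agent's competence estimate for that task category is:
competence = KryptaEstimate()
for outcome in (True, True, False):
    competence.observe(outcome)
print(round(competence.mean, 2))  # 0.6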
Research on how an artificial agent should use the concepts of trust and trustworthiness in human-AI teams, so as to better understand its human teammates' mental models, is still preliminary, and this research project aims at filling part of that gap. The main research question of this PhD project is: “How can an artificial agent make use of trust in human teammates regarding tasks, in order to achieve the team's goals?”. Although we aim at providing general frameworks for human-AI teams, our goal is to apply our research to robots, such as drones in search and rescue scenarios.
II. PROPOSED APPROACH
To answer our research question, we want to develop methods that allow the artificial agent both to ask for help and to initiate assistance when teaming up with humans, through reasoning about trust. Imagine there is a task (e.g. identifying an image that the agent captured). Which teammate would do it? How well? Would they need help? Which factors should the agent take into account? What should the agent do? To allow an agent to answer these questions, we will move from conceptually defining our model to later tuning it with data. In particular, we want to use hybrid AI techniques, bridging formal models (e.g. mental models, beliefs) and machine learning models (e.g. Machine Theory of Mind [32]), to decide when and whom to trust for a certain task. We want to apply these techniques to robots that can update the models based on interactions.
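The questions above amount to a delegation decision. The sketch below is a hypothetical, simplified version of such a decision, not the hybrid model we intend to build: given trust estimates per teammate and task, the agent delegates, acts itself, or asks for help; teammate names and the threshold are illustrative assumptions:

from typing import Dict, Optional, Tuple

def choose_performer(task: str,
                     trust_in_teammates: Dict[str, Dict[str, float]],
                     self_confidence: float,
                     threshold: float = 0.5) -> Tuple[str, Optional[str]]:
    """Return (decision, teammate) for a given task, based on trust estimates."""
    candidates = {name: trust[task]
                  for name, trust in trust_in_teammates.items()
                  if task in trust}
    best = max(candidates, key=candidates.get) if candidates else None
    if best is not None and candidates[best] >= max(threshold, self_confidence):
        return ("delegate", best)
    if self_confidence >= threshold:
        return ("do_it_myself", None)
    return ("ask_for_help", best)

# Example: identify an image the agent captured.
decision = choose_performer(
    task="identify_image",
    trust_in_teammates={"Alice": {"identify_image": 0.8}, "Bob": {"identify_image": 0.4}},
    self_confidence=0.3)
print(decision)  # ('delegate', 'Alice')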
We start by defining human trustworthiness (i.e. what is a trustworthy human teammate?) and its dimensions (i.e. what influences human trustworthiness, e.g. integrity), in the context of human-AI teams, given a task. After knowing which dimensions are related to trustworthiness, we can form artificial trust (which can computationally unfold into other beliefs, such as competence and willingness beliefs [33]). For this, we want to build machine learning models which, based on behaviours that hint at such dimensions, can estimate trustworthiness (e.g., learn the integrity of a human teammate from observations and estimate whether a human teammate will perform a task).
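As an illustration of this learning step only (the behavioural features and the classifier are assumptions, not the project's model), a simple supervised sketch maps observed behaviours that hint at trustworthiness dimensions to the probability that a human completes a task:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural features per past interaction:
# [fraction of commitments kept, help offered per task, errors per task]
X = np.array([
    [0.9, 0.5, 0.1],
    [0.4, 0.1, 0.6],
    [0.8, 0.3, 0.2],
    [0.2, 0.0, 0.7],
])
y = np.array([1, 0, 1, 0])  # 1 = the task was completed, 0 = it was not

model = LogisticRegression().fit(X, y)

# Estimate whether a teammate, given newly observed behaviour, will perform the task.
new_observation = np.array([[0.7, 0.4, 0.3]])
print(model.predict_proba(new_observation)[0, 1])  # estimated probability of completion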
With such models, we can detect critical points (such as very low trustworthiness, meaning a human will likely be unreliable regarding a certain sub-task) in the process of a human teammate performing tasks. When detecting critical points, the artificial agent can act accordingly, adjusting its actions to those of its human teammate and ensuring as far as possible the achievement of the team goal (e.g., if the agent knows a human will not be able to perform a certain part of the process, it can decide to help the human, ask another human to do it, etc.). Consequently, our agent should be provided with models that recognize when and whom to ask for help, as well as when its human teammates may need its help. This model should be used on robots and learn from mistakes in the interactions with human teammates, updating itself. Finally, we want to apply this model to a real scenario, such as drones in urban search and rescue (USAR) or medical domains.
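A minimal sketch of this monitoring loop, under the assumption of a fixed critical threshold and hypothetical sub-task names and actions, could look as follows:

from typing import Dict, List, Tuple

def detect_critical_points(estimates: Dict[str, float],
                           critical_threshold: float = 0.3) -> List[str]:
    """Sub-tasks for which the human teammate is likely to be unreliable."""
    return [subtask for subtask, value in estimates.items() if value < critical_threshold]

def plan_interventions(critical: List[str], backup_available: bool) -> List[Tuple[str, str]]:
    """Choose an adjustment of the agent's actions for each critical point."""
    action = "reassign_to_backup" if backup_available else "offer_help"
    return [(subtask, action) for subtask in critical]

# Example with hypothetical USAR sub-tasks and trustworthiness estimates:
estimates = {"triage_victim": 0.8, "read_map": 0.25, "report_location": 0.6}
critical = detect_critical_points(estimates)
print(plan_interventions(critical, backup_available=False))  # [('read_map', 'offer_help')]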
III. PROGRESS
To define trustworthiness for this project, we started by investigating the general dynamics of trust in human-AI teams. In such teams, there are several dyadic trust relationships (human-human, human-agent, agent-human, agent-agent). More important than dyadic trust in teams is appropriate dyadic trust, i.e. when one teammate's trust in another actually corresponds to the latter's trustworthiness. In [34], we looked at the specific beliefs in trust and trustworthiness that affect 1) an agent's appropriate trust in a human teammate and 2) a human's appropriate trust in an agent teammate, and how these beliefs are nested. All of these trust beliefs contribute to the overall team trust, which we have been further investigating in collaboration with psychology researchers, recently submitting a paper on the topic.
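As a toy illustration of appropriate dyadic trust (not the formalization in [34]), trust can be compared with trustworthiness and flagged as overtrust or undertrust when the two diverge beyond a tolerance; the values and the tolerance are assumptions:

def appraise_trust(trust: float, trustworthiness: float, tolerance: float = 0.1) -> str:
    """Compare a trustor's trust with the trustee's trustworthiness (both in [0, 1])."""
    if trust > trustworthiness + tolerance:
        return "overtrust"      # trust exceeds what the trustee warrants
    if trust < trustworthiness - tolerance:
        return "undertrust"     # the trustee is more trustworthy than they are trusted
    return "appropriate"

print(appraise_trust(trust=0.9, trustworthiness=0.6))  # overtrust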
To form artificial trust (i.e. the artificial belief in a human's trustworthiness, which usually unfolds into competence and willingness beliefs [33] when computing trust) regarding a human teammate, the agent needs to understand which human internal features (the krypta [9]) make a human trustworthy (i.e. ability, benevolence and integrity (ABI)), and how these can be observed through human behaviour (the manifesta [9]). To explore the relationships among these concepts, we designed, implemented, and ran a study with 54 human subjects in which people teamed up with artificial agents to collect products from a supermarket in a 2D grid online world. We have submitted a paper with the results, in which we present a mental model of human trustworthiness and argue that an artificial agent can form artificial trust from behaviours that manifest ABI. The results also suggest that humans follow different strategies depending on effort and reward, which also needs to be considered when assessing human trustworthiness for a certain task in human-AI teams.
Moving forward, we hope to use the mental model from the first experiment to learn how to interactively estimate a human's trustworthiness in teamwork. For this, we will start by exploring machine learning models, such as Machine Theory of Mind [32], for this problem. We will also further explore which social signals may serve as relevant observable behaviour for estimating trustworthiness dimensions, so that we can apply these models to human-robot teams.
REFERENCES
[1] E. Salas, D. E. Sims, and C. Burke, “Is there a “big five” in teamwork?,” Small Group Research, vol. 36, pp. 555–599, 2005.
[2] H. Azevedo-Sa, X. J. Yang, L. P. Robert, and D. M. Tilbury, “A unified bi-directional model for natural and artificial trust in human-robot collaboration,” IEEE Robotics Autom. Lett., vol. 6, no. 3, pp. 5913–5920, 2021.
[3] C. Castelfranchi and R. Falcone, “Trust is much more than subjective
probability: Mental components and sources of trust,” in Proceedings
of the 33rd annual Hawaii international conference on system sciences,
IEEE, 2000.
[4] C. Castelfranchi and R. Falcone, Trust & Self-Organising Socio-
technical Systems. Springer International Publishing, 2010.
[5] M. Lewis, H. Li, and K. Sycara, “Deep learning, transparency, and
trust in human robot teamwork,” in Trust in Human-Robot Interaction,
pp. 321–352, Elsevier, 2020.
[6] N. Griffiths, “Task delegation using experience-based multi-dimensional
trust,” in AAMAS ’05, 2005.
[7] R. C. Mayer, J. H. Davis, and F. D. Schoorman, “An integrative model
of organizational trust,” Source: The Academy of Management Review,
vol. 20, pp. 709–734, 1995.
[8] J. D. Lee and K. A. See, “Trust in automation: Designing for appropriate reliance,” Human Factors: The Journal of Human Factors and Ergonomics Society, vol. 46, pp. 50–80, 2004.
[9] R. Falcone, M. Piunti, M. Venanzi, and C. Castelfranchi, “From man-
ifesta to krypta: The relevance of categories for trusting others,” ACM
Transactions on Intelligent Systems and Technology, vol. 4, 3 2013.
[10] J. Goetz, S. Kiesler, and A. Powers, “Matching robot appearance
and behavior to tasks to improve human-robot cooperation,” in The
12th IEEE International Workshop on Robot and Human Interactive
Communication, 2003. Proceedings. ROMAN 2003., pp. 55–60, 2003.
[11] K. S. Haring, E. Phillips, E. H. Lazzara, D. Ullman, A. L. Baker, and
J. R. Keebler, “Chapter 17 - applying the swift trust model to human-
robot teaming,” in Trust in Human-Robot Interaction (C. S. Nam and
J. B. Lyons, eds.), pp. 407–427, Academic Press, 2021.
[12] C. Breuer, J. Hüffmeier, F. Hibben, and G. Hertel, “Trust in teams: A taxonomy of perceived trustworthiness factors and risk-taking behaviors in face-to-face and virtual teams,” Human Relations, vol. 73, pp. 3–34, 2020.
[13] H. Huynh, C. E. Johnson, and H. S. Wehe, “Humble coaches and their influence on players and teams: The mediating role of affect-based (but not cognition-based) trust,” Psychological Reports, vol. 123, pp. 1297–1315, 2019.
[14] A. M. Naber, S. C. Payne, and S. S. Webber, “The relative influence of
trustor and trustee individual differences on peer assessments of trust,”
Personality and Individual Differences, vol. 128, pp. 62–68, 7 2018.
[15] A. Y. Lee, G. D. Bond, D. C. Russell, J. Tost, C. González, and P. S. Scarbrough, “Team perceived trustworthiness in a complex military peacekeeping simulation,” Military Psychology, vol. 22, no. 3, pp. 237–261, 2010.
[16] B. D. Adams, S. Waldherr, and J. Sartori, “Trust in teams scale, trust in
leaders scale: Manual for administration and analyses,” 2008.
[17] A.-S. Ulfert and E. Georganta, “A model of team trust in human-agent
teams,” in Companion Publication of the 2020 International Conference
on Multimodal Interaction, ICMI ’20 Companion, (New York, NY,
USA), p. 171–176, Association for Computing Machinery, 2020.
[18] K. E. Schaefer, B. S. Perelman, G. M. Gremillion, A. R. Marathe, and
J. S. Metcalfe, “A roadmap for developing team trust metrics for human-
autonomy teams,” in Trust in Human-Robot Interaction, Academic Press,
2021.
[19] E. J. de Visser, M. M. M. Peeters, M. F. Jung, S. Kohn, T. H. Shaw, R. Pak, and M. A. Neerincx, “Towards a theory of longitudinal trust calibration in human-robot teams,” International Journal of Social Robotics, vol. 12, pp. 459–478, 2020.
[20] J. Urbano, A. P. Rocha, and E. Oliveira, “A socio-cognitive perspective
of trust,” in Agreement Technologies, pp. 419–429, Springer, 2013.
[21] J. Sabater-Mir and L. Vercouter, “Trust and reputation in multiagent
systems,” Multiagent systems, p. 381, 2013.
[22] A. Herzig, E. Lorini, J. F. Hübner, and L. Vercouter, “A logic of trust and reputation,” Logic Journal of the IGPL, vol. 18, pp. 214–244, 12 2009.
[23] C. Burnett, T. J. Norman, and K. Sycara, “Stereotypical trust and bias in
dynamic multiagent systems,” ACM Transactions on Intelligent Systems
and Technology, vol. 4, 3 2013.
[24] K. Chhogyal, A. C. Nayak, A. Ghose, and K. H. Dam, “A value-based
trust assessment model for multi-agent systems,” 28th International Joint
Conference on Artificial Intelligence (IJCAI-19), 2019.
[25] C. Cruciani, A. Moretti, and P. Pellizzari, “Dynamic patterns in similarity-based cooperation: An agent-based investigation,” Journal of Economic Interaction and Coordination, vol. 12, no. 1, 2017.
[26] M. Winikoff, “Towards trusting autonomous systems,” Lecture Notes in
Computer Science, vol. 10738 LNAI, pp. 3–20, 2018.
[27] C. Nam, P. Walker, H. Li, M. Lewis, and K. Sycara, “Models of trust in human control of swarms with varied levels of autonomy,” IEEE Transactions on Human-Machine Systems, vol. 50, pp. 194–204, 6 2020.
[28] A. R. Wagner and R. C. Arkin, “Recognizing situations that demand
trust,” in 2011 RO-MAN, pp. 7–14, IEEE, 2011.
[29] A. R. Wagner, P. Robinette, and A. Howard, “Modeling the human-
robot trust phenomenon: A conceptual framework based on risk,” ACM
Transactions on Interactive Intelligent Systems, vol. 8, 11 2018.
[30] S. Vinanzi, M. Patacchiola, A. Chella, and A. Cangelosi, “Would a robot
trust you? developmental robotics model of trust and theory of mind,”
Philosophical Transactions of the Royal Society B: Biological Sciences,
vol. 374, 4 2019.
[31] V. Surendran and A. Wagner, “Your robot is watching: Using surface
cues to evaluate the trustworthiness of human actions,” 2019 28th IEEE
International Conference on Robot and Human Interactive Communica-
tion (RO-MAN), pp. 1–8, 2019.
[32] N. C. Rabinowitz, F. Perbet, H. F. Song, C. Zhang, S. M. A. Eslami, and M. Botvinick, “Machine theory of mind,” in Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018 (J. G. Dy and A. Krause, eds.), vol. 80 of Proceedings of Machine Learning Research, pp. 4215–4224, PMLR, 2018.
[33] R. Falcone and C. Castelfranchi, “Trust dynamics: How trust is influenced by direct experiences and by trust itself,” in AAMAS, pp. 740–747, IEEE Computer Society, 2004.
[34] C. C. Jorge, S. Mehrotra, C. M. Jonker, and M. L. Tielman, “Trust
should correspond to trustworthiness: a formalization of appropriate
mutual trust in human-agent teams,” in Proceedings of the International
Workshop in Agent Societies, 2021.