Towards Modeling Social-Cognitive Mechanisms in Robots to Facilitate
Human-Robot Teaming
Travis J. Wiltshire, Daniel Barber, and Stephen M. Fiore
University of Central Florida, Orlando, FL
For effective human-robot teaming, robots must gain the appropriate social-cognitive mechanisms that al-
low them to function naturally and intuitively in social interactions with humans. However, there is a lack of
consensus on social cognition broadly, and how to design such mechanisms for embodied robotic systems.
To this end, recommendations are advanced that are drawn from HRI, psychology, robotics, neuroscience
and philosophy as well as theories of embodied cognition, dual process theory, ecological psychology, and
dynamical systems. These interdisciplinary and multi-theoretic recommendations are meant to serve as inte-
grative and foundational guidelines for the design of robots with effective social-cognitive mechanisms.
INTRODUCTION
A number of efforts in Human-Robot Interaction (HRI)
are focused on transforming the common perception of robots
as tools to robots viewed as teammates, collaborators, or part-
ners (e.g., Fiore, Elias, Gallagher, & Jentsch, 2008; Hoffman
& Breazeal, 2004; Lackey, Barber, Reinerman-Jones, Badler,
& Hudson, 2011; Phillips, Ososky, & Jentsch, 2011). Though
the development of social-cognitive mechanisms has received
significantly less emphasis in HRI, it is essential for effective
human-robot teaming, as such mechanisms allow robots to
function naturally and intuitively during their interactions with
humans (Breazeal, 2004). To effectively coordinate and cooperate
with human teammates, a robot must not only recognize
the human teammate’s observable actions, but it must also
understand the intentions behind such actions in relation to the
environment. To have such a capability, robots must gain the
requisite social-cognitive mechanisms that afford the interpre-
tation of mental states (i.e., intentions, beliefs, desires) of
teammates during interactive contexts and concomitantly dis-
play the appropriate behaviors. This will allow robot team-
mates to work with humans towards shared goals and dynami-
cally adjust plans based on both observable human actions and
unobservable mental states (e.g., Hoffman & Breazeal, 2004).
Engineering human social cognition. In service of the
aforementioned need, we seek to extend the recently advanced
approach termed Engineering Human Social Cognition
(EHSC), which aims to leverage an interdisciplinary under-
standing of human social-cognitive processes for the develop-
ment and design of systems that possess social intelligence
(Streater, Bockelman Morrow, & Fiore, 2012). This effort
draws heavily from advances in Social Signal Processing
(SSP) with the aim of providing mechanisms for computers
that are able to interpret high-level social signals (i.e., mental
states) from combinations of low-level social cues (see Vinci-
arelli et al., 2012 for review).
Our efforts build off of this work with a particular empha-
sis on social robotics. Importantly, there are a number of dif-
ferentiating characteristics that distinguish robots from com-
puters. Social robots are defined as: (a) physically embodied
agents that, (b) function at some level of autonomy, and (c)
interact and communicate with humans by, (d) adhering to
normative and expected social behaviors (Bartneck & Forlizzi,
2004). Given that one of the long term goals for designers of
robotic teammates, and EHSC, is for robots to function auton-
omously, it is essential to first frame our approach with a defi-
nition of autonomous agents. Autonomous agents are defined
as an embodied system that is designed to “satisfy internal and
external goals by its own actions while in continuous long-
term interaction with the environment in which it is situated”
(Beer, 1995; p.173). Although there are inherent challenges
associated with automated systems performing as team mem-
bers (cf. Klein, Woods, Bradshaw, Hoffman, & Feltovich,
2004), we submit that providing foundational social-cognitive
mechanisms is a necessary step towards mitigating many of
these issues. Our aim with this paper is not to articulate the
challenges that could be solved by the instantiation of such
mechanisms, but rather to provide an outline of such a system.
Next, in framing our approach, we suggest that robots
must be designed with serious consideration of their embodi-
ment. We suggest this strongly given the increasing support for
the modeling of artificial cognitive systems and, in particular
social robots, on principles of embodied cognition across dis-
ciplines such as HRI, philosophy, psychology, robotics, and
neuroscience (e.g., Barsalou, 2008; Breazeal et al., 2009;
Chaminade & Cheng, 2009; Dautenhahn, Ogden, & Quick,
2002; Fiore et al., 2008; Gallagher, 2007; Hoffman, 2012;
Pezzulo et al., 2013; Pfeifer, Lungarella, & Iida, 2007). By
principles of embodied cognition we mean that designers must
account for (a) the environment and ecological niche and the
associated physical laws in which (b) the body and morpholog-
ical structures of an agent are grounded, (c) the sensorimotor
couplings (e.g., sensor and effector relations), as a function of
morphology, that shape the dynamic interactions between
agent and environment, and (d) the situatedness of the agents’
cognitive processes as a function of varying contexts (e.g.,
Pezzulo et al., 2013; Pfeifer et al., 2007). This emphasis on
considering embodiment is central to social cognition in hu-
mans (e.g., Bohl & van den Bos, 2012); though, taking an em-
bodied view of social cognition requires further explanation of
the dual way in which these processes occur.
Dual paths for social signal processing. Decades of research
in social psychology and, more recently, social cognitive
neuroscience (among other cognitive fields of inquiry) suggest
that there are two generally distinct types of cognitive process-
es that follow different pathways evident at the neural and
functional levels and emergent at the personal level (e.g.,
Bargh, 1984; Chaiken & Trope, 1999; Satpute & Lieberman,
2006). Type 1 (T1) processes are implicit, automatic, stimulus-
Copyright 2013 by Human Factors and Ergonomics Society, Inc. All rights reserved. DOI 10.1177/1541931213571283
PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 57th ANNUAL MEETING - 2013 1278
driven, evolutionarily older, shared with a broader range of
species, and assign primacy to action. Type 2 (T2) processes
are explicit, controlled, evolutionarily recent, and characteris-
tically off-line (e.g., Bohl & van den Bos, 2012; Chaiken, &
Trope, 1999). In Cognitive Science, a related account uses the
terms “direct” and “reflective” perception (Bockelman Mor-
row & Fiore, 2012). Related to T1 processes, direct perception
involves processing that considers the enactive and embodied
coupling between humans and their world (e.g., Noë, 2004;
Thompson, 2005). This has been extended to social-cognitive
processes and is argued to allow for rapid access of intentional
and affective states of another agent (De Jaegher, 2009; Gal-
lagher, 2008). Related to T2 processes, reflective perception
relies on memory and context to construct interpretations of
interactions. Communication and joint-attention draw from
norms and prior experience to enable an agent to make sense
of intersubjective experiences (Gallagher & Hutto, 2008).
Bohl and van den Bos (2012) advanced a multi-level
framework that leverages dual-process theory to integrate in-
teractionism and theory of mind approaches to social cogni-
tion. In this account, interactionism emphasizes the role of
neural systems (e.g., mirror neuron system) in providing a
more direct and automatic perception of others’ mental states
during an interaction. Conversely, theory of mind approaches
emphasize a “mindreading system” to make inferences or men-
tal simulations to understand others’ mental states. Taken to-
gether, these two approaches, unified by dual process theory,
provide a rich account of social cognition in humans.
We suggest that this integration can inform the design of
social-cognitive models for robots in order to improve human
robot teaming. If social signal processing in robots could be
modeled in this way, it may approximate the sophisticated and
flexible social-cognitive processes in humans. Specifically, T1
processes would allow for a more direct understanding of, and
automatic interaction with, the environment and human team-
mates, and T2 processes would allow for more complex and
deliberate forms of cognition, such as mental simulation, that
allow for prediction and interpretation of novel situations.
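For concreteness, the arbitration between fast T1 responding and slower T2 inference could be sketched as a toy Python controller. All cue names, the reflex table, and the scoring scheme below are invented for illustration; this is a sketch of the dual-process idea, not the authors' implementation.

```python
def t1_react(cue):
    """Type 1 path: fast, stimulus-driven lookup from cue to action."""
    reflex_table = {"obstacle": "stop", "wave": "wave_back", "handoff": "open_gripper"}
    return reflex_table.get(cue, "idle")

def t2_deliberate(cue, context):
    """Type 2 path: slower, context-sensitive inference over intentions."""
    # Score candidate interpretations against remembered context (a toy
    # stand-in for mental simulation); act on the best-scoring intention.
    candidates = {
        "handoff": ("open_gripper", context.count("tool")),
        "gesture": ("wave_back", context.count("greeting")),
    }
    intention, (action, score) = max(candidates.items(), key=lambda kv: kv[1][1])
    return action if score > 0 else t1_react(cue)

def respond(cue, context, time_budget_s):
    """Arbitrate: under tight time budgets, fall back on the T1 path."""
    if time_budget_s < 0.1:          # time-pressured: direct T1 response
        return t1_react(cue)
    return t2_deliberate(cue, context)  # otherwise deliberate T2 inference

print(respond("obstacle", [], 0.01))            # T1 reflex path
print(respond("handoff", ["tool", "tool"], 1.0))  # T2 inference path
```

The surveillance example in the text corresponds to the second call: with no time pressure, the controller can afford the slower, context-driven path.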
We argue that taking into account a dual process perspec-
tive would ultimately lead to not only a more effective robot,
but also a more effective teammate. For example, a robot in-
teracting with a teammate under time constraints would be able
to leverage T1 processes in service of directly engaging in
appropriate actions. However, when time constraints are less
relevant, or if a robot is engaging in, for example, a surveil-
lance task, T2 processes would allow for more analytic types
of cognition to better understand and predict the future states
of a situation. To the best of our knowledge, no such outline
exists to explicate the nature of a dual social signal processing
system in an embodied robot. Therefore, as the main body of
this paper, we aim to provide the preliminary foundation for
such a system with reference to each recommendation’s link to
T1 and/or T2 processes. Because design recommendations for
such social-cognitive systems are sparse, the aim of this paper
is to leverage an interdisciplinary and multi-theoretic approach
to provide recommendations for the design of robots that will
one day function, and be perceived, as teammates.
RECOMMENDATIONS FOR MODELING SOCIAL-
COGNITIVE MECHANISMS IN ROBOTS
The recommendations in this paper are drawn from disci-
plines such as HRI, philosophy, psychology, robotics, and neu-
roscience as well as from theories of embodied cognition, dual
process theory, ecological psychology, and dynamical systems
theory. More specifically, at this point the aim is not to pro-
vide modeling formalisms for social robotics as many of these
are included in the references we cite. Rather, the aim is to
highlight a set of recommendations that are relevant for ad-
vancing social-cognitive mechanisms through linkages across a
number of disciplines and approaches. Taken together, these
recommendations serve as a foundational step towards improving
human-robot teaming, although admittedly the attempt here
is ambitious given the complexity of social cognition. As such,
our future efforts will adopt a more technical and instantiating
approach based on these recommendations. Therefore, the
recommendations and ideas presented herein are meant to be
more illustrative than exhaustive.
Leverage the ecological approach to robotics. To the
extent possible, the system should be designed in such a way
as to provide the opportunity for direct and non-
representational interaction with the physical and social envi-
ronment to minimize the expense of computational resources.
We argue that the ecological approach provides a framework
for instantiating T1 processes in robots. Representative works
in robotics related to this notion are found in Brooks (1999) as
well as Duchon, Warren, and Kaelbling (1998). Brooks’
(1999) approach led to the development of the subsumption
architecture where each sub-system for the robot was added
on to more basic sub-systems without interfering with previous
systems. Similarly, Duchon et al. (1998) developed robots that
were able to navigate the environment solely through optic flow
rather than construction of an internal model of the world.
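A minimal, fixed-priority reading of Brooks' layering can be sketched in Python. The layers and sensor keys are invented; what matters is the structure: each layer is added without modifying the others, and higher-priority layers can pre-empt the rest.

```python
def avoid(sensors):
    """Lowest-level competence: reflexive obstacle avoidance."""
    if sensors.get("proximity", 1.0) < 0.2:
        return "turn_away"
    return None  # no opinion; defer to other layers

def approach_person(sensors):
    """Socially directed behavior, pre-empted by the safety layer."""
    if sensors.get("person_visible"):
        return "approach"
    return None

def wander(sensors):
    """Default behavior when nothing more pressing applies."""
    return "forward"

# Layers in priority order; new competences are added to the list without
# interfering with existing ones, in the spirit of Brooks (1999).
LAYERS = [avoid, approach_person, wander]

def subsumption_step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(subsumption_step({"proximity": 0.1, "person_visible": True}))  # "turn_away"
```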
The ecological approach to robots is characterized by the
following principles: (a) treatment of the robot and the envi-
ronment as a system, (b) the robot’s behavior is emergent from
the dynamics of the agent-environment system, (c) there is a
direct coupling between perception and action, (d) the information
required for adaptive behavior is available in the environment,
and (e) a robot does not always need to represent
the environment in a central model (Duchon et al.,
1998). These notions derive largely from Gibson’s (1979) eco-
logical approach to perception. However, for our purposes,
they are also characteristic of T1 processes. As such, the eco-
logical approach could serve to inform the design of a social
signal processing system by providing more direct mechanisms
for interaction with the physical and social environment.
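One concrete ecological control law, in the spirit of Duchon et al.'s (1998) flow-balance strategy, steers away from the side with greater optic-flow magnitude (nearer surfaces produce faster flow) without building any internal map. The numbers and gain are illustrative.

```python
def balance_steer(flow_left, flow_right, gain=1.0):
    """Steering command from the optic-flow balance strategy: turn away
    from the side with greater mean flow magnitude; no world model needed."""
    mean_l = sum(flow_left) / len(flow_left)
    mean_r = sum(flow_right) / len(flow_right)
    # Positive output = steer right (flow stronger on the left); negative = left.
    return gain * (mean_l - mean_r)

print(balance_steer([2.0, 2.4], [0.5, 0.7]))  # wall close on the left: steer right
```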
Utilize physical and social affordances. Stemming from
the ecological approach is the notion of affordances. Af-
fordances are thought of as the directly perceivable opportuni-
ties for action or interaction arising between the agent and en-
vironment. This interaction could be composed of a number of
objects, substances, and surfaces, and also includes other
agents (Gibson, 1979; Chemero, 2003). Further, affordances
are dependent on the type of embodiment an agent has as well
as the concomitant structuring of the environment as perceiva-
ble in an agent’s optic array and other sensory modalities (e.g.,
Kono, 2009). As such, affordances are essential to advancing
our goal of providing a robot with more direct or T1 processes
for interaction.
Mechanisms for control of autonomous robots have uti-
lized and developed formalisms of affordances that specify
relations between the agent and the environment and have
proven useful for navigation through a physical environment
(e.g., Sahin, Cakmak, Dogar, Ugur, & Ucoluk, 2007). Much
work on affordances in robotics is of this nature; that is, focus-
ing on interaction with the physical environment with relative-
ly little emphasis on the affordances of the social environment.
However, although not explicitly following the ecological ap-
proach, Pandey and Alami’s (2012) efforts towards the devel-
opment of complex social-cognitive behaviors in robots in-
cludes mechanisms for the analysis of affordances not only
between an agent, objects, and the environment, but also be-
tween multiple agents. Their approach specifies three key ca-
pabilities (visuo-spatial perspective taking, effort analysis, and
affordance analysis) as necessary for providing a robot with
social-cognitive mechanisms with which a robot could gather
information from the environment about teammates and act on
it in order to achieve cooperative and coordinative goals.
The first capability, visuo-spatial perspective taking, is
analogous to gaze-following and perspective-taking behaviors
in humans and provides a mechanism for interpreting what a
teammate visually perceives at a given point in time. The sec-
ond capability, effort analysis, represents the amount of effort
required by a teammate to execute a given task given the cur-
rent situation, positioning, and morphology of a teammate’s
body. Lastly, affordance analysis provides a mechanism for
identifying opportunities for action existing between agent-
agent, agent-location, agent-object, and object-agent. Fusing
these capabilities provides a robot with information regarding
the movements a teammate might be able to execute at a given
time, the tasks executable between multiple teammates at a
given time, and the possibilities for movement of objects (see
Pandey & Alami, 2012 for details).
Notably, proponents of the ecological view may object to
these implementations of affordances for robots, as most maintain
that affordances are non-representational (e.g., Chemero
& Turvey, 2007). However, addressing this argument is be-
yond the scope of this paper and, as such, we remain agnostic
to either view and leave room for both representation and non-
representational approaches (cf. Horton, Chakraborty, &
Amant, 2012). Regardless, affordance-based robotics research
is needed to extend efforts beyond interaction with the physi-
cal environment to novel methods for interaction with the so-
cial environment, facilitating effective human-robot teaming.
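The (effect, (entity, behavior)) formalization of Sahin et al. (2007) can be sketched as a small data structure. The entity labels and effects below are invented; the point is that social entities (a teammate) can sit in the same relation as physical ones.

```python
from collections import namedtuple

# An affordance as an acquired relation: applying a behavior to an entity
# (an agent-relative percept of an object or another agent) produces an
# effect, written (effect, (entity, behavior)) in Sahin et al. (2007).
Affordance = namedtuple("Affordance", ["effect", "entity", "behavior"])

KNOWN = [
    Affordance("lifted", "light_box", "grasp"),
    Affordance("moved", "cart", "push"),
    Affordance("handed_over", "teammate_with_free_hand", "extend_object"),  # social
]

def afforded_behaviors(entity, known=KNOWN):
    """Behaviors the perceived entity affords, with their predicted effects."""
    return [(a.behavior, a.effect) for a in known if a.entity == entity]

print(afforded_behaviors("teammate_with_free_hand"))  # [('extend_object', 'handed_over')]
```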
Incorporate dynamical modeling and analysis of inter-
action dynamics. Across multiple levels ranging from the
neural phenomena within an individual, to that of multiple
individuals interacting in an environment, there are emergent
and dynamic self-organizational patterns that specify and con-
strain interaction possibilities as they occur across time and
space (e.g., Marsh, Richardson, & Schmidt, 2009; Pfeifer et
al., 2007). As such, a number of researchers have pursued
modeling autonomous agents and more broadly, cognition, as
dynamical systems (e.g., Beer, 1995; Treur, 2012). Mecha-
nisms such as this support robots by providing more computa-
tionally efficient means of interpreting and interacting with
both the physical and social environment.
Aside from modeling, dynamical systems tools have also
been used for analyzing interaction dynamics from motion
sensing data (e.g., Marsh et al., 2009). With such mechanisms,
robots could conceivably analyze interaction dynamics unfold-
ing between teammates. In this context, we propose that robots
should be designed with mechanisms for analyzing and syn-
chronizing with emergent interaction dynamics. In doing so,
this may provide robots with an additional capability analo-
gous to T1 processes in humans.
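One widely used dynamical model of interpersonal coordination is the HKB relative-phase equation (listed among the candidate formalisms in Table 1). A few lines of Python suffice to show the emergent synchrony a robot could both detect and contribute to; parameter values are illustrative.

```python
import math

def hkb_step(phi, a=1.0, b=0.5, dt=0.01):
    """One Euler step of the HKB relative-phase dynamics
    dphi/dt = -a*sin(phi) - 2b*sin(2*phi), a standard model of movement
    coordination between two agents (cf. Marsh et al., 2009)."""
    return phi + dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))

# Starting away from in-phase coordination, the relative phase of the two
# agents' movements self-organizes toward phi = 0 (moving in synchrony).
phi = 2.0
for _ in range(2000):
    phi = hkb_step(phi)
print(round(phi, 3))
```

Monitoring an estimate like `phi` from motion-sensing data is one way a robot might analyze and then entrain to the interaction dynamics described above.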
Instantiate modal perceptual and motor representa-
tions. To the extent that the robot cannot rely on non-
representational interaction with the physical and social envi-
ronment, the embodied cognitive system of a social robot must
rely on multi-modal sensory and motor representations (e.g.,
Hoffman, 2012). According to Pezzulo et al. (2013), this
means that incoming information from the robot’s sensors
must be represented in a form that is linked to its modality
(e.g., visual or auditory). This provides the foundation from
which a robot could begin to manipulate modal perceptions to
not only interpret a human teammate (Breazeal et al., 2009),
but also to form concepts, memories, and make decisions
(Hoffman, 2012). From this, modal perceptual and motor rep-
resentations can be taken as providing an essential form of
input for both T1 and T2 processes.
Further, in line with the goal of developing autonomous
agents, the robot must be goal-oriented and thus, it must gen-
erate internal modalities such as affect and motivation (Pez-
zulo et al., 2013; Hoffman, 2012). Notions such as affect may
seem abstract for a robot; however, this can provide a robot
with social-cognitive mechanisms for understanding the affec-
tive states of human teammates. Such efforts would point the
way forward for modeling the dynamic, cyclical, and interde-
pendent nature of affective and cognitive states (Treur, 2012).
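A minimal sketch of modality-linked representations, including an internal affective modality, might look as follows. The field names and feature values are invented; the point is that percepts stay tagged with their modality rather than being collapsed into amodal symbols (cf. Pezzulo et al., 2013).

```python
from dataclasses import dataclass

@dataclass
class ModalPercept:
    """A percept kept in a modality-specific form; fields are illustrative."""
    modality: str          # e.g., "visual", "auditory", "affective"
    features: dict         # raw, modality-linked feature values
    timestamp: float = 0.0

def by_modality(percepts, modality):
    """Retrieve stored percepts of one modality, e.g., to re-enact them later."""
    return [p for p in percepts if p.modality == modality]

memory = [
    ModalPercept("visual", {"face_detected": True}, 0.1),
    ModalPercept("auditory", {"pitch_hz": 220.0}, 0.1),
    ModalPercept("affective", {"valence": 0.6}, 0.2),  # internal modality
]
print([p.modality for p in by_modality(memory, "visual")])  # ['visual']
```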
Couple perception, action, and cognition. To enable
both T1 and T2 processes, modal representations require inte-
gration and association with one another to couple perceptual
sensors with motor effectors. This notion stems from recent
work in neuroscience showing that perception and action are
closely coupled, from which, cognitive processes are grounded
(e.g., Gangopadhyay & Schilbach, 2011; Knoblich &
Sebanz, 2006); although, this notion was posited in early eco-
logical theory (Gibson, 1979). As such, this line of thinking
has become increasingly adopted in robotics through recogni-
tion of the reciprocal interdependencies of the perception-
action continuum. Again, Brooks’ (1999) subsumption archi-
tecture was one of the first robotics efforts emphasizing direct
perception-action links, which provided motor commands in
direct response to sensory input, allowing the robot to engage
in real-time interaction with the environment. More recently,
Hoffman (2012) proposed achieving multi-modal integration
through symmetrical action-perception activation networks.
Within these networks, perceptions influence higher level as-
sociations that contribute to the selection of actions; however,
perceptions are conversely biased through motor activities.
Not only do these networks provide a basis for perceptual
learning, but they may also lead to more fluent interaction be-
tween robots and human teammates (Hoffman, 2012).
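A toy symmetric activation network conveys the idea: percept and action nodes share weighted, bidirectional links, so perception biases action selection and ongoing motor activity biases perception in return. This is an illustrative reading of Hoffman's (2012) proposal; the node names and weights are invented.

```python
# Bidirectional percept-action links (weights invented for illustration).
WEIGHTS = {
    ("see_extended_hand", "open_gripper"): 0.9,
    ("see_extended_hand", "wave"): 0.2,
    ("hear_greeting", "wave"): 0.8,
}

def spread(source_activations, forward=True):
    """Propagate activation across the links in either direction."""
    out = {}
    for (percept, action), w in WEIGHTS.items():
        src, dst = (percept, action) if forward else (action, percept)
        if src in source_activations:
            out[dst] = out.get(dst, 0.0) + w * source_activations[src]
    return out

acts = spread({"see_extended_hand": 1.0})            # perception biases action
print(max(acts, key=acts.get))                        # open_gripper
bias = spread({"open_gripper": 1.0}, forward=False)   # motor activity biases perception
print(bias)
```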
Provide motor and perceptual resonance mechanisms.
Providing robot teammates with motor and perceptual reso-
nance mechanisms akin to those elicited in humans by the mir-
ror neuron system (cf. Elias & Fiore, 2008), may contribute to
a better understanding of human teammates’ intentions as they
move through the environment, which, in turn, could lead to
more coordinative joint actions (e.g., Chaminade & Cheng,
2009; Schütz-Bosbach & Prinz, 2007). Specifically, resonance
mechanisms resembling the mirroring system can be character-
ized as a T1 process (Bohl & van den Bos, 2012). Barakova
and Lourens (2009) implemented one example of such a mir-
ror neuron framework that provided simulated and embodied
robots with a mechanism for synchronizing movements and
entraining neuronal firing patterns. This facilitated turn taking
between two agents and other teaming related behaviors. The
bi-directionality afforded by motor and perceptual resonance
mechanisms allows robots to engage in social interactions
more similar to those occurring between humans and would likely
improve overall social competence (e.g., Chaminade & Cheng,
2009; Schütz-Bosbach & Prinz, 2007).
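Table 1 points to Hebbian learning networks as one formalism here. The toy sketch below (units, rate, and pairing schedule all invented) shows the core idea: repeatedly co-activating a visual unit with the corresponding motor unit builds a link through which observed movement later re-activates the robot's own motor representation.

```python
def hebbian_update(w, visual, motor, rate=0.25):
    """One Hebbian step: strengthen visual-motor links whose units co-activate
    (a toy stand-in for motor-resonance learning)."""
    return [[w[i][j] + rate * visual[i] * motor[j]
             for j in range(len(motor))] for i in range(len(visual))]

def resonate(w, visual):
    """An observed movement re-activates the robot's associated motor units."""
    return [sum(w[i][j] * visual[i] for i in range(len(visual)))
            for j in range(len(w[0]))]

# Repeatedly pair a seen reach (visual unit 0) with the robot's own reach
# (motor unit 0); observation then drives motor resonance via the weights.
w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(4):
    w = hebbian_update(w, visual=[1.0, 0.0], motor=[1.0, 0.0])
print(resonate(w, visual=[1.0, 0.0]))  # [1.0, 0.0]
```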
Abstract from modal experiences. Mechanisms such as
simulation have the ability to abstract upon modal representa-
tions to instantiate T2 processes. Barsalou (2008) defines this
as the “reenactment of perceptual, motor, and introspective
states acquired during experience with the world, body, and
mind” (pp. 618-619). Along these lines, Breazeal et al. (2009)
developed an embodied social robot system comprised of in-
terconnected perceptual, motor, belief, and intention modules
from which the robot generates its own states and re-uses them
to simulate and infer the perspective and intentions of humans
during an interaction. In support of interactive capabilities,
other efforts have used Dynamic Bayesian Networks for un-
derstanding mental states (Pezzulo, 2012). In short, simulation
mechanisms, in this context, are similar to T2 processes and
would provide the robot with a mechanism for engaging in
mental state attributions of others.
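A single Bayesian update over candidate intentions, given an observed action, conveys the inference step in miniature. This is a one-step stand-in for the Dynamic Bayesian Network treatment cited above (Pezzulo, 2012); all hypotheses and probabilities are invented.

```python
def update_belief(prior, likelihoods, observation):
    """Bayesian update over a teammate's candidate intentions given an
    observed action: posterior ∝ prior × likelihood, then normalize."""
    posterior = {h: prior[h] * likelihoods[h].get(observation, 1e-6)
                 for h in prior}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

prior = {"fetch_tool": 0.5, "leave_room": 0.5}
likelihoods = {
    "fetch_tool": {"walk_to_bench": 0.8, "walk_to_door": 0.1},
    "leave_room": {"walk_to_bench": 0.1, "walk_to_door": 0.8},
}
belief = update_belief(prior, likelihoods, "walk_to_bench")
print(round(belief["fetch_tool"], 2))  # ≈ 0.89: the robot now attributes
                                       # the "fetch_tool" intention
```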
Leverage simulation-based top-down perceptual bias-
ing. Simulation mechanisms also provide a means for a robot
to predict future states in service of engaging in coordinative
action with the physical and social environment (Hoffman &
Breazeal, 2004). Hoffman (2012) submits that “simulation-
based top-down perceptual biasing may specifically be the key
to more fluent coordination between humans and robots work-
ing together in a socially structured interaction” (p. 6). Percep-
tual priming as a simulation mechanism essentially stimulates
and triggers the motor system towards the selection of appro-
priate actions (Marsh, Richardson, & Schmidt, 2009; Schütz-
Bosbach & Prinz, 2007). This deliberate mechanism could
serve as the link between T1 and T2 processes as a function of
informing the more direct perception-action links and also
contribute to the learning and perception of affordances.
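The biasing step itself can be sketched simply: the robot's simulation of the joint task predicts what should be perceived next, and that prediction adds a top-down bonus to the matching perceptual hypothesis. This is one illustrative reading of Hoffman's (2012) proposal, with invented labels and numbers.

```python
def biased_classify(evidence, expected, bias=0.3):
    """Pick the percept with the highest score after adding a top-down
    bonus to the label the robot's simulation predicts next."""
    scores = dict(evidence)
    if expected in scores:
        scores[expected] += bias
    return max(scores, key=scores.get)

# Bottom-up evidence is ambiguous between two gestures...
evidence = {"reach_for_tool": 0.45, "point_at_tool": 0.50}
# ...but simulating the joint task predicts a handoff, so perception is
# biased toward the action-relevant reading.
print(biased_classify(evidence, expected="reach_for_tool"))  # reach_for_tool
```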
CONCLUSIONS
Our goal with this paper was to lay the foundation for
modeling social-cognitive mechanisms in robots such that they
can be designed to function, and be perceived, as effective
teammates. Table 1 lists the modeling recommendations ad-
vanced within this paper with the relation to T1 & T2 process-
es, examples of associated computational formalisms support-
ing their instantiation, and representative references essential
for consideration in their implementation.
Table 1. Modeling recommendations with potential computational formalisms and representative references

Leverage the ecological approach to robotics. (T1)
  Potential formalisms: subsumption architectures; optic flow control laws
  References: Brooks, 1999; Duchon et al., 1998

Utilize physical and social affordances. (T1)
  Potential formalisms: visuo-spatial perspective taking; effort and affordance analysis; (effect, (entity, behavior))
  References: Pandey & Alami, 2012; Sahin et al., 2007

Incorporate dynamical modeling and analysis of interaction dynamics. (T1)
  Potential formalisms: dynamical systems modeling techniques; HKB equations
  References: Beer, 1995; Marsh et al., 2009; Treur, 2012

Instantiate modal perceptual and motor representations. (Input for both T1 & T2)
  Potential formalisms: modality streams; nodes; action networks; afferent, efferent
  References: Hoffman, 2012; Pezzulo et al., 2013

Couple perception, action, and cognition. (Link T1 & T2)
  Potential formalisms: convergence zones; action-perception activation networks
  References: Brooks, 1999; Hoffman, 2012

Provide motor and perceptual resonance mechanisms. (T1)
  Potential formalisms: Hebbian learning networks associating visual and motor representations
  References: Chaminade & Cheng, 2009; Schütz-Bosbach & Prinz, 2007

Abstract from modal experiences. (T2)
  Potential formalisms: generation and simulation mode; dynamic Bayesian modeling of mental states
  References: Breazeal et al., 2009; Pezzulo, 2012

Leverage simulation-based top-down perceptual biasing. (Link T1 & T2)
  Potential formalisms: Markov-chain Bayesian predictor; intermodal Hebbian reinforcing
  References: Hoffman, 2012; Schütz-Bosbach & Prinz, 2007
Though we have synthesized our recommendations and
made interconnections from a number of disciplinary ap-
proaches, these have centered primarily around modeling the
perceptual, motor, and cognitive architecture of the robot. Re-
search and modeling efforts are warranted to take the next step
towards integrating and developing such a system based on
these recommendations. That being said, these recommenda-
tions serve to explicate recent advances in understanding so-
cial-cognitive mechanisms in humans and the beginnings of
leveraging such knowledge for the design of social robotic
systems. Admittedly, given the instantiation and integration of
these mechanisms, certainly more challenges will need to be
addressed to ensure the robot will perform as an effective team
member (cf. Klein et al., 2004). These recommendations, if
instantiated, would provide some very basic perceptual, motor,
and cognitive abilities, but future efforts should address
whether these would also support more complex forms of so-
cial interaction. We expect that these social-cognitive mecha-
nisms, if shown to support non-verbal interactions, would also
be foundational for linguistic interaction (cf. Pezzulo, 2012).
In sum, our goal has been to outline how an embodied so-
cial robot can begin to function autonomously as a teammate.
Adopting a dual processing approach to modeling social-
cognitive mechanisms in robots would not only provide a more
sophisticated and flexible perceptual, motor, and cognitive
architecture for robots, it would also allow for a more direct
understanding of, and automatic interaction with, the environ-
ment and human teammates. It could also provide the mecha-
nism for better understanding human behaviors and mental
states as well as allow for prediction and interpretation of nov-
el and complex social situations.
REFERENCES
Bargh, J. A. (1984). Automatic and conscious processing of social infor-
mation. In R. S. Wyer Jr., & T. K. Srull (Eds.), The Handbook of
Social Cognition (Vol. 3, pp. 1-43). Hillsdale, NJ: Erlbaum.
Barakova, E. I., & Lourens, T. (2009). Mirror neuron framework yields repre-
sentations for robot interaction. Neurocomputing, 72, 895-900.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology,
59, 617-645.
Bartneck, C., & Forlizzi, J. (2004). A designed-centered framework for social
human-robot interaction. Proceedings of the Ro-Man 2004, Ku-
rashiki, 591-594.
Beer, R. D. (1995). A dynamical systems perspective on agent-environment
interaction. Artificial Intelligence, 72, 173-215.
Bockelman Morrow, P., & Fiore, S.M. (2012). Supporting human-robot teams
in social dynamicism: An overview of the metaphoric inference
framework. Proceedings of the 56th Annual Meeting of the HFES
(pp. 1718-1722). Boston, MA: HFES.
Bohl, V., & van den Bos, W. (2012). Toward an integrative account of social
cognition: Marrying theory of mind and interactionism to study
the interplay of type 1 and type 2 processes. Frontiers in Human
Neuroscience, 6(274), 1-15.
Breazeal, C. (2004). Social interactions in HRI: The robot view. IEEE Transactions
on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
Breazeal, C., Gray, J., & Berlin, M. (2009). An embodied cognition approach
to mindreading skills for socially intelligent robots. The Interna-
tional Journal of Robotics Research, 28, 656-680.
Brooks, R. A. (1999). Cambrian intelligence: The early history of the new
AI. Cambridge, MA: MIT Press.
Chaiken, S., & Trope, Y. (1999). Dual Process Theories in Social Psycholo-
gy. New York: Guilford.
Chaminade, T., & Cheng, G. (2009). Social cognitive neuroscience and hu-
manoid robotics. Journal of Physiology – Paris, 103, 286-295.
Chemero, A. (2003) An outline of a theory of affordances. Ecological Psy-
chology 15(2), pp. 181–95.
Chemero, A., & Turvey, M. T. (2007). Gibsonian affordances for roboticists.
Adaptive Behavior, 15(4), 473-480.
Dautenhahn, K., Ogden, B., & Quick, T. (2002). From embodied to social
embedded agents: Implications for interaction-aware robots. Cog-
nitive Systems Research, 3, 397-428.
De Jaegher, H. (2009). Social understanding through direct perception? Yes,
by interacting. Consciousness and Cognition, 18(2), 535–542.
Duchon, A. P., Warren, W. H., & Kaelbling, L. P. (1998). Ecological robot-
ics. Adaptive Behavior, 6, 473-507.
Elias, J. & Fiore, S. M. (2008). From Psychology, to Neuroscience, to Robots:
An Interdisciplinary Approach to Bio-Inspired Robotics. Present-
ed at the 20th Annual Convention of the American Psychological
Society, May, Chicago, IL.
Fiore, S. M., Elias, J., Gallagher, S., & Jentsch, F. (2008). Cognition and
coordination: Applying cognitive science to understand macro-
cognition in human-agent teams. Proceedings of the 8th Annual
Symposium on Human Interaction with Complex Systems, Nor-
folk, Virginia.
Gallagher, S. (2007). Social cognition and social robots. Pragmatics & Cog-
nition, 15(3), 435-453.
Gallagher, S. (2008). Direct perception in the intersubjective context. Con-
sciousness and Cognition, 17, 535–543.
Gallagher, S., & Hutto, D. D. (2008). Understanding others through primary
interaction and narrative practice. In J. Zlatev, T. Racine, C. Sinha
& E. Itkonen (Eds.), The shared mind: Perspectives on intersub-
jectivity. Amsterdam: John Benjamins.
Gangopadhyay, N., & Schilbach, L. (2011). Seeing minds: A neurophilo-
sophical investigation of the role of perception-action coupling in
social perception. Social Neuroscience, 1-14.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston:
Houghton Mifflin.
Hoffman, G. (2012). Embodied cognition for autonomous interactive robots.
Topics in Cognitive Science, 4(4), 759-772.
Hoffman, G., & Breazeal, C. (2004). Collaboration in human-robot teams. In
Proceedings of the AIAA 1st Intelligent Systems Technical Confer-
ence, Chicago, IL (pp. 1-18). Reston, VA: AIAA.
Horton, T. E., Chakraborty, A., & Amant, R. S. (2012). Affordances for ro-
bots: A brief survey. Avant, 3(2), 70-84.
Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J.
(2004). Ten challenges for making automation a "team player" in
joint human-agent activity. IEEE Intelligent Systems, 19(6), 91-95.
Knoblich, G., & Sebanz, N. (2006). The social nature of perception and ac-
tion. Current Directions in Psychological Science, 15(3), 99-104.
Kono, T. (2009). Social affordances and the possibility of ecological linguis-
tics. Integrative Psychological Behavioral Science, 43, 356-373.
Lackey, S. J., Barber, D. J., Reinerman-Jones, L., Badler, N., & Hudson, I.
(2011). Defining next-generation multi-modal communication in
human-robot interaction. Proceedings of the 55th Annual Meeting of
the HFES (pp. 461-464). Santa Monica, CA: HFES.
Marsh, K. L., Richardson, M. J., & Schmidt, R. C. (2009). Social connection
through joint action and interpersonal coordination. Topics in
Cognitive Science, 1, 320-339.
Noë, A. (2004). Action in perception. Cambridge: MIT Press.
Pandey, A. K., & Alami, R. (2012, July). Visuo-spatial ability, effort and
affordance analyses: Towards building blocks for robot’s complex
socio-cognitive behaviors. In Workshops at the Twenty-Sixth AAAI
Conference on Artificial Intelligence.
Pezzulo, G. (2012). The "Interaction Engine": A common pragmatic compe-
tence across linguistic and nonlinguistic interactions. IEEE Trans-
actions on Autonomous Mental Development, 4(2), 105-123.
Pezzulo, G., Barsalou, L. W., Cangelosi, A., Fischer, M. H., McRae, K., &
Spivey, M. (2013). Computational grounded cognition: A new al-
liance between grounded cognition and computational modeling.
Frontiers in Psychology, 3(612), 1-11.
Pfeifer, R., Lungarella, M., & Iida, F. (2007). Self-organization, embodiment,
and biologically inspired robotics. Science, 318, 1088-1093.
Phillips, E., Ososky, S., & Jentsch, F. (2011). From tools to teammates: To-
ward the development of appropriate mental models for intelligent
robots. Proceedings of the 55th Annual Meeting of the HFES (pp.
1491-1495). Santa Monica, CA: HFES.
Sahin, E., Cakmak, M., Dogar, M. R., Ugur, E., & Ucoluk, G. (2007). To
afford or not to afford: A new formalization of affordances toward
affordance-based robot control. Adaptive Behavior, 15(4), 447-
472.
Satpute, A. B., & Lieberman, M. D. (2006). Integrating automatic and con-
trolled processes into neurocognitive models of social cognition.
Brain Research, 1079, 86-97.
Schütz-Bosbach, S., & Prinz, W. (2007). Perceptual resonance: Action-
induced modulation of perception. Trends in Cognitive Sciences,
11(8), 349-355.
Streater, J. P., Bockelman Morrow, P., & Fiore, S. M. (2012). Making things
that understand people: The beginnings of an interdisciplinary ap-
proach for engineering computational social intelligence. Present-
ed at the 56th Annual Meeting of the Human Factors and Ergo-
nomics Society (October 22-26), Boston, MA.
Thompson, E. (2005). Sensorimotor subjectivity and the enactive approach to
experience. Phenomenology and the Cognitive Sciences, 4, 407-
427.
Treur, J. (2012). An integrative dynamical systems perspective on emotions.
Biologically Inspired Cognitive Architectures Journal,
http://dx.doi.org/10.1016/j.bica.2012.07.005.
Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D'Errico, F.,
& Schröder, M. (2012). Bridging the gap between social animal
and unsocial machine: A survey of social signal processing. IEEE
Transactions on Affective Computing, 3(1), 69-87.
PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 57th ANNUAL MEETING - 2013 1282