Content uploaded by Travis J. Wiltshire on Aug 17, 2015. Author content; content may be subject to copyright.

Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2013, 57: 1278. DOI: 10.1177/1541931213571283
The online version of this article can be found at: http://pro.sagepub.com/content/57/1/1278
Published by SAGE Publications (http://www.sagepublications.com) on behalf of the Human Factors and Ergonomics Society.
Version of Record published Sep 30, 2013.
Copyright 2013 by Human Factors and Ergonomics Society, Inc. All rights reserved.
Towards Modeling Social-Cognitive Mechanisms in Robots to Facilitate
Human-Robot Teaming
Travis J. Wiltshire, Daniel Barber, and Stephen M. Fiore
University of Central Florida, Orlando, FL
For effective human-robot teaming, robots must gain the appropriate social-cognitive mechanisms that al-
low them to function naturally and intuitively in social interactions with humans. However, there is a lack of
consensus on social cognition broadly, and how to design such mechanisms for embodied robotic systems.
To this end, recommendations are advanced that are drawn from HRI, psychology, robotics, neuroscience
and philosophy as well as theories of embodied cognition, dual process theory, ecological psychology, and
dynamical systems. These interdisciplinary and multi-theoretic recommendations are meant to serve as inte-
grative and foundational guidelines for the design of robots with effective social-cognitive mechanisms.
INTRODUCTION
A number of efforts in Human-Robot Interaction (HRI)
are focused on transforming the common perception of robots
as tools to robots viewed as teammates, collaborators, or part-
ners (e.g., Fiore, Elias, Gallagher, & Jentsch, 2008; Hoffman
& Breazeal, 2004; Lackey, Barber, Reinerman-Jones, Badler,
& Hudson, 2011; Phillips, Ososky, & Jentsch, 2011). Though
the development of social-cognitive mechanisms has received
significantly less emphasis in HRI, it is essential for effective
human-robot teaming, as such mechanisms allow robots to
function naturally and intuitively during their interactions with
humans (Breazeal, 2004). To effectively coordinate and coop-
erate with human teammates, a robot must not only recognize
the human teammate’s observable actions, but it must also
understand the intentions behind such actions in relation to the
environment. To have such a capability, robots must gain the
requisite social-cognitive mechanisms that afford the interpre-
tation of mental states (i.e., intentions, beliefs, desires) of
teammates during interactive contexts and concomitantly dis-
play the appropriate behaviors. This will allow robot team-
mates to work with humans towards shared goals and dynami-
cally adjust plans based on both observable human actions and
unobservable mental states (e.g., Hoffman & Breazeal, 2004).
Engineering human social cognition. In service of the
aforementioned need, we seek to extend the recently advanced
approach termed Engineering Human Social Cognition
(EHSC), which aims to leverage an interdisciplinary under-
standing of human social-cognitive processes for the develop-
ment and design of systems that possess social intelligence
(Streater, Bockelman Morrow, & Fiore, 2012). This effort
draws heavily from advances in Social Signal Processing
(SSP) with the aim of providing mechanisms for computers
that are able to interpret high-level social signals (i.e., mental
states) from combinations of low-level social cues (see Vinci-
arelli et al., 2012 for review).
Our efforts build off of this work with a particular empha-
sis on social robotics. Importantly, there are a number of dif-
ferentiating characteristics that distinguish robots from com-
puters. Social robots are defined as: (a) physically embodied
agents that (b) function at some level of autonomy and (c)
interact and communicate with humans by (d) adhering to
normative and expected social behaviors (Bartneck & Forlizzi,
2004). Given that one of the long term goals for designers of
robotic teammates, and EHSC, is for robots to function auton-
omously, it is essential to first frame our approach with a defi-
nition of autonomous agents. An autonomous agent is an
embodied system designed to “satisfy internal and
external goals by its own actions while in continuous long-
term interaction with the environment in which it is situated”
(Beer, 1995, p. 173). Although there are inherent challenges
associated with automated systems performing as team mem-
bers (cf. Klein, Woods, Bradshaw, Hoffman, & Feltovich,
2004), we submit that providing foundational social-cognitive
mechanisms is a necessary step towards mitigating many of
these issues. Our aim with this paper is not to articulate the
challenges that could be solved by the instantiation of such
mechanisms, but rather provide an outline of such a system.
Next, in framing our approach, we suggest that robots
must be designed with serious consideration of their embodi-
ment. We suggest this strongly given the increasing support for
the modeling of artificial cognitive systems and, in particular,
social robots, on principles of embodied cognition across dis-
ciplines such as HRI, philosophy, psychology, robotics, and
neuroscience (e.g., Barsalou, 2008; Breazeal et al., 2009;
Chaminade & Cheng, 2009; Dautenhahn, Ogden, & Quick,
2002; Fiore et al., 2008; Gallagher, 2007; Hoffman, 2012;
Pezzulo et al., 2013; Pfeifer, Lungarella, & Iida, 2007). By
principles of embodied cognition we mean that designers must
account for (a) the environment and ecological niche and the
associated physical laws in which (b) the body and morpholog-
ical structures of an agent are grounded, (c) the sensorimotor
couplings (e.g., sensor and effector relations), as a function of
morphology, that shape the dynamic interactions between
agent and environment, and (d) the situatedness of the agents’
cognitive processes as a function of varying contexts (e.g.,
Pezzulo et al., 2013; Pfeifer et al., 2007). This emphasis on
considering embodiment is central to social cognition in hu-
mans (e.g., Bohl & van den Bos, 2012); though, taking an em-
bodied view of social cognition requires further explanation of
the dual way in which these processes occur.
Dual paths for social signal processing. Decades of re-
search in social psychology and more recently, social cognitive
neuroscience (among other cognitive fields of inquiry) suggest
that there are two generally distinct types of cognitive process-
es that follow different pathways evident at the neural and
functional levels and emergent at the personal level (e.g.,
Bargh, 1984; Chaiken & Trope, 1999; Satpute & Lieberman,
2006). Type 1 (T1) processes are implicit, automatic, stimulus-
driven, evolutionarily older, shared with a broader range of
species, and assign primacy to action. Type 2 (T2) processes
are explicit, controlled, evolutionarily recent, and characteris-
tically off-line (e.g., Bohl & van den Bos, 2012; Chaiken, &
Trope, 1999). In Cognitive Science, a related account uses the
terms “direct” and “reflective” perception (Bockelman Mor-
row & Fiore, 2012). Related to T1 processes, direct perception
involves processing that considers the enactive and embodied
coupling between humans and their world (e.g., Noë, 2004;
Thompson, 2005). This has been extended to social-cognitive
processes and is argued to allow for rapid access of intentional
and affective states of another agent (De Jaegher, 2009; Gal-
lagher, 2008). Related to T2 processes, reflective perception
relies on memory and context to construct interpretations of
interactions. Communication and joint attention draw from
norms and prior experience to enable an agent to make sense
of intersubjective experiences (Gallagher & Hutto, 2008).
Bohl and van den Bos (2012) advanced a multi-level
framework that leverages dual-process theory to integrate in-
teractionism and theory of mind approaches to social cogni-
tion. In this account, interactionism emphasizes the role of
neural systems (e.g., mirror neuron system) in providing a
more direct and automatic perception of others’ mental states
during an interaction. Conversely, theory of mind approaches
emphasize a “mindreading system” to make inferences or men-
tal simulations to understand others’ mental states. Taken to-
gether, these two approaches, unified by dual process theory,
provide a rich account of social cognition in humans.
We suggest that this integration can inform the design of
social-cognitive models for robots in order to improve human
robot teaming. If social signal processing in robots could be
modeled in this way, it may approximate the sophisticated and
flexible social-cognitive processes in humans. Specifically, T1
processes would allow for a more direct understanding of, and
automatic interaction with, the environment and human team-
mates, and T2 processes would allow for more complex and
deliberate forms of cognition, such as mental simulation, that
allow for prediction and interpretation of novel situations.
We argue that taking into account a dual process perspec-
tive would ultimately lead to not only a more effective robot,
but also a more effective teammate. For example, a robot in-
teracting with a teammate under time constraints would be able
to leverage T1 processes in service of directly engaging in
appropriate actions. However, when time constraints are less
relevant, or if a robot is engaging in, for example, a surveil-
lance task, T2 processes would allow for more analytic types
of cognition to better understand and predict the future states
of a situation. To the best of our knowledge, no such outline
exists to explicate the nature of a dual social signal processing
system in an embodied robot. Therefore, as the main body of
this paper, we aim to provide the preliminary foundation for
such a system with reference to each recommendation’s link to
T1 and/or T2 processes. Because design recommendations for
such social-cognitive systems are sparse, the aim of this paper
is to leverage an interdisciplinary and multi-theoretic approach
to provide recommendations for the design of robots that will
one day function, and be perceived, as teammates.
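As a concrete, if simplified, illustration of the dual-route idea described above, arbitration between a fast T1 route and a slower T2 route might be sketched as follows. This is a minimal sketch, not an established architecture; all function names, cues, and the time-cost threshold are hypothetical.

```python
# Hypothetical sketch: arbitration between a fast T1 (reactive) route and a
# slow T2 (deliberative) route based on available decision time. All names,
# cue labels, and thresholds are illustrative assumptions.

def t1_reactive(cue):
    """Fast, stimulus-driven mapping from a social cue to an action."""
    reflexes = {"handoff_gesture": "extend_gripper",
                "stop_gesture": "halt"}
    return reflexes.get(cue, "idle")

def t2_deliberative(cue, context):
    """Slower route: interpret the cue against context before acting."""
    if cue == "handoff_gesture" and context.get("human_overloaded"):
        return "offer_assistance"
    return t1_reactive(cue)  # fall back on the reactive mapping

def select_action(cue, context, time_budget_s):
    """Use T1 under tight time constraints, T2 otherwise."""
    T2_COST_S = 0.5  # assumed cost of deliberation
    if time_budget_s < T2_COST_S:
        return t1_reactive(cue)
    return t2_deliberative(cue, context)

print(select_action("stop_gesture", {}, 0.1))   # time-pressured: T1 route
print(select_action("handoff_gesture", {"human_overloaded": True}, 2.0))
```

The design choice mirrored here is the one argued in the text: time pressure selects the direct route, while surplus time permits more analytic interpretation of the same cue.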
RECOMMENDATIONS FOR MODELING SOCIAL-
COGNITIVE MECHANISMS IN ROBOTS
The recommendations in this paper are drawn from disci-
plines such as HRI, philosophy, psychology, robotics, and neu-
roscience as well as from theories of embodied cognition, dual
process theory, ecological psychology, and dynamical systems
theory. More specifically, at this point the aim is not to pro-
vide modeling formalisms for social robotics as many of these
are included in the references we cite. Rather, the aim is to
highlight a set of recommendations that are relevant for ad-
vancing social-cognitive mechanisms through linkages across a
number of disciplines and approaches. Taken together, these
recommendations serve as a foundational step towards improv-
ing human-robot teaming, although admittedly the attempt here
is ambitious given the complexity of social cognition. As such,
our future efforts will adopt a more technical and instantiating
approach based on these recommendations. Therefore, the
recommendations and ideas presented herein are meant to be
more illustrative than exhaustive.
Leverage the ecological approach to robotics. To the
extent possible, the system should be designed in such a way
as to provide the opportunity for direct and non-
representational interaction with the physical and social envi-
ronment to minimize the expense of computational resources.
We argue that the ecological approach provides a framework
for instantiating T1 processes in robots. Representative works
in robotics related to this notion are found in Brooks (1999) as
well as Duchon, Warren, and Kaelbling (1998). Brooks’
(1999) approach led to the development of the subsumption
architecture, in which each sub-system of the robot was added
onto more basic sub-systems without interfering with previous
systems. Similarly, Duchon et al. (1998) developed robots that
were able to navigate the environment solely through optic flow
rather than construction of an internal model of the world.
The ecological approach to robots is characterized by the
following principles: (a) treatment of the robot and the envi-
ronment as a system, (b) the robot’s behavior is emergent from
the dynamics of the agent-environment system, (c) there is a
direct coupling between perception and action, (d) the infor-
mation required for adaptive behavior is available in the envi-
ronment, and (e) a robot does not always need to
represent the environment in a central model (Duchon et al.,
1998). These notions derive largely from Gibson’s (1979) eco-
logical approach to perception. However, for our purposes,
they are also characteristic of T1 processes. As such, the eco-
logical approach could serve to inform the design of a social
signal processing system by providing more direct mechanisms
for interaction with the physical and social environment.
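The subsumption style of layering referenced above can be sketched in a few lines: each behavior maps sensor readings directly to motor commands with no central world model, and higher-priority layers subsume (override) the output of lower ones. The layer names, sensor fields, and priority ordering below are assumptions for illustration, not Brooks's original decomposition.

```python
# Illustrative sketch of Brooks-style subsumption layering. Each layer maps
# sensors directly to a command or defers (returns None); the first layer
# that produces a command wins. All names and fields are hypothetical.

def avoid_obstacle(sensors):
    """Highest priority: reflexive avoidance from a range reading."""
    if sensors.get("obstacle_cm", 1000) < 30:
        return "turn_away"
    return None  # no command; defer to lower layers

def follow_teammate(sensors):
    """Social layer: steer toward a detected teammate."""
    if sensors.get("teammate_bearing") is not None:
        return "move_toward_teammate"
    return None

def wander(sensors):
    """Default exploratory behavior when nothing else fires."""
    return "wander"

LAYERS = [avoid_obstacle, follow_teammate, wander]  # highest priority first

def subsume(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(subsume({"obstacle_cm": 10, "teammate_bearing": 0.4}))
```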
Utilize physical and social affordances. Stemming from
the ecological approach is the notion of affordances. Af-
fordances are thought of as the directly perceivable opportuni-
ties for action or interaction arising between the agent and en-
vironment. This interaction could be composed of a number of
objects, substances, and surfaces, and also includes other
agents (Gibson, 1979; Chemero, 2003). Further, affordances
are dependent on the type of embodiment an agent has as well
as the concomitant structuring of the environment as perceiva-
ble in an agent’s optic array and other sensory modalities (e.g.,
Kono, 2009). As such, affordances are essential to advancing
our goal of providing a robot with more direct or T1 processes
for interaction.
Mechanisms for control of autonomous robots have uti-
lized and developed formalisms of affordances that specify
relations between the agent and the environment and have
proven useful for navigation through a physical environment
(e.g., Sahin, Cakmak, Dogar, Ugur, & Ucoluk, 2007). Much
work on affordances in robotics is of this nature; that is, focus-
ing on interaction with the physical environment with relative-
ly little emphasis on the affordances of the social environment.
However, although not explicitly following the ecological ap-
proach, Pandey and Alami’s (2012) efforts towards the devel-
opment of complex social-cognitive behaviors in robots in-
cludes mechanisms for the analysis of affordances not only
between an agent, objects, and the environment, but also be-
tween multiple agents. Their approach specifies three key ca-
pabilities (visuo-spatial perspective taking, effort analysis, and
affordance analysis) as necessary for providing a robot with
social-cognitive mechanisms, with which a robot could gather
information from the environment about teammates and act on
them in order to achieve cooperative and coordinative goals.
The first capability, visuo-spatial perspective taking, is
analogous to gaze-following and perspective-taking behaviors
in humans and provides a mechanism for interpreting what a
teammate visually perceives at a given point in time. The sec-
ond capability, effort analysis, represents the amount of effort
required by a teammate to execute a task given the current
situation, positioning, and morphology of the teammate's
body. Lastly, affordance analysis provides a mechanism for
identifying opportunities for action existing between agent-
agent, agent-location, agent-object, and object-agent. Fusing
these capabilities provides a robot with information regarding
the movements a teammate might be able to execute at a given
time, the tasks executable between multiple teammates at a
given time, and the possibilities for movement of objects (see
Pandey & Alami, 2012 for details).
Notably, proponents of the ecological view may object to
these implementations of affordances for robots as most main-
tain that affordances are non-representational (e.g., Chemero
& Turvey, 2007). However, addressing this argument is be-
yond the scope of this paper and, as such, we remain agnostic
to either view and leave room for both representational and
non-representational approaches (cf. Horton, Chakraborty, &
Amant, 2012). Regardless, affordance-based robotics research
is needed to extend efforts beyond interaction with the physi-
cal environment to novel methods for interaction with the so-
cial environment, facilitating effective human-robot teaming.
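The (effect, (entity, behavior)) formalism of Sahin et al. (2007), listed in Table 1, can be sketched as a relational lookup; treating another agent as the entity extends the same structure toward the social affordances discussed above. The concrete entries and names below are invented for illustration.

```python
from collections import namedtuple

# Minimal sketch of the (effect, (entity, behavior)) affordance relation of
# Sahin et al. (2007). An agent in the entity slot gives a rudimentary
# social affordance. All example entries are invented.
Affordance = namedtuple("Affordance", ["effect", "entity", "behavior"])

KNOWN_AFFORDANCES = [
    Affordance("lifted", "crate", "grasp"),
    Affordance("opened", "door", "push"),
    Affordance("handed_over", "teammate", "extend_object"),  # agent-agent
]

def behaviors_for(desired_effect):
    """(entity, behavior) pairs predicted to bring about the effect."""
    return [(a.entity, a.behavior) for a in KNOWN_AFFORDANCES
            if a.effect == desired_effect]

print(behaviors_for("handed_over"))
```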
Incorporate dynamical modeling and analysis of inter-
action dynamics. Across multiple levels ranging from the
neural phenomena within an individual, to that of multiple
individuals interacting in an environment, there are emergent
and dynamic self-organizational patterns that specify and con-
strain interaction possibilities as they occur across time and
space (e.g., Marsh, Richardson, & Schmidt, 2009; Pfeifer et
al., 2007). As such, a number of researchers have pursued
modeling autonomous agents and more broadly, cognition, as
dynamical systems (e.g., Beer, 1995; Treur, 2012). Mecha-
nisms such as these support robots by providing more computa-
tionally efficient means of interpreting and interacting with
both the physical and social environment.
Aside from modeling, dynamical systems tools have also
been used for analyzing interaction dynamics from motion
sensing data (e.g., Marsh et al., 2009). With such mechanisms,
robots could conceivably analyze interaction dynamics unfold-
ing between teammates. In this context, we propose that robots
should be designed with mechanisms for analyzing and syn-
chronizing with emergent interaction dynamics. Doing so
may provide robots with an additional capability analo-
gous to T1 processes in humans.
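One dynamical formalism listed in Table 1 for this recommendation is the Haken-Kelso-Bunz (HKB) relative-phase equation, often used to model interpersonal movement coordination. A robot tracking this dynamic could predict whether its movement will settle into in-phase or anti-phase coordination with a teammate. The sketch below numerically integrates the equation; the parameter values and step sizes are illustrative assumptions.

```python
import math

# Sketch of the HKB relative-phase dynamic,
#   dphi/dt = -a*sin(phi) - 2*b*sin(2*phi),
# integrated with a simple Euler step. With a = b = 1 the system is
# bistable: relative phase relaxes to 0 (in-phase) or pi (anti-phase)
# depending on where it starts. Parameters are illustrative.

def hkb_step(phi, a=1.0, b=1.0, dt=0.01):
    return phi + dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))

def settle(phi0, steps=5000):
    """Integrate long enough for the relative phase to reach an attractor."""
    phi = phi0
    for _ in range(steps):
        phi = hkb_step(phi)
    return phi

print(round(settle(0.4), 3))  # starts near in-phase coordination
print(round(settle(3.0), 3))  # starts near anti-phase coordination
```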
Instantiate modal perceptual and motor representa-
tions. To the extent that the robot cannot rely on non-
representational interaction with the physical and social envi-
ronment, the embodied cognitive system of a social robot must
rely on multi-modal sensory and motor representations (e.g.,
Hoffman, 2012). According to Pezzulo et al. (2013), this
means that incoming information from the robot’s sensors
must be represented in a form that is linked to its modality
(e.g., visual or auditory). This provides the foundation from
which a robot could begin to manipulate modal perceptions to
not only interpret a human teammate (Breazeal et al., 2009),
but also to form concepts, memories, and make decisions
(Hoffman, 2012). From this, modal perceptual and motor rep-
resentations can be taken as providing an essential form of
input for both T1 and T2 processes.
Further, in line with the goal of developing autonomous
agents, the robot must be goal-oriented and thus, it must gen-
erate internal modalities such as affect and motivation (Pez-
zulo et al., 2013; Hoffman, 2012). Notions such as affect may
seem abstract for a robot; however, this can provide a robot
with a social-cognitive mechanism for understanding the affec-
tive states of human teammates. Such efforts would point the
way forward for modeling the dynamic, cyclical, and interde-
pendent nature of affective and cognitive states (Treur, 2012).
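A minimal way to realize modality-tagged representations, including internally generated modalities such as affect, is to keep the source modality attached to each percept. The field names and example buffer below are assumptions for illustration, not a proposal from the cited works.

```python
from dataclasses import dataclass

# Sketch of modality-tagged representations (cf. Hoffman, 2012; Pezzulo
# et al., 2013): each percept retains the modality it arrived through,
# including internal modalities such as affect. Names are hypothetical.

@dataclass
class ModalPercept:
    modality: str    # e.g. "visual", "auditory", "affect"
    content: str
    salience: float

def percepts_by_modality(buffer, modality):
    """Retrieve percepts in a given modality for downstream processing."""
    return [p for p in buffer if p.modality == modality]

buffer = [
    ModalPercept("visual", "teammate_waving", 0.9),
    ModalPercept("auditory", "teammate_speech", 0.6),
    ModalPercept("affect", "low_motivation", 0.4),  # internal modality
]
print([p.content for p in percepts_by_modality(buffer, "visual")])
```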
Couple perception, action, and cognition. To enable
both T1 and T2 processes, modal representations require inte-
gration and association with one another to couple perceptual
sensors with motor effectors. This notion stems from recent
work in neuroscience showing that perception and action are
closely coupled, from which, cognitive processes are grounded
(e.g., Gangopadhyay & Schilbach, 2011; Knoblich &
Sebanz, 2006); although, this notion was posited in early eco-
logical theory (Gibson, 1979). As such, this line of thinking
has become increasingly adopted in robotics through recogni-
tion of the reciprocal interdependencies of the perception-
action continuum. Again, Brooks’ (1999) subsumption archi-
tecture was one of the first robotics efforts emphasizing direct
perception-action links, which provided motor commands in
direct response to sensory input, allowing the robot to engage
in real-time interaction with the environment. More recently,
Hoffman (2012) proposed achieving multi-modal integration
through symmetrical action-perception activation networks.
Within these networks, perceptions influence higher level as-
sociations that contribute to the selection of actions; however,
perceptions are conversely biased through motor activities.
Not only do these networks provide a basis for perceptual
learning, but they may also lead to more fluent interaction be-
tween robots and human teammates (Hoffman, 2012).
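The symmetry of such action-perception networks can be sketched as bidirectional spreading activation over shared weighted links, so that perception primes action selection and active motor programs bias perception in return. The weights and node names below are illustrative assumptions, not Hoffman's (2012) actual network.

```python
# Sketch of a symmetric action-perception activation network: percept and
# motor nodes share weighted, bidirectional links. All weights and node
# labels are invented for illustration.

WEIGHTS = {  # symmetric links: (percept_node, motor_node) -> weight
    ("seen_reach", "grasp"): 0.8,
    ("seen_reach", "wave"): 0.1,
    ("seen_wave", "wave"): 0.9,
}

def spread_to_motor(percept_activations):
    """Perception primes action selection."""
    motor = {}
    for (p, m), w in WEIGHTS.items():
        motor[m] = motor.get(m, 0.0) + w * percept_activations.get(p, 0.0)
    return motor

def spread_to_percepts(motor_activations):
    """The same links, traversed the other way, bias perception."""
    percepts = {}
    for (p, m), w in WEIGHTS.items():
        percepts[p] = percepts.get(p, 0.0) + w * motor_activations.get(m, 0.0)
    return percepts

motor = spread_to_motor({"seen_reach": 1.0})
print(max(motor, key=motor.get))   # perception primes an action
bias = spread_to_percepts({"wave": 1.0})
print(max(bias, key=bias.get))     # an active motor program biases perception
```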
Provide motor and perceptual resonance mechanisms.
Providing robot teammates with motor and perceptual reso-
nance mechanisms akin to those elicited in humans by the mir-
ror neuron system (cf. Elias & Fiore, 2008), may contribute to
a better understanding of human teammates’ intentions as they
move through the environment, which, in turn, could lead to
more coordinative joint actions (e.g., Chaminade & Cheng,
2009; Schütz-Bosbach & Prinz, 2007). Specifically, resonance
mechanisms resembling the mirroring system can be character-
ized as a T1 process (Bohl & van den Bos, 2012). Barakova
and Lourens (2009) implemented one example of such a mir-
ror neuron framework that provided simulated and embodied
robots with a mechanism for synchronizing movements and
entraining neuronal firing patterns. This facilitated turn taking
between two agents and other teaming related behaviors. The
bi-directionality afforded by motor and perceptual resonance
mechanisms allows robots to engage in social interactions
more similar to those occurring between humans and would likely
improve overall social competence (e.g., Chaminade & Cheng,
2009; Schütz-Bosbach & Prinz, 2007).
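Table 1 lists Hebbian learning networks associating visual and motor representations as a candidate formalism for this recommendation. A rough sketch: executing a movement while observing the matching visual motion strengthens the visual-motor association, so later observation alone "resonates" with the corresponding motor code. The learning rate and code labels are illustrative assumptions, not the mechanism of Barakova and Lourens (2009).

```python
# Sketch of a Hebbian visual-motor resonance mechanism. Co-activation of
# a visual code and a motor code strengthens their link; observation then
# retrieves the associated motor code. All labels are hypothetical.

W = {}        # (visual_code, motor_code) -> association strength
RATE = 0.1    # assumed Hebbian learning rate

def hebbian_update(visual_active, motor_active):
    """Strengthen links between co-active visual and motor codes."""
    for v in visual_active:
        for m in motor_active:
            W[(v, m)] = W.get((v, m), 0.0) + RATE

def resonance(visual_code):
    """Motor code most strongly associated with an observed movement."""
    candidates = {m: w for (v, m), w in W.items() if v == visual_code}
    return max(candidates, key=candidates.get) if candidates else None

# Self-observation during execution repeatedly pairs vision with action.
for _ in range(5):
    hebbian_update(["arm_raising_seen"], ["raise_arm"])
hebbian_update(["arm_raising_seen"], ["wave"])  # one spurious pairing

print(resonance("arm_raising_seen"))
```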
Abstract from modal experiences. Mechanisms such as
simulation can abstract over modal representa-
tions to instantiate T2 processes. Barsalou (2008) defines simulation
as the “reenactment of perceptual, motor, and introspective
states acquired during experience with the world, body, and
mind” (pp. 618-619). Along these lines, Breazeal et al. (2009)
developed an embodied social robot system comprised of in-
terconnected perceptual, motor, belief, and intention modules
from which the robot generates its own states and re-uses them
to simulate and infer the perspective and intentions of humans
during an interaction. In support of interactive capabilities,
other efforts have used Dynamic Bayesian Networks for un-
derstanding mental states (Pezzulo, 2012). In short, simulation
mechanisms, in this context, are similar to T2 processes and
would provide the robot with a mechanism for engaging in
mental state attributions of others.
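Mental-state inference of this kind can be sketched as a simple Bayesian update over a teammate's possible intentions given observed cues; a full treatment would use Dynamic Bayesian Networks as in Pezzulo (2012). The priors, cues, and likelihoods below are invented for illustration.

```python
# Sketch of T2-style mental-state attribution as Bayesian updating over a
# teammate's intentions. All probabilities and labels are invented.

PRIOR = {"fetch_tool": 0.5, "inspect_area": 0.5}
LIKELIHOOD = {  # P(observed cue | intention), assumed values
    ("gaze_at_toolbox", "fetch_tool"): 0.8,
    ("gaze_at_toolbox", "inspect_area"): 0.2,
    ("walking_perimeter", "fetch_tool"): 0.1,
    ("walking_perimeter", "inspect_area"): 0.7,
}

def update(belief, cue):
    """One Bayesian update of the intention belief from an observed cue."""
    posterior = {i: belief[i] * LIKELIHOOD[(cue, i)] for i in belief}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

belief = update(PRIOR, "gaze_at_toolbox")
print(max(belief, key=belief.get))
```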
Leverage simulation-based top-down perceptual bias-
ing. Simulation mechanisms also provide a means for a robot
to predict future states in service of engaging in coordinative
action with the physical and social environment (Hoffman &
Breazeal, 2004). Hoffman (2012) submits that “simulation-
based top-down perceptual biasing may specifically be the key
to more fluent coordination between humans and robots work-
ing together in a socially structured interaction” (p. 6). Percep-
tual priming as a simulation mechanism essentially stimulates
and triggers the motor system towards the selection of appro-
priate actions (Marsh, Richardson, & Schmidt, 2009; Schütz-
Bosbach & Prinz, 2007). This deliberate mechanism could
serve as the link between T1 and T2 processes as a function of
informing the more direct perception-action links and also
contributing to the learning and perception of affordances.
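Top-down biasing of this sort can be sketched as a predicted next action raising the prior of the percepts it would produce, so that ambiguous sensory evidence is resolved toward what the simulation expects. The scores, labels, and bias factors below are illustrative assumptions.

```python
# Sketch of simulation-based top-down perceptual biasing: simulating a
# predicted action boosts the percepts that action would generate. All
# labels and numbers are invented for illustration.

EXPECTED_PERCEPTS = {  # percepts each predicted action makes more likely
    "handoff": {"reaching_hand": 2.0},
    "walk_away": {"turning_torso": 2.0},
}

def classify(evidence, predicted_action=None):
    """Pick the percept with the highest (optionally biased) score."""
    bias = EXPECTED_PERCEPTS.get(predicted_action, {})
    scored = {label: s * bias.get(label, 1.0) for label, s in evidence.items()}
    return max(scored, key=scored.get)

ambiguous = {"reaching_hand": 0.48, "turning_torso": 0.52}
print(classify(ambiguous))                               # bottom-up only
print(classify(ambiguous, predicted_action="handoff"))   # biased by simulation
```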
CONCLUSIONS
Our goal with this paper was to lay the foundation for
modeling social-cognitive mechanisms in robots such that they
can be designed to function, and be perceived, as effective
teammates. Table 1 lists the modeling recommendations ad-
vanced within this paper with their relation to T1 and T2 process-
es, examples of associated computational formalisms support-
ing their instantiation, and representative references essential
for consideration in their implementation.
Table 1. Modeling recommendations with potential computational formalisms and representative references

- Leverage the ecological approach to robotics (T1). Formalisms: subsumption architectures; optical flow control laws. References: Brooks, 1999; Duchon et al., 1998.
- Utilize physical and social affordances (T1). Formalisms: visuo-spatial perspective taking, effort and affordance analysis; (effect, (entity, behavior)). References: Pandey & Alami, 2012; Sahin et al., 2007.
- Incorporate dynamical modeling and analysis of interaction dynamics (T1). Formalisms: dynamical systems modeling techniques; HKB equations. References: Beer, 1995; Marsh et al., 2009; Treur, 2012.
- Instantiate modal perceptual and motor representations (input for both T1 & T2). Formalisms: modality streams, nodes, action networks, afferent, efferent. References: Hoffman, 2012; Pezzulo et al., 2013.
- Couple perception, action, and cognition (link T1 & T2). Formalisms: convergence zones; action-perception activation networks. References: Brooks, 1999; Hoffman, 2012.
- Provide motor and perceptual resonance mechanisms (T1). Formalisms: Hebbian learning networks associating visual and motor representations. References: Chaminade & Cheng, 2009; Schütz-Bosbach & Prinz, 2007.
- Abstract from modal experiences (T2). Formalisms: generation and simulation mode; dynamic Bayesian modeling of mental states. References: Breazeal et al., 2009; Pezzulo, 2012.
- Leverage simulation-based top-down perceptual biasing (link T1 & T2). Formalisms: Markov-chain Bayesian predictor; intermodal Hebbian reinforcing. References: Hoffman, 2012; Schütz-Bosbach & Prinz, 2007.
Though we have synthesized our recommendations and
made interconnections from a number of disciplinary ap-
proaches, these have centered primarily around modeling the
perceptual, motor, and cognitive architecture of the robot. Re-
search and modeling efforts are warranted to take the next step
towards integrating and developing such a system based on
these recommendations. That being said, these recommenda-
tions serve to explicate recent advances in understanding so-
cial-cognitive mechanisms in humans and the beginnings of
leveraging such knowledge for the design of social robotic
systems. Admittedly, even with the instantiation and integration of
these mechanisms, more challenges will need to be
addressed to ensure the robot will perform as an effective team
member (cf. Klein et al., 2004). These recommendations, if
instantiated, would provide some very basic perceptual, motor,
and cognitive abilities, but future efforts should address
whether these would also support more complex forms of so-
cial interaction. We expect that these social-cognitive mecha-
nisms, if shown to support non-verbal interactions, would also
be foundational for linguistic interaction (cf. Pezzulo, 2012).
In sum, our goal has been to outline how an embodied so-
cial robot can begin to function autonomously as a teammate.
Adopting a dual processing approach to modeling social-
cognitive mechanisms in robots would not only provide a more
sophisticated and flexible perceptual, motor, and cognitive
architecture for robots, it would also allow for a more direct
understanding of, and automatic interaction with, the environ-
ment and human teammates. It could also provide the mecha-
nism for better understanding human behaviors and mental
states as well as allow for prediction and interpretation of nov-
el and complex social situations.
REFERENCES
Bargh, J. A. (1984). Automatic and conscious processing of social infor-
mation. In R. S. Wyer Jr., & T. K. Srull (Eds.), The Handbook of
Social Cognition (Vol. 3, pp. 1-43). Hillsdale, NJ: Erlbaum.
Barakova, E. I., & Lourens, T. (2009). Mirror neuron framework yields repre-
sentations for robot interaction. Neurocomputing, 72, 895-900.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology,
59, 617-645.
Bartneck, C., & Forlizzi, J. (2004). A design-centred framework for social
human-robot interaction. Proceedings of the Ro-Man 2004, Ku-
rashiki, 591-594.
Beer, R. D. (1995). A dynamical systems perspective on agent-environment
interaction. Artificial Intelligence, 72, 173-215.
Bockelman Morrow, P., & Fiore, S.M. (2012). Supporting human-robot teams
in social dynamicism: An overview of the metaphoric inference
framework. Proceedings of the 56th Annual Meeting of the HFES
(pp. 1718-1722). Boston, MA: HFES.
Bohl, V., & van den Bos, W. (2012). Toward an integrative account of social
cognition: Marrying theory of mind and interactionism to study
the interplay of type 1 and type 2 processes. Frontiers in Human
Neuroscience, 6(274), 1-15.
Breazeal, C. (2004). Social interactions in HRI: The robot view. IEEE Trans-
actions on Systems, Man, and Cybernetics, Part C: Applications and
Reviews, 34(2), 181-186.
Breazeal, C., Gray, J., & Berlin, M. (2009). An embodied cognition approach
to mindreading skills for socially intelligent robots. The Interna-
tional Journal of Robotics Research, 28, 656-680.
Brooks, R. A. (1999). Cambrian intelligence: The early history of the new
AI. Cambridge, MA: MIT Press.
Chaiken, S., & Trope, Y. (1999). Dual Process Theories in Social Psycholo-
gy. New York: Guilford.
Chaminade, T., & Cheng, G. (2009). Social cognitive neuroscience and hu-
manoid robotics. Journal of Physiology – Paris, 103, 286-295.
Chemero, A. (2003). An outline of a theory of affordances. Ecological Psy-
chology, 15(2), 181-195.
Chemero, A., & Turvey, M. T. (2007). Gibsonian affordances for roboticists.
Adaptive Behavior, 15(4), 473-480.
Dautenhahn, K., Ogden, B., & Quick, T. (2002). From embodied to socially
embedded agents: Implications for interaction-aware robots. Cog-
nitive Systems Research, 3, 397-428.
De Jaegher, H. (2009). Social understanding through direct perception? Yes,
by interacting. Consciousness and Cognition, 18(2), 535–542.
Duchon, A. P., Warren, W. H., & Kaelbling, L. P. (1998). Ecological robot-
ics. Adaptive Behavior, 6, 473-507.
Elias, J. & Fiore, S. M. (2008). From Psychology, to Neuroscience, to Robots:
An Interdisciplinary Approach to Bio-Inspired Robotics. Present-
ed at the 20th Annual Convention of the American Psychological
Society, May, Chicago, IL.
Fiore, S. M., Elias, J., Gallagher, S., & Jentsch, F. (2008). Cognition and
coordination: Applying cognitive science to understand macro-
cognition in human-agent teams. Proceedings of the 8th Annual
Symposium on Human Interaction with Complex Systems, Nor-
folk, Virginia.
Gallagher, S. (2007). Social cognition and social robots. Pragmatics & Cog-
nition, 15(3), 435-453.
Gallagher, S. (2008). Direct perception in the intersubjective context. Con-
sciousness and Cognition, 17, 535–543.
Gallagher, S., & Hutto, D. D. (2008). Understanding others through primary
interaction and narrative practice. In J. Zlatev, T. Racine, C. Sinha
& E. Itkonen (Eds.), The shared mind: Perspectives on intersub-
jectivity. Amsterdam: John Benjamins.
Gangopadhyay, N., & Schilbach, L. (2011). Seeing minds: A neurophilosophical investigation of the role of perception-action coupling in social perception. Social Neuroscience, 1-14.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston:
Houghton Mifflin.
Hoffman, G. (2012). Embodied cognition for autonomous interactive robots.
Topics in Cognitive Science, 4(4), 759-772.
Hoffman, G., & Breazeal, C. (2004). Collaboration in human-robot teams. In Proceedings of the AIAA 1st Intelligent Systems Technical Conference, Chicago, IL (pp. 1-18). Reston, VA: AIAA.
Horton, T. E., Chakraborty, A., & St. Amant, R. (2012). Affordances for robots: A brief survey. Avant, 3(2), 70-84.
Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91-95.
Knoblich, G., & Sebanz, N. (2006). The social nature of perception and ac-
tion. Current Directions in Psychological Science, 15(3), 99-104.
Kono, T. (2009). Social affordances and the possibility of ecological linguis-
tics. Integrative Psychological Behavioral Science, 43, 356-373.
Lackey, S. J., Barber, D. J., Reinerman-Jones, L., Badler, N., & Hudson, I. (2011). Defining next-generation multi-modal communication in human-robot interaction. Proceedings of the 55th Annual Meeting of the HFES (pp. 461-464). Santa Monica, CA: HFES.
Marsh, K. L., Richardson, M. J., & Schmidt, R. C. (2009). Social connection
through joint action and interpersonal coordination. Topics in
Cognitive Science, 1, 320-339.
Noë, A. (2004). Action in perception. Cambridge, MA: MIT Press.
Pandey, A. K., & Alami, R. (2012, July). Visuo-spatial ability, effort and
affordance analyses: Towards building blocks for robot’s complex
socio-cognitive behaviors. In Workshops at the Twenty-Sixth AAAI
Conference on Artificial Intelligence.
Pezzulo, G. (2012). The "Interaction Engine": A common pragmatic competence across linguistic and nonlinguistic interactions. IEEE Transactions on Autonomous Mental Development, 4(2), 105-123.
Pezzulo, G., Barsalou, L. W., Cangelosi, A., Fischer, M. H., McRae, K., & Spivey, M. (2013). Computational grounded cognition: A new alliance between grounded cognition and computational modeling. Frontiers in Psychology, 3(612), 1-11.
Pfeifer, R., Lungarella, M., & Iida, F. (2007). Self-organization, embodiment,
and biologically inspired robotics. Science, 318, 1088-1093.
Phillips, E., Ososky, S., & Jentsch, F. (2011). From tools to teammates: Toward the development of appropriate mental models for intelligent robots. Proceedings of the 55th Annual Meeting of the HFES (pp. 1491-1495). Santa Monica, CA: HFES.
Şahin, E., Çakmak, M., Doğar, M. R., Uğur, E., & Üçoluk, G. (2007). To afford or not to afford: A new formalization of affordances toward affordance-based robot control. Adaptive Behavior, 15(4), 447-472.
Satpute, A. B., & Lieberman, M. D. (2006). Integrating automatic and con-
trolled processes into neurocognitive models of social cognition.
Brain Research, 1079, 86-97.
Schütz-Bosbach, S., & Prinz, W. (2007). Perceptual resonance: Action-
induced modulation of perception. Trends in Cognitive Sciences,
11(8), 349-355.
Streater, J. P., Bockelman Morrow, P., & Fiore, S. M. (2012). Making things that understand people: The beginnings of an interdisciplinary approach for engineering computational social intelligence. Presented at the 56th Annual Meeting of the Human Factors and Ergonomics Society (October 22-26), Boston, MA.
Thompson, E. (2005). Sensorimotor subjectivity and the enactive approach to
experience. Phenomenology and the Cognitive Sciences, 4, 407-
427.
Treur, J. (2012). An integrative dynamical systems perspective on emotions. Biologically Inspired Cognitive Architectures. http://dx.doi.org/10.1016/j.bica.2012.07.005
Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D'Errico, F., & Schröder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69-87.