Presence of Life-Like Robot Expressions Influences
Children’s Enjoyment of Human-Robot Interactions in
the Field
David Cameron1, Samuel Fernando2, Emily Collins1, Abigail Millings1, Roger Moore2, Amanda Sharkey2,
Vanessa Evers3, and Tony Prescott1
1 Dept. of Psychology, University of Sheffield, S10 2TN, UK.
Email: {d.s.cameron, e.c.collins, a.millings, t.j.prescott}@sheffield.ac.uk
2 Dept. of Computer Science, University of Sheffield, S10 2TN, UK.
Email: {s.fernando, r.k.moore, a.sharkey}@sheffield.ac.uk
3 Dept. of Electrical Engineering, Mathematics and Computer Science,
University of Twente, NL. Email: v.evers@utwente.nl
Abstract. Emotions, and emotional expression, have a broad
influence on the interactions we have with others and are thus a
key factor to consider in developing social robots. As part of a
collaborative EU project, this study examined the impact of life-
like affective facial expressions, in the humanoid robot Zeno, on
children’s behavior and attitudes towards the robot. Results
indicate that robot expressions have mixed effects depending on
the gender of the participant. Male participants showed a positive
affective response, and indicated greater liking towards the robot,
when it made positive and negative affective facial expressions
during an interactive game, when compared to the same robot
with a neutral expression. Female participants showed no marked
difference across two conditions. This is the first study to
demonstrate an effect of life-like emotional expression on
children’s behavior in the field. We discuss the broader
implications of these findings in terms of gender differences in
HRI, noting the importance of the gender appearance of the robot
(in this case, male) and in relation to the overall strategy of the
project to advance the understanding of how interactions with
expressive robots could lead to task-appropriate symbiotic
relationships.
1 INTRODUCTION
A key challenge in human-robot interaction (HRI) is the
development of robots that can successfully engage with people.
Effective social engagement requires robots to present engaging
personalities [1] and to dynamically respond to and shape their
interactions to meet human user needs [2].
The current project seeks to develop a biologically grounded
[3] robotic system capable of meeting these requirements in the
form of a socially-engaging Synthetic Tutoring Assistant (STA).
In developing the STA, we aim to further the understanding of
human-robot symbiotic interaction where symbiosis is defined as
the capacity of the robot, and the person, to mutually influence
each other in a positive way. Symbiosis, in a social context,
requires that the robot can interpret, and be responsive to, the
behavior and state of the person, and adapt its own actions
appropriately. By applying methods from social psychology we
aim to uncover key factors in robot personality, behavior, and
appearance that can promote symbiosis. We hope that this work
will also contribute to a broader theory of human-robot bonding
that we are developing, drawing on comparisons with our
psychological understanding of human-human, human-animal and
human-object bonds [4].
A key factor in social interaction is the experience of emotions
[5]. Emotions provide important information and context to social
events and dynamically influence how interactions unfold over
time [6]. Emotions can promote cooperative and collaborative
behavior and can exist as shared experiences, bringing individuals
closer together [7]. Communication of emotion can be thought of
as a request for others to acknowledge and respond to our
concerns and to shape their behaviors to align with our motives
[8]. Thus emotional expression can be important to dyadic
interactions, such as that between a teacher and student, where
there is a need to align goals.
Research with a range of robot platforms has demonstrated the
willingness of humans to interpret robot expressive behavior –
gesture [9], posture [10], and facial expression [1] – as affective
communication. The extent to which robot expression will
promote symbiosis will depend, however, on how well the use of
expression is tuned to the ongoing interaction. Inappropriate use
of affective expression could disrupt communication and be
detrimental to symbiosis. Good timing and clear signals are
obviously important.
Facial expression is a fundamental component of human
emotional communication [11]. Emotion expressed through the
face is also considered to be especially important as a means for
communicating evaluations and appraisals [12]. Given the
importance of facial expressions to the communication of human
affect, they should also have significant potential as a
communication means for robots [13]. This intuition has led to
the development of many robot platforms with the capacity to
produce human-like facial expression, ranging from the more
iconic/cartoon-like [e.g., 14, 15] to the more natural/realistic [e.g.,
16, 17, 18].
Given the need to communicate clearly, it has been argued that,
for facial expression, iconic/cartoon-like expressive robots may be
more appropriate for some HRI applications, for instance, where
the goal is to communicate/engage with children [16, 15].
Nevertheless, as the technology for constructing robot faces has
become more sophisticated, robots are emerging with richly-
expressive life-like faces [16, 17, 18], with potential for use in a
range of real-world applications including use with children. The
current study arose out of a desire to evaluate one side of this
symbiotic interaction – exploring the value of life-like facial
expression in synthetic tutoring assistants for children. Whilst it is
clear that people can distinguish robot expressions almost as well
as human ones [16, 18], there is little direct evidence to show a
positive benefit of life-like expression on social interaction or
bonding. Although children playing with an expressive robot are
more expressive than those playing alone [19], this finding could
be a result of the robot’s social presence [20] and not simply due
to its use of expression. A useful step toward improving our
understanding would be the controlled use of emotional
expression in a setting in which other factors, such as the presence
of the robot and its physical and behavioral design, are strictly
controlled.
In the current study the primary manipulation was to turn on or
off the presence of appropriate positive and negative facial
expressions during a game-playing interaction, with other features
such as the nature and duration of the game, and the robot’s
bodily and verbal expression held constant. As our platform we
employed a Hanson Robokind Zeno R50 [21] which has a
realistic silicone rubber (“flubber”) face that can be reconfigured,
by multiple concealed motors, to display a range of reasonably
life-like facial expressions in real-time (Figure 1).
Figure 1. The Hanson Robokind Zeno R50 Robot with example
facial expressions
By recording participants (with parental consent), and through
questionnaires, we obtained measures of proximity, human
emotional facial expression, and reported affect. We hypothesized
that children would respond to the presence of facial expression
by (a) reducing their distance from the robot, (b) showing greater
positive facial expression themselves during the interaction, and
(c) reporting greater enjoyment of the interaction compared to
peers who interacted with the same robot but in the absence of
facial expression. Previous studies have shown some influence of
demographics such as age and gender on HRI [22, 23, 24]. In our
study, a gender difference could also arise due to the visual
appearance of the Zeno robot as similar to a male child, which
could prompt different responses in male and female children. We
therefore considered these other factors as potential moderators of
children’s responses to the presence or absence of robot emotional
expression.
2 METHOD
2.1 Design
Because repeated exposure to the robot could prejudice
participants’ affective responses, we employed a between-subjects
design, such that participants were allocated to either the
experimental condition – interaction with a facially expressive
robot, or to the control condition of a non-facially-expressive
robot. Allocation to condition was not random, but determined by
logistics due to the real-world setting of the research. The study
took place as part of a two-day special exhibit demonstrating
modern robotics at a museum in the UK. Robot expressiveness
was manipulated between the two consecutive days, such that
visitors who participated in the study on the first day were
allocated to the expressive condition, and visitors who
participated in the study on the second day were allocated to the
non-expressive condition.
2.2 Participants
Children visiting the exhibit were invited to participate in the
study by playing a game with Zeno. Sixty children took part in the
study in total (37 male and 23 female; M age = 7.57, SD = 2.80).
Data were trimmed by age to ensure sufficient cognitive capacity
(those aged < 5 were excluded4) and interest in the game (those
aged >11 were excluded), leaving 46 children (28 male and 18
female; M age = 8.04, SD = 1.93).
2.3 Measures
Our primary dependent variables were interpersonal responses to
Zeno measured through two objective measures: affective
expressions and interpersonal distance. Additional measures
comprised a self-report questionnaire, completed by
participating children, with help from their parent/carer if
required, and an observer’s questionnaire, completed by
parents/carers.
2.3.1 Objective Measures
Interpersonal distance between the child and the robot over the
duration of the game was recorded, using a Microsoft Kinect
sensor, and the mean interpersonal distance during the game was
calculated. Participant expressions were recorded throughout the
game and automatically coded for discrete facial expressions:
Neutral, Happy, Sad, Angry, Surprised, Scared, and Disgusted,
using Noldus FaceReader version 5. The mean intensity of each of the
seven facial expressions across the duration of the game was calculated.
Participants’ game performances (final scores) were also recorded.
FaceReader offers automated coding of expressions at an accuracy
comparable to trained raters of expression [25].
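As a concrete illustration, the sketch below (Python; not the authors' analysis code) shows how the per-frame Kinect and FaceReader outputs described above could be reduced to per-participant means. The frame-record layout and field names are assumptions for illustration.

```python
# Minimal sketch under an assumed data layout: each frame record holds the
# child's torso depth from the Kinect (metres) and the seven FaceReader
# expression intensities (0-1).
from statistics import mean

EXPRESSIONS = ["neutral", "happy", "sad", "angry",
               "surprised", "scared", "disgusted"]

def mean_interpersonal_distance(frames):
    """Mean child-robot distance over the game, in metres."""
    return mean(f["torso_z"] for f in frames)

def mean_expression_intensities(frames):
    """Mean intensity of each coded expression over the game."""
    return {e: mean(f[e] for f in frames) for e in EXPRESSIONS}
```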
2.3.2 Questionnaires
Participants completed a brief questionnaire on their enjoyment of
the game and their beliefs about the extent to which they thought
that the robot liked them. Enjoyment of playing Simon Says with
Zeno was recorded using a single-item, four-point measure,
ranging from ‘I definitely did not enjoy it’ to ‘I really enjoyed it’.
Participants’ perceptions of the extent to which Zeno liked them
were recorded using a single item on a thermometer scale, ranging from ‘I do not think
he liked me very much’ to ‘I think he liked me a lot’. They were
also asked if they would like to play the game again. Parents and
4 Additional reasons for excluding children below the age of 5 were
questionable levels of understanding when completing the self-report
questionnaires, and low reliability in FaceReader’s detection of
expressions in young children.
carers completed a brief questionnaire on their perceptions of
their child’s enjoyment and engagement with the game on single-
item thermometer scales, ranging from ‘Did not enjoy the game at
all’ to ‘Enjoyed the game very much’ and ‘Not at all engaged’ to
‘Completely engaged’.
2.4 Procedure
The experiment took place in a publicly accessible lab and
prospective participants could view games already underway.
Brief information concerning the experiment was provided to
parents or carers and informed consent was obtained from parents
or carers prior to participation.
During the game, children were free to position themselves
relative to Zeno within a ‘play zone’ boundary marked on the
floor by a mat (to delineate the area in which the system would
correctly detect movements) and could leave the game at their
choosing. The designated play zone was marked by three foam
0.62m² mats. The closest edge of the play zone was 1.80m from
the robot and the play zone extended to 3.66m away. These limits
approximate the ‘social distance’ classification [26]. This range
was chosen for two reasons: (i) participants would likely expect the
game to occur within social rather than public or personal
distance; (ii) it enabled reliable recording of movement by the
Kinect sensor. The mean overall distance for the participants from
the robot fell well within social-distance boundaries (2.48m).
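A minimal sketch of the corresponding zone check, under the assumption (not stated in the paper) that the Kinect reports the child's torso depth in metres:

```python
# Play-zone boundaries from the text: 1.80 m to 3.66 m from the robot.
PLAY_ZONE_NEAR, PLAY_ZONE_FAR = 1.80, 3.66

def child_in_play_zone(torso_z):
    """True if the tracked child stands within the marked play zone."""
    return PLAY_ZONE_NEAR <= torso_z <= PLAY_ZONE_FAR
```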
At the end of the game, participants completed the self-report
questionnaire, while parents completed the observer’s
questionnaire. Participant-experimenter interaction consistency
was maintained over the two days by using the same experimenter
on all occasions for all tasks.
Interaction with the robot took the form of the widely known
Simon Says game (Figure 2). This game was chosen for several
reasons: children’s familiarity with the game, its uncluttered
structure allows autonomous instruction and feedback delivery by
Zeno, and its record of successful use in a prior field study [27].
The experiment began with autonomous instructions delivered
by Zeno as soon as children stepped into the designated play zone
in front of the Kinect sensor. Zeno introduced the game by saying,
“Hello. Are you ready to play with me? Let's play Simon Says. If I
say Simon Says you must do the action. Otherwise you must keep
still.” The robot would then play ten rounds of the game or play
until the child chose to leave the designated play zone. In each
round, Zeno gave one of three simple action instructions: ‘Wave
your hands’, ‘Put your hands up’ or ‘Jump up and down’. Each
instruction was given either with the prefix of 'Simon says’ or no
prefix.
Figure 2. A child playing Simon Says with Zeno
The OpenNI/Kinect skeleton tracking system was used to
determine if the child had performed the correct action in the three
seconds following an instruction. For the ‘Wave your hands’ action,
our system monitored the speed of the hands moving. If sufficient
movement of the arms was detected following the instruction then
the movement was marked as a wave. For the ‘Jump up and
down’ action the vertical velocity of the head was monitored,
again with a threshold to determine if a jump had taken place.
Finally for the ‘Put your hands up’ action, our system monitored
the positions of the hands relative to the waist. If the hands were
found to be above the waist for more than half of the three
seconds following the instruction then the action was judged to
have been executed. The thresholds for the action detection were
determined by trial and error during prior pilot testing in a
university laboratory. The resulting methods of action detection
were found to be over 98% accurate in our study. In the rare cases
where the child did the correct action and the system judged
incorrectly then the experimenters would step in and say “Sorry,
the robot made a mistake there, you got it right”.
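The three detectors reduce to threshold tests over the three-second window. The sketch below is a reconstruction of the described logic, not the deployed OpenNI code; joint field names and threshold values are placeholder assumptions (the paper states only that thresholds were tuned during pilot testing).

```python
import math

WAVE_SPEED = 0.5   # m/s; placeholder threshold on hand speed
JUMP_SPEED = 0.4   # m/s; placeholder threshold on vertical head velocity

def detected_wave(frames, dt):
    """Sufficient hand movement anywhere in the window counts as a wave."""
    return any(math.dist(a["hand"], b["hand"]) / dt > WAVE_SPEED
               for a, b in zip(frames, frames[1:]))

def detected_jump(frames, dt):
    """Upward head velocity above threshold counts as a jump."""
    return any((b["head_y"] - a["head_y"]) / dt > JUMP_SPEED
               for a, b in zip(frames, frames[1:]))

def detected_hands_up(frames):
    """Hands above the waist for more than half of the window."""
    above = [f["hand_y"] > f["waist_y"] for f in frames]
    return sum(above) > len(above) / 2
```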
If children followed the action instruction after hearing ‘Simon
says’ the robot would say, “Well done, you got that right”. If the
child remained still when the prefix was not given, Zeno would
congratulate them on their correct response with “Well done, I did
not say Simon Says and you kept still”. Conversely, if the child
did not complete the requested movement when the prefix was
given Zeno would say, “Oh dear, I said Simon Says, you should
have waved your hands”. If they completed the requested
movement in the absence of the prefix, Zeno would inform them
of their mistake with, “Oh dear, I did not say Simon Says, you
should have kept still”. Zeno gave children feedback of a running
total of their score at the end of each round (the number of correct
turns completed).
If the child left the play zone before ten rounds were played,
the robot would say, “Are you going? You can play up to ten
rounds. Stay on the mat to keep playing”. The system would then
wait three seconds before announcing, “Goodbye. Your final
score was (score)”. This short buffer was to prevent the game
ending abruptly if the child accidentally left the play zone for a
few seconds.
At the end of the ten rounds, the robot would say, “All right,
we had ten goes. I had fun playing with you, but it is time for me
to play with someone else now. Goodbye.”
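Putting the protocol together, the round structure and feedback rules above could be organized as in the following schematic loop. This is a reconstruction under assumptions, not the project's released system: speak(), child_in_play_zone(), and action_performed() stand in for the robot's speech output and the Kinect pipeline.

```python
import random
import time

# Instruction -> past-tense phrasing used in the corrective feedback.
ACTIONS = {
    "wave your hands": "waved your hands",
    "put your hands up": "put your hands up",
    "jump up and down": "jumped up and down",
}

def play_game(speak, child_in_play_zone, action_performed):
    speak("Hello. Are you ready to play with me? Let's play Simon Says. "
          "If I say Simon Says you must do the action. "
          "Otherwise you must keep still.")
    score = 0
    for _ in range(10):
        if not child_in_play_zone():
            speak("Are you going? You can play up to ten rounds. "
                  "Stay on the mat to keep playing.")
            time.sleep(3)  # buffer so briefly leaving the mat does not end the game
            if not child_in_play_zone():
                break
        action = random.choice(list(ACTIONS))
        prefixed = random.choice([True, False])
        speak(("Simon says, " if prefixed else "") + action)
        acted = action_performed(action)  # judged over the following 3 s
        if acted == prefixed:             # correct: act only when prefixed
            score += 1
            speak("Well done, you got that right" if prefixed else
                  "Well done, I did not say Simon Says and you kept still")
        else:
            speak(f"Oh dear, I said Simon Says, you should have {ACTIONS[action]}"
                  if prefixed else
                  "Oh dear, I did not say Simon Says, you should have kept still")
        speak(f"Your score is {score}")   # running total after each round
    speak(f"Goodbye. Your final score was {score}")
```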
The sole experimental manipulation coincided with Zeno’s
spoken feedback to the children after each turn. In the expressive
robot condition, Zeno responded with appropriate ‘happiness’ or
‘sadness’ expressions, following children’s correct or incorrect
responses. These expressions were prebuilt animations, provided
with the Zeno robot, named ‘victory’ and ‘disappointment’
respectively. These animations were edited to remove gestures so
that only the facial expressions were present. In contrast, in the non-
expressive robot condition, Zeno’s expressions remained in a
neutral state regardless of child performance. Previous work
indicates that children can recognize these facial expression
representations by the Zeno robot with a good degree of accuracy
[28].
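Expressed as code, the manipulation reduces to a single conditional around the feedback step. In this sketch, play_animation() is a stand-in for the Robokind animation player, not a documented API:

```python
def give_affective_feedback(expressive_condition, correct, play_animation):
    """Expressive condition: play the prebuilt facial animation matching
    the child's performance; control condition: keep a neutral face."""
    if expressive_condition:
        play_animation("victory" if correct else "disappointment")
```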
3 RESULTS
A preliminary check was run to ensure even distribution of
participants to expressive and non-expressive conditions. There
were 9 female and 16 male participants in the expressive
condition and 9 female and 12 male participants in the non-
expressive condition. A chi-square test was run before analysis to
check for even gender distribution across conditions; it indicated no
significant difference (χ²(1, 48) = 2.25, p = .635).
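For illustration, such a distribution check can be run with SciPy on the 2 × 2 gender-by-condition table of counts reported above. This is a sketch of the analysis, not the authors' script, and the computed statistic may differ from the reported value (e.g., depending on continuity correction).

```python
from scipy.stats import chi2_contingency

#                 expressive  non-expressive
table = [[9, 9],              # female
         [16, 12]]            # male

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```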
3.1 Objective Measures
Overall, we did not observe any significant main effects of Zeno’s
expressiveness on objective measures of interpersonal distance or
facial expressions between conditions. However, there were
significant interaction effects, when gender was included as a
variable.
There was a significant interaction of experimental condition
and child’s gender on children’s average expressions of happiness,
F(1,39) = 4.75, p = .038. While male participants showed greater
average happiness in the expressive robot condition in comparison
to those in the non-expressive condition (19.1%, SE 3.3% versus
5.3%, SE 4.1%), female participants did not differ between
conditions (7.4%, SE 4.3% versus 12.6%, SE 4.6%). Simple
effects tests (with Bonferroni correction) indicated that the
observed difference between conditions for male participants was
significant (p = .012).
A contrasting interaction was found for average expressions of
surprise, F(1,39) = 5.16, p = .029. Male participants in the
expressive robot condition showed less surprise than those in the
non-expressive condition (6.1%, SE 3.2% versus 19.6%, SE
4.0%), whereas female participant expressions for surprise did not
differ between conditions (11.9%, SE 4.2% versus 7.1%, SE
4.5%). There were no further significant interactions for any of
the remaining expressions.
There was a near-significant interaction of experimental
condition and child’s gender for interpersonal distance, F(1,41) =
2.81, p = .10 (Figure 3). Male participants interacting with the
expressive robot tended to stand closer (M = 2.28m, SE .10m)
than did those interacting with the non-expressive robot (M =
2.57m, SE .13m), whereas female participants interacting with the
expressive robot tended to stand further away (M = 2.59m, SE
.14m) than those interacting with the non-expressive robot (M =
2.45m, SE .14m). A follow-up simple effects test indicated that the
difference between conditions for male participants was also near
significant (p = .086).
Figure 3. Mean interpersonal distance during game
Controlling for participant age or success/failure in the game
made no material difference to any of the objective measures
findings.
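The condition × gender analyses reported above correspond to two-way factorial ANOVAs. A sketch with statsmodels follows, assuming a data frame with one row per child and columns for the dependent variable, condition, and gender (the column names are assumptions):

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

def condition_by_gender_anova(df, dv="happiness"):
    """Two-way factorial ANOVA of condition x gender on one measure."""
    model = ols(f"{dv} ~ C(condition) * C(gender)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
```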
3.2 Questionnaires
No significant main effects of condition were seen for self-
reported measures or observer reported measures. However, there
were significant gender effects, and significant gender ×
condition effects. Gender had a main effect on children’s beliefs
about the extent to which the robot liked them, F(1,38) = 5.53, p =
.03. Female participants reported significantly lower ratings (M
= 3.08, SE .34) than did male participants (M = 4.17, SE .31).
We observed a significant interaction of gender and
experimental condition for participants’ enjoyment in interacting
with Zeno, F(1,38) = 4.64, p = .04. Male participants interacting
with the expressive Zeno reported greater enjoyment of the
interaction than those who interacted with the non-expressive
Zeno (M = 3.40, SE .18 versus M = 3.00, SE .23), whereas female
participants interacting with the expressive Zeno reported less
enjoyment than those interacting with the non-expressive Zeno
(M = 3.22, SE .23 versus M = 3.78, SE .23). Simple effects tests
did not indicate that the differences found between conditions were
significant for either male participants (p > .10) or female
participants (p > .10).
Results from the observer reports generated by the participants’
parents or carers showed the same trends as those from the self-
report results but did not show significant main or interaction
effects. Controlling for participant age or success/failure in the
game made no material difference to any of the questionnaire data
findings.
4 DISCUSSION
The results provide new evidence that life-like facial expressions
in humanoid robots can impact on children’s experience and
enjoyment of HRI. Moreover, our results are consistent across
multiple modalities of measurement. The presence of expressions
could be seen to cause differences in approach behaviors, positive
expression, and self-reports of enjoyment. However, the findings
are not universal as boys showed more favorable behaviors and
views towards the expressive robot compared to the non-
expressive robot, whereas girls tended to show the opposite.
Sex differences towards facially expressive robots during HRI
could have a profound impact on the design and development of
future robots; it is important to replicate these experimental
conditions and explore these results in more depth in order to
identify why these results arise. At this stage, the mechanisms
underpinning these differences remain to be determined. We
outline two potential processes that could explain our results.
The current results could be due to children’s same-sex
preferences for friends and playmates typically exhibited in the
age range tested (five to ten years) [29]. Zeno is nominally a ‘boy’
robot, and its expressions may emphasize cues on its face that
encourage user perceptions of it as a boy. As a result, children
may be acting in accordance with existing preferences for play
partners [30]. If this is the case, it would be anticipated that
replication of the current study with a ‘girl’ robot counterpart
would produce results contrasting with the current findings.
Alternatively, results could be due to the robot’s expressions
emphasizing the existing social situation experienced by the
children. The current study took place in a publicly accessible
space, with participants in the company of museum visitors, other
volunteers, and the children’s parents or carers. Results from the
current study could represent children’s behavior towards the
robot based on existing gender-driven behavioral attitudes. Girls
may have felt more uncomfortable than boys when in front of
their parents whilst engaging in explorative play [20] with a
strange person (in the form of their perceived proximity to the
experimenter) and an unfamiliar object (the robot). Social cues
from an expressive robot, absent in a neutral robot, may reinforce
these differences through heightening the social nature of the
experiment.
Behavioral gender differences in children engaging in public or
explorative play are well established, as is the link between these
gender differences and parents’/carers’ differential socialization of
their children according to the child’s sex [31, 32]. To
better explore the gender difference observed in our study we
must take into consideration existing observed behavioral patterns
in children engaging in explorative play around their parents.
Replication in a familiar environment, away from an audience
that includes children’s parents, may therefore affect the apparent
sex differences observed in the current HRI study.
The current study is a small-sample field experiment. As is the
nature of field studies, maintaining exact control over experimental
conditions is difficult. Along with
possible confounds from the public testing space, the primary
experimenter knew the condition each child was assigned to;
despite best efforts in maintaining impartiality, the current study
design cannot rule out potential unconscious experimenter
influence on children’s behaviors. In studies concerning emotion
and expression, potential contagion effects of expression and
emotion [33] could impact on participants’ expressions and
reported emotions. The current results therefore offer a strong
indication of the areas to be further explored under stricter
experimental conditions.
We aim to repeat the current study in a more controlled
experimental environment. Children will complete the same
Simon Says game in the familiar environment of their school, this
time without an audience. Rather than allocation by day to
condition, the study protocol will be modified to randomly
allocate children to conditions, and the study will be conducted by
an experimenter naïve to conditions. Testing at local schools
offers better controls over participant sample demographics as
children can be recruited based on age and having similar
educational and social backgrounds. The environment of this
study also removes any direct influence by the presence of
parents/carers. Thus, a repeat of the current study under stricter
conditions also offers opportunity to further test the proposed
hypotheses for the observed sex differences in enjoyment in
interacting with a facially expressive robot.
We have previously proposed that human-robot bonds could be
analyzed in terms of their similarities to different types of existing
bonds with other humans, animals, and objects [4]. Our
relationships with robots that are lacking in human-like faces may
have interesting similarities to human-animal bonds which can be
simpler than those with other people—expectations are clearer,
demands are lower, and loyalty is less prone to change. Robots
with more human-like faces and behavior, on the other hand, may
prompt responses from users that include more of the social
complexities of human-human interaction. Thus, aspects of
appearance that indicate gender can become more important, and
subtleties of facial and vocal expression may be subjected to
greater scrutiny and interpretation. Overall, as we progress
towards more realistic human-like robots we should bear in mind
that whilst the potential is there for a richer expressive
vocabulary, the bar may also be higher for getting the
communication right.
5 CONCLUSION
This paper offers further steps towards developing a theoretical
understanding of symbiotic interactions between humans and
robots. The production of emulated emotional communication
through facial expression by robots is identified as a central factor
in shaping human attitudes and behaviors during HRI. Results
from both self-report and objective measures of behavior point
towards possible sex differences in responses to facially
expressive robots; follow-up work to examine these is identified.
These findings highlight important considerations to be made in
the future development of a socially engaging robot.
6 ACKNOWLEDGMENTS
This work is supported by the European Union Seventh
Framework Programme (FP7-ICT-2013-10) under grant
agreement no. 611971. We wish to acknowledge the contribution
of all project partners to the ideas investigated in this study.
7 REFERENCES
[1] Breazeal, C., & Scassellati, B. How to build robots that make friends
and influence people. In Intelligent Robots and Systems, 1999.
IROS'99. Proceedings. 1999 IEEE/RSJ International Conference on
(Vol. 2, pp. 858-863). IEEE.
[2] Pitsch, K., Kuzuoka, H., Suzuki, Y., Sussenbach, L., Luff, P., &
Heath, C. “The first five seconds”: Contingent stepwise entry into an
interaction as a means to secure sustained engagement in HRI. In
Robot and Human Interactive Communication, 2009. RO-MAN
2009. The 18th IEEE International Symposium on (Toyama, Japan,
Sept 27, 2009) IEEE 985-991 DOI:10.1109/ROMAN.2009.5326167
[3] Verschure, P. F. Distributed adaptive control: a theory of the mind,
brain, body nexus. Biologically Inspired Cognitive Architectures, 1,
55-72, (2012). DOI:10.1016/j.bica.2012.04.005
[4] Collins, E. C., Millings, A., Prescott, T. J. 2013. Attachment in
assistive technology: A new conceptualisation. In Assistive
Technology: From Research to Practice, Encarnação, P.,
Azevedo,L., Gelderblom, G. J., Newell, A., & Mathiassen N. IOS
Press, 823-828. DOI:10.3233/978-1-61499-304-9-823
[5] Van Kleef, G. A. How emotions regulate social life the emotions as
social information (EASI) model. Current Directions in
Psychological Science, 18, 184-188, (2009). DOI:10.1111/j.1467-
8721.2009.01633.x
[6] Hareli, S., & Rafaeli, A. Emotion cycles: On the social influence of
emotion in organizations. Research in Organizational Behavior, 28,
35-59, (2008). DOI:10.1016/j.riob.2008.04.007
[7] Kelly, J. R., & Barsade, S. G. Mood and emotions in small groups
and work teams. Organizational Behavior and Human Decision
Processes, 86, 99-130, (2001). DOI:10.1006/obhd.2001.2974
[8] Parkinson, B. Do facial movements express emotions or
communicate motives? Personality and Social Psychology Review,
9, 278-311, (2005). DOI:10.1207/s15327957pspr0904_1
[9] Tielman, M., Neerincx, M., Meyer, J., & Looije, R. Adaptive
emotional expression in robot-child interaction. In Proceedings of
the 2014 ACM/IEEE International Conference on Human-robot
Interaction, (Bielefeld, Germany, Mar. 03 – 06, 2014) ACM, New
York, NY, 407-414. DOI:10.1145/2559636.2559663.
[10] Beck, A., Cañamero, L., Damiano, L., Sommavilla, G., Tesser, F., &
Cosi, P. Children interpretation of emotional body language
displayed by a robot. Social Robotics, 62–70. (2011) Springer,
Berlin Heidelberg.
[11] Buck, R. W., Savin, V. J., Miller, R. E., & Caul, W. F.
Communication of affect through facial expressions in humans.
Journal of Personality and Social Psychology, 23, 362-371, (1972).
DOI:10.1037/h0033171
[12] Parkinson, B. Emotions are social. British Journal of Psychology,
87, 663-683, (1996). DOI:10.1111/j.2044-8295.1996.tb02615.x
[13] Nitsch, V., & Popp, M. Emotions in robot psychology. Biological
cybernetics, 1-9, (2014). DOI: 10.1007/s00422-014-0594-6
[14] Breazeal, C. Emotion and sociable humanoid robots. International
Journal of Human-Computer Studies, 59, 119-155, (2003).
DOI:10.1016/S1071-5819(03)00018-1
[15] Espinoza, R. R., Nalin, M., Wood, R., Baxter, P., Looije, R.,
Demiris, Y., ... & Pozzi, C. Child-robot interaction in the wild:
advice to the aspiring experimenter. In Proceedings of the 13th
international conference on multimodal interfaces (Alicante, Spain,
Nov. 14 – 18, 2011) ACM, New York, NY, 335-342.
DOI:10.1145/2070481.2070545
[16] Becker-Asano, C., & Ishiguro, H. Evaluating facial displays of
emotion for the android robot Geminoid F. In Affective
Computational Intelligence (WACI), 2011 IEEE Workshop on
(Paris, France, April, 11-15, 2011) IEEE 1-8
DOI:10.1109/WACI.2011.5953147
[17] Fagot, B. I. The influence of sex of child on parental reactions to
toddler children. Child Development, 2, 459-465, (1978).
jstor.org/stable/1128711
[18] Mazzei, D., Lazzeri, N., Hanson, D., & De Rossi, D. HEFES: An
Hybrid Engine for Facial Expressions Synthesis to control human-
like androids and avatars. In Biomedical Robotics and
Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS
International Conference on (Rome, Italy, Jun. 24 -27, 2012) IEEE
195-200 DOI:10.1109/BioRob.2012.6290687
[19] Shahid, S., Krahmer, E., & Swerts, M. Child–robot interaction
across cultures: How does playing a game with a social robot
compare to playing a game alone or with a friend? Computers in
Human Behavior, 40, 86-100, (2014).
DOI:10.1016/j.chb.2014.07.043.
[20] Kraut, R. E., & Johnston, R. E. Social and emotional messages of
smiling: An ethological approach. Journal of Personality and Social
Psychology, 37, 1539-1553, (1979). DOI:10.1037/0022-
3514.37.9.1539
[21] Hanson, D., Baurmann, S., Riccio, T., Margolin, R., Dockins, T.,
Tavares, M., & Carpenter, K. Zeno: A cognitive character. AI
Magazine, 9-11, (2009).
[22] Kanda, T., Hirano, T., Eaton, D., & Ishiguro, H. Interactive robots as
social partners and peer tutors for children: A field trial. Human-
Computer Interaction, 19, 61-84, (2004).
DOI:10.1207/s15327051hci1901&2_4
[23] Kuo, I. H., Rabindran, J. M., Broadbent, E., Lee, Y. I., Kerse, N.,
Stafford, R. M. Q., & MacDonald, B. A. Age and gender factors in
user acceptance of healthcare robots. In Robot and Human
Interactive Communication, 2009. RO-MAN 2009. The 18th IEEE
International Symposium on (Toyama, Japan, Sept. 27- Oct. 2,
2009) IEEE. 214-219 DOI:10.1109/ROMAN.2009.5326292
[24] Shahid, S., Krahmer, E., Swerts, M., & Mubin, O. Child-robot
interaction during collaborative game play: Effects of age and
gender on emotion and experience. In Proceedings of the 22nd
Conference of the Computer-Human Interaction Special Interest
Group of Australia on Computer-Human Interaction (Brisbane,
Australia. November 22-26, 2010) ACM, New York, USA. 332-335
DOI:10.1145/1952222.1952294
[25] Lewinski, P., den Uyl, T. M., & Butler, C. Automated facial
coding: Validation of basic emotions and FACS AUs in
FaceReader. Journal of Neuroscience, Psychology, and
Economics, 7, 227-236, (2014). DOI:10.1037/npe0000028
[26] Burgess, J. Interpersonal spacing behavior between surrounding
nearest neighbors reflects both familiarity and environmental
density. Ethology and Sociobiology, 4, 11-17, (1983).
DOI:10.1016/0162-3095(83)90003-1
[27] Dautenhahn, K., Nehaniv, C. L., Walters, M. L., Robins, B., Kose-
Bagci, H., Mirza, N. A., & Blow, M. KASPAR–a minimally
expressive humanoid robot for human–robot interaction research.
Applied Bionics and Biomechanics, 6, 369-397, (2009).
DOI:10.1080/11762320903123567
[28] Costa, S., Soares, F., & Santos, C. Facial Expressions and Gestures
to Convey Emotions with a Humanoid Robot. In Social Robotics
(pp. 542-551). Springer International Publishing, (2013).
DOI:10.1007/978-3-319-02675-6_54
[29] Martin, C. L., & Fabes, R. A. The stability and consequences of
young children's same-sex peer interactions. Developmental
Psychology, 37, 431-446, (2001). DOI:10.1037/0012-
1649.37.3.431.
[30] Lindsey, E. W. Physical activity play and preschool children's peer
acceptance: Distinctions between rough-and-tumble and exercise
play. Early Education and Development, 25, 277-294, (2014).
DOI:10.1080/10409289.2014.890854
[31] Gonzalez, A. M. Parenting Preschoolers with Disruptive Behavior
Disorders: Does Child Gender Matter? Dissertation, Washington
University in St. Louis, St. Louis, Missouri, USA, (2013).
[32] Kim, H. J., Arnold, D. H., Fisher, P. H., & Zeljo, A. Parenting and
pre-schoolers' symptoms as a function of child gender and SES.
Child & Family Behavior Therapy, 27, 23-41, (2005). DOI:
10.1300/J019v27n02_03
[33] Hatfield, E., Cacioppo, J. T. & Rapson, R. L. Emotional Contagion.
Cambridge University Press, Cambridge, UK, 1994.
... In 2015, Cameron et al. [3] studied how life-like facial expressions in a humanoid robot affected children's behaviour and attitude towards the robot. The participants played the Simon Says game with the robot. ...
... Along with kinesics, the other most popular communicative channels are speech and facial expressions. Examples of this can be found in the works of Leite et al. [37], Itoh et al. [30], Cameron et al. [3], or Yilmazyildiz et al. [66]. The works presented by Woo et al. [63] and P. Gebhard [24] also use speech and facial expressions to express affect, although they separate themselves from the other works by having the robot's affective state change the utterances that are being generated and not only altering features of the robot's voice. ...
... Regarding the type of affect states that the robot can express, it can be observed that the majority of authors opted to only implement emotions. Researchers such as Cameron et al. [3], Song et al. [56], or Löffler et al. [39] implemented discrete emotions, represented by a label (e.g. happy, sad, angry etc.). ...
Article
Full-text available
Robots that are devised for assisting and interacting with humans are becoming fundamental in many applications, including in healthcare, education, and entertainment. For these robots, the capacity to exhibit affective states plays a crucial role in creating emotional bonding with the user. In this work, we present an affective architecture that grounds biological foundations to shape the affective state of the Mini social robot in terms of mood and emotion blending. The affective state depends upon the perception of stimuli in the environment, which influence how the robot behaves and affectively communicates with other peers. According to research in neuroscience, mood typically rules our affective state in the long run, while emotions do it in the short term, although both processes can overlap. Consequently, the model that is presented in this manuscript deals with emotion and mood blending towards expressing the robot’s internal state to the users. Thus, the primary novelty of our affective model is the expression of: (i) mood, (ii) punctual emotional reactions to stimuli, and (iii) the decay that mood and emotion undergo with time. The system evaluation explored whether users can correctly perceive the mood and emotions that the robot is expressing. In an online survey, users evaluated the robot’s expressions showing different moods and emotions. The results reveal that users could correctly perceive the robot’s mood and emotion. However, emotions were more easily recognized, probably because they are more intense affective states and mainly arise as a stimuli reaction. To conclude the manuscript, a case study shows how our model modulates Mini’s expressiveness depending on its affective state during a human-robot interaction scenario.
... As with adaptivity, engagement already shows in simple actions, such as greeting the user or responding positively to their presence [23]. In this context, nonverbal communication, particularly the ability to display emotions, is frequently taken up in the literature (e.g., [22,44,47,[55][56][57][58][59]). Moreover, we see a link between engagement and proactivity as well as persuasiveness, e.g., expressed in an agent making suggestions, sharing recommendations, or motivating the user to do something. ...
... Hence, the first essential property we can derive from the analysis is that both verbal and nonverbal communication, including different forms of semantic free utterances [36,58], are important for enabling natural and intuitive communication in ACs. Likewise, verbal and nonverbal communication are important in conveying emotions and intentions, making the agent appear credible and desirable for engagement (e.g., [22,44,47,[55][56][57][58][59]). Therefore, the alignment of verbal and nonverbal communication is crucial in making interactions livelier and authentic, especially when speech is the main modality (e.g., [6,22,41,76,77]). ...
Article
Full-text available
The present study systematically reviewed scientific literature addressing the concept of artificial companions (ACs). The dataset, which encompasses 22 years of research, was drawn from multiple interdisciplinary sources and resulted in the development of an interdisciplinary definition of the AC concept. This definition consists of two key characteristics: adaptivity and engagement, the hallmarks of ACs to form emotional bonds and long-term relationships with users. The study also analyzed various design properties associated with ACs, categorized into five groups: adaptivity to the user, adaptivity to the usage context, engagement-facilitating behavior, the agent’s personality, and its appearance. In the third part, the study explored AC scenarios and identified roles that ACs can perform with their associated competencies, user groups, and application areas. The findings of this study are seen as a proposal for future empirical research to test what features in communication and interaction design play a crucial role in shaping the perception of an agent as an AC.
... (a) [29] (b) [2] (c) [20] (d) [22] (e) [55] Figura 6.6: Expresión de emociones mediante apariencias físicas. ...
... En la parte (c) de la Figura 6.6, se puede apreciar que los ojos controlados por servomotores son poco expresivos, porque en la expresión de las emociones de felicidad y tristeza no se aprecia en éstos una diferencia significativa, sin embargo, de manera general con la ayuda de las cejas y la boca sí se pueden distinguir las emociones. Por su parte, en [22] presentan la incorporación de expresiones faciales a ZENO, el cual tiene una cabeza totalmente motorizada para las siguientes emociones: neutral, alegría, tristeza, ira, sorpresa, miedo y disgusto. En la parte (d) de la Figura 6.6, se puede notar que las expresiones de ZENO se asemejan mucho a las expresiones humanas, debido a la inclusión de más detalles, tales como los cambios en el mentón. ...
Chapter
Full-text available
El rápido avance tecnológico ha fomentado que en muchos ámbitos, las personas mantengan interacciones prolongadas con los computadores (teléfonos móviles, computadores personales, robots, etc.). Recientemente, se ha venido popularizando la idea de que esos computadores deben tener un comportamiento que sea fácilmente interpretado por las personas [16], cuya principal ventaja es que no requiere que la persona aprenda a utilizar una interfaz de comunicación, sino que ésta debe ser natural e intuitiva. Las investigaciones han demostrado que las personas responden a los computadores de manera similar a como responden a otras personas, especialmente, si los computadores se comunican utilizando el mismo modo que las personas usan [1], tales como voces (por ejemplo, el asistente virtual “Siri” de Apple). En ese sentido, un computador que puede comunicarse naturalmente con humanos, y además, puede expresar emociones, es de gran importancia para lograr interacciones efectivas [5]. En las personas, las emociones pueden disparar acciones visibles para otros [30], tales como: gestos y expresiones faciales. Particularmente, algunas acciones son frecuentemente combinadas con determinadas emociones [4], lo cual permite conocer el estado emocional de las personas. Por ejemplo, según [49], las emociones de alegría y tristeza pueden ser diferencias por las expresiones faciales de la siguiente manera: en el estado emocional de alegría, aparecen arrugas debajo del parpado inferior, las comisuras de los labios tienden hacia atrás y arriba, y las mejillas se levantan; a diferencia, en el estado emocional de tristeza, hay descenso y unión de las cejas, los ángulos inferiores de los ojos tienden hacia abajo, las cejas adoptan una forma de triángulo, y la mirada tiende hacia abajo. Actualmente, los computadores son capaces de recibir señales humanas, interpretarlas, y presentar retroalimentación con significado, a través de una interfaz amistosa que una persona puede entender [2]. Algunos autores proponen que los computadores se comporten similares a una criatura viva, es decir, deben mostrar reacciones favorables a los usuarios cuando se formen una buena opinión de ellos, y mostrar reacciones de disgusto cuando se formen una mala impresión [15]. Las apariencias de los computadores que generan y expresan emociones varían significativamente, y pueden ser mostradas virtualmente en un monitor [8], o tener contexturas físicas de distintas formas (geométricas [1], animales [27], antropomórficas [22], etc.). Independientemente de la apariencia, en este capítulo se entiende como generación a los procesos que determinan las emociones del computador; y se entiende como expresión a las maneras de exteriorizar esas emociones. En general, la expresión de emociones de los computadores hace más natural la comunicación con personas. Según [10], la expresión de emociones por parte de un computador, puede tener tres propósitos: primero, ayudar a que el computador comunique su estado interno a los usuarios finales; segundo, alentar los comportamientos deseados de las personas; y tercero, ayudar a las personas a conectarse emocionalmente con el computador, es decir, generar empatía. En ese orden de ideas, dejando a un lado la discusión sobre si los computadores son capaces de sentir emociones, es importante resaltar que las emociones que expresan los computadores son útiles si son interpretadas correctamente por las personas con las que interactúa. 
Algunos beneficios de expresar emociones son mostradas en experimentos, tales como: mejorar la integración en actividades de colaboración [11], facilitar la empatía cuando se expresan emociones reactivas que coincidan con los estados mentales del usuario [6], estimular la expresividad en niños [18], entre otros. Las formas de generar emociones y los modos de expresión son variados, y aunque algunos autores explotan el hecho de que formas abstractas de expresar emociones pueden ser entendidas y procesadas de manera natural por los humanos [7], otros autores consideran en que mientras los computadores no tengan cuerpos físicos no podrán expresar emociones de manera confiable [30]. En ambos casos, la estrategia más utilizada para generar emociones consiste en establecer comportamientos predefinidos, que son invocados en función de la información de los sensores [23]. Otros autores consideran que los usuarios finales deberían tener la posibilidad de crear sus propios comportamientos para expresar emociones, en situaciones y entornos particulares en los cuales sus computadores operan [10]. En ese contexto, el objetivo de este capítulo es presentar las formas de generación de emociones y los distintos modos (unimodales y multimodales) que utilizan los computadores para expresar emociones, y además, identificar los retos de investigación en este ámbito. Para ello, se hace una revisión de la literatura a partir de un modelo (ver Figura 1) que divide la expresión de las emociones en los computadores según el canal que estimula en las personas: visual, auditivo y táctil. El canal visual incluye desde las emisiones de luces y colores, hasta las expresiones faciales y corporales. El canal auditivo incluye sonidos y voces. Finalmente, el canal táctil incluye contacto mediante vibraciones y temperatura.
... For instance, female users have been reported to be more social as compared to male [43]. In addition, the gender, age of and skill of the user have also affected the social engagement and interest of the user during the interaction [64,70]. Therefore, we need to design and evaluate robots that can adapt based on user characteristics in real-time and study their effect on perception, engagement and task performance of the user. ...
Article
Full-text available
As the field of social robotics is growing, a consensus has been made on the design and implementation of robotic systems that are capable of adapting based on the user actions. These actions may be based on their emotions, personality or memory of past interactions. Therefore, we believe it is significant to report a review of the past research on the use of adaptive robots that have been utilised in various social environments. In this paper, we present a systematic review on the reported adaptive interactions across a number of domain areas during Human-Robot Interaction and also give future directions that can guide the design of future adaptive social robots. We conjecture that this will help towards achieving long-term applicability of robots in various social domains.
... The challenges in this area mainly concern the search for a balanced and coherent design between behavior and appearance of the robot, the designing of socially acceptable behaviors, and the development of new methods and tools for designing and evaluating HRI, identifying the needs of individuals and groups of subjects to whom a robot could adapt and respond, avoiding the uncanny valley [10], etc. HRI methods focus mainly on two elements: the first concerns actual acceptability, i.e., all those factors that influence the intention to use a robot (e.g., ease of use, enjoyment, controllability, etc.), evaluated extensively through various models of acceptance of the technology (for example, Almere); the second concerns usability, defined as the effectiveness, efficiency, and satisfaction with which users achieve specific goals in specific environments [10] (ISO 9241-11). Within HRI, this has been translated into research on how various specific factors may influence the acceptance of the robot by the users, such as morphological aspects [11], facial and affective expressions [12], linguistic and cultural differences [13], behaviors [14], personal space [15], or other variables such as age or education [16]. ...
Article
Full-text available
The aim of the SYRIACA project was to test the capability of a social robot to perform specific tasks in healthcare settings, reducing infection risks for patients and caregivers. The robot was piloted in an Intensive Hematological Unit, where the patients’ and healthcare operators’ acceptability of the robot was evaluated. The robot’s functions, including logistics, surveillance, entertainment, and remote visits, were well accepted. Patients expressed interest in having multiple interactions with the robot, which testifies to its engaging potential and that it provides useful services. During remote visits, the robot reduced perceived stress among patients, alleviating feelings of isolation. The successful implementation of the robot suggests its potential to enhance safety and well-being in healthcare settings.
... The general disposition is to design robots that allow humans to anthropomorphize them since anthropomorphism occurs naturally in humans [142], and their appearance highly depends on the task they are required to perform. For example, zoomorphic social robots, like the robotic seal Paro can be beneficial to the mental healthcare of the elderly [143], while humanoid robots with cartoon-like features such as the Zeno robot or the Nao have been extensively used in Child-Robot Interactions (CRI) [144,145]. These robots have limited expressiveness compared to more sophisticated humanoid robots, raising fewer expectations about their cognitive capabilities, and so inverting the negative reaction described by the Uncanny Valley hypothesis [130]. ...
Article
Full-text available
The development of future technologies can be highly influenced by our deeper understanding of the principles that underlie living organisms. The Living Machines conference aims at presenting (among others) the interdisciplinary work of behaving systems based on such principles. Celebrating the 10 years of the conference, we present the progress and future challenges of some of the key themes presented in the robotics workshop of the Living Machines conference. More specifically, in this perspective paper, we focus on the advances in the field of biomimetics and robotics for the creation of artificial systems that can robustly interact with their environment, ranging from tactile sensing, grasping, and manipulation to the creation of psychologically plausible agents.
... This finding of fewer child vocalizations with an agent was consistent with Aeschlimann et al. (2020), who found that preschool-aged children were less likely to provide vocal information to a smart speaker than to an adult researcher. There are two possible explanations for this: either children are less knowledgeable about how to talk to non-human agents (Beneteau et al., 2019;Cheng et al., 2018) or they are less interested in doing so (Cameron et al., 2015). Our findings suggest that the social presence of a human partner may encourage children to provide on-topic responses but may also invite children to voluntarily extend the conversation beyond the reading context. ...
Article
Full-text available
Dialogic reading, when children are read a storybook and engaged in relevant conversation, is a powerful strategy for fostering language development. With the development of artificial intelligence, conversational agents can engage children in elements of dialogic reading. This study examined whether a conversational agent can improve children's story comprehension and engagement, as compared to an adult reading partner. Using a 2 (dialogic reading or non‐dialogic reading) × 2 (agent or human) factorial design, a total of 117 three‐ to six‐year‐olds (50% Female, 37% White, 31% Asian, 21% multi‐ethnic) were randomly assigned into one of the four conditions. Results revealed that a conversational agent can replicate the benefits of dialogic reading with a human partner by enhancing children's narrative‐relevant vocalizations, reducing irrelevant vocalizations, and improving story comprehension.
Article
While there is evidence that human-like characteristics in robots could benefit child-robot interaction in many ways, open questions remain about the appropriate degree of human likeness that should be implemented in robots to avoid adverse effects on acceptance and trust. This study investigates how human likeness, appearance and behavior, influence children’s social and competency trust in a robot. We first designed two versions of the Furhat robot with visual and auditory human-like and machine-like cues validated in two online studies. Secondly, we created verbal behaviors where human likeness was manipulated as responsiveness regarding the robot’s lexical matching. Then, 52 children (7-10 years old) played a storytelling game in a between-subjects experimental design. Results show that the conditions did not affect subjective trust measures. However, objective measures showed that human likeness affects trust differently. While low human-like appearance enhanced social trust, high human-like behavior improved children’s acceptance of the robot’s task-related suggestions. This work provides empirical evidence on manipulating facial features and behavior to control human likeness in a robot with a highly human-like morphology. We discuss the implications and importance of balancing human likeness in robot design and its impacts on task performance, as it directly impacts trust-building with children.
Article
Full-text available
This article addresses whether young children's play-partner choices are stable over time and how these choices influence behavior. Sixty-one children (28 boys and 33 girls; mean age = 53 months) were observed over 6 months, and type of play behavior and sex of play partners were recorded. Children's partner preferences were highly sex differentiated and stable over time, especially when larger aggregates of data were used. Two types of consequences were identified: a binary effect that influenced differences between the sexes and a social dosage effect that influenced variations within the sexes. The binary effect reflected a pattern in which the more both girls and boys played with same-sex partners, the more their behavior became sex differentiated. The social dosage effect reflected a pattern in which variations in levels of same-sex play in the fall contributed significantly to variations in the spring above initial levels of the target behaviors.
Conference Paper
How people use assistive technologies depends on how they relate to them. As assistive robots are developed that have a physical presence, some autonomy, and the ability to adapt and communicate, the relationships that people have with them will become more complex and may take on some of the characteristics of the social relationships that we have with each other. In this paper, we compare the relationships that people form with assistive devices to the bonds we develop with people, pets, and objects. Building on conceptual frameworks from social psychology, we aim to develop a taxonomy that provides a consistent framework for describing and analysing human-other relationships, which we hope will lead to improved design of assistive technologies.
Article
In this study, we validated automated facial coding (AFC) software, FaceReader (Noldus, 2014), on two publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FACS) index of agreement. In 2005, matching scores of 89% were reported for FaceReader. However, that research used a version of FaceReader with older algorithms (version 1.0) that did not contain FACS classifiers. In this study, we tested the newest version (6.0). FaceReader recognized 88% of the target emotion labels in the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and the Amsterdam Dynamic Facial Expression Set (ADFES). The software reached a FACS index of agreement of 0.67 on average across both datasets. The results of this validation test are meaningful only in relation to human performance rates for both basic emotion recognition and FACS coding. Human emotion recognition for the two datasets was 85%, so FaceReader is about as good at recognizing emotions as humans. To receive FACS certification, a human coder must reach an agreement of 0.70 with the master coding of the final test. Even though FaceReader did not attain this score, action units (AUs) 1, 2, 4, 5, 6, 9, 12, 15, and 25 can be used with high accuracy. We believe that FaceReader has proven to be a reliable indicator of basic emotions over the past decade and has the potential to become similarly robust for FACS coding.
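For readers unfamiliar with the index of agreement used here: it is commonly computed per image as twice the number of action units both coders scored, divided by the sum of the action units scored by each coder. A minimal sketch in Python (our illustration, with invented function and variable names; the full FACS scoring protocol includes further conventions, such as intensity scoring, that are omitted here):

```python
def facs_agreement(coder_a, coder_b):
    """Per-image inter-coder agreement on FACS action units:
    2 * |AUs scored by both| / (|AUs by coder A| + |AUs by coder B|)."""
    a, b = set(coder_a), set(coder_b)
    if not a and not b:
        return 1.0  # trivial agreement on an expressionless face
    return 2 * len(a & b) / (len(a) + len(b))

# Example: a master coding compared against an automated coding.
master = {1, 2, 5, 25, 26}   # AUs scored by the human master coder
auto   = {1, 2, 5, 25}       # AUs scored by the software
print(facs_agreement(master, auto))  # 2*4 / (5+4) = 0.888...
```

Averaging this score over a dataset gives a figure directly comparable to the 0.67 reported above and to the 0.70 certification threshold.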
Article
In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, 1984). In fact, humans appear to have a strong propensity to anthropomorphize: driven by our inherent desire for predictability, we quickly discern patterns, cause-and-effect relationships and, yes, emotions in animated entities, be they natural or artificial. But might there be reasons to intentionally "implement" emotions in artificial entities such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating "emotional" robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.
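Braitenberg's point is easy to make concrete. A minimal sketch (our own illustration, not code from the article) of the sensor-to-motor wiring behind his vehicles: with crossed excitatory connections the vehicle turns towards a stimulus, which observers readily describe as "aggression", while straight connections make it turn away, read as "fear".

```python
# Minimal Braitenberg-style vehicle: two light sensors, two motors.

def motor_speeds(left_sensor: float, right_sensor: float,
                 crossed: bool = True, gain: float = 1.0):
    """Map light-sensor readings (0..1) to (left, right) motor speeds."""
    if crossed:
        # Crossed wiring: each sensor excites the opposite motor,
        # so the vehicle turns towards the stimulus ("aggression").
        return gain * right_sensor, gain * left_sensor
    # Straight wiring: the vehicle turns away ("fear").
    return gain * left_sensor, gain * right_sensor

# A light to the robot's right excites the right sensor more strongly.
left_speed, right_speed = motor_speeds(0.2, 0.9)
print(left_speed > right_speed)  # True: left wheel faster, turns right
```

No emotional state exists anywhere in this controller, yet the trajectory it produces invites an emotional description, which is exactly the observation that motivates the article's questions.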
Article
Naturalistic observation at a bowling alley (N = 1,793 balls) shows that bowlers often smiled when socially engaged, looking at and talking to others, but not necessarily after scoring a spare or a strike. In a second study, bowlers (N = 166 balls) rarely smiled while facing the pins but often smiled when facing their friends. At a hockey game, fans (N = 3,726 faces) smiled both when they were socially involved and after events favorable to their team. Pedestrians (N = 663) were much more likely to smile when talking, but only slightly more likely to smile in response to nice weather than to unpleasant weather. These four studies suggest a strong and robust association of smiling with social motivation and an erratic association with emotional experience.
Article
The present study investigates how children from two different cultural backgrounds (Pakistani, Dutch) and two different age groups (8- and 12-year-olds) experience interacting with a social robot (iCat) during collaborative game play. We propose a new method to evaluate children's interaction with such a robot, by asking whether playing a game with a state-of-the-art social robot like the iCat is more similar to playing this game alone or with a friend. A combination of self-report scores, perception test results and behavioral analyses indicates that child–robot interaction in game-playing situations is highly appreciated by children, although more by Pakistani and younger children than by Dutch and older children. Results also suggest that children enjoyed playing with the robot more than playing alone, but enjoyed playing with a friend even more. In a similar vein, we found that children were more expressive in their non-verbal behavior when playing with the robot than when playing alone, but less expressive than when playing with a friend. Our results not only stress the importance of using new benchmarks for evaluating child–robot interaction but also highlight the significance of cultural differences for the design of social robots.
Article
Research Findings: Two forms of exercise play (toy-mediated and non-mediated) and two forms of rough-and-tumble (R&T) play (chase and fighting) were examined in relation to preschoolers' peer competence. A total of 148 preschoolers (78 boys; 89 Euro-Americans) were observed during free play at their university-sponsored child care center. The gender makeup of children's play companions (same gender, other gender, or mixed gender) and the type of play that children engaged in were recorded. Sociometric interviews assessed how well liked children were by their classmates. Analyses revealed that toy-mediated exercise play with mixed-gender and same-gender peers was associated with boys' and girls' peer acceptance. Girls' non-mediated exercise play and boys' R&T chasing were associated with peer acceptance. Boys who engaged in R&T fighting with same-gender peers were better liked by peers, whereas boys who engaged in R&T chasing with other-gender peers were not liked by peers. Practice or Policy: The results suggest that child gender and the gender of one's playmate are important factors in associations between physical activity play and peer acceptance.
Conference Paper
Expressive behaviour is a vital aspect of human interaction. A model for adaptive emotion expression was developed for the Nao robot. The robot has internal arousal and valence values, which are influenced by the emotional state of its interaction partner and by emotional occurrences such as winning a game. It expresses these emotions through its voice, posture, whole-body poses, eye colour and gestures. An experiment with 18 children (mean age 9) and two Nao robots was conducted to study the influence of adaptive emotion expression on children's interaction behaviour and opinions. In a within-subjects design, the children played a quiz with both an affective robot using the model for adaptive emotion expression and a non-affective robot without this model. The affective robot reacted to the emotions of the child using the implementation of the model; the child's emotions were interpreted by a Wizard of Oz. The dependent variables, namely the behaviour and opinions of the children, were measured through video analysis and questionnaires. The results show that children react more expressively and more positively to a robot which adaptively expresses itself than to one which does not. The children's feedback in the questionnaires further suggests that showing emotion through movement is considered a very positive trait in a robot. From their positive reactions we can conclude that children enjoy interacting with a robot which adaptively expresses itself through emotion and gesture more than with one which does not.
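The abstract does not give the model's equations, but the architecture it describes, an internal arousal-valence state pulled towards the child's perceived emotion and perturbed by game events, can be sketched as follows. This is a minimal illustration under our own assumptions: the class, update rule, gain constants, and quadrant-based expression lookup are all invented for exposition, not taken from the paper.

```python
# Hypothetical arousal-valence state of the kind the paper describes.

class AffectiveState:
    def __init__(self, social_gain: float = 0.3):
        self.valence = 0.0   # -1 (negative) .. +1 (positive)
        self.arousal = 0.0   # -1 (calm) .. +1 (excited)
        self.social_gain = social_gain

    def observe_partner(self, partner_valence: float, partner_arousal: float):
        """Nudge the internal state towards the child's perceived emotion
        (supplied, in the study, by a Wizard-of-Oz operator)."""
        self.valence += self.social_gain * (partner_valence - self.valence)
        self.arousal += self.social_gain * (partner_arousal - self.arousal)

    def on_event(self, d_valence: float, d_arousal: float):
        """Apply an emotional occurrence, e.g. winning a quiz round,
        clamping the state to the [-1, 1] range."""
        self.valence = max(-1.0, min(1.0, self.valence + d_valence))
        self.arousal = max(-1.0, min(1.0, self.arousal + d_arousal))

    def expression(self) -> str:
        """Pick a coarse expression from the valence-arousal quadrant."""
        if self.valence >= 0:
            return "excited" if self.arousal >= 0 else "content"
        return "angry" if self.arousal >= 0 else "sad"

state = AffectiveState()
state.observe_partner(0.8, 0.5)   # child looks happy and lively
state.on_event(0.3, 0.4)          # the robot wins a quiz round
print(state.expression())         # "excited"
```

The selected expression would then be rendered across the modalities listed above (voice, posture, whole-body poses, eye colour, gestures).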
Conference Paper
This paper presents the results of a perceptual study with ZECA (Zeno Engaging Children with Autism), a robot able to display facial expressions. ZECA is a robotic tool used to study human-robot interactions with children with Autism Spectrum Disorder, and this study describes the first steps towards that goal. Facial expressions and gestures conveying emotions such as sadness, happiness, or surprise are displayed by the robot. The design of the facial expressions, based on action units, is presented. Participants answered a questionnaire intended to verify whether these expressions, with or without gestures, were recognized as such in the corresponding videos. Results show that participants successfully recognized the emotions featured in the videos and that the gestures were a valuable addition to recognition.