Wear Your Heart on Your Sleeve: Users Prefer Robots with
Emotional Reactions to Touch and Ambient Moods
Rachael Bevill Burns1, Fayo Ojo1,2, and Katherine J. Kuchenbecker1
Abstract Robots are increasingly being developed as assis-
tants for household, education, therapy, and care settings. Such
robots can use adaptive emotional behavior to communicate
warmly and effectively with their users and to encourage
interest in extended interactions. However, autonomous physical
robots often lack a dynamic internal emotional state, instead
displaying brief, fixed emotion routines to promote specific
user interactions. Furthermore, despite the importance of social
touch in human communication, most commercially available
robots have limited touch sensing, if any at all. We propose
that users’ perceptions of a social robotic system will improve
when the robot provides emotional responses on both shorter
and longer time scales (reactions and moods), based on touch
inputs from the user. We evaluated this proposal through an
online study in which 51 diverse participants watched nine
randomly ordered videos (a three-by-three full-factorial design)
of the koala-like robot HERA being touched by a human.
Users provided the highest ratings in terms of agency, ambient
activity, enjoyability, and touch perceptivity for scenarios in
which HERA showed emotional reactions and either neutral
or emotional moods in response to social touch gestures.
Furthermore, we summarize key qualitative findings about
users’ preferences for reaction timing, the ability of robot mood
to show persisting memory, and perception of neutral behaviors
as a curious or self-aware robot.
I. INTRODUCTION
Robots may soon join our daily interactions as home
assistants, companions, educational tutors [1], and therapy
aids [2]. While meeting a new robot is often exciting, the
novelty can wear off quickly [3]. It is important for such
robots to maintain user interest over a sustained period of
time to maximize the benefits of their use, such as completing
an educational game series [4] or a physical-therapy regimen.
One effective way to promote long-term human-robot
interaction (HRI) is to create adaptive robot behaviors that
mirror aspects of human-human interactions [5]. In partic-
ular, robots can convey emotions during social interaction
to increase their perceived naturalness (i.e., how similar the
robot’s behaviors are to what the user expects), attentiveness
(i.e., how much the robot detects its environment), and en-
gagement (i.e., how the robot reacts to the detected input) [6].
In many user studies, the robot is controlled by a hu-
man operator to provide fast and appropriate emotional
responses [7]. However, teleoperation is not a sustainable
method of interaction for autonomous robots. In other re-
search approaches, either the robot’s affective state (i.e., its
simulation of emotion) is a fixed routine, regardless of user
1All authors are with the Max Planck Institute for Intelligent Systems,
Stuttgart, Germany {rburns, kjk}@is.mpg.de
2Fayo Ojo is also with the Johns Hopkins University, Baltimore, USA
fojo1@jhu.edu
Fig. 1. A screenshot from one of the nine videos in our study, which
showcased three levels of immediate reaction to social touch and three long-
term mood responses by HERA, a child-sized zoomorphic robot.
interaction, or the robot’s affective state is instantly changed
by user action, usually to reward certain user behaviors, and
then returns to a default setting [8]. These approaches neither
demonstrate situational awareness from the robot nor adapt
with the user, and therefore they may not promote long-term
interaction. Furthermore, despite the importance of social
touch in human communication, none of these approaches
use social touch as a stimulus.
We investigated user preferences for shorter- and longer-
term robot emotions through a video-watching study where
51 participants evaluated nine options for the richness of a
robot’s response to social touch, as seen in Fig. 1. These
carefully crafted videos showcased robot behaviors ranging
from no touch response at all to full emotional reactions
and moods inspired by nonverbal cues used in human-human
interaction. The nine depicted robot behaviors represented a
full-factorial experimental design with two factors (reactions
and moods) that each have three levels.
In terms of social touch, we found that users felt it was
very important for the robot to immediately react to, or at
least acknowledge, that a touch had occurred; users most
preferred when the robot showed an immediate emotional
reaction. Our qualitative analysis additionally revealed that
if there was no immediate reaction, users still perceived an
emotional mood as an appropriate, albeit delayed, response to
touch. Furthermore, having the robot display a mood, either
neutral or with emotion, improved users’ overall opinions.
In the remainder of this paper, we highlight related work
in Section II, and we describe our user study in Section III.
The study’s quantitative and qualitative results are reported in
Sections IV and V, respectively. We discuss the implications
of our findings in Section VI.
II. RELATED WORK
A. Emotion and social touch in social robots
A robot can display emotions to make the user aware of
its internal state, to reinforce a user’s action, or even to guide
the user to a goal behavior [5]. Emotional display directly
improves user interaction: children reacted more expressively
and positively toward a NAO with a teleoperated emotion
model than toward its non-emotive counterpart [9]. The
autonomous Roboceptionist by Kirby et al. used emotions,
moods, and long-term attitudes toward repeat visitors (iden-
tified by a swipe of their university ID cards) during chat-
based interactions [10]. Users significantly changed their
interactions with the robot based on its mood, hinting at the
power of this capability for social robots.
Additionally, while existing commercially available robots
have limited to no touch perception, researchers are in-
vestigating how upcoming robots should react to social
touch. Fitter and Kuchenbecker found that users’ perceptions
of a hand-clapping robot’s pleasantness and energeticness
significantly increased when the robot used facial reactivity
to acknowledge users’ touch contacts [11]. Lehmann et
al. investigated what type of directional movement was
considered an appropriate response to touch contact on the
hand for Pepper and NAO [12]. Participants perceived the
android robot ERICA to be more human-like when it gave an
immediate subtle reaction to touch compared to responding
after a two-second delay [13]. HuggieBot 2.0 haptically
detects when a user initiates and ends a hug [14]. HuggieBot
3.0 also identifies the type of intra-hug gesture the user
performs (e.g., rubbing or patting the robot’s back) and recip-
rocates with an intra-hug gesture of its own [15]. The robot
seal PARO coos pleasantly in response to gentle touches
and cries if it is handled with a high level of force [16].
While there have been several instances of robots providing
an immediate acknowledgement of social touch contacts,
we want to investigate how both shorter- and longer-term
emotional responses that are customized to the user’s touch
input affect their perceptions of the system.
B. Existing computational emotion models
While complex emotion models exist for chat bots and
virtual agents, within the study of emotion usage for au-
tonomous social robots, there has been a strong focus on
human recognition of and responses to static robot emotions,
rather than building dynamic emotion systems that change
over time [17]. However, some models exist that track the
robot’s internal state, such as the TAME framework by
Moshkina et al. [18]. This framework presents an emotion
model composed of traits (T), attitudes (A), moods (M),
and emotions (E). Each of these four categories is affected
by both internal and external factors over varying time
scales, with fundamental traits remaining constant over time,
attitudes changing very slowly, moods changing over the
course of a day, and emotions changing quickly in reaction
to immediate stimuli. Two proof-of-concept user studies with
partial implementations of the framework showed prelimi-
nary success, with the traits and emotion components enabled
in the dog-like robot AIBO [19] and a simple demonstra-
tion of mood and emotion enabled in the humanoid robot
NAO [20]. This second example involved a mock-search-
and-rescue scenario: after the overhead lights dimmed, the
NAO used different approaches to tell the participant that
the area was unsafe and needed to be evacuated. Participants
exited the study area further and faster in the conditions that
used a negative mood through voice and actions compared
to the neutral control condition [20], indicating the power of
affective communication in HRI.
Notably, nearly all robot systems that emote based on user
input focus on vision- or audio-based sensing modalities [6].
We did not find any models that suggested tactile input, such
as social touch from a user, as suitable stimuli. Therefore,
there is a need to systematically investigate the relationship
between social touch as an input and what users perceive to
be appropriate affective robot responses as an output.
III. USER STUDY
Given the prevalence of robots that display only brief,
static emotions, we wanted to investigate whether the combi-
nation of an immediate reaction and a visually perceivable,
dynamic internal emotional state (i.e., mood) would be a
feature that users would notice, appreciate, or even prefer,
especially in regard to social-touch interaction. We evaluated
this question through a video-watching study. In particular,
we sought to understand the difference between emotionally
neutral and emotional robot behaviors, as well as shorter-
term and longer-term responses. Here, we list our hypotheses
and explain the study.
A. Hypotheses
H1. A robot that responds to physical contacts with imme-
diate positive or negative reactions will be perceived
as more intelligent than a robot that does not respond
to touch. A robot that acknowledges physical contacts
without emotion will be perceived as having moderate
intelligence.
H2. A robot that shows its internal emotional state through
positive and negative ambient moods will be per-
ceived as more intelligent than a robot with no ambient
mood. A robot that displays a neutral ambient mood will
be perceived as having moderate intelligence between
these two extremes.
H3. Increasing levels of both touch reactivity and mood will
be increasingly exciting and engaging to participants.
B. Study design
In order to test these hypotheses, we designed a user study
with two independent variables: robot reactions and robot
mood. We define a reaction as the robot’s immediate response
to a touch, and a mood as the internal emotional state of
the robot, which changes over time based on touch stimuli.
We created a 3 × 3 full-factorial design with three reaction levels (abbreviated in this paper as none •, neutral r, and emotional R) and three mood levels (none •, neutral m, and emotional M). We created videos for all possible pairings of reaction and mood levels, with a total of nine conditions. Table I showcases the nine conditions compared in this study, including the abbreviations used to refer to each condition and a description of what generally occurred in each video.

TABLE I
A COMPARISON OF THE NINE ROBOT BEHAVIOR CONDITIONS SHOWN IN THE STUDY, EACH INCLUDING A MOOD LEVEL AND A REACTION LEVEL.

Mood level: None
- Reaction level: None (condition ••): No immediate reaction to touch. No ambient mood between touches.
- Reaction level: Neutral (condition r): Looks at touched location. No ambient mood between touches.
- Reaction level: Emotional (condition R): Emotional reaction to touch (positive or negative). No ambient mood between touches.

Mood level: Neutral m
- Reaction level: None (condition m): No immediate reaction to touch. Neutral ambient mood between touches.
- Reaction level: Neutral (condition mr): Looks at touched location. Neutral ambient mood between touches.
- Reaction level: Emotional (condition mR): Emotional reaction to touch (positive or negative). Neutral ambient mood between touches.

Mood level: Emotional M
- Reaction level: None (condition M): No immediate reaction to touch. Emotional ambient mood (positive or negative) based on prior touches.
- Reaction level: Neutral (condition Mr): Looks at touched location. Emotional ambient mood (positive or negative) based on prior touches.
- Reaction level: Emotional (condition MR): Emotional reaction to touch (positive or negative). Emotional ambient mood (positive or negative) based on prior touches.
Due to COVID-19 regulations, we conducted the study
online and presented each participant with nine prerecorded
videos. We utilized the Haptic Empathetic Robot Animal
(HERA), a social robot by Burns et al. [21], as the sample
robot for our videos. HERA is a commercially available
NAO robot (Aldebaran Robotics) enclosed in a koala suit
to give the appearance of a robot animal, as zoomorphic
robots are known to work well in therapy and care settings
for users of all ages [22], [23], [24]. This koala-like robot has
custom tactile sensors installed across its body parts and is
intended for social touch interaction [25], though we utilized
the simple onboard tactile sensors built into the NAO robot’s
hands and head for this paper.
We designed various physical behaviors for the robot
to perform according to each reaction and mood level.
For the neutral and emotional reaction levels, the robot
reacted immediately after the contact occurred. For the
neutral reaction, it turned its head toward the touch. For a
negative emotional reaction, the robot pulled its arms inward
and let out a cry. For a positive reaction, it cheered and
waved its arms. The audio cues for the emotional reactions
were used with permission from Javed et al. [8]. For the
“none” reaction level, the robot neither moved nor made
any sounds immediately following contact. For the neutral
and emotional mood levels, the robot visually indicated its
mood long after it had been touched and after the immediate
reaction, if applicable. We modeled the mood movements
on human body language and previous robot imitations of
body language [26], [27]. For the neutral mood, it looked
around the room. For the negative emotional mood, the robot
lowered its head, drew its head and arms close to its chest,
and moved slowly. For the positive emotional mood, the
robot raised its head, demonstrated open body posture with
open arms, and moved at a faster pace. For the “none” mood
level, the robot neither moved nor made any sounds after the
immediate reaction sequence was completed.
The nine videos were carefully created to differ only in the
robot’s responses. Fig. 2 provides a timeline to help visualize
this timing and sequence of events. Each video begins with
the robot waving its arm as a greeting at 5 seconds, but the
reaction and mood performances that follow depend on the
condition being shown. At 13 seconds, an experimenter hits
the robot on its left arm, triggering its first reaction for the six
videos belonging to reaction levels r and R. At 30 seconds,
the robot displays its first ambient mood for the six videos
belonging to mood levels m and M. At 43 seconds, the
experimenter pets the robot on its head, triggering the second
reaction. At 60 seconds, the robot displays its second ambient
mood behavior. A simultaneous compilation of all nine video
conditions can be seen in our supplementary video.
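To make the condition structure and the shared video script concrete, the following minimal Python sketch enumerates the nine conditions and the timeline of events described above. The Condition class, the behavior strings, and the simplification that the displayed mood follows the most recent touch are illustrative assumptions for exposition, not the software used to animate HERA.

```python
from dataclasses import dataclass

REACTION_LETTERS = {"none": "", "neutral": "r", "emotional": "R"}
MOOD_LETTERS = {"none": "", "neutral": "m", "emotional": "M"}

@dataclass
class Condition:
    reaction: str  # "none", "neutral", or "emotional"
    mood: str      # "none", "neutral", or "emotional"

    @property
    def abbreviation(self) -> str:
        # Only non-none levels contribute a letter; the all-none condition is written "••".
        return (MOOD_LETTERS[self.mood] + REACTION_LETTERS[self.reaction]) or "••"

def react(cond: Condition, touch: str) -> str:
    """Immediate response right after a touch ("hit" or "pet")."""
    if cond.reaction == "neutral":
        return "look at touched location"
    if cond.reaction == "emotional":
        return "cry and pull arms inward" if touch == "hit" else "cheer and wave arms"
    return "no movement or sound"

def show_mood(cond: Condition, last_touch: str) -> str:
    """Ambient behavior displayed long after the touch (simplified: follows the last touch)."""
    if cond.mood == "neutral":
        return "look around the room"
    if cond.mood == "emotional":
        return "closed posture, slow movements" if last_touch == "hit" else "open posture, faster movements"
    return "no movement or sound"

def video_script(cond: Condition):
    """Shared timeline (seconds, robot behavior) used in every one of the nine videos."""
    return [
        (5, "wave greeting"),
        (13, react(cond, "hit")),        # experimenter hits the robot's left arm
        (30, show_mood(cond, "hit")),
        (43, react(cond, "pet")),        # experimenter pets the robot's head
        (60, show_mood(cond, "pet")),
    ]

if __name__ == "__main__":
    for mood in ("none", "neutral", "emotional"):
        for reaction in ("none", "neutral", "emotional"):
            cond = Condition(reaction=reaction, mood=mood)
            print(cond.abbreviation, video_script(cond))
```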
C. Participants
Participants were adults who spoke and understood En-
glish well, had normal or corrected-to-normal vision, and
had access to a device with internet and in-browser video
and audio capabilities. As this survey was entirely online,
we could recruit participants regardless of their location. We
advertised our study via relevant email lists and on several
social-media platforms. We also used snowball sampling
by asking participants to share the study information with
interested friends and colleagues. In total, 52 participants
were recruited; one participant’s results were omitted for not
following study instructions.
As we aimed to understand perceptions of our robot
system across a general population, rather than a niche
audience, we recruited a diverse set of participants across
age, gender, home country, and experience with robots. Our
final population (26 female, 24 male, 1 preferred not to
answer) were all adults who ranged in age from 18 to
83 (mean: 37, std. dev.: 17) and came from 18 different
home countries (28 participants from the United States).
Participants self-identified their familiarity with robots based
on pre-defined levels in our demographic questionnaire: 14
had no prior experience, 13 were novices (had seen some
commercial robots), 9 were beginners (had interacted with
some commercial robots), 7 were intermediate (had designed,
Fig. 2. Timeline illustrating the order of stimuli and responses that viewers saw in each of the nine video conditions. While the robot’s reaction and
mood levels varied by condition, the timing and sequence of events remained the same across all nine videos. This paper’s supplementary video shows all
nine videos together to facilitate comparisons.
Agency (used to evaluate H1 and H2):
1. This robot seemed intelligent.
6. This robot's behavior seemed natural.
Touch Perceptivity (used to evaluate H1):
2. This robot detected that the person touched it.
3. This robot understood how the person touched it.
Ambient Activity (used to evaluate H2):
4. This robot was active when it was not being touched.
5. This robot showed how it was feeling when it was not being touched.
Enjoyability (used to evaluate H3):
7. Watching this robot was enjoyable.
8. I would like to interact with this robot.
Fig. 3. The statements that participants rated for each of the videos,
numbered by presentation order. The boxes show how the eight statements
were grouped into categories, and the arrows show the categories used to
evaluate each hypothesis.
built, and/or programmed some robots), and 8 were experts
(frequently design, build, and/or program robots).
D. Procedure
This study was approved by the Ethics Council of the Max
Planck Society under the Haptic Intelligence Department’s
framework agreement as protocol number F017A. Partic-
ipants did not receive any compensation. After providing
informed consent, they received a link to the study on an
online survey platform.
After completing the demographic questionnaire, the user
was asked to watch and listen to one of the nine videos cor-
responding to the conditions listed in Table I and described
in Section III-B. The videos were shown in random order
to mitigate bias. After watching each video, the participant
rated their agreement with each of the eight statements listed
in Fig. 3 using a sliding scale from 0 (strongly disagree)
to 10 (strongly agree) with slider increments of 0.1. These
statements were carefully designed to address our three hy-
potheses from different angles, as well as to evaluate whether
there were differences between the neutral and emotional
levels of reaction and mood. The connection between each
statement and the hypothesis (or hypotheses) it addresses
can also be seen in Fig. 3. Participants then completed an
open-ended response by writing what they liked and disliked
about the robot behavior shown in the current video. This
process was repeated until the participant had seen, rated,
and commented on all nine videos. One final optional text
box at the end of the survey gave participants the opportunity
to provide any closing comments or suggestions.
The survey included approximately ten minutes of footage
between all nine videos. As the survey was conducted online,
there was no time limit required for completion, and partici-
pants were encouraged to take breaks from watching videos
and answering questions as needed. The median survey
completion time was 38 minutes (minimum: 20 minutes,
mean: 50 minutes, std. dev.: 36 minutes).
This procedure resulted in 72 sliding-scale answers and
either nine or ten open-ended answers per participant. The
51 completed surveys created a total dataset of 3,672 sliding-
scale answers and 500 open-ended responses.
IV. QUANTITATIVE RESULTS
A. Thematically combining survey items
To reduce noise and increase interpretability, we com-
bined pairs of related survey statements by averaging their
responses for each user. The statements can be seen in Fig. 3
and are labeled by the bold keyword in each statement.
We combined Statements 1 and 6 (intelligent and natural)
to create the “agency” category, which addresses both H1
and H2. Also for H1, we combined participants’ responses
to Statements 2 and 3 (detected and understood) into the
category of “touch perceptivity”. For H2, we combined the
responses for Statements 4 and 5 (active and feeling) into the
category of “ambient activity”. Finally, for H3, we combined
the results of Statements 7 and 8 (enjoyable and interact)
into “enjoyability”.
For all four categories, the statements merged had statis-
tically significant correlations (p < 0.001). The two state-
ments which make up the agency category had a correlation
coefficient of r = 0.78. The statements within the categories
of ambient activity, enjoyability, and touch perceptivity had
correlation coefficients of 0.59, 0.93, and 0.78, respectively.
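As an illustration of this merging step, a short Python sketch follows; it assumes a long-format table with one row per participant and video condition and columns s1 through s8 holding the eight statement ratings. The file name and column names are hypothetical, and this is not the authors' analysis code.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format data: one row per participant x video condition.
ratings = pd.read_csv("ratings.csv")

CATEGORIES = {
    "agency": ("s1", "s6"),              # intelligent, natural
    "touch_perceptivity": ("s2", "s3"),  # detected, understood
    "ambient_activity": ("s4", "s5"),    # active, feeling
    "enjoyability": ("s7", "s8"),        # enjoyable, interact
}

for name, (a, b) in CATEGORIES.items():
    r, p = pearsonr(ratings[a], ratings[b])       # correlation between the paired statements
    ratings[name] = ratings[[a, b]].mean(axis=1)  # average the pair into one category score
    print(f"{name}: r = {r:.2f} (p = {p:.2g})")
```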
B. Data processing
Next, we checked whether our data were normally dis-
tributed within each category for each video condition. We
[Figure 4 consists of four box-and-whisker panels, one per category (agency, ambient activity, enjoyability, and touch perceptivity), each plotting the level of agreement (from -5 to 5) for the nine video conditions.]
Fig. 4. Box-and-whisker comparisons of combined, normalized user ratings of the video conditions. Lines above the boxplots mark pairwise
comparisons that are NOT significantly different. For each distribution, the central line indicates the median, the box shows the interquartile range
(IQR), the whiskers show the range up to 1.5 times the IQR, and + marks indicate outliers. The legend uses the abbreviations established in Table I.
TABLE II
THE RESULTS OF TWO-WAY RANOVA ANALYSIS FOR EACH CATEGORY.

Agency: mood F(2,100) = 59.06, p < 0.001; reaction F(2,100) = 131.86, p < 0.001; mood × reaction F(4,200) = 6.71, p < 0.001
Ambient activity: mood F(2,100) = 66.70, p < 0.001; reaction F(2,100) = 53.77, p < 0.001; mood × reaction F(4,200) = 3.62, p = 0.007
Enjoyability: mood F(2,100) = 52.80, p < 0.001; reaction F(2,100) = 122.61, p < 0.001; mood × reaction F(4,200) = 6.35, p < 0.001
Touch perceptivity: mood F(1.79, 89.53) = 31.71, p < 0.001; reaction F(1.42, 71.04) = 130.64, p < 0.001; mood × reaction F(3.19, 159.34) = 12.95, p < 0.001
found that not all of the responses were normally distributed.
For example, for the first video condition ••, in which the
robot gave no feedback at all, several participants rated
the statements with a 0, a polarized answer at the bottom
end of the provided rating scale. Therefore, we used the
logit transformation to increase the normality of our data
distributions:
logit(x) = ln( x / (1 - x) ).    (1)

Here, x is a value between 0 and 1, and the function
stretches out values at the two edges of this range. Since
the transformation is undefined for 0 and 1, prior to merging
the statements into their respective categories, any values
rated as 0 were changed to 0.1, and ratings of 10 were
changed to 9.9. The data were then divided by 10 before
applying the logit transformation; the numerical range of
our participants' responses therefore shifted from 0 to 10 to
a new range of -4.595 (logit(0.01)) to 4.595 (logit(0.99)).
Anderson-Darling tests confirmed that all four categories had
sufficiently normal data distributions after transformation.
Fig. 4 uses box-and-whisker plots to showcase the combined,
normalized responses of participants for the four statement
categories separated by video condition.
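For reference, the rating transformation described above can be written in a few lines of NumPy. This is a sketch of the procedure as described in the text, not the study's actual processing script.

```python
import numpy as np

def normalize_ratings(raw):
    """Map 0-10 slider ratings to more normally distributed logit scores.

    Exact endpoint ratings are clamped (0 -> 0.1, 10 -> 9.9), the data are
    rescaled to (0, 1), and the logit transform is applied, yielding values
    between -4.595 and 4.595.
    """
    x = np.asarray(raw, dtype=float)
    x = np.clip(x, 0.1, 9.9) / 10.0   # avoid logit(0) and logit(1)
    return np.log(x / (1.0 - x))      # logit(x) = ln(x / (1 - x))

print(normalize_ratings([0.0, 5.0, 10.0]))  # approximately [-4.595, 0.0, 4.595]
```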
C. RANOVA analysis
To prepare for repeated-measures analysis of variance
(RANOVA), we then checked the four categories of data for
sphericity. Mauchly's Test of Sphericity indicated that the
assumption of sphericity had not been violated for the agency
category (χ²(9) = 10.892, p = 0.283), the ambient activity
category (χ²(9) = 12.875, p = 0.169), or the enjoyability
category (χ²(9) = 4.746, p = 0.856). However, sphericity
was violated for the touch perceptivity category; therefore,
it was treated with a Greenhouse-Geisser correction (ε̂ =
0.797).
We then conducted four two-way RANOVAs. For all four
categories, there was a significant main effect of mood and
a significant main effect of reaction on the category’s rating,
and there was also always a significant interaction between
mood and reaction. We report the F statistics and p values
for the main effects and interactions in Table II.
For the agency and touch perceptivity categories, pairwise
comparisons across the mood levels and across the reactions
with Bonferroni correction (α = 0.0083) showed that there
were significant differences between all three mood levels
and all three reaction levels. For the ambient activity cate-
gory, there was no significant difference between the r and
R reaction levels. For the enjoyability category, there was no
significant difference between the m and M mood levels.
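One possible way to reproduce this kind of analysis in Python is sketched below using the pingouin package, assuming the category scores are stored in a long-format table with participant, mood, and reaction columns. The authors may well have used different statistical software; details such as applying the Greenhouse-Geisser correction to the touch perceptivity category would need to be handled as described in the text.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x mood level x reaction level,
# with the four (logit-transformed) category scores as columns.
df = pd.read_csv("category_scores.csv")

for dv in ["agency", "ambient_activity", "enjoyability", "touch_perceptivity"]:
    # Mauchly's test of sphericity on the per-participant means for the mood factor
    # (repeat with within="reaction" for the other factor).
    mood_means = df.groupby(["participant", "mood"], as_index=False)[dv].mean()
    print(dv, pg.sphericity(mood_means, dv=dv, subject="participant", within="mood"))

    # Two-way repeated-measures ANOVA: main effects of mood and reaction and their interaction.
    aov = pg.rm_anova(data=df, dv=dv, within=["mood", "reaction"],
                      subject="participant", detailed=True)
    print(aov[["Source", "F", "p-unc"]])

    # Bonferroni-corrected pairwise comparisons across the levels of each factor.
    posthoc = pg.pairwise_tests(data=df, dv=dv, within=["mood", "reaction"],
                                subject="participant", padjust="bonf")
```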
D. Posthoc testing
After looking at the two independent variables separately
with RANOVA testing, we conducted pairwise comparisons
of all nine conditions listed in Table I using Tukey’s Test.
All pairings had a significant difference unless otherwise
noted. The results of this posthoc testing can be seen in
Fig. 4. Since the pairwise comparisons were predominantly
statistically significant, we have chosen to highlight the pairs
that were not significantly different.
For all four categories, there were two pairwise compar-
isons for which the difference was not significant: mr vs.
Mr and mR vs. MR. This result indicates that so long as
the visible reactions were the same, differing visible mood
levels did not change how the user perceived the robot. Our
category-specific findings are as follows.
Agency: No additional non-significant comparisons.
Ambient activity: The pair r vs. R also had no significant
difference. If the robot did not perform a mood (i.e.,
no ambient activity) but reacted neutrally or emotionally
to touch, the type of reaction did not affect the user’s
impressions of the robot’s ambient activity.
Enjoyability: No additional non-significant pairs.
Touch perceptivity: There was also no significance be-
tween R vs. mR, nor between R vs. MR. Therefore,
there was no significant difference in this category between
any of the three conditions with the R reaction level. If the
robot performed emotional reactions, then the mood level did
not affect its touch perceptivity ratings.
V. QUALITATIVE RESULTS
Participants gave 500 qualitative short answers: 459 from
the nine video conditions and 41 from the optional closing
feedback text box. Their written responses provided rich
qualitative information that complemented our quantitative
results, as summarized below.
A. Lack of reaction to touch is unsettling and unintelligent
If there was no immediate reaction, participants felt that
the robot did not realize it had been touched, or they
thought it could not discern whether the touch was positive
or negative. Participants felt the robot was “unaware of
any differences in the types of touch” (Participant 41, or
P41) and that “it did not seem to understand the person’s
action” (P46). Furthermore, participants found the robot’s
behavior unsettling if the robot moved ambiently but did not
immediately react to touch. “I did not like that the robot did
not react at all to being slapped by the gloved hand,” said
P18. Participants called the lack of reaction “unsettling” (P1,
P12) and “unnatural” (P3, P26, P35). “It feels a lot more
lifeless when it doesn’t respond”, wrote P30.
B. Delayed reaction (i.e., mood) is better than no reaction
Participants really wanted the robot to react to touches
and became upset if the robot did not. This phenomenon
occurred in the conditions •• and m. However, it occurred
far less frequently in M. Instead, the participants credited
the emotional moods as very delayed responses to touch.
Rather than call the robot unsettling, several participants said
that the robot’s reactions (i.e., the emotional moods) were too
slow. P10 said the robot was “like a slow computer”. P20
wrote, “I liked the reactions to the hitting and the stroking.
They seemed to convey actual emotion. I didn’t like that the
reactions were delayed”.
C. Neutral behaviors perceived as curiosity and awareness
People provided unique comments for conditions with a
neutral mood and/or neutral reaction. Rather than the robot’s
emotions, these comments were focused on the robot’s level
of awareness about itself and its surroundings. Participants
felt that the robot “seem[ed] more aware about its envi-
ronment” (P11). They referred to the robot as “curious”
(P9, P28, P37), “naive” (P7), and “looking for” or “craving
interaction” (P35, P37). Some participants also took the
neutral reaction as a questioning gesture, with the robot
“look[ing] for an explanation as to why it was touched” (P3)
or “demanding for an answer” (P28). Some participants even
anthropomorphized what the robot might be thinking during
the mr condition, writing “Why did you do that? You woke
me up” (P14) and “Why did you hit me?!? Let me think,
was it something that I did?” (P49).
D. Mood amplifies reaction and shows persisting memory
While participants strongly preferred when the robot
showed a reaction, having a mood further improved their
impressions of the robot. P10 echoed this sentiment, saying
“I liked that it reacted immediately, but I even liked it more
when it started doing some follow-up actions. It felt as if
the robot thought about what just happened a bit more and
became sadder or happy, just like humans.”
Participants also appreciated that an emotional mood
demonstrated that the touch interaction left a lasting im-
pression on the robot. Even in the Mcondition, with no
immediate reaction, P1 wrote, After being hit, its overall
mood changed to grumpy...I like that it seemed to remember
something about the person, which made it more realistic”.
When there was no mood, participants felt that the robot’s
behavior was less believable, with P10 calling it “a forgetful
robot” and P42 writing, “I don’t like that when being hit,
after some seconds [the robot] seems pretty good with it”.
E. Emotional reaction sounds were polarizing
The emotional reaction was the only reaction level with
sound. Interestingly, the sounds evoked rather polarizing
results. Some participants praised the sounds, calling them
“very endearing” (P30) and writing comments such as, “I
liked the sound! Cool that sounds without words can still be
so clearly understood” (P38) and “Wow! The sound adds so
much!” (P5). However, others wrote that they disliked the
audio, with P31 calling it “creepy” and P33 writing, “The
voice was [a] bit brash and loud...I wasn’t sure if it liked
to be touched or if it wanted me to leave it alone”. Some
participants also suggested that the sounds could be improved
to better represent their emotional intents. Likewise, the
emotional reaction movements could also be further tailored.
VI. DISCUSSION
A. Evaluating our hypotheses
As indicated in Fig. 3, we evaluated H1 using both the
agency and touch perceptivity categories and found that our
predictions about immediate reactions matched well with the
results. Participants most prefer an immediate emotional
reaction to touch. Within each mood level, the robot
was rated with the highest agency and touch perceptivity
values when it displayed emotional reactions. Additionally,
the emotional reactions received high touch perceptivity
ratings regardless of the mood level displayed. The emotional
reactions scored significantly higher than neutral reactions,
which in turn scored significantly higher than no reaction. In
our qualitative findings, participants perceived the robot as
unsettling and unable to understand touch if it did not react,
and they counted an emotional mood as a delayed emotional
reaction.
Similarly, we evaluated H2 using the agency and ambient
activity categories and found our predictions about ambient
moods to be partially correct. If the robot showed no reaction
to touch contacts, then the emotional mood scored higher
than neutral mood, which scored higher than no mood. If
the robot performed either neutral or emotional reactions
to touch, then having some visible mood, either neutral or
emotional, was rated significantly higher than conditions
with no mood at all. However, there was no significant
difference in ratings between the neutral and emotional
mood. Qualitatively, participants viewed neutral behaviors
as the robot being curious, and they noted that emotional
moods enabled the robot to show a lasting memory about
its touch interaction history. Therefore, we conclude that
demonstrating some mood, either neutral (perceived as
curious or exploratory) or emotional, is better than no
mood at all.
We evaluated H3 using the enjoyability category to mea-
sure engagement and interest. Within each mood level,
increasing reaction levels resulted in significantly higher
enjoyability. With respect to increasing mood levels, if there
was no reaction, then there was a significant difference
in ratings between all mood levels. If there was a visible
reaction (either neutral or emotional), then having a visible
mood was rated significantly higher than having no mood,
but there was no significant difference between the two
visible mood options. Participants most enjoyed the two
conditions mR and MR.
Overall, both the quantitative and qualitative results
strongly support the importance of a robot responding to
user touches on both shorter and longer time scales, using
both an immediate emotional reaction and a visible mood.
B. Limitations
Though promising results were achieved, this study is
not without limitations. First, some improvements could
be made regarding the use of sounds in this study. The
emotional reactions were accompanied by sounds, but the
emotional moods were not. This difference may have affected
participants’ ratings. Future studies could feature some happy
humming or sad mumbling for the emotional moods. How-
ever, the qualitative answers from participants show us that
the emotional moods were clearly understood even without
audio. Additionally, the sounds used for the emotional re-
action vocalizations could be refined. If participants did not
fully understand the emotional intent of the sounds, as P33
commented, then this ambiguity may have affected how they
rated the survey statements. Future studies could incorporate
a validation study to ensure the robot’s sounds are perceived
as intended.
Additionally, this study was conducted online rather than
in person due to COVID-19 restrictions. Using videos of the
robot prohibited direct physical interaction but provided us
with a very controlled study. Each participant saw and rated
the same emotional outputs in response to the same touch
stimuli, and we did not have to worry about an in-person
robot performing incorrectly during a trial. Participants were
all presented with the eight survey statements in the same
order, which could potentially have caused semantic bias.
However, we chose this consistent ordering to make the
survey as straightforward as possible for participants and
to reduce the risk of respondent fatigue. Using an online
approach also enabled us to recruit a wide range of diverse
participants. While this diversity helped us to gain an under-
standing of user perspectives across many ages, countries,
and robot expertise levels, not controlling these factors may
have produced biases in our results. For example, previous
research has found that users may react differently to robots
or have different interaction preferences according to their
age [28]. Future work could investigate whether the findings
we presented here still occur within narrower populations.
C. Implications for future research
In this study, the robot’s reaction to touch did not depend
on its current mood. However, this behavior could be made
even more realistic. As P49 pointed out, “After receiving the
headstroke, the robot seemed happier, but the movements
seemed a little exaggerated for someone who had just
received a slap a few seconds before”. One could change
the intensity of the robot’s emotional reaction based on the
current mood level. For example, a robot who had just been
slapped could react less enthusiastically to stroking than a
robot who had not recently received a negative touch.
Additionally, in this study, we used only three levels of
ambient mood behaviors (negative, neutral, and positive)
to present participants with distinct videos. In the emo-
tional mood condition M, the robot quickly changed moods
within one minute rather than over a slower, more realistic
timescale, in order to show a mood change in a short video.
To more closely imitate natural behavior, rather than repre-
senting mood using a set of discrete levels, we plan to utilize
a more continuous approach by modeling the robot’s internal
affective state as a second-order dynamic system [29]. This
approach will enable us to fine-tune parameters controlling
the robot’s personality and to customize how its mood
continuously changes over time.
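As a rough illustration of what such a continuous model might look like, the sketch below treats the robot's mood as a damped second-order system driven by the valence of recent touches; the parameter values, class interface, and decay rule are illustrative assumptions, not the model of [29].

```python
class SecondOrderMood:
    """Mood modeled as a damped second-order system.

    The natural frequency sets how quickly the mood responds to touch, and the
    damping ratio sets how strongly it settles back toward neutral; together
    they act like adjustable personality parameters.
    """

    def __init__(self, natural_freq=0.05, damping=1.0, dt=0.1):
        self.wn = natural_freq   # responsiveness (rad/s)
        self.zeta = damping      # 1.0 = critically damped (no emotional overshoot)
        self.dt = dt             # simulation time step (s)
        self.mood = 0.0          # current mood in [-1, 1]; 0 is neutral
        self.rate = 0.0          # current rate of mood change
        self.drive = 0.0         # input shaped by recent touch valence

    def touch(self, valence):
        """Register a touch; valence in [-1, 1] (e.g., hit = -1.0, pet = +1.0)."""
        self.drive = valence

    def step(self):
        # x'' + 2*zeta*wn*x' + wn^2 * x = wn^2 * u, integrated with explicit Euler.
        accel = self.wn ** 2 * (self.drive - self.mood) - 2 * self.zeta * self.wn * self.rate
        self.rate += accel * self.dt
        self.mood += self.rate * self.dt
        self.drive *= 0.999      # let the influence of the last touch slowly fade
        return self.mood

# Example: a hit at 13 s and a pet at 43 s, mirroring the timeline of the study videos.
model = SecondOrderMood()
trace = []
for i in range(900):             # simulate 90 s at 0.1 s steps
    if i == 130:
        model.touch(-1.0)        # experimenter hits the robot
    elif i == 430:
        model.touch(+1.0)        # experimenter pets the robot
    trace.append(model.step())
```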
Finally, while this study was a demonstration that utilized
videos, we look forward to conducting future studies with
in-person participants. Using the externally mounted tactile
sensors designed by Burns et al. [25], we plan to conduct
an in-person study in which a NAO robot uses our afore-
mentioned emotion model to display immediate reactions
and longer-term moods in response to a variety of common
social-touch gestures.
In summary, to investigate what impact robot reactions and
moods play in user perceptions of robots receiving social
touch, we conducted a video study with nine conditions.
Indeed, the participants gave us a clear message: they prefer
robots that respond to touch with immediate emotional
reactions, and with visible moods between touches. In order
to increase user interest and promote understanding of the
robotic system, the robot should update its behavior based on
the touch input it receives from its social interaction partner,
the user. Having a robot react to touch in an emotional
and customizable way, both with an immediate response and
through a longer-term change in its mood, has the potential
to promote a plethora of new experiences for human-robot
interaction.
ACKNOWLEDGMENTS
The authors thank Sophia Haass for programming the
robot’s animations, Joey Burns for his help with the survey
and supplementary video, and the International Max Planck
Research School for Intelligent Systems (IMPRS-IS) for
supporting Rachael Bevill Burns.
REFERENCES
[1] G. Gordon, S. Spaulding, J. K. Westlund, J. J. Lee, L. Plummer,
M. Martinez, M. Das, and C. Breazeal, “Affective personalization of
a social robot tutor for children’s second language skills,” in Proc.
AAAI Conf. on Artificial Intelligence, vol. 30. Phoenix, USA: AAAI
Press, 2016, pp. 1–7.
[2] R. Pakkar, C. Clabaugh, R. Lee, E. Deng, and M. J. Matarić,
“Designing a socially assistive robot for long-term in-home use for
children with autism spectrum disorders,” in Proc. IEEE Int. Symp.
on Robot and Human Interactive Communication (RO-MAN). New
Delhi, India: IEEE, 2019, pp. 1–7.
[3] I. Leite, C. Martinho, A. Pereira, and A. Paiva, “As time goes by: long-
term evaluation of social presence in robotic companions,” in Proc.
IEEE Int. Symp. on Robot and Human Interactive Communication
(RO-MAN). Toyama, Japan: IEEE, 2009, pp. 669–674.
[4] B. Scassellati, L. Boccanfuso, C.-M. Huang, M. Mademtzi, M. Qin,
N. Salomons, P. Ventola, and F. Shic, “Improving social skills in
children with ASD using a long-term, in-home social robot,” Science
Robotics, vol. 3, no. 21, pp. 1–9, 2018.
[5] C. Breazeal, “Role of expressive behaviour for robots that learn
from people,” Philosophical Trans. of the Royal Society B: Biological
Sciences, vol. 364, no. 1535, pp. 3527–3538, 2009.
[6] F. Cavallo, F. Semeraro, L. Fiorini, G. Magyar, P. Sinčák, and P. Dario,
“Emotion modelling for social robotics applications: a review,” Journal
of Bionic Engineering, vol. 15, no. 2, pp. 185–203, 2018.
[7] S. Jeong, D. E. Logan, M. S. Goodwin, S. Graca, B. O’Connell,
H. Goodenough, L. Anderson, N. Stenquist, K. Fitzpatrick, M. Zisook,
et al., “A social robot to mitigate stress, anxiety, and pain in hospital
pediatric care,” in Comp. ACM/IEEE Int. Conf. on Human-Robot
Interaction (HRI). New York, USA: IEEE, 2015, pp. 103–104.
[8] H. Javed, R. Burns, M. Jeon, A. M. Howard, and C. H. Park, “A robotic
framework to facilitate sensory experiences for children with autism
spectrum disorder: A preliminary study,” ACM Trans. on Human-
Robot Interaction (THRI), vol. 9, no. 1, pp. 1–26, 2019.
[9] M. Tielman, M. Neerincx, J.-J. Meyer, and R. Looije, “Adaptive
emotional expression in robot-child interaction,” in Proc. ACM/IEEE
Int. Conf. on Human-Robot Interaction (HRI). Bielefeld, Germany:
IEEE, 2014, pp. 407–414.
[10] R. Kirby, J. Forlizzi, and R. Simmons, “Affective social robots,”
Robotics and Autonomous Systems, vol. 58, no. 3, pp. 322–332, 2010.
[11] N. T. Fitter and K. J. Kuchenbecker, “How does it feel to clap hands
with a robot?” Int. J. Social Robotics, vol. 12, no. 1, pp. 113–127,
2020.
[12] H. Lehmann, A. Rojik, K. Friebe, and M. Hoffmann, “Hey, robot! An
investigation of getting robot’s attention through touch,” in Proc. Int.
Conf. on Social Robotics (ICSR). Springer, 2023, pp. 388–401.
[13] M. Shiomi, T. Minato, and H. Ishiguro, “Subtle reaction and response
time effects in human-robot touch interaction,” in Proc. Int. Conf. on
Social Robotics (ICSR). Springer, 2017, pp. 242–251.
[14] A. E. Block, S. Christen, R. Gassert, O. Hilliges, and K. J. Kuchen-
becker, “The six hug commandments: Design and evaluation of a
human-sized hugging robot with visual and haptic perception,” in Proc.
ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI). New York,
NY, USA: ACM, Mar. 2021, pp. 380–388.
[15] A. E. Block, H. Seifi, O. Hilliges, R. Gassert, and K. J. Kuchenbecker,
“In the arms of a robot: Designing autonomous hugging robots with
intra-hug gestures,” ACM Trans. on Human-Robot Interaction (THRI),
vol. 12, no. 2, pp. 1–49, 2022.
[16] B. D. Argall and A. G. Billard, “A survey of tactile human–robot
interactions,” Robotics and Autonomous Systems, vol. 58, no. 10, pp.
1159–1176, 2010.
[17] R. Stock-Homburg, “Survey of emotions in human–robot interactions:
Perspectives from robotic psychology on 20 years of research,” Int. J.
Social Robotics, vol. 14, pp. 389–411, 2022.
[18] L. Moshkina, S. Park, R. C. Arkin, J. K. Lee, and H. Jung, “TAME:
Time-varying affective response for humanoid robots,” Int. J. of Social
Robotics, vol. 3, no. 3, pp. 207–221, 2011.
[19] L. Moshkina and R. C. Arkin, “Human perspective on affective robotic
behavior: a longitudinal study,” in Proc. IEEE/RSJ Int. Conf. on
Intelligent Robots and Systems (IROS). Edmonton, Canada: IEEE,
2005, pp. 1444–1451.
[20] L. Moshkina, “Improving request compliance through robot affect,” in
AAAI Conf. on Artificial Intelligence. Toronto, Canada: AAAI Press,
2012, pp. 2031–2037.
[21] R. B. Burns, H. Seifi, H. Lee, and K. J. Kuchenbecker, “A haptic em-
pathetic robot animal for children with autism,” in Comp. ACM/IEEE
Int. Conf. on Human-Robot Interaction (HRI). IEEE, 2021, pp. 1–4.
[22] T. Saito, T. Shibata, K. Wada, and K. Tanie, “Relationship between
interaction with the mental commit robot and change of stress reaction
of the elderly,” in Proc. IEEE Int. Symp. on Computational Intelligence
in Robotics and Automation, vol. 1. Kobe, Japan: IEEE, 2003, pp.
119–124.
[23] Y. S. Sefidgar, K. E. MacLean, S. Yohanan, H. M. Van der Loos, E. A.
Croft, and E. J. Garland, “Design and evaluation of a touch-centered
calming interaction with a social robot,” IEEE Trans. on Affective
Computing, vol. 7, no. 2, pp. 108–121, 2015.
[24] K. Isbister, P. Cottrell, A. Cecchet, E. Dagan, N. Theofanopoulou,
F. A. Bertran, A. J. Horowitz, N. Mead, J. B. Schwartz, and P. Slovak,
“Design (not) lost in translation: A case study of an intimate-space
socially assistive “robot” for emotion regulation,” ACM Trans. on
Computer-Human Interaction (TOCHI), vol. 29, no. 4, pp. 1–36, 2022.
[25] R. B. Burns, H. Lee, H. Seifi, R. Faulkner, and K. J. Kuchenbecker,
“Endowing a NAO robot with practical social-touch perception,”
Frontiers in Robotics and AI, vol. 9, no. 840335, pp. 1–17, Apr. 2022.
[26] N. Dael, M. Mortillaro, and K. R. Scherer, “Emotion expression in
body action and posture.” Emotion, vol. 12, no. 5, p. 1085, 2012.
[27] A. Beck, L. Cañamero, and K. A. Bard, “Towards an affect space
for robots to display emotional body language,” in Proc. IEEE Int.
Symp. on Robot and Human Interactive Communication (RO-MAN).
Viareggio, Italy: IEEE, 2010, pp. 464–469.
[28] M. Biswas, M. Romeo, A. Cangelosi, and R. B. Jones, “Are older
people any different from younger people in the way they want
to interact with robots? scenario based survey,” J. Multimodal User
Interfaces, vol. 14, no. 1, pp. 61–72, 2020.
[29] R. B. Burns and K. J. Kuchenbecker, “A lasting impact: Using second-
order dynamics to customize the continuous emotional expression of
a social robot,” Workshop paper presented at the HRI Workshop on
Lifelong Learning and Personalization in Long-Term Human-Robot
Interaction (LEAP-HRI), Stockholm, Sweden, Mar. 2023.