A Robot Counseling System
–What kinds of topics do we prefer to disclose to robots?–
Takahisa Uchida1,2, Hideyuki Takahashi1,2, Midori Ban3, Jiro Shimaya1,2,
Yuichiro Yoshikawa1,2and Hiroshi Ishiguro1,2
Abstract— Our research goal was to develop a robot counseling system. It is important for a counselor to promote clients' self-disclosure in order to reduce their feelings of anxiety. However, when the counselor is a human, clients sometimes hesitate to disclose sensitive topics due to embarrassment and self-esteem issues. We hypothesized that a robot counselor, on account of its unique kind of agency, could remove mental barriers between the counselor and the client and promote in-depth self-disclosure about negative topics. In this study, we prepared two robots (an android and a desktop robot) as robot counselors. First, in a preliminary experiment, we confirmed from the number of words spoken in self-disclosure that subjects willingly self-disclosed to these robots. Next, we conducted an experiment to verify whether subjects would expose more of their weaknesses to robots than to humans. The experimental results suggest that robots can draw out more self-disclosure about negative topics than a human counselor.
I. INTRODUCTION
The goal of this research was to construct a counseling system using robots. In counseling, counselors are required to reduce clients' anxiety and stress by interacting with them. To promote anxiety and stress reduction through dialogue, encouraging clients' self-disclosure can be an effective strategy. Self-disclosure is the act of revealing one's personal information to others [1]. Jourard argued that self-disclosure is a sign of a healthy personality [1]. Since then, various studies have found a correlation between indicators of mental health and self-disclosure (e.g., [2]). Furthermore, it has been reported that depression and physical symptoms are reduced by discussing matters related to negative events with others ([3], [4], [5]). In other words, encouraging self-disclosure about negative events greatly contributes to reducing clients' anxiety and stress.
However, clients do not always actively practice self-disclosure of negative topics. For example, it has been pointed out that disclosing personal events related to negative topics is usually perceived as lowering one's relative position with respect to the partner, because it exposes one's weaknesses [6]. Such negative topics therefore cannot always be spoken about openly, out of concern for losing face.
This work was supported by the JST Ishiguro ERATO Project
1Takahisa Uchida, Hideyuki Takahashi, Jiro Shimaya, Yuichiro Yoshikawa and Hiroshi Ishiguro are with the Graduate School of Engineering Science, Osaka University, Osaka, Japan uchida.takahisa@irl.sys.es.osaka-u.ac.jp, takahashi@irl.sys.es.osaka-u.ac.jp, shimaya.jiro@irl.sys.es.osaka-u.ac.jp, yoshikawa@irl.sys.es.osaka-u.ac.jp and ishiguro@irl.sys.es.osaka-u.ac.jp
2JST ERATO
3Midori Ban is with the Faculty of Psychology, Doshisha University, Kyoto, Japan ban@ams.eng.osaka-u.ac.jp
Fig. 1. Robots: (a) ERICA, (b) CommU
In this study, therefore, we consider using a robot to solve this dilemma. In the case of a human counselor, especially at a first meeting, clients are likely to avoid disclosing negative content, for example to avoid lowering their relative position. On the other hand, if the counselor is a robot, the client's impression that the robot is linked to his/her own social world, that is, the perceived closeness of the relationship between the robot and the client, may be extremely small. For these reasons, we hypothesized that by adopting a robot as a counselor, clients may be able to expose their own weaknesses with less hesitation than with a human counselor, and thus that the robot counselor could effectively encourage clients' self-disclosure of negative topics.
In this study, we conducted two experiments. In the first experiment (preliminary investigation), we quantified the amount of self-disclosure to a human, to robots, and to a loudspeaker by counting clients' spoken words, in order to confirm the validity of the two prepared counseling robots. In the second experiment (main investigation), we verified the hypothesis that people expose more of their own weaknesses to robots than to humans. As robot counselors, we prepared two different types of robots: a female android (Fig. 1(a)) and a cute, small child-like robot (Fig. 1(b)). We then verified whether the difference between the counseling robots affected the preferred topics of clients' self-disclosure. To investigate the relation between the kind of self-disclosure and the kind of agent, we used questionnaires about a variety of self-disclosure topics and about the agents' impressions. Finally, based on these experiments, we discuss what kinds of aspects work effectively for constructing a counseling system using robots.
2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, Aug 28 - Sept 1, 2017. 978-1-5386-3517-9/17/$31.00 ©2017 IEEE
II. PRELIMINARY EXPERIMENT
The aim of this experiment was to verify whether subjects would actually self-disclose, to some extent, to the robots we used.
A. Condition
We prepared four conditions: a human condition, an android condition, a small robot condition, and a sound only condition. Five people (three male, two female, average age 19.2) participated in the human condition, five (four male, one female, average age 19.2) in the android condition, six (three male, three female, average age 22.2) in the small robot condition, and four (one male, three female, average age 18.3) in the sound only condition. Each agent asked some questions, and the subjects answered them.
In the android condition, we used ERICA [8] (Fig. 1(a)). As ERICA speaks, it moves its lips, head, and torso in synchrony with the prosodic features of the voice. ERICA's lip, head, and torso movements are automatically generated from its voice (using the systems developed by [9] and [10]). In the small robot condition, we used CommU (Fig. 1(b)), a desktop robot (about 30 cm tall) with a cute, child-like body. It is equipped with speakers in its chest, and it opens and closes its mouth while it utters words. We used a loudspeaker in the sound only condition.
For the voices of the agents other than the human, we used speech synthesis software. In the android and sound only conditions, we used VOICE TEXT ERICA by HOYA CORPORATION1 to utter words. In the small robot condition, we used AITalk Chihiro2.
B. Procedure
At first, the subjects faced one of the four agents across the table, as shown in Fig. 2, and after an appropriate greeting, the agent asked five questions inviting the subjects' self-disclosure. In this experiment, we adopted a between-subjects design. From the question items created by Niwa [11] for Japanese people, we selected two items from the question group on hobbies (Level I), which has a low self-disclosure difficulty level, and three from the group on the subject's negative character and ability (Level IV), which has a high difficulty level. The agents informed the subjects that they could refuse to answer the questions at any time. The question items are shown below.
1http://voicetext.jp/
2http://www.ai-j.jp/
Fig. 2. The four conditions
Fig. 3. Length of self-disclosure (number of characters) in the Human, ERICA, CommU, and Sound only conditions
Level I: Hobby
Favorite things
How to spend your holidays
Events that you are looking forward to
Experiences that you enjoyed lately
What you have been crazy about recently
Your hobby
What you would like to do as a hobby
Level IV: Negative personality and ability
Experiences wherein you hurt somebody
Aspects of your personality that you dislike
Experiences that you hated because of your inability
Worries about your ability
Experiences wherein a goal could not be achieved because of a lack of capacity
Feelings of inferiority about your ability
Disappointing experiences because of your limited ability
C. Result
Fig. 3 shows the amount of utterances (the number of characters) that the subjects made when answering each agent's questions about self-disclosure. The error bars show the standard error of the mean.
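The per-condition means and error bars in Fig. 3 can be reproduced from raw character counts. The sketch below (with illustrative values, not the paper's data) computes the mean and the standard error of the mean for each condition.

```python
import math

def mean_and_sem(samples):
    """Return the mean and the standard error of the mean (SEM)."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical character counts per subject and condition (placeholders).
counts = {
    "Human": [320, 280, 350, 300, 310],
    "ERICA": [290, 260, 310, 270, 300],
    "CommU": [250, 230, 280, 240, 260, 270],
    "Sound only": [120, 100, 140, 110],
}
for condition, xs in counts.items():
    m, sem = mean_and_sem(xs)
    print(f"{condition}: mean = {m:.1f}, SEM = {sem:.1f}")
```

The SEM (sample standard deviation divided by the square root of the group size) is what the error bars in Fig. 3 represent.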
D. Discussion
As shown in Fig. 3, the amount of speech uttered by the subjects decreased in the order of the human condition, the android condition, the small robot condition, and the sound only condition. Moreover, the speech quantity in the sound only condition was particularly small. From these results, it can be said that robots can draw out a certain amount of self-disclosure, comparable to that drawn out by humans.
III. EXPERIMENT 2
The purpose of this experiment was to verify the hypothesis that clients tend to expose more of their own weak points to robots than to human counselors, and that robots can draw out clients' self-disclosure about negative topics. In Experiment 2, based on the results of the preliminary experiment, we evaluated only the human, android, and small robot conditions, eliminating the sound only condition.
A. Condition
Fifteen subjects (seven male, eight female, average age 20.5) participated in this experiment. In the human, android, and small robot conditions, we used the same agents, systems, and speech synthesis for each robot as in the preliminary experiment.
B. Procedure
At first, the subjects faced the human, the android, and the small robot as shown in Fig. 4, and then received greetings from each agent in turn. We randomized the positions of the agents and the order of the greetings. The scripts are described below.
Human: Nice to meet you. My name is Midori. Thank
you very much.
Android: Nice to meet you. My name is ERICA.
Thank you very much.
Small Robot: Nice to meet you. My name is CommU.
Thank you very much.
Thereafter, the subjects were asked to go to a separate room and respond to questionnaires. The questionnaires were divided into two types: in the first, the subjects answered about their impressions of each agent, and in the second, they selected the agent with which they wanted to talk about each of 45 given self-disclosure items.
Fig. 4. Scene of Experiment 2
Fig. 5. The IOS scale (degree of overlap between Self and Other, rated from 1 to 7)
Fig. 6. The Leader-Follower scale (rated from 1 to 7)
We also measured impressions of the agents to investigate how the felt closeness of the relationship with each agent influenced attitudes toward self-disclosure. Namely, we used the IOS scale (the Inclusion of Other in the Self scale) [12], a Leader-Follower scale, and the Agency scale [13]. The IOS scale is a measure of the closeness of the relationship between self and other (Fig. 5). The Leader-Follower scale is a measure of the leader-follower relationship between a subject and an agent. As shown in Fig. 6, each illustration depicts the positional relationship between the subject and the agent: number four means an equivalent positional relationship, number one means that the agent is in front of the subject, and number seven means that the agent is behind the subject. The Agency scale is an impression evaluation scale (Fig. 7) that maps a target agent on two separate axes, an intellectual level and an emotional level. From the Agency scale, we can learn what kind of agency the agent has.
We used the 45 topics listed in the Enomoto Self-Disclosure Questionnaire-45 (ESDQ-45) [14] as the self-disclosure topics; these are representative self-disclosure contents for Japanese people. For each topic, the subjects were asked to select, from the three agents (the human, the android, and the small robot), the one they preferred as the interlocutor to whom the topic would be disclosed. Examples of the 45 topic items are shown below.
Fig. 7. Agency Scale
Fig. 8. Results of the IOS scale and the Leader-Follower scale (Human, Android, CommU)
Studying interests, Current goal, Things about living
and fulfillment, Hobby, Efforts to improve appear-
ance appeal, Badly injured experiences, Troubles in
friendship, Emptiness and anxiety in life, Loneliness
and alienation, Complaints and dissatisfaction with
society, Opinions on literature and art, Professional
appropriateness, Sports sense, Things to ask a friend,
Past love affairs experience.
C. Result
First, the results of the three impression evaluation questionnaires (the IOS scale, the Leader-Follower scale, and the Agency scale) are shown in Fig. 8 and Fig. 9.
In addition, 10 subjects (two male, eight female, average age 20.4), different from those of Experiment 2, evaluated the degree of positivity of each topic (Enomoto's 45 topics) [14]. We categorized the top 15 topics as positive, the middle 15 as neutral, and the bottom 15 as negative. Fig. 10 shows the proportion with which each agent was selected for the disclosure of the negative, neutral, and positive topics.
Fig. 9. Result of the Agency scale
Fig. 10. Selected proportion of each agent (Human, Android, CommU) for the negative, neutral, and positive topic categories
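The categorization step described above, ranking the 45 topics by rated positivity and splitting them into thirds, can be sketched as follows. The topic names and scores below are placeholders, not the actual ESDQ-45 items or ratings.

```python
def categorize(topic_ratings):
    """Split topics ranked by mean positivity into positive / neutral / negative thirds."""
    ranked = sorted(topic_ratings, key=topic_ratings.get, reverse=True)
    third = len(ranked) // 3
    return {
        "positive": ranked[:third],           # top 15 of 45
        "neutral": ranked[third:2 * third],   # middle 15
        "negative": ranked[2 * third:],       # bottom 15
    }

# Hypothetical mean positivity rating per topic (placeholder values).
ratings = {f"topic{i:02d}": 45 - i for i in range(45)}
cats = categorize(ratings)
print(len(cats["positive"]), len(cats["neutral"]), len(cats["negative"]))  # 15 15 15
```

Each agent's selection counts can then be tallied within each of the three categories, which is what Fig. 10 plots as proportions.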
D. Discussion
Regarding the impression evaluation of each agent, as shown in Fig. 8, the relationship distance felt by the subjects on the IOS scale was nearest for the human, while the distances for the android and the small robot were almost the same. In addition, on the Leader-Follower scale, the subjects and the android had an equivalent relationship, the human had a slightly higher degree of Leader, and CommU had a higher degree of Follower. Also, looking at Fig. 9, we observed that the android had a low emotion score but a high intelligence score, while the human and the small robot had high emotion but low intelligence scores.
Next, after classifying the self-disclosure items into the three categories of positive, neutral, and negative, we analyzed how often each agent was selected as the counselor (Fig. 10). The human was selected for a large percentage of the positive topics, and the proportion of robot selections increased as the topics became neutral and negative. Therefore, in order to investigate
which topics of self-disclosure (positive, neutral, or negative) were preferred in the robot conditions (the android and small robot conditions), we carried out an analysis of variance (ANOVA). A significant difference was confirmed at the significance level of p = 0.05, so post-hoc tests were conducted using the Ryan method. The result is shown in Fig. 11. It confirms that there was a tendency to self-disclose negative and neutral contents to the robots.
Fig. 11. Preferred category in the robot conditions (**p < .01)
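The omnibus test above can be illustrated with a minimal one-way ANOVA F statistic computed by hand. The data below are hypothetical (per-subject counts of robot-directed selections per category, not the paper's data), and the Ryan post-hoc step is omitted.

```python
def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-subject selection counts for the three topic categories.
negative = [10, 11, 9, 12, 10]
neutral = [9, 10, 8, 11, 9]
positive = [4, 5, 3, 6, 4]
print(f"F = {one_way_anova_f(negative, neutral, positive):.2f}")
```

The resulting F would be compared against the critical value for (2, 12) degrees of freedom at p = 0.05; a significant result licenses pairwise post-hoc comparisons such as the Ryan method used in the paper.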
Furthermore, regarding self-disclosure of negative topics, the results for the categories of emotional aspects classified in advance by the ESDQ-45 [14] (Fig. 12) revealed that the small robot had the highest score for "experiences that badly hurt the heart" and "points that seem emotionally immature," while the android had the highest score for "jealous experiences." Among the self-disclosure items on negative topics, the items on the emotional aspect showed that the robots could draw out self-disclosure more effectively than the human. A possible reason could be the distance between the robots and the subjects, as indicated by the robots' IOS scores and their degree of Follower scores. That is, an impression that the robots are positioned below oneself may help draw out self-disclosure about one's emotional, that is, weaker, aspects.
E. Future Work
Finally, we discuss the limitations of this research. In the experiments, we compared the human, the android, and the small robot assuming a first-meeting scenario. In other words, the evaluation related to self-disclosure was based only on first impressions. In order to build a robot counseling system, it is important to consider not only the first impression but also what kinds of dialogue the robots produce. In the future, we would like to conduct further research on what kind of dialogue the robot should utter as a counselor.
In addition, the results of this experiment are insufficient to conclude what kinds of differences in characteristics existed between the android and the small robot. The results of the Agency scale (Fig. 9) revealed that the intelligence index was higher for the android than for the small robot and that the emotion index was higher for the small robot than for the android, but the relationship with the self-disclosure items was not clear. In the future, we would also like to conduct research on the differences between robot characteristics.
Fig. 12. Emotional aspect: selected proportion of each agent
Furthermore, in this experiment, we adopted a female experimenter as the human agent. The result of the human condition may therefore be peculiar to this experiment. Going forward, it is necessary to explore other human conditions as well (for example, male counselors).
Finally, in Experiment 2, the subjects were required to choose one agent to whom they would disclose something about each topic. Thus, it cannot be determined whether a subject actively wanted to talk or not. In future experiments, it is therefore necessary to evaluate not only which agent the subjects want to talk to but also their willingness to discuss the topics.
IV. CONCLUSION
In this paper, with the aim of introducing robots for counseling, we conducted experiments with a human, an android, and a small robot as counselors, and examined the extent of study participants' self-disclosure across topics. As a result, we showed that robot counselors can potentially extract an amount of self-disclosure similar to that of human counselors. In addition, analysis by self-disclosure item confirmed a higher tendency to disclose negative topics to the robots than to the human, especially for items concerning emotions. In the future, we would like to conduct a survey that takes into consideration the subjects' willingness to discuss and the importance of the self-disclosed topics, and to clarify what kinds of differences are likely to appear depending on the type of robot.
ACKNOWLEDGMENT
This research was supported by the Japan Science and
Technology Agency, ERATO ISHIGURO Symbiotic Human-
Robot Interaction Project.
REFERENCES
[1] S. M. Jourard, “Self-disclosure: An experimental analysis of the
transparent self.” 1971.
[2] P. C. Cozby, “Self-disclosure: a literature review,” Psychological Bulletin, vol. 79, no. 2, p. 73, 1973.
[3] S. Cohen and T. A. Wills, “Stress, social support, and the buffering
hypothesis.” Psychological bulletin, vol. 98, no. 2, p. 310, 1985.
[4] J. W. Pennebaker and S. K. Beall, “Confronting a traumatic event: to-
ward an understanding of inhibition and disease.” Journal of abnormal
psychology, vol. 95, no. 3, p. 274, 1986.
[5] R. L. Silver and C. B. Wortman, “Coping with undesirable life events,”
Human helplessness: Theory and applications, vol. 279, p. 375, 1980.
[6] E. Hatfield, “The dangers of intimacy,” Communication, Intimacy, and Close Relationships, pp. 207–220, 1984.
[7] L. J. Wood, K. Dautenhahn, A. Rainer, B. Robins, H. Lehmann, and
D. S. Syrdal, “Robot-mediated interviews-how effective is a humanoid
robot as a tool for interviewing young children?” PloS one, vol. 8,
no. 3, p. e59448, 2013.
[8] D. F. Glas, T. Minato, C. T. Ishi, T. Kawahara, and H. Ishiguro, “Erica:
The erato intelligent conversational android,” in Robot and Human
Interactive Communication (RO-MAN), 2016 25th IEEE International
Symposium on. IEEE, 2016, pp. 22–29.
[9] C. T. Ishi, C. Liu, H. Ishiguro, and N. Hagita, “Evaluation of formant-
based lip motion generation in tele-operated humanoid robots,” in
Proc. of the IEEE/RSJ International Conference on Intelligent Robots
and Systems, 2012, pp. 2377–2382.
[10] K. Sakai, T. Minato, C. T. Ishi, and H. Ishiguro, “Speech driven
trunk motion generating system based on physical constraint,” in Robot
and Human Interactive Communication (RO-MAN), 2016 25th IEEE
International Symposium on. IEEE, 2016, pp. 232–239.
[11] S. Niwa and S. Maruno, “Development of a Scale to Assess the Depth
of Self-disclosure,” Personality Research, vol. 18, no. 3, pp. 196–209,
2010. (in Japanese)
[12] A. Aron, E. N. Aron, and D. Smollan, “Inclusion of other in the
self scale and the structure of interpersonal closeness.” Journal of
personality and social psychology, vol. 63, no. 4, p. 596, 1992.
[13] H. Takahashi, M. Ban, and M. Asada, “Semantic differential scale method can reveal multi-dimensional aspects of mind perception,” Frontiers in Psychology, vol. 7, 2016.
[14] H. Enomoto, Psychological Study of Self-Disclosure. Kitaoji Shobo, 1997. (in Japanese)
212
... Whether the listener is a human or robot, the actual amount of self-disclosure did not differ [18]. In fact, robots were preferred to humans for self-disclosure of negative emotional topics [19]. ...
... For women, there was no difference in their willingness to self-disclose and the amount of actual disclosure, regardless of whether the listeners were robots or humans [18]. Furthermore, it has been reported that robots are preferred over humans for negative and emotional topics [19]. Bethel et al. [34] interviewed school students about their experience of bullying using robots. ...
... This was referred to as "self-disclosure of integrated life experience" in [24]. In addition, people treat robots as communication partners and self-disclose to them, as they would humans [18,19,34,35]. Thus, we propose the following hypotheses: ...
Article
Full-text available
Self-disclosure of life experiences from the viewpoint of integrity is considered beneficial to the psychological health of older adults. It has been shown that people tend to self-disclose more to people they like. Compared to a consistent invariant reward, an improvement in the rewarding behavior of a person has been shown to have a greater positive impact on an individual’s liking for the person. Based on these previous studies, we explored the psychological impact of self-disclosure of integrated life experiences on the elderly and the effect of the change in the robot’s listening attitude on the elderly’s self-disclosure. We conducted an experiment in which 38 elderly participants were asked to self-disclose their life experiences to a robot for approximately 20 min. The participants interacted with either a robot with a consistently positive listening attitude or a robot that initially had a neutral listening attitude that changed to a positive listening attitude. The results showed that self-disclosure of integrated life experiences to the robot had a psychological impact on improving self-esteem. In addition, changes in the robot’s listening attitude were found to promote self-disclosure and enhance its impact on self-esteem.
... The attribution of opinions (i.e., the ability to make judgments) may indicate that appearance influences intelligence. Previous studies also reported that the appearance of robots and agents affects the impressions of humans (Uchida et al., 2017b). The results of this study may contribute to the clarification of the relationship between them. ...
Article
Full-text available
In recent years, the development of robots that can engage in non-task-oriented dialogue with people, such as chat, has received increasing attention. This study aims to clarify the factors that improve the user’s willingness to talk with robots in non-task oriented dialogues (e.g., chat). A previous study reported that exchanging subjective opinions makes such dialogue enjoyable and enthusiastic. In some cases, however, the robot’s subjective opinions are not realistic, i.e., the user believes the robot does not have opinions, thus we cannot attribute the opinion to the robot. For example, if a robot says that alcohol tastes good, it may be difficult to imagine the robot having such an opinion. In this case, the user’s motivation to exchange opinions may decrease. In this study, we hypothesize that regardless of the type of robot, opinion attribution affects the user’s motivation to exchange opinions with humanoid robots. We examined the effect by preparing various opinions of two kinds of humanoid robots. The experimental result suggests that not only the users’ interest in the topic but also the attribution of the subjective opinions to them influence their motivation to exchange opinions. Another analysis revealed that the android significantly increased the motivation when they are interested in the topic and do not attribute opinions, while the small robot significantly increased it when not interested and attributed opinions. In situations where there are opinions that cannot be attributed to humanoid robots, the result that androids are more motivating when users have the interests even if opinions are not attributed can indicate the usefulness of androids.
... To illustrate, selfdisclosure in a chat with a chatbot or a person result in similar positive emotional, relational and psychological outcomes [46]. The same might be true for robots, e.g., related to relief of distress through self-disclosure to a robot [45], even though this still has to be investigated more thoroughly. In one case, people who experienced strong negative affect through a negative mood induction benefitted more from talking to a robot compared to just writing their thoughts and feelings down [47]. ...
Article
Full-text available
When encountering social robots, potential users are often facing a dilemma between privacy and utility. That is, high utility often comes at the cost of lenient privacy settings, allowing the robot to store personal data and to connect to the internet permanently, which brings in associated data security risks. However, to date, it still remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. We hypothesized that strict privacy settings compared to lenient privacy settings of a social robot would result in more favorable attitudes and behavioral intentions towards the robot in Experiment 1. For Experiment 2, we expected more favorable attitudes and behavioral intentions for choosing independently the robot’s privacy settings in comparison to evaluating preset privacy settings. However, those two manipulations seemed to influence attitudes towards the robot in diverging domains: While strict privacy settings increased trust, decreased subjective ambivalence and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings seemed to primarily impact robot likeability, contact intentions and the depth of potential self-disclosure. Strict compared to lenient privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’, through making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce risks of robot-contact as much as strict privacy settings do. 
Future experiments should replicate these results using real-life human robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.
... For example, effective psychological therapies often require full disclosure of the patient's darkest fears and secrets, which can be difficult to achieve with human therapists as clients struggle with feelings of embarrassment or shame. Significantly, self-disclosure to a robot does not seem to evoke the same kind or extent of resistanceresearch has found that people engage in more self-disclosure, particularly on negative topics, interacting with robot therapists relative to human therapists (Takahashi, Takahashi, Ban, Shimaya, Yoshikawa, & Ishiguro, 2017). This may be why people have been confiding in and seeking mental health advice from ChatGPT, a popular AI language model chatbot released in late 2022 (Broderick, 2023). ...
Article
Full-text available
Although research in cultural psychology has established that virtually all human behaviors and cognitions are in some ways shaped by culture, culture has been surprisingly absent from the emerging literature on the psychology of technology. In this perspective article, we first review recent findings on machine aversion versus appreciation. We then offer a cross-cultural perspective in understanding how people might react differently to machines. We propose three frameworks – historical, religious, and exposure – to explain how Asians might be more accepting of machines than their Western counterparts. We end the article by discussing three exciting human–machine applications found primarily in Asia and provide future research directions.
... A mediator robot that delivers messages to family members in a way preferred by the elderly disclosers was shown to have the potential to suppress their anxiety about self-disclosure on loss experiences. Moreover, this study successfully demonstrated the effectiveness of the messaging options developed in Study 1. Self-disclosure in one-to-one human-robot interactions has been studied in HRI (e.g., [50][51][52]); however, few studies have been conducted on social robots that act as third-party mediators for human messaging. In the case of the social mediator robot discussed in this study, there is always a person behind the robot, and the user communicates with that person while being influenced by the (mediator) robot's personality and interaction capabilities. ...
Article
Full-text available
Encouraging the self-disclosure of the elderly is important for preventing their social isolation. In this article, we discuss a use case in which social robots are employed to mediate remote communication between elderly individuals and their family members or friends. This research aims to elaborate design guidelines for social mediator robots concerning how robots should convey messages from elderly individuals to their recipients. We particularly considered human–robot interactions in which elderly individuals can choose the robot’s behavior (i.e., messaging options) based on their preference. If the robot is implemented with effective messaging options, the elderly’s anxiety about self-disclosing information they usually feel reluctant to share with others (e.g., loss experiences) may be mitigated. An online survey of 589 elderly participants showed that the messaging options for the mediator robot should be designed in three types: requesting-support, concealing, and recording. The study results also suggest that each of the messaging options should be chosen according to the relationships between the factors of recipients, disclosers’ personal characteristics, and dialog topics. Furthermore, an empirical human–robot interaction study conducted with 36 elderly participants suggested that the anxiety of elderly disclosers was significantly lower when they could apply their preferable messaging options to self-disclosure than the case when the robot did not provide any messaging options to them. Thus, the effectiveness of the messaging options designed through this study was demonstrated.
Article
The goal of this study is to maintain dialogue motivation between users through the mediation of a dialogue robot. Previous studies proposed artificial agents that present dialogue topics to facilitate relationship-building among individuals for a short period. However, sustaining dialogue motivation between users using such agents has not been investigated. This study proposes an algorithm for presenting topics about other people to sustain dialogue motivation between users over extended periods. Specifically, we designed a dialogue robot that discusses topics of common preference, emphasizing high information content between users. To validate our approach, we applied the proposed algorithm to a dialogue robot and conducted experiments involving university and graduate students belonging to the same community. The results confirmed that our proposed method prevented their engagement in dialogue from decreasing even after one month of interaction with the robot. Additionally, topics with high information content were more likely to be remembered by users. In other words, these findings indicate that when a robot introduces a topic with high information content, participants might perceive it as unusual. This perception prompts them to retain the topic with clarity; as a consequence, it might enhance the participants' willingness to engage in dialogue with each other. Based on the findings of this research, using highly informative topics in dialogue can be an effective strategy for cultivating long-term relationships among individuals. This insight also emphasizes the potential role of robots in facilitating human relationship-building. Future studies need to examine the effectiveness of the proposed method with various communities.
Article
To improve the quality of life (QOL) of the elderly, we implemented a counseling robot that responds according to the results of AI-based QOL estimation during the interaction. Based on the results of a one-week interaction experiment with young and elderly participants, we concluded that the proposed method has the potential to improve the mental aspect of QOL, and the need for validation through large-scale experiments became clear.
Chapter
Mothers who give birth often face a mood disorder called postpartum or postnatal depression. It typically appears after the third week following the baby's birth; however, women can suffer from this condition at any time during the first year after delivery, and it can persist for a couple of years after birth. A few men, as fathers, can also face this condition. If it is not monitored immediately, it can trigger severe and permanent disorders such as anger issues, isolation, stress, or anxiety. A significant increase has been observed in postpartum depression incidents, with harmful consequences for both children and parents regarding their physical and emotional well-being. This research paper analysed the literature to evaluate the psychotherapies that can be followed as self-help. We also evaluated automated psychotherapy systems and meta-analysed mobile applications available online for coping with postpartum depression. We discussed the acceptability of a therapeutic mobile application for reducing depression during the parenting and postpartum period for the patients themselves. Finally, we proposed an algorithm combining cognitive behavioural therapy and interpersonal psychotherapy as a basis for developing a mobile application that can help control and reduce depression in postpartum situations. Keywords: Therapeutic mobile application; Postnatal depression; Postpartum; Digital cognitive behavioural therapy; Computerised interpersonal psychotherapy; Self-help; Self-therapy
Article
During the last decade, children have shown an increasing need for mental wellbeing interventions due to their anxiety and depression issues, which the COVID-19 pandemic has exacerbated. Socially Assistive Robotics have been shown to have a great potential to support children with mental wellbeing-related issues. However, understanding how robots can be used to aid the measurement of these issues is still an open challenge. This paper presents a narrative review of child-robot interaction (cHRI) papers (IEEE ROMAN proceedings from 2016–2021 and keyword-based article search using Google Scholar) to investigate the open challenges and potential knowledge gaps in the evaluation of mental wellbeing or the assessment of factors affecting mental wellbeing in children. We exploited the SPIDER framework to search for the key elements for the inclusion of relevant studies. Findings from this work (10 screened papers in total) investigate the challenges in cHRI studies about mental wellbeing by categorising the current research in terms of robot-related factors (robot autonomy and type of robot), protocol-related factors (experiment purpose, tasks, participants and user sensing) and data related factors (analysis and findings). The main contribution of this work is to highlight the potential opportunities for cHRI researchers to carry out measurements concerning children’s mental wellbeing.
Article
Full-text available
As humans, we tend to perceive minds in both living and non-living entities, such as robots. Using a questionnaire developed in a previous mind perception study, the authors found that perceived minds could be located on two dimensions, "experience" and "agency." This questionnaire allowed the assessment of how we perceive the minds of various entities from a multi-dimensional point of view. In this questionnaire, subjects had to evaluate explicit mental capacities of target characters (e.g., the capacity to feel hunger). However, we sometimes perceive minds in non-living entities even though we cannot attribute these evidently biological capacities to them. In this study, we performed a large-scale web survey to assess mind perception using the semantic differential scale method. We revealed that two mind dimensions, "emotion" and "intelligence," respectively corresponded to the two mind dimensions (experience and agency) proposed in the previous mind perception study. We did this without having to ask about specific mental capacities. We believe that the semantic differential scale is a useful method for assessing the dimensions of mind perception, especially for non-living entities to which biological capacities are difficult to attribute.
Conference Paper
Full-text available
The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, featuring advanced sensing and speech synthesis technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
Article
Full-text available
Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ from their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an 'interviewer' for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications.
Article
Full-text available
In 2 studies, the Inclusion of Other in the Self (IOS) Scale, a single-item, pictorial measure of closeness, demonstrated alternate-form and test–retest reliability; convergent validity with the Relationship Closeness Inventory (E. Berscheid et al, 1989), the R. J. Sternberg (1988) Intimacy Scale, and other measures; discriminant validity; minimal social desirability correlations; and predictive validity for whether romantic relationships were intact 3 mo later. Also identified and cross-validated were (1) a 2-factor closeness model (Feeling Close and Behaving Close) and (2) longevity–closeness correlations that were small for women vs moderately positive for men. Five supplementary studies showed convergent and construct validity with marital satisfaction and commitment and with a reaction-time (RT)-based cognitive measure of closeness in married couples; and with intimacy and attraction measures in stranger dyads following laboratory closeness-generating tasks.
Conference Paper
Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
Article
No self-disclosure scale has previously been developed that can examine how deeply Japanese young people disclose themselves in interactions while building intimate relationships with others. In this study, we developed a self-disclosure scale that measures the depth of self-disclosure and, to examine its precision, conducted a questionnaire survey of 299 university students. The analysis confirmed that the scale (1) can measure four levels of self-disclosure of differing depth: hobbies (Level I), difficult experiences (Level II), non-critical faults and weaknesses (Level III), and negative personality traits and abilities (Level IV); (2) can sensitively discriminate differences in the depth of self-disclosure according to the discloser's relationship with the recipient; and (3) showed high correlations consistent with theoretical predictions from existing scales measuring affiliation motivation and psychological adjustment, confirming that it is a highly valid scale.