Flowchart for our self-disclosure AI chatbot.

Source publication
Article
Full-text available
Social robots may become an innovative means to improve the well-being of individuals. Earlier research has shown that people easily self-disclose to a social robot, even in cases where this was unintended by the designers. We report on an experiment comparing self-disclosure in a diary journal with self-disclosure to a social robot after negative mood induction. An...

Context in source publication

Context 1
... impression of the results is depicted in Figure 2, which shows what people worried most about. The complete set-up of the self-disclosure AI chatbot is shown in Figure 3. The sing, movie, poem and weather options were not used in the actual experiment. ...
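For illustration, the branching logic such a flowchart describes can be sketched as a small dialogue loop. Everything below (prompts, option names, function names) is a hypothetical reconstruction, not the authors' actual chatbot; it only mirrors the structure the context describes, including the disabled sing/movie/poem/weather branches.

# Hypothetical sketch of a flowchart-style dialogue loop; all prompts and
# names are illustrative assumptions, not the authors' implementation.

SELF_DISCLOSURE_PROMPTS = [
    "How are you feeling right now?",
    "What happened that made you feel this way?",
    "What might help you feel better?",
]

# Branches present in the flowchart but not used in the actual experiment.
UNUSED_OPTIONS = {"sing", "movie", "poem", "weather"}

def run_session() -> list[str]:
    """Walk the user through the self-disclosure branch of the flow."""
    disclosures = []
    for prompt in SELF_DISCLOSURE_PROMPTS:
        answer = input("Bot: " + prompt + "\nYou: ").strip()
        if answer.lower() in UNUSED_OPTIONS:
            # Disabled branch: acknowledge it and continue with the flow.
            print("Bot: That option was disabled for this study.")
            continue
        disclosures.append(answer)
        print("Bot: Thank you for sharing that with me.")
    return disclosures

if __name__ == "__main__":
    run_session()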

Similar publications

Article
Full-text available
Nowadays, with the ongoing COVID-19 epidemic outbreak, confinement for weeks was one of the most effective measures adopted to deal with the spread of the virus until a vaccine could become effective. Over that period, increased anxiety, depression, suicide attempts, and post-traumatic stress disorder accumulated. Several studies referred to the need of...

Citations

... Several studies address self-disclosure to social robots in single sessions [e.g., 3, 69-73]; however, few studies to date have addressed self-disclosure to robots in long-term settings [e.g., 74]. Previous studies describe that in single interactions, people's subjective perceptions of their self-disclosures to robots tend to align well with their actual, objectively measured disclosures. ...
... For example, James Pennebaker's writing disclosure paradigm [75,76] helps people process their emotions by writing about their own experiences. Previous studies have reported that people in a bad mood benefited more from disclosing to a robot than from writing disclosures in a journal [70] or on social media [77]. Another good example is affect labelling, a simple and implicit emotion regulation technique aimed at explicitly expressing emotions, or in other words, putting feelings into words [78]. ...
Article
Full-text available
While interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement might be sustained across time, since during initial interactions with a robot, its novelty is especially salient. This challenge is particularly noteworthy when considering interactions designed to support people’s well-being, with limited evidence (or empirical exploration) of social robots’ capacity to support people’s emotional health over time. Accordingly, our aim here was to examine how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and how such sustained interactions influence factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and report the robot to be more social and competent over time. Participants’ moods also improved after talking to the robot, and across sessions, they found the robot’s responses increasingly comforting as well as reported feeling less lonely. Finally, our results emphasize that when the discussion frame was supposedly more emotional (in this case, framing questions in the context of the COVID-19 pandemic), participants reported feeling lonelier and more stressed. These results set the stage for situating social robots as conversational partners and provide crucial evidence for their potential inclusion in interventions supporting people’s emotional health through encouraging self-disclosure.
... The same might be true for robots, e.g., related to relief of distress through self-disclosure to a robot [45], even though this still has to be investigated more thoroughly. In one case, people who experienced strong negative affect through a negative mood induction benefitted more from talking to a robot compared to just writing their thoughts and feelings down [47]. This result indicates that self-disclosure towards robots also serves the relief of distress motive. ...
Article
Full-text available
When encountering social robots, potential users are often facing a dilemma between privacy and utility. That is, high utility often comes at the cost of lenient privacy settings, allowing the robot to store personal data and to connect to the internet permanently, which brings in associated data security risks. However, to date, it still remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. We hypothesized that strict privacy settings compared to lenient privacy settings of a social robot would result in more favorable attitudes and behavioral intentions towards the robot in Experiment 1. For Experiment 2, we expected more favorable attitudes and behavioral intentions for choosing independently the robot’s privacy settings in comparison to evaluating preset privacy settings. However, those two manipulations seemed to influence attitudes towards the robot in diverging domains: While strict privacy settings increased trust, decreased subjective ambivalence and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings seemed to primarily impact robot likeability, contact intentions and the depth of potential self-disclosure. Strict compared to lenient privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’, through making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce risks of robot-contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.
... The same study [38] showed the benefits of employing social robots for minimising social tension and anxieties, describing that participants with higher social anxiety felt less anxious and demonstrated less tension when they knew they would interact with a robot rather than a human interlocutor. These results are in line with two studies that found that people in a bad mood benefited more from self-disclosing to a robot than from writing disclosures in a journal [39] or self-disclosing on social media [40]. Other emotional states might influence people's perceptions of and behaviours towards robots. ...
... Furthermore, consistent with previous results (e.g., [39], [40], [43]), our results suggest that participants who experienced negative emotional states, such as lower mood before the interaction and higher levels of loneliness, self-disclosed more towards the robot. These findings suggest that individuals may use social robots as a form of emotional outlet when experiencing negative emotional states. ...
Conference Paper
Self-disclosing to others can benefit emotional well-being, but socio-emotional barriers can limit people’s ability to do so. Self-disclosing towards social robots can help overcome these obstacles as robots lack judgment and can establish rapport. To further understand the influence of affective factors on people’s self-disclosure to social robots, this study examined the relationship between self-disclosure behaviour towards a social robot and people’s emotional states and their perception of the robot’s responses as comforting (i.e., being empathic). The study included 1160 units of observation collected from 39 participants who conversed with the social robot Pepper (SoftBank Robotics) twice a week for 5 weeks (10 sessions in total), answering three personal questions in each session. Results show that perceiving the robot’s responses as more comforting was positively related to self-disclosure behaviour (in terms of disclosure duration in seconds, and disclosure length in number of words), and negative emotional states, such as lower mood and higher feelings of loneliness and stress, were associated with higher rates of self-disclosure towards the robot. Additionally, higher rates of introversion significantly predicted higher rates of self-disclosure towards the robot. The study reveals the meaningful influence of affective states on how people behave when talking to social robots, especially when experiencing negative emotions. These findings may have implications for designing and developing social robots in therapeutic contexts.
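The study above relates repeated per-session disclosure measures to affective predictors. A minimal sketch of one plausible way to model such data is a linear mixed-effects regression with a random intercept per participant; the column names (participant, words, comfort, mood, loneliness, stress, introversion) and file name are assumptions for illustration, not the authors' actual analysis code.

# Illustrative sketch only (assumed column names, not the study's code):
# a linear mixed-effects model relating per-session disclosure length to
# affective predictors, with a random intercept per participant to account
# for the repeated-measures structure (39 participants x 10 sessions).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("disclosures.csv")  # long format: one row per observation

model = smf.mixedlm(
    "words ~ comfort + mood + loneliness + stress + introversion",
    data=df,
    groups=df["participant"],  # random intercept per participant
)
result = model.fit()
print(result.summary())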
... Self-disclosure after a bad experience is best served by social robots, more so than by other media such as writing or a WhatsApp (version 2.11.109) or WeChat (version 8.0.16) group [42,43]. Social robots can be perceived as the physical representation of AI teammates. ...
... Even though the level of self-disclosure can vary depending on factors such as perceived trustworthiness, the context of the interaction, and the type of information being disclosed, individuals are generally willing to disclose personal information to social robots, particularly when the robot is designed to provide social support or companionship [89]. Social robots have been shown to invite self-disclosure of negative mood better than other media [42,43] and fit the environment of digital gaming, AI characters, and Virtual Reality. Therefore, we believe that social robots employed after gaming can facilitate self-disclosure without affecting other players' real physical space. ...
... The questions utilized for Valence before human-robot interaction (Vb) and Valence after human-robot interaction (Va) were derived from the relevant studies conducted by Duan et al. and Luo et al. [42,43], respectively, and were administered both prior to and following the interaction between participants and KING-bot. These inquiries employed positive and negative indicators to evaluate alterations in users' emotional states. ...
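As a concrete illustration of such a before/after valence measure, the sketch below scores valence as the mean of positive items plus reverse-scored negative items and takes the difference Va - Vb; the item names, file names, and 7-point scale are assumptions for illustration, not the instruments used by Duan et al. and Luo et al.

# Minimal sketch, under assumed item names and a 1..7 rating scale, of
# scoring valence change (Va - Vb) from positive and negative indicators.
import pandas as pd

POSITIVE_ITEMS = ["happy", "content"]  # hypothetical positive indicators
NEGATIVE_ITEMS = ["sad", "tense"]      # hypothetical negative indicators
SCALE_MAX = 7                          # assumed upper end of the scale

def valence(ratings: pd.DataFrame) -> pd.Series:
    """Mean of positive items and reverse-scored negative items per row."""
    reversed_neg = (SCALE_MAX + 1) - ratings[NEGATIVE_ITEMS]
    return pd.concat([ratings[POSITIVE_ITEMS], reversed_neg], axis=1).mean(axis=1)

vb = valence(pd.read_csv("valence_before.csv"))  # before interaction (Vb)
va = valence(pd.read_csv("valence_after.csv"))   # after interaction (Va)
delta = va - vb                                  # positive = mood improved
print(delta.describe())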
Article
Full-text available
Electronic sports show significant user churn caused by a toxic gaming atmosphere, and current GUI-based interventions are insufficient to address the issue. Based on the theoretical framework of Perceiving and Experiencing Fictional Characters, a new hybrid interaction interface and paradigm combined with tangibles is proposed to counter negative mood. To support the frustrated users of Massive Online Battle Arena (MOBA) games, we added AI teammates for better personal performance and social robots for the disclosure of negative mood. We hypothesized that AI teammates' invisibility and anonymity would mitigate negative emotions; an effect amplified by the presence of social robots. A comparative experiment was conducted with 111 participants. Social robots for emotion-oriented coping improved user mood but AI teammates for problem-oriented coping did so better, although their higher levels of experienced anonymity may not have been preferred. Unexpectedly, conversing with a robot after playing with an AI teammate brought the mood back to that experienced when talking to a robot alone, while increasing the distancing tendencies. With this in mind, AI and social robots can counter the negative atmosphere in MOBA games, positively contributing to game design and empathic human-computer interaction.
... Robots are portable and could be flexibly applied in different locations and at different times. Duan et al. (2021) show evidence that young participants who experienced intense negativity preferred to self-disclose to robots over writing down their experiences. Negative affect was reduced after talking to the robot. ...
... Compared to a physical workshop, participants found treatment delivered by a robot preferable and engaging. The results align with Duan et al. (2021), revealing that young participants with an intense negative mood chose to self-disclose to robots over writing down their experiences. Young adults feel safe sharing with robots because they know that the robot does not judge (Alves-Oliveira et al., 2022). ...
... Notably, positive effects for valence were only present in the dataset that included negative outliers (cf. Duan et al. 2021). ...
Article
Full-text available
Young adults undergoing psychological changes are particularly vulnerable. Recent social isolation impedes interpersonal help, while stress from family, school, work, and society has brought negative effects on mental health, even in otherwise healthy young adults. Recent research has shown that daily creativity contributes to well-being. To circumvent issues of contamination, we tried a NAO robot guiding a Loving-kindness Meditation (LKM) and Walking Meditation (WM). By improving mental states (i.e. positive valence and state openness), we stimulated creative behavior to reduce negative mood. Participants (N = 142) were healthy individuals, aged between 18 and 34, joining a one-time laboratory experiment. They responded to two rounds of questionnaires, with a 10 min intervention guided by audio or a NAO robot between the two rounds. A control group of participants receiving no treatment (i.e. taking a 10 min rest) was added for comparison. Both audio-guided LKM and WM successfully evoked state openness, with the former also exerting a positive effect on valence. Valence and state openness were positively correlated, and both were associated with a higher willingness to create. With positive valence, young adults likely perform better on convergent thinking. The result may potentially lead to negative mood reduction. The discussion emphasizes the importance of designing specific characteristics of social robots in accordance with the task's context.
... Participants' moods improved after talking to the robot, and across sessions they found the robot's responses increasingly comforting and reported feeling less lonely over time (48). Another interesting example includes two studies that found that people in a bad mood benefited more from disclosing to a robot than from writing disclosures in a journal (140) or on social media (141). ...
Preprint
Full-text available
People often engage in various forms of self-disclosure and social sharing with others when trying to regulate the impact of emotional distress. Here we introduce a novel long-term mediated intervention aimed at supporting informal caregivers to cope with emotional distress via self-disclosing their emotions and needs to a social robot. Research has shown that informal caregivers often struggle in managing the emotional and practical demands of the caregiving situation, and also highlights the lack of social support and paucity of social interaction some experience. Accordingly, we were interested in the extent of informal caregivers' self-disclosure behaviour towards a social robot (Pepper, SoftBank Robotics) over time, and how (social and usability-related) perceptions of the robot develop over time. Moreover, we wished to examine how this intervention made informal caregivers feel (in terms of reported mood, perceptions of the robot as comforting, feelings of loneliness, and stress), and the extent to which interacting with the robot affected these individuals' emotion regulation. Informal caregivers conversed with the social robot Pepper 10 times across 5 weeks about general everyday topics. Our results show that informal caregivers self-disclosed increasingly more to the robot across time and perceived it as increasingly social and competent over time. Furthermore, participants' moods changed positively after interacting with the robot, which they perceived as more comforting over time. Participants also reported feeling increasingly less lonely and stressed. Finally, our results showed that after self-disclosing to the robot for 5 weeks, informal caregivers reported being more accepting of their caregiving situation, reappraising it more positively, and experiencing fewer feelings of blame towards others. These results set the stage for situating social robots as conversational partners in social settings, as well as highlight how communicating with social robots holds potential for providing emotional support for people coping with emotional distress.
... For example, James Pennebaker's writing disclosure paradigm [70,71] helps people process their emotions by writing about their own experiences. Interestingly, previous studies found that people in a bad mood benefited more from disclosing to a robot than from writing disclosures in a journal [65] or on social media [72]. Another good example is affect labelling, a simple and implicit emotion regulation technique aimed at explicitly expressing emotions, or in other words, putting feelings into words [73]. ...
Preprint
Full-text available
Since interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement with robots might develop from initial interactions with a robot, when a robot’s novelty is especially salient, and be sustained over time. This challenge is particularly noticeable in interactions designed to support people’s wellbeing, with limited evidence for how social robots can support people’s emotional health over time. Accordingly, this research is aimed at studying how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot and perceptions of the robot, and how such interactions affect factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and found the robot to be more social and competent over time. Participants’ moods improved after talking to the robot, and across sessions they found the robot’s responses increasingly comforting and reported feeling less lonely over time. Finally, our results stress that when the discussion theme was supposedly more emotional, participants felt lonelier and more stressed. These results set the stage for addressing social robots as conversational partners and provide crucial evidence for their potential introduction as interventions supporting people’s emotional health through encouraging self-disclosure.
... Akiyoshi et al. [43] conducted an experiment in which participants disclosed their recent problems to a robot and found that a robot with a conversational system that elicits human self-disclosure assuages anger. In addition, through experiments, Duan et al. [44] found that self-disclosure to a robot is effective in alleviating negative emotions. That study confirmed that, among participants who felt strongly negative after being exposed to shocking video footage, the emotions of those who talked to the robot after watching the video changed more positively than those of participants who wrote down their feelings. ...
Article
Full-text available
Self-disclosure of life experiences from the viewpoint of integrity is considered beneficial to the psychological health of older adults. It has been shown that people tend to self-disclose more to people they like. Compared to a consistent invariant reward, an improvement in the rewarding behavior of a person has been shown to have a greater positive impact on an individual’s liking for the person. Based on these previous studies, we explored the psychological impact of self-disclosure of integrated life experiences on the elderly and the effect of the change in the robot’s listening attitude on the elderly’s self-disclosure. We conducted an experiment in which 38 elderly participants were asked to self-disclose their life experiences to a robot for approximately 20 min. The participants interacted with either a robot with a consistently positive listening attitude or a robot that initially had a neutral listening attitude that changed to a positive listening attitude. The results showed that self-disclosure of integrated life experiences to the robot had a psychological impact on improving self-esteem. In addition, changes in the robot’s listening attitude were found to promote self-disclosure and enhance its impact on self-esteem.
... Their findings showed that people who interacted with the robot self-disclosed more and experienced less anger than those who did not use the robot. Duan et al. [29] ran an empirical study comparing self-disclosing in a diary journal with self-disclosing to a social robot after negative mood induction, targeting a population of people with depression. Their results showed that people who felt strongly negative after the mood induction benefited the most from talking to the robot rather than from writing down their feelings in the journal. ...
Preprint
The last decade has shown a growing interest in robots as well-being coaches. However, cohesive and comprehensive guidelines for the design of robots as coaches to promote mental well-being have not yet been proposed. This paper details design and ethical recommendations based on a qualitative meta-analysis drawing on a grounded theory approach, which was conducted with three distinct user-centered design studies involving robotic well-being coaches, namely: (1) a participatory design study conducted with 11 participants consisting of both prospective users who had participated in a Brief Solution-Focused Practice study with a human coach, as well as coaches of different disciplines, (2) semi-structured individual interview data gathered from 20 participants attending a Positive Psychology intervention study with the robotic well-being coach Pepper, and (3) a participatory design study conducted with 3 participants of the Positive Psychology study as well as 2 relevant well-being coaches. After conducting a thematic analysis and a qualitative meta-analysis, we collated the data gathered into convergent and divergent themes, and we distilled from those results a set of design guidelines and ethical considerations. Our findings can inform researchers and roboticists on the key aspects to take into account when designing robotic mental well-being coaches.