Pie charts to illustrate the proportions of the high and low connection groups, with respect to age (top) and gender (bottom).

Source publication
Article
Researchers continue to devise creative ways to explore the extent to which people perceive robots as social agents, as opposed to objects. One such approach involves asking participants to inflict ‘harm’ on a robot. Researchers are interested in the length of time between the experimenter issuing the instruction and the participant complying, and...

Context in source publication

Context 1
... who suggested that they felt an attachment/bond to Pepper were classified as "High Connection," and those who did not were allocated to the "Low Connection" group. The high and low connection groups contained similar proportions of age groups and genders (see Figure 8 for illustration). ...
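The figure and the context above report only descriptive proportions. As a purely hypothetical illustration (not drawn from the paper), a claim that two groups have similar age or gender compositions could be checked with a chi-square test of independence; the counts below are invented for the sketch.

```python
# Hypothetical sketch: testing whether age-band composition differs between
# the "High Connection" and "Low Connection" groups. The counts are invented;
# the source paper reports only that the proportions were similar (Figure 8).
from scipy.stats import chi2_contingency

counts = [
    [12, 9, 7],   # High Connection, three hypothetical age bands
    [11, 10, 6],  # Low Connection
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p-value would be consistent with the reported similarity
# in group composition.
```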

Similar publications

Conference Paper
Gender is increasingly being explored as a social characteristic ascribed to robots by people. Yet, research involving social robots that may be gendered tends not to address gender perceptions, such as through pilot studies or manipulation checks. Moreover, research that does address gender perceptions has been limited by a reliance on the human g...
Preprint
Gender is increasingly being explored as a social characteristic ascribed to robots by people. Yet, research involving social robots that may be gendered tends not to address gender perceptions, such as through pilot studies or manipulation checks. Moreover, research that does address gender perceptions has been limited by a reliance on the human g...

Citations

... Previous studies that investigated relationship formation and disclosure with artificial agents followed conceptual frameworks for inducing rich disclosures and forming meaningful connections [e.g., 3,52,53,89]. For example, a study by [53] presented an implementation of 36 questions as a method to generate interpersonal closeness [see 90; "36 questions to love"] and elicit self-disclosure from human users to a chatbot. ...
... In future research, the open-ended questions incorporated in the study should be analysed using qualitative methods to deepen the understanding of human disclosures to social robots and explore subjective responses provided by participants [89]. Additionally, qualitatively analysing the content of the disclosures will allow for a more in-depth exploration of the nuances and frames within the self-disclosures. ...
Article
While interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement might be sustained across time, since during initial interactions with a robot, its novelty is especially salient. This challenge is particularly noteworthy when considering interactions designed to support people’s well-being, with limited evidence (or empirical exploration) of social robots’ capacity to support people’s emotional health over time. Accordingly, our aim here was to examine how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and how such sustained interactions influence factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and report the robot to be more social and competent over time. Participants’ moods also improved after talking to the robot, and across sessions, they found the robot’s responses increasingly comforting as well as reported feeling less lonely. Finally, our results emphasize that when the discussion frame was supposedly more emotional (in this case, framing questions in the context of the COVID-19 pandemic), participants reported feeling lonelier and more stressed. These results set the stage for situating social robots as conversational partners and provide crucial evidence for their potential inclusion in interventions supporting people’s emotional health through encouraging self-disclosure.
... show that customers are satisfied with ServBots' empathic skills (Kwon et al. 2018). They perceive them as autonomous social agents (Kerruish 2021) and, in turn, empathize with ServBots (Hofree et al. 2014), increasing the sense of engagement and interpersonal interaction (Leite et al. 2013a, b; Riddoch and Cross 2021). Several studies showed that ServBots could express their feelings, shame customers who mistreat them (Kerruish 2021), and elicit customers' empathy toward them (De Jong et al. 2021; Malinowska 2021; Mattiassi et al. 2021; Schmetkamp 2020). ...
... Customers' perception of ServBots as social agents is well demonstrated in studies where customers are asked to mistreat ServBots. For instance, 50% of participants refused to comply with the research instructions to hit the ServBots with a mallet (Riddoch and Cross 2021). The other half of the participants who complied with the instructions felt very uncomfortable. ...
... When asked why they refused to hit the ServBot or felt uncomfortable doing it, participants answered that they felt an emotional connection with the ServBot that kept them from hurting it (Riddoch and Cross 2021). Customers also do not perceive a ServBot as a simple machine, but as a social agent, owing to its emotional capacity and anthropomorphic features (Carlson et al. 2019). ...
Article
Interactional justice (e.g., empathy) plays a crucial role in service recovery. It relies on human social skills that would seem to preclude its automation. However, several considerations challenge this view. Interactional justice is not always necessary to recover service, and progress in social robotics enables service robots to handle social interactions. This paper reviews the service recovery and social robotics literature and addresses whether service robots can use interactional justice as frontline employees do during service recovery. Results show service robots can replicate interactional justice norms, although with some considerations. Accordingly, we propose a research agenda for future studies.
... 2. Studies where the robot, or virtual agent, is already observably involved in a preexisting activity when it appears to participants (including idling behaviors like simulated breathing, random head movements, etc.; e.g., [4,52,73,98]) and/or observably adjusts to the human's approach or physical co-presence (tracking their gaze, waving, producing a non-delayed greeting, approaching them, etc.; e.g., [7,30,34,40,46,67]). ...
... For two reasons. On the one hand, because humans are such social animals that we are capable of feeling connected to members of other species, such as dogs, cats, or horses, or even to robots (Riddoch and Cross, 2021). On the other hand, because, through a process of domestication, the dog has become cognitively and emotionally "attuned" to humans. ...
Article
A study of more than 1,600 families, published in 2021, found a significant association between the presence of a dog in the home and better emotional and social development in children. A 2017 systematic review, after examining 22 studies, reached the same conclusion. It thus appears that living with a dog is beneficial for children's socio-emotional development. In this article we explore the possible mechanism that would explain this phenomenon: the existence of a bidirectional child-dog attachment relationship, in which each acts as the other's caregiver. Recent studies exploring the social behaviour of dogs confirm this.
... Previous studies that investigated relationship formation and disclosure with artificial agents followed conceptual frameworks for inducing rich disclosures and forming meaningful connections [e.g., 3,47,48,83]. For example, a study by [48] presented an implementation of 36 questions as a method to generate interpersonal closeness [see 84, "36 questions to love"] and elicit self-disclosure from human users to a chatbot. ...
Preprint
Since interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement with robots might develop from initial interactions with a robot, when a robot’s novelty is especially salient, and be sustained over time. This challenge is particularly noticeable in interactions designed to support people’s wellbeing, with limited evidence for how social robots can support people’s emotional health over time. Accordingly, this research is aimed at studying how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time and perceive the robot as more social and competent over time. Participants’ moods improved after talking to the robot, and across sessions they found the robot’s responses increasingly comforting and reported feeling less lonely. Finally, our results stress that when the discussion theme was supposedly more emotional, participants felt lonelier and more stressed. These results set the stage for addressing social robots as conversational partners and provide crucial evidence for their potential introduction in interventions supporting people’s emotional health through encouraging self-disclosure.
... Therefore, when facing novel agents, we apply the schema and knowledge we are most familiar with: the "human model" (Wiese et al., 2017). This reasoning is in line with the "like me" account of Meltzoff, and recent literature shows that this account can be applied to human-robot interaction (Riddoch & Cross, 2021). In this context, empirical studies have investigated whether humans would indeed interpret the behavior of artificial agents by ascribing mental states to them as automatically as they do toward other human agents (Abu-Akel et al., 2020; Gallagher et al., 2002; Marchesi et al., 2019; for a review, see ...). ...
... Studies where the robot, or virtual agent, already displays idling behaviors when it appears to participants (simulated breathing, random head movements, etc.; e.g., [14], [15]) and/or observably adjusts to the human's approach or physical co-presence (e.g., [6], [16]–[18]). ...
Conference Paper
From an experiment which replicated the interaction opening delays often observed in laboratory or "in-the-wild" HRI studies, where robots often require several seconds before springing to life once they are in co-presence with a human, we suggest that the very first moments of physical co-presence between a participant and a robot are neither anecdotal nor peripheral. We hold that a robot oriented to by participants as "alive" or "activated" is not the same kind of entity as a robot which first appears to these participants as an immobile object: it does not afford the same action possibilities. Using two examples from our corpus, we highlight that the intertwining between participants' actions and the very first behaviors, or the motionlessness, displayed by the robot produces a priori unpredictable sequential trajectories, which can shape the timing and the manner in which the robot emerges as a social agent during HRI experiments.
... The questions children asked, their spontaneous declarations, or their surprising turns of phrase have been instrumental in guiding our research efforts, as they appear to speak to a crucial question: what is it really like to meet a robot? We see this as indicative of the value of more qualitative approaches to robotics, which are currently underapplied [15], and as a path for children to meaningfully contribute to research directions on issues perhaps more representative of a future in which daily CRI is more commonplace. ...
Conference Paper
This reflective piece highlights some unexpected outcomes observed during selected Child-Robot Interaction (CRI) studies. As these were peripheral to the investigations underway, they were not included in related publications, yet they have been instrumental in directing subsequent research. We advise new researchers of the value of an open interactive environment in CRI studies, and careful observation of interactions, even when adjacent to the research question.
Article
Education is beginning to make use of emotional artificial intelligence through anthropomorphized educational robots. Evidence supports that students (male and female) are capable of forming emotional bonds with these agents. However, more and more cases of abusive disinhibition are being found in this type of interaction, such as racist or sexist degradation, abuse of power, and violence. Some researchers warn of the negative long-term consequences that such behaviour may have, both for students' ethical education and for the robots that learn from these behaviours. Despite its relevance from a social and educational perspective, few studies have tried to understand the mechanisms underlying these immoral or collectively harmful practices. The aim of this article is to review and analyse the research that has attempted to study unethical human behaviour through interaction with anthropomorphic social robots. A descriptive bibliometric study was carried out following the criteria of the PRISMA statement. The results show that, under certain circumstances, anthropomorphization and the attribution of intentionality to robotic agents could be disadvantageous, provoking attitudes of rejection, dehumanization, and even violence. However, a more realistic view of both the capabilities and limitations of these agents and of the mechanisms guiding human behaviour could help harness the great potential of this technology to promote students' moral development and ethical awareness.