This paper is a work in progress and is in draft form. The authors are working on developing it further. For now, you can cite it as a We Robot 2021 paper presentation. Feel free to reach out to the authors for an updated version. 8/24/2021
Social Robots and Children’s Fundamental Rights: A Dynamic
Four-Component Framework for Research, Development, and Deployment
VICKY CHARISI, European Commission, Joint Research Centre, Spain
SELMA ŠABANOVIĆ, Indiana University Bloomington, USA
URS GASSER, Berkman Klein Center, Harvard University, USA
RANDY GOMEZ, Honda Research Institute, Japan
This paper aims to introduce a dynamic four-component framework towards the development of trustworthy robots for children which are aligned with the United Nations Convention on the Rights of the Child. The development of this framework is based on a pilot project on Artificial Intelligence and Child’s Rights initiated by UNICEF and it draws upon (i) the insights from a set of empirical cross-cultural participatory studies we conducted with N = 76 children from Japan, Uganda and Greece (aged 6-16 years old) and a survey with N = 171 parents in the US and Japan; and (ii) the existing literature on child development and child-robot interaction interpreted through the lens of the policy guidelines on Artificial Intelligence and Child’s Rights as proposed by UNICEF. Our analysis identified four robot-specific technical components for consideration regarding social robots for child’s rights, namely (i) connectivity; (ii) autonomy; (iii) embodiment; and (iv) social embeddedness. We illustrate these components with indicative examples based on the general principles of children’s fundamental rights. The proposed framework aims to support transparency in future implications of robots for children and to facilitate researchers, developers, and all relevant stakeholders, including children, to contribute to the development of an ecosystem with robots that promote children’s rights.
CCS Concepts: • Human-centered computing → HCI theory, concepts and models; Empirical studies in HCI.
Additional Key Words and Phrases: Embodied artificial intelligence, Social robots, Child’s rights, Participatory user studies
ACM Reference Format:
Vicky Charisi, Selma Šabanović, Urs Gasser, and Randy Gomez. 2021. Social Robots and Children’s Fundamental Rights: A Dynamic Four-
Component Framework for Research, Development, and Deployment. In Proceedings of WeRobot2021, September 23–25 (WeRobot2021).
ACM, New York, NY, USA, 30 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
Both authors contributed equally to this research.
The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
©2021 Association for Computing Machinery.
Manuscript submitted to ACM
WeRobot2021, Miami, Florida
1 INTRODUCTION
Our lives are increasingly algorithmically mediated by Artificial Intelligence (AI) systems, and social robots are one of their emerging areas of application. For the purposes of this paper, we use the notion of “social robots” to refer to embodied, AI-enabled systems with embedded sensors and actuators, capable of perceiving and processing human social signals, such as human intention communication, and complex social structures, and responding to them in a socially meaningful way. We acknowledge that the term “social” typically refers to biological entities, and to qualities such as theory of mind and shared intentionality [88] that enable the development of human social cognition. However, the use of the term “social” as an attribution to robotic artefacts is a well-established convention in the field of human-robot interaction [29]. This convention has been mainly based on two facts that are supported by the relevant literature. First, the embodiment of robotic artefacts and their physically interactive presence within the human environment can successfully stimulate specific fundamental building blocks of human social cognition, such as shared attention and empathy [74]. Second, which partially results from the first, humans tend to attribute anthropomorphic characteristics, such as intentionality, to robotic artefacts even if they are aware that robotic artefacts are machines designed by humans [104]. Robots are therefore often perceived and treated as social by their human interaction partners.
As prototypes and commercial products, social robots are increasingly reaching vulnerable populations including children, who are in a critical period of their development with rapid changes in their cognitive, meta-cognitive and socio-emotional abilities [59]. Additionally, children’s predisposition for attribution of anthropomorphic features to inanimate objects may affect their perceptions and acceptance of robots in ways different from adults [95]. Children develop through social interactions, and evidence shows they can perceive robots as part of their social groups [78], which means that robots can affect their development.
Due to the eectiveness in engaging children’s social cognition, robots can be used to successfully scaold children’s
learning and entertainment and the demand for social robots that interact with children is likely to increase in the
coming years. However, the profound impact that this technology has on children contribute to an increase in concerns
and pressing considerations which rst need to be addressed. The societally appropriate and benecial adoption of social
robots for children will only be possible through responsible design, development and deployment, which eventually
would lead to an ecosystem of trustworthy robots that support and prioritize children’s fundamental rights.
While in the eld of autonomous systems and robotics there is already work on methodologies for embedding ethics
and human values by design (e.g. [
21
,
26
,
101
], one of the rst global initiatives that aims to systematize the design of
AI-based systems that respect and promote children’s fundamental rights is UNICEF’s work on Policy Guidelines for AI
and Child’s Rights [
32
]. This initiative reects on the existing convention of the Rights of the Child to identify elements
that are relevant to AI-based technologies and indicate actions that should be considered in order for AI systems to be
aligned with the Rights of the Child. Towards this direction, UNICEF initiated a piloting phase for the application of the
proposed guidelines in policies or AI-based products especially designed for children. In this wider context, we were
invited to pilot a set of guidelines for the social robot Haru [
42
], which is currently being developed by the HONDA
Research Institute and we describe in more detail in section 3.2. One of our core priorities for the project was children’s
active participation and the creation of opportunities for children’s voice to be heard [
28
]. The project involved a set of
empirical studies as well as activities for the inclusion of the community of child-robot interaction (e.g. [
25
]). Based on
our activities during this project, we sought to merge existing knowledge and methods from the eld of child-robot
interaction with novel insights from our empirical work in order to synthesize a framework for the design, research and deployment of social robots that are aligned with children’s fundamental rights.
2 GOAL AND GENERAL APPROACH
This paper aims to introduce a framework for research, design and deployment of social robots that are aligned with and promote children’s fundamental rights. The development of the framework was based on scientific activities we carried out in the context of piloting UNICEF’s requirements for AI and child’s rights [32], with a particular focus on children’s narratives and desiderata regarding robotic systems. Additionally, we considered the existing relevant work in the field of child-robot interaction interpreted through the lens of children’s fundamental rights.
We conducted a series of small-scale empirical participatory research studies with children aged 6-16 from Tokyo, Uganda, and Greece with the active participation of their educators. It is beyond the scope of the paper to present detailed methodologies and results of each pilot study. Instead, we present a summary of the main points of the studies in order to contextualize the conception and development of our framework. For those studies, we followed an ethnography-based, iterative approach to identify the underlying principles for robot design by considering children’s individual and cultural characteristics. We also tried to address issues of possible biases by considering the contributions of a multi-disciplinary team with broad geographical distribution.
Regarding the organization of this paper, based on our stance to prioritize children’s voices, we first present the summary of the empirical participatory pilot studies. This is followed by a review of the relevant existing work in the field of child-robot interaction which we interpret in the context of the general principles of children’s fundamental rights. Based on the input from our empirical studies with children, the relevant literature and UNICEF’s guidelines, we introduce our four-component framework for the facilitation of robot design and development with the consideration of children’s rights. We conclude with a discussion of our approach and with proposals for possible future research directions regarding robots and children by considering children’s fundamental rights.
3 EMPIRICAL PILOT STUDIES
The empirical studies performed in our project included children, and at times their parents and educators, through participatory activities that gave them opportunities to commence and develop a conversation and a reflection on various issues in relation to their understanding of social robots. For our empirical participatory studies we focused on three requirements as proposed by UNICEF, namely
(i) Prioritize fairness and non-discrimination for children
(ii) Protect children’s data and privacy
(iii) Provide transparency, explainability and accountability for children
Our main methodological approach for the exploration of the requirement of fairness, inclusion and non-discrimination was based on the paradigm of participatory research with children, including their educators; for our better understanding of the protection of children’s privacy in the context of robotic technology, we performed participatory research with children and a survey with parents; lastly, for the evaluation of our system, regarding the requirement of robot explainability, we conducted an experimental, controlled proof-of-concept empirical study. Throughout the process of those pilot studies, we created a highly collaborative structure with regular interactions among all the involved researchers, developers and educators who participated in all the iterations of the research and the technical developments of the robot. An underlying common element for all our studies was the use of the Haru robot (for a
description of the robot, see section 3.2). In this section, we elaborate the overall methodology of the empirical pilot studies, and we present a summary of the context and the goal of each pilot study, the methods, and some indicative preliminary results of each study. A detailed description of the first pilot study appears as a published paper in [24], while the rest of the findings will be published in the near future.
3.1 Methodology
We adopted an ethnography-inspired methodological approach which included a small targeted sample of children in school contexts with the participation of their educators, as well as a survey performed with parents. In order for us to promote inclusion and non-discrimination for our understanding of children’s perceptions of robotic artefacts, we purposefully included participants from four countries: Uganda, Greece, Japan, and the United States. The fact that we included children from a rural area in Bududa, Uganda came with particular methodological and ethical challenges, since, to our knowledge, this is the first study conducted in a rural African area on the specific topic. Apart from the ethical approval of the study from the local office of Bududa District, we tried to address these challenges via our immediate interaction with the children and their educators before, during and after the conduct of our participatory study as indicated by Spiel et al. [84]. To reduce methodological bias, we provided detailed and structured protocols for the deployment of the activities which were co-designed together with the local educators in order to align with the local cultures and resources of the participating schools. For all the studies with children we obtained informed consent forms from the parents or legal representatives as well as assent from the participant children.
More specically, we conducted four small-scale pilot studies with
𝑁=
24 children in Uganda,
𝑁=
33 children
in Japan,
𝑁=
16 children in Greece, and a survey with
𝑁=
71 parents in the United States and
𝑁=
100 parents in
Japan. Of those, two studies were based on participatory design techniques [
19
] with the participation of children’s
educators in the form of Participatory Action Research [
11
], one was a proof-of-concept controlled experimental pilot
study, and the last one was based on data collected via a survey. For the participatory studies, we used the technique of
story-telling, which was a familiar technique for all our participant children, it required the minimum of the resources
and infrastructure and it was within the educational culture of all our participating schools. According to the principles
of the Participatory Action Research paradigm, we co-designed the story-telling activities together with the educators
and we used stories from children’s everyday situations, imaginary situations and robot-related situations. The activities
were designed to be conducted in 4 sessions per school.
For the data collection we used the resources available in each school. Our final dataset included video data and children’s inputs in a written or digital form. We conducted thematic content analysis and we identified themes that were specific for each study and themes that appear globally in all studies. In this paper we only elaborate on the global themes that appear across studies, as discussed in section 3.7.
3.2 The Haru Robot
In all our studies, Honda’s Haru robot was used as a common example of a social robot for participants. Haru is an experimental tabletop robot for investigating embodied mediation through the robot’s expressiveness-centered design [?]. Haru’s design philosophy is focused on empathic interaction through its expressive behavior to explore the potential of supporting long-term, sustainable human-robot interaction. The robot has a total of 5 degrees of freedom (i.e., base rotation, neck forward-and-backward movement, rotation and tilting of outer eyes, and push-and-pull of the inner eye module). Haru’s eyes include a TFT screen for visual media display. Moreover, it uses two sets of addressable LED matrices, in the border of the eyes to convey attention and in the spherical body to display its mouth. The robot can communicate using text-to-speech (TTS) and non-verbal vocalization through its internal speaker. A software tool to build animation routines through the combination of movements and audio-visual modalities is provided to allow researchers to design nuanced robotic expressions [?]. Lastly, the Haru robot comes with an AI perception suite for understanding the world (i.e., human and environment) and a programmable decision-making module for programming and automating the interaction process.
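An animation-routine tool of this kind can be pictured as composing timed keyframes that combine the robot’s degrees of freedom with audio-visual cues. The following Python sketch is purely illustrative: the class and field names are our own assumptions for exposition and do not reflect Haru’s actual software interface.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Keyframe:
    """One timed step of an animation routine (illustrative fields only)."""
    t: float                          # time offset in seconds
    base_rotation: float = 0.0        # base rotation, degrees
    neck_lean: float = 0.0            # forward(+)/backward(-) neck movement
    eye_tilt: float = 0.0             # tilt of the outer eyes
    eye_screen: Optional[str] = None  # media shown on the TFT eye screens
    led_mouth: Optional[str] = None   # pattern on the body LED matrix
    utterance: Optional[str] = None   # TTS text or non-verbal vocalization

@dataclass
class Routine:
    """A named, time-ordered sequence of keyframes."""
    name: str
    keyframes: List[Keyframe] = field(default_factory=list)

    def add(self, kf: Keyframe) -> "Routine":
        self.keyframes.append(kf)
        self.keyframes.sort(key=lambda k: k.t)  # keep the sequence time-ordered
        return self

    def duration(self) -> float:
        return self.keyframes[-1].t if self.keyframes else 0.0

# A hypothetical "greeting" routine mixing movement, display, and sound.
greet = (Routine("greet")
         .add(Keyframe(t=0.0, base_rotation=20.0, eye_screen="blink"))
         .add(Keyframe(t=0.5, neck_lean=10.0, led_mouth="smile"))
         .add(Keyframe(t=1.0, utterance="Hello!")))
```

Separating the timed description of a routine from its execution, as above, is what lets researchers iterate on nuanced expressions without touching the robot’s control code.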
Fig. 1. The Haru Robot, which was selected by UNICEF as one of the applications to pilot the Policy Guidelines for AI and Child’s Rights.
One of the target use populations for the Haru robot is children, in diverse contexts including homes and classrooms,
interacting with the robot individually and in groups. It is therefore our aim to use the studies presented here to guide
the further development of the robot for interaction with children.
3.3 Empirical pilot study 1: Children’s perceptions of fairness in robotic scenarios in Japan and Uganda
In order for us to embed elements of fairness into our robotic system, we considered inclusion as a fundamental element for our pilot study. Previous research indicates that children in different societies may perceive fairness in different ways [77]. With our study, we aimed to explore children’s perceptions of the notion of fairness in robotic scenarios in Japan and in Uganda. We were specifically focusing on including under-represented populations in terms of ethnic and cultural background, socio-economic level, and cases that might be unusual to include in child-robot interaction studies.
Methods
This study was conducted in two phases. First, we conducted a Participatory Action Research study with story-telling activities for N = 4 sessions at a school in Japan with N = 20 children (15-16 years old) and N = 4 sessions at a school in Uganda with N = 24 children (6-16 years old) (see fig. 2). We used storytelling to facilitate children’s narratives on fairness and to identify areas of alignment and disconnect. The protocols for the two schools were the same but the educators had the freedom to adapt elements of the protocol to the specific cultural context. This was followed by the second phase of the participatory design study in Japan with N = 10 of the children that participated in the first phase. Based on the story-telling results of the first phase, we empirically evaluated a tablet-based system which could be used by children to design story-telling activities and robot behaviours for the physical Haru robot (fig. 2). Following this study, we developed a prototype which included a child-friendly screen-based platform to facilitate children’s designs of the Haru robot as a story-teller. The system consists of a tablet-based interface of a virtual robot which functions as a low-cost mediator for the physical robot design (fig. 3). N = 10 children in Japan participated in a single session and were asked to modify an existing story we had already prepared and implemented on the physical robot. With this session, we aimed to gain a first understanding of the feasibility of our prototype in order for us to use it as a low-cost method for inclusivity in robot designs for children.
5
WeRobot2021, Miami, Florida,
Charisi and Šabanović , et al.
Fig. 2. Participatory Action Research studies in Bududa, Uganda and Tokyo, Japan
Fig. 3. Participatory design study with the Haru Robot in Tokyo, Japan
Summary of initial results
We transcribed children’s narratives and we performed a thematic data analysis. Initial results indicate that in the first phase of story-telling activities both groups, in Tokyo and in Bududa, referred to similar aspects of fairness or unfairness, namely psychological, physical and systemic; however, children in Tokyo focused more on psychological and mental aspects (e.g. being ignored by the robot) while children in Uganda emphasised physical and material aspects of fairness (e.g. not getting access to a space, being physically constrained). Both groups increased their emphasis on mental aspects in robot-related scenarios. In addition, we observed that children’s narratives in both groups referred to a combination of robot features such as its connectivity and its embodiment. All children expressed their interest to further explore fairness and unfairness as experienced by children with different cultural backgrounds and the need for inter-group contact. This study provided a set of children’s robot-related desiderata as a basis for the development of design principles and technical requirements for robot companions that prioritize fairness for children.
In the second phase of this pilot study, we observed that children were able to navigate our system and use it in an effective way. The activities were designed to be collaborative and the setup we designed supported children’s small-group collaboration during the design phase. Our plan was to distribute the platform to the school in Uganda as well, but COVID-19 restrictions did not allow us to conduct the study at the school in Uganda. This has been included in our future steps. The system is currently being further designed to function as a tool for connectivity between the two groups of children (in Tokyo and Uganda), as indicated by the participant children in the first phase of the pilot study. Lastly, the thematic data analysis revealed that children’s narratives about fairness include aspects of the robot’s connectivity, embodiment and autonomy as well as scenarios that indicated different forms of the robot’s integration into children’s everyday routines. We considered these themes as the seed for further research.
3.4 Empirical pilot study 2: Children’s perceptions of robots in relation to inclusion, privacy and fairness
in Greece
Following our participatory study in Japan and Uganda on the topic of fairness in robotic scenarios, we designed and conducted an additional participatory study to explore further notions that are related to the requirements proposed by UNICEF in the context of robot scenarios, namely inclusion, privacy and non-discrimination. Since the first pilot was conducted with children in Japan and Uganda, in order for us to achieve increased geographical and cultural distribution, we invited children from Greece to take part in our second pilot study. The main goal of this pilot study was for us to understand whether the methods and the initial indications we had from the first study could be used in a different context for a wider set of notions related to the child’s rights in the context of robotics.
Methods
We recruited N = 16 children aged 15 years old in Greece and we used the same methods and protocols as in our previous study in Japan and Uganda. This study was conducted in four sessions which were based on our existing protocols as designed for the study on fairness. However, in the current study we aimed to introduce additional notions related to the child’s rights. The additional topics would emerge from an introductory session. The protocols were adapted through a co-design session with the local educators. In all sessions, we used story-telling activities to introduce children to the current developments in social robotics (session 1), and to facilitate children’s narratives on non-discrimination (session 2), inclusion (session 3) and privacy (session 4) in robot-related scenarios.
Summary of initial results
We transcribed the narratives of the participant children and we performed a thematic content analysis for all the sessions. For the introductory session, we analysed children’s references to various elements of the rights of the child in relation to robotic technology. A preliminary analysis shows that children’s narratives include aspects of non-discrimination, privacy, inclusion, education, accountability, well-being and safety (fig. 4). Based on those initial results we selected the focus of the rest of the sessions (fairness and non-discrimination, inclusion and privacy). Because of the different foci of the sessions, we developed an annotation scheme with two parts. The first part included themes specific to the topics of inclusion, privacy and fairness while the second part included robot features, namely its level of connectivity and autonomy, its embodiment and the ways that a robot can be embedded in children’s social lives. Figure 5 shows the percentage of appearance of each of those themes in children’s narratives per session. We observe that the integration of the robot in the social fabric of children’s everyday life seems to be the topic that emerged most across all sessions. To refer to this theme, we introduce the term “social embeddedness”, which we explain in more detail in sections 3.7 and 5. A detailed analysis of this pilot study will be published as a separate paper.
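The per-session percentages behind an analysis of this kind reduce to counting annotated theme labels and normalizing by the session total. A minimal sketch follows; the labels and counts are invented for illustration and are not the study’s actual annotations or data.

```python
from collections import Counter

def theme_percentages(annotations):
    """Return the percentage of annotated segments per theme for one session."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {theme: 100.0 * n / total for theme, n in counts.items()}

# Hypothetical labels for one session, using the robot-feature part
# of the annotation scheme described above.
session_labels = ["social embeddedness", "connectivity", "embodiment",
                  "social embeddedness", "autonomy", "social embeddedness"]
pct = theme_percentages(session_labels)
```

With this invented sample, half of the labeled segments fall under “social embeddedness”, illustrating how one theme’s dominance per session would surface in a figure like Figure 5.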
Fig. 4. Content thematic analysis: Percentage of the themes (N = 25) that emerged during the discussion of the importance of children’s rights in the context of robotics.
Fig. 5. Content thematic analysis: Percentage of theme occurrence per session. Session 1: Introduction; Session 2: Non-discrimination; Session 3: Inclusion; Session 4: Privacy.
3.5 Empirical pilot study 3: Parents’ perceptions of privacy in robotics context in Japan and the US
Privacy is one of the fundamental rights of the child, as defined by the United Nations. In prior literature on the acceptability of social robots for in-home storytelling activities, privacy (i.e. the potential for robots to collect and disseminate sensitive information) was a concern parents had in relation to the potential use of robots by children [?]. Along with understanding children’s perspectives on social human-robot interaction, we also wanted to better understand how parents, who are likely to have a central role in giving children access to robots, thought about robot design and privacy. With this in mind, we performed online surveys with parents in the US and Japan, asking specifically about their perceptions of the robot Haru, its capabilities, and how they might want to use Haru in ways that conserve their child’s, and family’s, privacy. These surveys will be followed up with more in-depth interviews in the two countries.
Methods
We sent surveys to participants who self-identified as parents on MTurk (for US participants) and CrowdWorks (for Japanese participants). We had 71 respondents on MTurk (M = 39, F = 32) and 100 respondents on CrowdWorks (M = 47, F = 49, NA = 4). Prior to answering the surveys, participants viewed a brief introductory video of the Haru robot. The survey asked them about their understanding of Haru’s sensing capabilities, the way it collects, stores, and disseminates data about people it interacts with, and where participants would want to use the robot.
Summary of initial results
In the US, the most common places parents wanted to use Haru were the living room (n=41), office (n=30), and kitchen (n=23), and the least common were the bathroom (n=6) and child’s nursery (n=2). This is despite the fact that parents often use various types of digital monitoring equipment (e.g. cameras, motion sensors) in nurseries. Parents generally thought Haru would store data internally (n=42), and much less in the cloud (n=10). They mostly thought users and owners of the robot would have access to the data (n=31). The most common reason they pointed out for picking the specific area of the house they wanted to use the robot in was because there was "little to no risk of privacy invasion" (n=16), while "invasion of privacy" (n=24) was the most common reason for choosing not to put the robot in an area. Parents in Japan similarly expected data to mostly be stored within the robot (n=49), and thought that data would be accessed by the family (n=59) or the company (n=55). In Japan, the robot’s desired placement was also the living room (n=85), followed by the guest room (n=42) and kid’s room (n=35), showing less reticence to have robots interacting with kids in more private spaces of the home.
These results suggest that parents, particularly in the US, are concerned about privacy in relation to social robot use with children in the home. Furthermore, some of their expectations of how data is stored and used pose challenges to robot design and existing technological capabilities, particularly the idea that data be stored and processed mainly within the robot itself, which can be limiting to the robot’s capabilities. Overall, following parents’ preferences suggests robots should be designed to operate in multi-user settings, such as living rooms, which will require them to be able to distinguish multiple family members from each other and act accordingly. Privacy-sensitive designs will also require less use of cloud services to store and process data, contrary to the direction many current robot designs employ.
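Placement counts like the n-values above can be tabulated directly from multi-choice survey responses, where each respondent may select several rooms. A minimal sketch with invented responses (not the survey data):

```python
from collections import Counter

# Each inner list is one respondent's set of selected rooms; counting the
# flattened selections yields an n-value per placement.
responses = [
    ["living room", "office"],
    ["living room", "kitchen"],
    ["living room"],
    ["office"],
]

placement_counts = Counter(room for resp in responses for room in resp)
ranked = placement_counts.most_common()  # most-preferred placement first
```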
3.6 Empirical pilot study 4: Children’s perspectives of robot explainability in a child-robot interaction
proof-of-concept study
Transparency of the decision-making of an AI system is important for the development of trust, especially in the case of child users. For this reason, explainable AI has been widely acknowledged as a crucial feature for the practical deployment of AI models [7]. An explainable AI system is one that produces details or reasons to make its functioning clear or easy for humans to understand. However, in the case of child users, we need to understand what types of explanations are suitable for them, depending on the context and on the child’s developmental level, and when it is the optimum moment to provide the explanation. Towards this end, we built upon our previous studies on child-robot collaborative problem solving [22], in which the Haru robot interacted with children to support their problem-solving skills with the use of the Tower of Hanoi task. We designed three types of task-related explanations for the Haru robot which were given (or not) to the child for the support of the problem-solving process. This research will contribute to the development of child-friendly explainable robots for collaborative problem-solving tasks.
Methods
We built upon our previous studies [22] to design a proof-of-concept empirical study with 𝑁 = 13 children in Japan in the setting of a child-robot collaborative problem-solving task using the Tower of Hanoi (Fig. 6). We designed three types of task-related explanations for the Haru robot, which were given (or not) to the child to support the problem-solving process, as indicated in Table 1. We used mechanical explanations to link cause and effect for the task solution, contrastive explanations to contrast two solutions (i.e., why p rather than q) [65], and counterfactual explanations to indicate the result of an alternative choice [94]. In a post-intervention session, we asked the participating
children about their opinion on the explanations of the robot. We collected two types of data: (i) behavioural data of
task performance, which consisted of logged data during the task performance as well as child-initiated interactions with the robot, and (ii) post-intervention interview data about children's perceptions of the explainable robot. In this paper we focus on the second type of data, namely the children's narratives during the post-intervention interviews.
Fig. 6. Proof-of-concept study on explainability in robots for collaborative problem-solving activities in Tokyo, Japan
Table 1. Types of explanatory interventions

Child's previous performance | Pedagogical goal | Explanation of the algorithm
Optimal and Fast | No intervention | No explanation
Optimal and Slow | Improve the learner's confidence | Mechanical explanation
Suboptimal and Fast | Introduce the learner to planning | Contrastive explanation
Suboptimal and Slow | Explain the reason a movement is wrong | Counterfactual explanation
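The intervention logic of Table 1 can be sketched as a simple selection rule over the two performance dimensions. The function name and the boolean encoding of "optimal"/"fast" below are illustrative assumptions, not the study's actual implementation:

```python
def select_explanation(optimal: bool, fast: bool) -> str:
    """Map a child's previous task performance (Table 1) to an
    explanation type. Whether a solution counts as 'optimal' or
    'fast' is assumed to be judged elsewhere, e.g. against the
    minimal move count for the Tower of Hanoi."""
    if optimal and fast:
        return "none"            # no intervention needed
    if optimal and not fast:
        return "mechanical"      # link cause and effect; build confidence
    if not optimal and fast:
        return "contrastive"     # why p rather than q; introduce planning
    return "counterfactual"      # show the result of the alternative choice
```

Encoding the mapping this way makes explicit that the four explanation types partition the performance space: every child receives exactly one intervention type per round.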
Summary of initial results
We performed a thematic content analysis to extract children's perceptions of a robot that provides explanations during a collaborative problem-solving task. While most of the children found the robot's mechanical explanations useful, mainly because they were connected to better task performance, some children considered explanations a robot feature that makes the robot's decision-making more transparent, which they connected with a better understanding of the robot's intentions. One of the children indicated that "understanding
the robot's intentions regarding the solution of the Tower of Hanoi task was useful for our collaboration". This suggests that the robot's explanation was not only perceived as task-related helping behaviour but also affected the child's perception of potential collaboration with the robot. In addition to the verbal interaction with the robot
and the role of the explanations, one child mentioned that “With Haru, there was more movement, a little more laughter. I
like it better when there is movement". The embodiment of this specific robot allows movement of the neck and the eyes, which seems to affect children's perceptions of their interaction with the robot. When asked whether the robot
could be integrated into activities other than the problem-solving task they experienced with the physical robot, one of the children mentioned "I would like to do the morning greeting exercise with Haru. When I come home, my siblings are at school and my parents are at work, so I'm all alone. I want Haru to do my homework with me". Although this child interacted with the robot in a very controlled setting within a specific context, there was an emerging expectation of integrating the robot into other daily activities, such as daily routines at school or leisure and homework activities at home. This finding is in accordance with existing work in the field of child-robot interaction, which raises concerns about design decisions regarding the transparency of a robotic system and approaches to robot explainability for child users.
3.7 Discussion of the empirical studies
In this section we presented a summary of the pilot participatory studies we conducted during the piloting of UNICEF's guidelines on AI and Child's Rights. A detailed presentation of the methods and results of each pilot study is beyond the scope of this paper. The summative presentation of the studies aims to provide the basic information and context upon which the conceptualization and development of our proposed framework occurred. The results of the first pilot have been published in [24].
These pilot studies had a threefold role. First, they helped us understand children’s perspectives on social robots
in relation to a set of children's fundamental rights and certain features of the system that support those rights. More specifically, we explored the ways in which we can support children's inclusion and non-discrimination in the design
of robotic systems, we investigated the parents’ perspectives on privacy issues that emerge with the use of robots in
children’s everyday activities and we tested the impact of certain types of robot explanations on children’s perceptions
and acceptance of the Haru robot as a learning companion for problem-solving activities.
Second, the design and conduct of those pilot studies contributed to our further understanding of methodological approaches for performing cross-cultural studies, by including children from typically under-represented geographical regions and cultural backgrounds, such as our participants from Bududa in Uganda. While researchers in the field of child-robot interaction have already indicated methodological practices for cross-cultural studies [9], to our knowledge, this is one of the first attempts to include children from a rural area of Africa. For this reason, an ethnography-inspired approach was considered a suitable methodological paradigm, and the inclusion of and constant interaction with local educators in the form of Participatory Action Research proved important for the design and conduct of the pilot study in Africa.
Third, we prioritized and facilitated children's participation in the current dialogue on social robots and children's fundamental rights, and we provided the opportunity for children who are not typically included in child-robot interaction studies to approach the topic and formulate their own opinions, which were then considered as input for robot design.
Lastly, we considered the results of our studies as input for the development of technical requirements and a theoretical framework for social robots that would align with children's fundamental rights and minimize possible emerging risks for child users. Considering children's narratives, we observed certain robot features that appeared regularly across studies and across cultures, such as the degree of the robot's connectivity, whether the robot functions autonomously, the role of the embodiment (e.g. sensors and appearance of the various modules), and the ways the robot's behaviour is designed to be integrated into the children's social context. Table 2 indicates the four components with the corresponding themes that appeared in the children's narratives and a quotation example per component.
In section 5, we introduce the proposed framework and indicate connections to the Rights of the Child in the context of robotic systems, as discussed in section 4.1.
Table 2. Themes, categories and indicative examples based on the thematic content analysis across pilot studies

Theme | Categories | Examples
Connectivity | Degree of connectivity | "The robot knows the location of my home and checks the route for me" (Japan, 15)
Autonomy | Perception, decision-making, control | "The robot could understand and help me. It could be my friend" (Greece, 15)
Embodiment | Head, body, limbs, motion, speech | "The robot can carry disabled children to the school classroom" (Uganda, 12)
Social embeddedness | Single user, small group, large group | "The robot can see what you can do with your friends" (Japan, 15)
Limitations and future work
While these pilot studies provide a number of first indications regarding children's perceptions on topics that relate to their fundamental rights in the context of social robots, they were small-scale studies that do not allow generalization of the results. Our future work includes the involvement of a larger number of children from additional geographical regions, with special emphasis on the inclusion of children from under-represented areas and cultural backgrounds. However, conducting child-robot interaction studies in remote geographical areas comes with certain challenges because of various limitations regarding local infrastructure and internet connection. One solution to address this issue is the development of low-cost hardware, upon which we are currently working. As a starting point, we plan to use a hybrid device that combines a low-cost screen-based representation of the Haru robot with a limited set of sensors. From a methodological perspective, the fact that knowledge from research on child-robot interaction in rural areas of Africa is still limited required an increased number of iterations. Based on the story-telling method and the protocols we developed for this study, we aim to iterate on additional emerging aspects in relation to child's rights and robotics. Our ambition is to support children from under-represented and disadvantaged areas in taking an active role in the development of AI and robotics that will probably affect them in the future.
4 CONSIDERATIONS ON CHILDREN'S FUNDAMENTAL RIGHTS IN THE CONTEXT OF CHILD-ROBOT INTERACTION
4.1 Social robots for children
In this section, we relate insights from prior studies on social robot interaction with children to the United Nations principles of children's rights [92]. The consideration of ethically-aligned robots is not a new area for research and policy. However, since citizens and legal entities will increasingly be subject to actions and decisions taken by or with the assistance of AI systems, governmental institutions are pressing for the development of regulatory frameworks for AI. The recently-published White Paper on AI of the European Commission [33] explains that it is important to assess whether current EU legislation can be enforced adequately to address the risks that AI systems create, or whether adjustments are needed to specific legal instruments. Towards this end, further initiatives such as the European Commission's draft Artificial Intelligence Act have already started to consider possible directions for a regulatory framework for AI. With a focus on children, UNICEF has recently indicated that although AI is a force for innovation, it also poses risks for children and their rights. However, the majority of AI policies, strategies and guidelines make only
a cursory mention of children. To help fill this gap, UNICEF is currently exploring approaches to uphold children's
rights in the context of AI and to create opportunities for children’s participation.
Although UNICEF's initiative is, to our knowledge, one of the first and most systematic initiatives for the consideration of children in AI policies, its guidelines cover a wide range of AI-based technologies. Embodied AI and robots bring unique opportunities but also robot-specific risks for children; while this calls into question how existing protection regimes might be applied, it remains unclear what rules for children's protection and interaction with robots should look like. We therefore provide an initial discussion of how the United Nations Convention on the Rights of the Child (UNCRC) [91] could apply to interactions with social robots.
The UNCRC covers all aspects of a child's life and sets out the civil, political, economic, social and cultural rights that all children everywhere are entitled to. The Convention was adopted by the United Nations General Assembly in 1989 and is the most widely ratified human rights treaty in the world. It consists of 54 articles, four of which play a fundamental role in realising all the rights in the Convention for all children and are known as "General Principles". These are the following:
Non-discrimination (article 2): All children should enjoy their rights and should never be subjected to any discrimination;
Best interests of the child (article 3): In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration;
Right to life, survival and development (article 6): States parties shall ensure to the maximum extent possible the survival and development of the child;
Right to be heard (article 12): States parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
Along with the UNCRC, we consider closely the UN General Comment 25 (UNGC25) [90] and UNICEF's AI and child's rights documents [32], which were published more recently and in which we find a consensus on the types of children's rights relevant to digital technologies and AI. Starting from the specific principles pointed out in these documents, we discuss the risks and opportunities presented by the use of social robots in relation to what we know from the child-robot interaction (cHRI) literature.
4.2 Non-discrimination
The UNCRC states that children's rights should be respected without discrimination based on their race, sex, language, ethnic origin, and other attributes. The UN's Comment 25 further defines non-discrimination as "equal and effective access to digital technology" and also points to potential challenges to this principle due to algorithmic bias, exposure to hateful and hurtful digital communications, and the potential for access to content that propagates negative racial, gender, and other stereotypes through digital technology.
One way to address discrimination and embedded bias in robotic systems is to ensure diversity in the groups involved in their design and development. This is hardly a radical notion. Yet, in the field of child-robot interaction, the majority of empirical studies include participant children with certain common characteristics, such as their national and/or cultural origins, while the field has only recently started to exhibit sensitivity to the inclusion of children from underrepresented populations, more diverse regions of the world, or more varied socio-economic and cultural backgrounds (e.g. [24]).
Inclusion of children with diverse backgrounds and from various contexts in the design and evaluation of robots can help address algorithmic bias by providing more diverse data sets; a lack of diversity has recently been shown to create significant issues in digital systems that rely on such data [18], including mis-identification of users who belong to groups less represented in the data. Robots already exhibit certain perception-related biases in relation to children: in general it can be more difficult to perceive children's speech and behaviours, and it can be more challenging still with specific groups of children (e.g. depending on the availability of the language of interaction in the robot) [57].
More inclusive design and evaluation can serve to ensure that children with diverse capabilities are able to participate in interactions with robots by providing relevant interaction modalities. The design of robots that can perceive and produce sign language [1], interact with and teach language to deaf children [76], as well as robots that can interact with and provide social scaffolding for neurodiverse children [27, 51], is a step in this direction. Robots can also provide new ways for children with diverse capabilities to interact with social others and their environment: telepresence robots can provide opportunities for greater social inclusion to children in remote communities [67], or can provide mobility to those who might not be able to attend school in person due to illness (e.g. [69, 97]). In this way, robots can be used to increase access and inclusion of more diverse groups of children.
Social robot bias can also affect children through robots' physical presence, which may represent certain social stereotypes in the robot's appearance and behaviour. Researchers have shown that gender stereotyping is salient in robots [34], even among children [66], and can have an effect on children's own perception of gender stereotypes more generally [83]. Other forms of stereotyping, such as those based on race [10] or ethnicity [35], also occur with robots. Incorporating social stereotypes in robot design can serve to support a robot's function in the short term (e.g. a female-presenting robot might be seen as more empathetic than a male-presenting robot), but such design shortcuts could also have negative longer-term effects on children's understanding of people's diverse capabilities and on their own social gender identity.
When thinking of access, it is also important to consider the robot as a component in a broader socio-technical infrastructure, which requires other technologies (e.g. wireless connection) and people (e.g. experts with both technical and local knowledge) to use robots effectively in the child's location. The inclusion of a broader range of locations and of child and adult experts in the design and development of robotic technologies is therefore imperative if robots are to become a robust and widely available resource around the world.
4.3 Best interests of the child
In the design and deployment of robots, we need to prioritize how a robot's embodiment, social presence, and other aspects of robotics can best serve children's needs, and also consider areas in which robots might inadvertently cause children to come to harm, or where their interactions with robots might prioritize others' gain. When considering the best interests of the child, the UN Comment 25 identifies the need for the child to "receive appropriate information, and be protected from harm including through content blocking as needed". The UNGC25 discusses "protection from exploitation" of children using digital media, including concerns about digital media control by corporations; access to and use of data created through engagement with digital technology; and no neuro- and affective marketing to children. With this in mind, UNICEF suggests we need to establish "proportional and appropriate limits to participation focused on harm reduction," and ensure that there is "no limitation or punishment to children based on views expressed."
We can consider how cHRI may lead to the collection of data on children's behaviours, ideas, and more by corporations through robots, and to its use for their profit. Children may engage with robots in various types of contexts, including for education, entertainment, and social interaction. During such interactions, it is possible to collect data on children's attitudes, beliefs, and responses to different stimuli, and also to influence children's behaviour and ideas. Children might be particularly vulnerable to such influence as they might engage with robots more as a peer or social companion than as a technology, and divulge data that adults would not [15, 16]. To counter this possibility, it is important to think of ways to limit the collection of data from children during everyday interactions, and to regulate its use so as to prioritize children's benefit over that of other stakeholders.
We should also consider the effects of robot malfunctions when robots interact with children. The negative impacts of mis-identification or bias can be particularly acute in interactions with social robots, which engage children in affective and socially salient interactions. A robot that is not able to understand a child with a certain accent will not be able to provide equitable service to that child. It may also cause psychological harm, as the child might interpret the robot's inability to interact with them as their own insufficiency or fault, rather than a technical glitch [89].
It is also possible that a robot might give incorrect information or guidance to a child due to its own limited knowledge capabilities. This calls us to engage with the potential limits of research on trust in cHRI. While much of HRI design focuses on getting people to trust robots, it is particularly important to consider how we might help children understand the limits of the technology and, in some cases, even limit their trust in it. To allow robots to be used with a child's best interests in mind, we should further consider developing a curriculum for teaching digital literacy with robots to children, so they can better understand their interactions with these new technologies and place them appropriately in their lives.
4.4 Right to life, survival and development
The UN's General Comment 25 explains that it is important to ensure children are protected from "inappropriate content, online abuse and exploitation", and that technology should not be used as a substitute for personal interaction. It is also important to ensure the availability of appropriate use strategies for each stage of child development. UNICEF more broadly suggests that AI policies and strategies need mechanisms for assessing and continually monitoring the impact of AI systems on children. Digital technologies can promote children's right to culture, leisure and play by, according to UNGC25, providing "access to broader array of cultural resources" and an "attractive and engaging medium for children." The UNGC25 points out that such interactions need to involve "data protection, privacy and safety by design."
Children have been identified as a promising potential user group for robots because of their openness to novelty and active imaginations, as well as their strong tendency to anthropomorphize, all of which can support interactions with robots. In designing robots, we must consider the potential harms that might come to a child due to the robot's existence between the world of the animate and the inanimate: its "third ontology" in the terminology of HRI researchers [55, 58]. Due to its design and embodiment, the robot might appear to the child to be animate, but might exhibit more mechanistic behaviours, or lack certain responses or functions exhibited by animate beings. In terms of child development, it is important that the robot design makes the machine nature of the robot clear to the child, to take into account potential effects on the child's moral and social development [53, 54, 81].
An area of significant development in cHRI is educational robotics, which pertains directly to a child's "right to education" as a means to their development. UNGC25 explains that there is a need to "extend educational programs to include digital technology and related challenges and opportunities and make access to education more available" through digital technology. Robotics as a type of AI system offers several opportunities for use in such an expanded view of children's education at various ages. Robots themselves are being developed as educational technology and as a way for more diverse students to get engaged in STEM [14, 70]. Robots also figure in some research as a way for children to access educational materials through a different mode of interaction, with the robot as a peer or tutor, or as a way to present educational materials in a more socially interactive and physically co-present way in comparison to computers or iPad applications [64, 98]. As mentioned above, robots can also provide tele-education opportunities for children to access remote locations through the robot. It is, of course, always important to consider the robot not as a replacement for human instruction, even though robots have been shown to be more efficacious for learning outcomes than non-embodied digital technologies, but as an addition to instructional interactions with peers and teachers.
There is also the possibility that robots can help children expand their horizons. Robots as entertainment and education tools can provide children with access to novel experiences of other places, cultures, and species [5, 45], or provide new forms of learning [22] and play [4, 102, 103]. Researchers have recently started thinking about how robots can be designed with a "growth mindset" in mind [31], and how they can support children's creativity [46].
However, as we develop such educational and entertainment cHRI, we should consider whether the inevitably limited interaction capabilities of robots will end up limiting children's autonomy and development in ways we cannot yet foresee. For example, a storytelling robot that displays only Western fairytales can restrict rather than support diverse forms of play and cultural expression in cross-cultural communication, while a collaborative robotic platform for storytelling that takes input from children of many cultures can expand access and global understanding. In addition, we need to consider the impacts of robot bias towards children, not only in robots designed for children to use but in robots that children might have access to in their homes, schools and public spaces more generally. Voice-activated assistants already interact with children, but do not necessarily adapt the type of data they provide or collect to their younger users. If we consider the possibility of autonomous cars in the future, how will they be able to interact with children, either as pedestrians on the street or as passengers, taking into consideration children's own needs and perspectives (e.g. [23])? Lastly, it is also important to support children's autonomy throughout their interactions with robots, and to make sure that the robot does not end up in the role of task-master, leaving the child no room to procrastinate or to create their own leisure time and pacing.
4.5 Respect for the views of the child
The UN AI guidelines document suggests that the meaningful participation of children in the design of new technologies, and in the development of policies relating to AI technology design and use, needs to be supported. This type of work should include, per the UNGC25, efforts to make sure children have equitable and age-appropriate access to information: the "availability of information based on child's stage of development, language and cultural background; plurality of content in different formats; and diverse and not just commercial content". The UNGC25 also suggests that children should receive "training and support... to participate in digital interaction" and be able to appropriately and successfully express their views about, and with the use of, digital media.
Researchers are increasingly engaging children in the design of robots. Work in STEM education provided early results here, pointing to novel uses for robots that go beyond physical assistance to enable social interaction and strengthen interpersonal ties [40, 82]. In our own educational research, we have seen the importance of providing children with opportunities to adapt and personalize robots to their own interests and needs [44]. We have also seen the potential of including children in the design of robots to give them a greater sense of self-efficacy and contribution to society, as well as to help them learn the connections between social factors and technical design [45]. In one classroom in the United States, children designed telepresence robots that could patrol hallways as a way to address concerns about the rise of gun violence in American schools [43]. This suggests that children's participation is important not only for improving the design of technology to better suit their needs, but also that such participatory activities have in their own right a potential benefit for the intellectual, mental, and social development of children. To support this type of development, researchers have started identifying robot design and evaluation methods specifically suited to use with children (e.g. [4, 6, 23, 103]).
We can consider providing transparency, explainability, and accountability for children regarding digital technology to be part of the effort to respect and voice their views. In this regard, robots provide opportunities for interpretable, physical and tangible access to digital technology not available with other AI systems. Robots can be used as a more natural interface for child-friendly explainability, digital content, and multi-modal channels of communication, and researchers should explore which aspects of robotic technologies are particularly useful in scaffolding children's understanding of the functions of AI technology. For example, physical posture and movement can be used to clarify what a camera is collecting information from; or the robot can clarify through conversation and interaction how data collected from the child might be used, and explicitly ask for permission for particular uses.
It is also possible for robots to provide opportunities for children of all ages to "participate in digital interaction... [through] age appropriate communication strategies." As robots can interact with children using verbal and nonverbal physical cues, they can provide a more widely understandable and richer mode of communication than screen-based or verbal-only technologies. The morphological diversity of robots, which spans minimalist mechanistic interfaces as well as zoomorphic and human-like interfaces constructed in abstract, cartoon-like, and realistic forms, suggests that robots can be adapted to different age groups and cognitive needs. For example, robots for autistic children or for younger children can be more mechanomorphic or cartoon-like in form, whereas robots for typically developing older children (e.g. over 8 years of age) could have more realistically human-like features.
Finally, robots can assist in “promoting children’s data agency.” Along with following a responsible approach to
collecting and handling data for and about children, designers can adopt a privacy-by-design approach and consider
protections for children both at the individual and at the group levels. It is important for children to not feel like they
are growing up under constant surveillance by articial systems, and that their agency and autonomy are constrained
by AI, as this would diminish their well-being and potential for full development, according to UNICEF’s guidelines.
With this in mind, social robots should be designed in such a way that children have control over the data that is
collected by the robot and the way it is used further on, including who it is disseminated to (e.g. parents, teachers).
Along with their data, their actions and autonomy, or their right to be alone and make their own decisions, should not be unduly constrained by the technical capabilities and designed functions of the robot [81]. Finally, children should
not be proled through the types of interactions they have with robots (e.g. empathy, kindness, abuse) in ways that
can have longterm consequences for their own understanding of their identity or their social standing. In protecting
children’s agency and dignity in their interaction with AI system, we should also be wary of the potential for robotic
embodiment to strengthen the persuasive nature of AI-based recommendations which can guide children’s behavioral
and attitudinal choices [8, 100].
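To make the deny-by-default, privacy-by-design stance discussed above concrete, the following sketch shows one way a robot's software could record a child's (or guardian's) data-sharing consent. All names here (`DataConsent`, `may_share`, the category/recipient scheme) are illustrative assumptions of ours, not part of any existing system or of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class DataConsent:
    """Hypothetical per-child consent record with a deny-by-default policy."""
    child_id: str
    # data category -> set of recipients explicitly approved by the child/guardian
    approved: dict = field(default_factory=dict)

    def grant(self, category: str, recipient: str) -> None:
        # Explicit, revocable grant for one data category and one recipient.
        self.approved.setdefault(category, set()).add(recipient)

    def revoke(self, category: str, recipient: str) -> None:
        self.approved.get(category, set()).discard(recipient)

    def may_share(self, category: str, recipient: str) -> bool:
        # Deny by default: sharing is allowed only if explicitly granted.
        return recipient in self.approved.get(category, set())

consent = DataConsent(child_id="child-001")
consent.grant("location", "parent")
print(consent.may_share("location", "parent"))      # True
print(consent.may_share("location", "advertiser"))  # False
consent.revoke("location", "parent")
print(consent.may_share("location", "parent"))      # False
```

The design choice worth noting is that absence of a record means "no": the child's data is never disseminated to a new recipient (parent, teacher, or third party) without an explicit, revocable grant.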
5 A FOUR-COMPONENT FRAMEWORK FOR SOCIAL ROBOT DESIGN SENSITIVE TO CHILDREN’S
RIGHTS
In this section, we introduce our four-component framework for social robots that align with children's fundamental rights. Considering the insights that emerged from our empirical studies as described in section ??, the existing findings of the cHRI literature in section 4.1, and the UN's child rights general principles, we identify four technical, robot-specific,
non-binary components to be taken into consideration for the design, research and deployment of social robots for
children. The framework is intended to be used as a tool to help mitigate possible emerging challenges for children as users of social robots and to expand the advantages that the use of social robots might offer for children's development and well-being.
Fig. 7. A four-component framework on social robots for children
We acknowledge that the mitigation of possible risks and the use of robots as a social infrastructure to expand the opportunities for a child's development is a complex endeavour that requires the contribution and interaction of multiple actors, the consideration of complex social and cultural contexts and, most importantly, the unpredictable potential of each child's rapid development. However, based on the Gibsonian theory of affordances [39], the environment (including objects, tools and social constructs) and humans have a mutual impact on each other. The impact of the devices we design is even greater when we consider the complexity of the human social fabric. Robot designers who wish to address this complexity need to consider both explicit and implicit interactions among humans and between a social robot and a human or groups of humans [52].
As social robots thus become a powerful social infrastructure for expanding human capabilities, they need to be designed in a transparent way, allowing the end user to be aware of certain fundamental features of the robotic artefact. This becomes even more crucial when the robot is meant to be used by children.
That being said, our proposed framework (fig. 7) focuses on four fundamental characteristics of robotic systems, namely (i) the degree of connectivity of the robotic system; (ii) the degree of autonomy of the different modules of the system, such as the decision-making module; (iii) the characteristics of the embodiment of the robotic system in terms of appearance and the affordances it conveys; and (iv) the degree and kind of social embeddedness of the robotic system, which is defined as its potential integration into the social fabric of a child's social world. For a child-centered approach to robot design, these components can be used as a compass for designers' and developers' self-reflection regarding their design decisions in relation to those components and their impact on the four principles of children's rights. In
addition, the graphical representation of the framework (fig. 7) (or a possible child-friendly adaptation of it) has the potential to be used as an interface between designers and developers on the one side, and end users (e.g. children, parents or educators) on the other. In this way, it can be used as a means to support transparency in the design process of robots for child users and as an entry point for the consideration of children's voices.
In the following subsections we discuss each component in the light of the four principles of the child's fundamental rights and the corresponding policy requirements proposed by UNICEF. The aim of this section is only to provide an initial illustration of the framework with some indicative examples; it is beyond the scope of the paper to include an exhaustive list of cases of each component in relation to the general principles of children's rights. Taking children's narratives from our empirical studies as a starting point, we discuss each component and introduce its corresponding elements; this is followed by a connection of each component with the relevant principles of the rights of the child, possible emerging challenges, ways to mitigate these challenges, and the potential use of the specific component to support children's development and well-being.
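To illustrate how the four components could be recorded and surfaced for the self-reflection and transparency purposes described above, the sketch below encodes them as non-binary degrees that a design team could report to parents or educators. The class name, the [0, 1] scale and the low/medium/high bands are our own illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class RobotRightsProfile:
    """Hypothetical record of the four non-binary framework components."""
    connectivity: float          # 0 = fully local ... 1 = wide cloud connectivity
    autonomy: float              # 0 = fully teleoperated ... 1 = fully autonomous
    embodiment: float            # 0 = abstract/mechanistic ... 1 = highly human-like
    social_embeddedness: float   # 0 = standalone tool ... 1 = woven into social fabric

    def __post_init__(self):
        # Each component is a degree, not a yes/no switch.
        for name in ("connectivity", "autonomy", "embodiment", "social_embeddedness"):
            v = getattr(self, name)
            if not 0.0 <= v <= 1.0:
                raise ValueError(f"{name} must be between 0 and 1, got {v}")

    def summary(self) -> list:
        # Child- and parent-friendly view: map each degree to a coarse band.
        labels = {0: "low", 1: "medium", 2: "high"}
        out = []
        for name in ("connectivity", "autonomy", "embodiment", "social_embeddedness"):
            band = labels[min(int(getattr(self, name) * 3), 2)]
            out.append(f"{name.replace('_', ' ')}: {band}")
        return out

profile = RobotRightsProfile(connectivity=0.2, autonomy=0.8,
                             embodiment=0.5, social_embeddedness=0.9)
print(profile.summary())
```

A coarse, human-readable summary of this kind is one possible "interface" between developers and end users: it exposes the design decisions without requiring the reader to understand the underlying technical detail.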
5.1 Connectivity
“The robot knows the location of my home and checks the route for me” (Japan, 15). This quotation from our pilot study in Japan reveals the participant's desire for a robot that is connected with external infrastructure in order to provide certain services to the child. The connectivity of the modules of a robotic system may vary. Social robots can be designed to function only locally, or to have their modules connected with additional infrastructure, robotic or not. Remote-brained robots were introduced as an approach in which the robotic system is designed to have the brain
and body conceptually and physically separate [50]. Robots can be connected with other robots and with non-robotic infrastructures, which may involve dynamic registration and de-registration of the system in a group [?]. This approach
allows data from multiple systems or robotic platforms to be shared and processed both locally and centrally. In the field of social robotics, although internet-based systems appear to be a promising area of development, the real-time, physically situated, multi-channel perception and control they involve also require local processing. Consequently, robots' connectivity might differ across cases.
Our framework considers that a robot might function locally, without being connected to any network; it might be connected with additional infrastructure and robots in a limited context (e.g. within a school unit); or it might be connected with a wider infrastructure. This means that the connectivity of the robot might have different degrees, which impact the interacting child in various ways. Consequently, the degree of connectivity is examined in relation to the four principles of the rights of the child, as described in section 4.1, in the following way.
Principle 1 - Non-discrimination
Connectivity is fundamentally changing children's lives by facilitating their access to information and learning. However, access to the internet, the technological infrastructure that provides fast broadband speeds, devices that support children's learning and development, and the skills needed to make the most of digital technology all vary across geographical regions and families' socio-economic status. Currently, the field of child-robot interaction is being shaped by studies typically conducted with children from advantaged areas, while children who are not connected risk exclusion and disadvantage as much of the modern world remains out of their reach. To promote inclusion and provide opportunities for the use of robots by all children, designers need to consider functions that do not require internet connectivity and provide content that can be used with limited internet access. At the same time, solutions for improving the current infrastructure in remote and disadvantaged areas will support the inclusion of children in the current dialogue about robot developments for children.
Principle 2 - Best interests of the child
One of the fundamental interests of the child, as emerged from our empirical studies, is privacy. Connectivity is highly relevant to children's right to privacy, both as a human right and as an important part of children's development. One concern relating to privacy based on UNGC25 is “concern
about businesses access to data and its use in marketing and other business activities; collection of diverse kinds of data
that might harm children (e.g. biometric, behavioral); access to parents of data collected, ability to delete.” We add to
this concerns about the child's feelings of needing to be alone, unobserved, and free to act as they like in their own personal space, which can be affected by robots that are constantly on and constantly watching. There has also been concern that voice-activated technologies like Alexa could be used as a somewhat insidious marketing tool; this capability could be enhanced by the physical presence and related persuasiveness of robots, which can be used to convince people to change their opinions and behaviours [71].
Principle 3 - Right to life, survival and development
The degree of robot connectivity might have an impact
on the potential security of the child. Connected robots can be hacked and exploited, which many current commercial
robot designs fail to take into account. The impact of such adversarial tactics on children who might interact with
the robot can be signicant. For example, researchers have shown that children might be more likely to open up to
robots, which has led to suggestions that they could be used to interview child survivors of abuse [
16
]. It is possible
that this tendency of children to be frank and open when talking to robots [
2
] could be used by adults to interact in
inappropriate ways with children, or to extract information that can be used for illegal activity (e.g. breaking into a
house when the family is not home). Certain concerns arise regarding the ownership of the shared data and the right of
the child to be forgotten.
Principle 4 - Respect for the views of the child
Throughout the design process, children's opinions about a robot's connectivity should be taken into consideration and respected. Our empirical studies showed that children expressed a desire for connected robots and for robots that might connect them with children from different geographical areas and cultural backgrounds. However, robotic technology can be exceptionally complex. For this reason, including children in the design process requires them to be well informed, by raising their awareness of the possible consequences (opportunities and challenges) of the degree of robot connectivity, in order for them and their parents and educators to have control over it.
A self-reection of the designers and developers on the degree of connectivity of a social robot for children would
allow the involved actors to be aware of the state of the specic robot in terms of its connectivity and even for certain
involved actors such as parents, educators or children to have control over it. Our proposal for the specic component
needs to be further elaborated and tested with children in order to be the basis for the development of a concrete
platform for designers, developers and researchers to consider the degree of connectivity in relation to children’s
fundamental rights. For a transparent design of robots, the degree of the connectivity should be communicated to the
children in a developmentally appropriate way.
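One way designers might operationalise the suggestion above that core functions should not require internet connectivity is a skill dispatcher with local fallbacks, so that a robot degrades gracefully at lower degrees of connectivity rather than excluding unconnected children. The sketch below is a hypothetical illustration: the skill names, connectivity levels and fallback behaviours are all assumptions of ours, not drawn from any real robot platform.

```python
# Increasing degrees of connectivity, mirroring the framework's idea that
# connectivity is a non-binary property of the robotic system.
LOCAL, SCHOOL_NETWORK, INTERNET = 0, 1, 2

# Each (assumed) skill declares the minimum connectivity it needs and,
# where possible, a fallback that still works fully on-device.
SKILLS = {
    "storytelling": {"needs": LOCAL, "fallback": None},
    "peer_chat": {"needs": INTERNET, "fallback": "offer on-device games instead"},
    "route_check": {"needs": INTERNET, "fallback": "replay last cached route"},
}

def run_skill(name: str, available: int) -> str:
    skill = SKILLS[name]
    if available >= skill["needs"]:
        # The skill's requirement is met at the current connectivity degree.
        return (f"running '{name}' online" if skill["needs"] > LOCAL
                else f"running '{name}' locally")
    if skill["fallback"] is not None:
        # Degrade gracefully instead of refusing service outright.
        return f"'{name}' unavailable offline: {skill['fallback']}"
    return f"'{name}' unavailable"

print(run_skill("storytelling", LOCAL))   # core function works with no network
print(run_skill("peer_chat", LOCAL))      # degrades to a local fallback
print(run_skill("route_check", INTERNET))
```

The point of the design is that the robot's most developmentally important functions sit at the `LOCAL` level, so children with limited or no internet access are not excluded from them.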
5.2 Autonomy
“The robot could understand and help me. It could be my friend” (Greece, 15). Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control [13]. Social robotic systems perceive low-level elements of the physical and the complex social environment via embedded or non-embedded sensors, process that information, and react according to existing predefined or intrinsically-motivated rules in a socially meaningful way without human intervention. During the process of perception, decision-making and action, the degree of autonomy of the robotic device might differ, allowing for partial human control over any of the modules involved. Previous work has already suggested a framework for levels of robot autonomy in human-robot interaction [12]; however, additional elements should be considered in the case of a child user.
Principle 1 - Non-discrimination
The complexity of autonomous systems makes it increasingly difficult to understand how these systems arrive at their decisions, which elicits concerns especially in the case of social robots that have the potential to be used by children in sensitive environments; thus it is crucial to ensure that these decisions do not reflect discriminatory behaviour toward certain groups of children. This is particularly problematic in the case of opaque decision-making modules that lack transparency and explainability. An autonomous social robot for children that is trained by or learns from its interaction with humans might function in a discriminatory way and exhibit bias depending on the data provided. This might lead to biases against children with specific cognitive, social and cultural backgrounds. Taxonomies of bias and fairness in machine learning indicate that discriminatory decision-making by an autonomous system might be caused by bias in data as well as by algorithmic unfairness [63]. Discrimination generated by autonomous systems might take multiple forms, such as systemic or statistical discrimination (ibid.). However, in the case of autonomous social robots for children, there are additional and complex considerations, such as the social context, norms and conventions that a socially intelligent robot should align with [49]. Currently, robot social policies are modelled mainly for children with certain cognitive and cultural backgrounds. One of the areas of child-robot interaction that actively considers non-discrimination is the development of social robots for autistic children [27, 75].
Principle 2 - Best interests of the child
The degree of autonomy of a social agent is typically linked with the level of agency and the intentionality of corresponding actions. An increasing body of research in the field of child-robot interaction tries to address questions regarding the ways and the reasons children ascribe agency and intentionality to social robots, even when these robots are controlled by humans. It has been shown that robotic agents that exhibit social signals of intention communication activate areas of the human brain involved in social-cognitive processing [99], which means that children's interaction with them is particularly powerful. Attribution of intentionality can positively affect human-robot interaction by (i) fostering feelings of social connection, empathy and prosociality, and (ii) enhancing performance on joint human-robot tasks. However, in certain circumstances, attribution of intentionality to robot agents might result in challenges regarding children's overtrust towards robots. The way that the autonomy (or perceived autonomy) of a robot affects children's attribution of agency to a robotic artefact is an area that requires further research. However, there are already indications of children's awareness of a system's autonomy, such as in the case of self-driving vehicles [23].
Principle 3 - Right to life, survival and development
While in the eld of child-robot interaction, there are
several successful examples of autonomous robotic systems that interact meaningfully with children “in the wild”
[
30
,
73
,
79
], in the majority of the cases current technical limitations in perception and control of autonomous robots
result in an oversimplication of the complex social environment within which children develop. This might have
a long-term negative impact on children’s development which requires rich, long-term and dynamically interactive
experiences. This challenge becomes even more problematic in the case of children’s cognitive development and the
scaolding of their zone of proximal development [
96
]. Current autonomous robots are not technically advanced
enough to identify and predict the elements of child’s zone of proximal development. The use of autonomous robotic
devices that are designed to support child’s development in formal or informal settings might limit the child’s potential
of development if they do not consider the dynamic nature of the developing child.
Principle 4 - Respect for the views of the child
By identifying the degree of autonomy of a social robot for children in a transparent way, the user (parent, educator or child) can take adequate actions to complement the robot in a developmentally and socially appropriate way in the best interest of the child. Previous research on intention communication from self-driving vehicles towards child pedestrians revealed children's need to be aware of the autonomy of a vehicle [23]. Additionally, in the context of social robots, the autonomy of a robot should allow
humans (children or the relevant adults) to be aware of and take control of the robot, if needed. Shared autonomy requires that the designers and developers of the system communicate in a transparent way the degree of autonomy of the various modules of the robot, in order for the human to take maximum advantage of what an autonomous robot may offer while protecting children from a long-term negative impact on their development of autonomy.
Our framework provides a canvas upon which the various actors involved in the design, development, implementation and use of social robots for children can communicate and interact, in order for robot autonomy, agency attribution and intentionality to be meaningful for the child user. Attentive and capable caregivers and teachers can work with children in their zone of proximal development, where they are challenged just enough to be able to develop new skills but do not feel discouraged by constant failure. It is difficult to imagine robotic interaction capabilities achieving this kind of adaptive interaction; however, robots may be part of a teacher-child-robot triad and serve as a helpful tool to engage children in learning. It is also important to note that play with robots, as opposed to play with human peers, may raise developmental concerns (e.g. moral, social). Along with the other points mentioned above, the existing cHRI literature confirms the importance of ensuring that designers do not approach robot development through the lens of substituting interaction with the robot for human interaction; rather, the robot should be thought of as a component of a broader social infrastructure that supports the child. Shared autonomy should be further examined in order to identify the optimal balance between the autonomy of an interactive robotic artefact and the developing autonomy of the child.
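As a hypothetical illustration of how the degree of autonomy of each module could be declared transparently and overridden by a human, consider the sketch below. The module names, the three-level scale and the `human_override` method are our own assumptions for illustration, not an established API or the paper's method.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    TELEOPERATED = 0   # a human performs the function
    SUPERVISED = 1     # the robot acts, a human confirms or vetoes
    AUTONOMOUS = 2     # the robot acts without human intervention

class SharedAutonomyRobot:
    """Hypothetical robot that declares per-module autonomy levels."""

    def __init__(self, modules: dict):
        self.modules = dict(modules)

    def report(self) -> dict:
        # Transparent, human-readable view of each module's autonomy degree,
        # suitable for showing to a parent, educator or (suitably adapted) child.
        return {m: level.name.lower() for m, level in self.modules.items()}

    def human_override(self, module: str) -> None:
        # A guardian can always drop a module back to teleoperation.
        self.modules[module] = Autonomy.TELEOPERATED

robot = SharedAutonomyRobot({
    "perception": Autonomy.AUTONOMOUS,
    "decision_making": Autonomy.SUPERVISED,
    "speech": Autonomy.AUTONOMOUS,
})
print(robot.report())
robot.human_override("speech")
print(robot.report()["speech"])  # teleoperated
```

The design point is the one made above: shared autonomy presupposes that the autonomy of each module is both inspectable and revocable by the humans around the child, rather than a single hidden property of the whole system.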
5.3 Embodiment
“The robot can carry disabled children to the school classroom” (Uganda, 12). One of the most basic distinctive characteristics of social robotic systems, in comparison with non-embodied AI systems, is their physical nature. The embodiment of robotic systems takes advantage of a set of features that relate to human social perception, action and interaction. These features affect the robot's appearance and behaviour. The embodiment of robotic platforms, which includes physical features, motion, and verbal and non-verbal expression, creates distinct ways of communicating with humans by combining multi-modal channels of interaction with an emphasis on their physical nature, which contributes to possible robot anthropomorphism. These features, in turn, elicit specific behavioural, cognitive and emotional reactions in children's brains and behaviour, while the degree of robot anthropomorphism contributes to the acceptance or rejection of the social robot [85, 104]. Because of these characteristics, children's interaction with robots is particularly powerful for children's behaviour and development, even if this interaction is based only on the physical presence of the robot [61].
Principle 1 - Non-discrimination:
The embodied nature of social robots allows the interaction between a child and a robot to be based not only on explicit signals but also on implicit communicative features that are generated and developed based on the affordances of the embodied system and affect the way the biological or artificial system behaves. This process is based on cognitive mechanisms known as embodied cognition, which holds that the physical body plays a substantial role in cognitive activities, including the processes of social interaction [41]. In this context, possible discriminatory behaviour of the system should be checked for in the interpretation of implicit social signals, which largely depends on cultural as well as developmental aspects. This means that, in addition to possible technical bias, embodied agents might develop biases based on their social interaction with specific groups of the human social fabric, within which implicit social signals might shape the interaction. As a result, the embodiment of social robots requires us to address more fundamental fairness issues first, before applying specific algorithms.
Principle 2 - Best interests of the child
The embodiment aects both the children’s behaviour towards the robot
as well as their perceptions about it. Research indicates that children tend to attribute anthropomorphic characteristics
and agency even to non-humanoid robots with mechanical characteristics [
62
]. The ways children perceive robots have
an impact on their trust towards robots. As a consequence, on the one hand, embodiment of an AI-based system brings
unique gains for children since it facilitates physical interaction with the environment including social interactions.
On the other hand, it might result in child’s over-trust, towards the robot and deception. Children’s attribution of
agency to inanimate objects is not a new phenomenon. Piaget has indicated children’s animistic thinking in four
developmental stages for objects that look, move, act and interact like agents by exhibiting goal-directed actions or
being responsive to changes of their environments [
72
] while Heider et at.[
47
] showed how children attribute agency
to moving geographical gures. In a similar way, when children interact with robotic systems even with minimum
anthropomorphic features, the system actively elicits children’s predisposition for animism with the dierent modules
and the morphology and the corresponding behaviour to have a dierent impact. This indicates that the embodiment of
a social agent creates a particularly powerful social infrastructure which can be used for the best interests of the child
while at the same time it might be misused as a means for child manipulation.
Principle 3 - Right to life, survival and development
The embodied nature of robotic platforms allows the creation of learning opportunities for children that take advantage of technologies for personalized and adaptive learning, which promotes equity among children, while enabling children's interaction with the physical world and with members of their social environment. The embodiment of social robots comes with technical challenges regarding their mechanical functions; a possible lack of technical robustness might affect children's right to safety and physical security. Social robots have proved to be a powerful tool that allows the use of multiple channels of communication with children, who benefit from concrete and tangible means of interaction. For example, robots have the potential to contribute to a developmentally appropriate communication of a system's reasoning and decision-making towards children and to make the system's explainability more accessible to the child user, which supports transparency of the system, since the robot can function as an interface that transforms abstract notions of non-embodied AI into concrete physical features.
Principle 4 - Respect for the views of the child
A transparent robot design in terms of robot appearance and behaviour would help designers and developers identify elements of the embodiment of the robot that are necessary for the support of children's well-being, while enabling them to reflect on aspects of embodiment that might have a negative effect on children's fundamental rights. Based on children's narratives from our empirical studies and on the literature, we propose certain elements of robots' embodiment, such as elements of a face, the motion of the robot, the degree of its perceived anthropomorphism and, consequently, the perceived agency. The appropriate communication of the purposeful inclusion of these embodiment elements, in line with children's fundamental rights, would help children and all the involved adults to be aware of those elements, in order to make more informed decisions regarding the acceptance of a certain robot for children.
5.4 Social embeddedness
We propose the term “social embeddedness” to describe the potential integration of a robot into the human social fabric. Embodied robotic devices that are designed to act upon and interact with their environment have the potential to engage with children around them and eventually to become part of their social fabric, even if the robot is not specifically
designed for children. The design and development of social robotic artefacts includes both the rules according to which they are programmed to perceive human behaviour, by making high-level inferences from the perception of low-level human features, and the design of actions that elicit human attention and possible responses. One of the main goals of robot designers is to achieve a fluent and sustained interaction between children and robots.
Principle 1 - Non-discrimination:
Robotic social agents that are embedded in a child's social life are developed to have multi-modal perception of the surrounding environment through their sensors. Child users are often not aware of those capabilities. A social robot might be able to navigate the personal space of the child, perceiving personal information in the child's private environment, which raises issues about the use of this information for the robot's adaptation to specific users in the same environment. For example, a robotic device that is socially integrated in a home environment with children might discriminate against the child user for technical reasons, such as perception and recognition systems that are often trained on adult data. Furthermore, social robots for education might discriminate against children with personality traits that appear in a minority of children (e.g. [48]). Since robots have been proven to impact human group dynamics [78], discriminatory robot behaviour has the potential to affect children's behaviour towards minorities.
Principle 2 - Best interests of the child
For a meaningful and sustained child-robot interaction, the robot needs to be technically capable of perceiving and processing the complexity and the dynamic nature of human social interactions. We acknowledge that numeric representation of dynamic environments and of aspects of human behaviour is necessary in human-machine interaction. However, emotional and social states are complex human mechanisms, many of which are not yet fully understood and which are challenging to capture, especially in children [17]. As a result, experimental studies in the field of child-robot interaction often use tools that oversimplify the child's behaviour, with representations that do not consider the nuances and the complexity of children's social and emotional states. This appears even more problematic when children interact in informal and non-controlled setups, where they act spontaneously and exploratorily, which are necessary processes for the formation of their social identity and, eventually, their dignity.
Principle 3 - Right to life, survival and development
Children's development and well-being are deeply influenced by their place in proximate social groups, such as their family or peers. Human-robot interaction can affect children's experiences with and actions toward each other, and this aspect should be explicitly considered in robot design. For example, an early educational study of a social robot embedded in an elementary school classroom showed that attention from the robot, and its ability to understand what a specific child says, can change the status of children within their group [56]. Robots can also help manage interactions in small groups, including shaping turn-taking [87] and helping address team conflict [82]. In this way, robots can help children better understand their social surroundings, and even develop collaborative skills through play-based learning [68, 86, 103] and creative activities [3], while a systematic literature review revealed that robots are perceived as neither animate nor inanimate, and that they are entities with whom children will likely form social relationships [93]. The design of different forms of social embeddedness for robots, therefore, can have an effect on children's social development and inclusion.
Principle 4 - Respect for the views of the child
For the design of robots that are embedded in children’s social lives,
we need to respect and prioritize the views of the child and the ways they envision their social activities and eventually
their social self. The results of our empirical studies revealed that children imagine their social interaction with a social robot in various forms, such as dyadic interaction with the robot [14, 22, 38, 60], small-group interaction [27] or large-group interaction [78], in various settings. Social robots can act as socially embedded mediators between children and others, either by being used as telepresence platforms [69] or by being a third party in children's interactions with others, providing them with information about common interests they may have with other children, or engaging them in collaborative play [20, 80, 103]. As mediators that can affect how a child understands and interacts with their social environment, telepresence robots should be designed flexibly so that they can appropriately represent the child, their personality and identity, and allow the child to explore the environment as freely and authentically as possible.
This could be done through a sufficiently abstract morphology of the robot, verbal and nonverbal cues adapted to the child, or accessories that might express something of the child's interests and personality [36, 37]. Robots as mediators
should also give children the capability to freely explore and express their reactions to the environment, rather than
corralling them into a too-strict set of pre-dened actions and reactions. With this in mind, robots as mediators, even
if they have limited capability, should leave space for creative and spontaneous actions by the child [
4
], allowing
children’s self-expression even through limited technical means. In a similar way, robots that are designed to facilitate
children’s engagement with a task, they should allow child-initiated voluntary interaction, even if that means that
the child chooses not to have any interaction with the robot [
22
]. While all the above-mentioned cases in the eld of
child-robot interaction indicate that researchers in this eld consider that robots’ socially embeddedness requires our
active listening to children’s views, this should be reected in both the processes for the robot design as well as the
robot behaviour in the interaction with children.
6 DISCUSSION AND FUTURE DIRECTIONS
This paper presents a theoretical framework which aims to facilitate the processes of research, development and deployment of social robots that are aligned with children’s fundamental rights. It is based on a pilot project initiated by UNICEF on the topic of AI and Child’s Rights, with a focus on social robots for children. The proposed framework consists of four components, namely connectivity, autonomy, embodiment and social embeddedness of the robotic system, as a set of technical, non-binary features that should be taken into consideration in relation to children’s fundamental rights. Its development was based on the results of a set of empirical, small-scale participatory studies conducted with children and parents, with the participation of their educators, across diverse geographical and cultural contexts, and on the current literature in the field of child-robot interaction, viewed through the prism of the four general principles of children’s fundamental rights.
This framework oers the basis for a transparent and reexive way for designing robots for children in relation
to the four proposed components. It can be used as an interface for designers’ and developers’ self-reection on the
specic components and as a means for the communication of the design decisions to the end-users (including children)
and the relevant stakeholders. The communication of the design decisions for the robot development and deployment
towards the involved stakeholders and especially towards children and their parents and educators can contribute to
actionable points for the inclusion of children in the design of robotic systems that are meant to be used by them. In
this paper we only introduce the four components of the framework and it is beyond the scope of the paper to identify
specic metrics for the identication of the technical requirements, which is part of our future work.
Throughout the conceptualisation and development of the framework, we prioritized children’s participation and the inclusion of children from under-represented geographical regions and cultural backgrounds. This was one of our means of indicating the need for children’s inclusion in order for a robotic system to promote fairness and non-discrimination. For the design of social robots for children to promote children’s best interests, one of the basic prerequisites is to prioritise developmentally appropriate designs. Considering developmentally appropriate interaction capabilities for robots, one area of focus would be how to make robots more developmentally adaptive. Different robotic technologies are designed for children of different ages, but they often do not grow or change with the age of the child, or even adapt their behaviour to children of different ages.
WeRobot2021, Miami, Florida
Charisi and Šabanović, et al.
Consideration of children’s developmental levels in robot design should be accompanied by consideration of children’s rights, with an emphasis on the general principles. The Convention on the Rights of the Child and the identification of its general principles, General Comment No. 25 on children’s rights in relation to the digital environment, and UNICEF’s Policy Guidance on AI and Child’s Rights have guided our project. However, the translation of those principles into technical, robot-specific requirements is a challenging but necessary endeavour. For the identification of actionable points, a transparent interface is required that will facilitate reflection upon, and communication of, those technical requirements to the relevant stakeholders, including children. The proposed framework provides the means for a functional interface between the rights of the child and the development of specific technical requirements for robot design and development in a transparent way.
While similar eorts have been observed in the area of AI and researchers have indicated the complexity of
those systems [
103
], embodied social agents appear as even more complex systems with their multiple channels of
communication. The embodied nature supports their integration into children’s social fabric and the increased degree
of their social embeddedness. While the connectivity of a social robot, its degree of autonomy, the embodiment and
the degree of its social embeddedness provide unique opportunities for the expansion of children’s cognitive and
social capabilities, there are specic considerations which relate to the contextual and cultural complexity of a hybrid
environment of social interaction between children and robots as well as the complexity of the system. In this context,
we need to preserve the space and the role for human decision making, the need for human agency and autonomy, the
strength of child-robot collaboration and the full involvement of stakeholders.
To develop the framework further from its current form, further research and testing is needed, with large-scale cross-cultural studies placing particular emphasis on the inclusion of children from under-represented geographical regions and with various cultural backgrounds, in accordance with the whole set of UNICEF’s policy requirements. In parallel, we aim to further elaborate the components of the robots and explore their connectedness to a specific context and the robot’s potential use. However, for this to be achieved, we need further research on the implementation of the framework in various contexts and with various robotic applications, as well as the identification of the possible actors involved.
From a research perspective, the majority of studies in the field of child-robot interaction focus on the impact of social robots on children’s behaviour and development. We envision that, along with research on the impact of social robots on children’s behaviour and development, we will start systematically exploring how children can impact the design and development of robotic technology. In this context, in parallel with our research activities, we aim to develop a child-friendly version of the framework. In this way, we can make sure that children who want to get involved in the process of robot design have the means to acquire and communicate information regarding the specific framework.
This framework should not be viewed as a deterministic, fully-fledged strategy for social robots and children. Our knowledge in some areas is still not sufficient to allow us to definitively state what is best for children, and relevant policies are just emerging. Our intention is that this will remain a living document and continuously evolve in much the same way that technologies, and our knowledge about their impact on children, are evolving.
7 ACKNOWLEDGMENTS
We thank the children who participated, their educators and the participating schools. We also thank the educators and researchers Tomoko Imai, Tiija Riinta, Joy Nakhayenze, Christos Zotos, Nikos Giannakopoulos, Luis Merino, Fernando Caballero, Natalia Diaz Rodriguez, Serge Thill, Sawyer Collins, Waki Kamino, and Anne Tally for their contributions to
the empirical studies. We thank Steven Vosloo and Melanie Penagos for their guidance throughout the project. We also thank the HONDA Research Institute for funding this project and the JRC of the European Commission for the overall support. Finally, we thank UNICEF for the invitation to the HONDA Research Institute and to the JRC to pilot the Policy Guidelines for AI and Child’s Rights and for including the Haru robot in their pilot project.
REFERENCES
[1] Neziha Akalin, Pinar Uluer, and Hatice Kose. 2014. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. In 2014 IEEE-RAS International Conference on Humanoid Robots. IEEE, 1122–1127.
[2] Takuto Akiyoshi, Junya Nakanishi, Hiroshi Ishiguro, Hidenobu Sumioka, and Masahiro Shiomi. 2021. A Robot that Encourages Self-Disclosure to Reduce Anger Mood. IEEE Robotics and Automation Letters (2021).
[3] Sanah Ali, Hae Won Park, and Cynthia Breazeal. 2021. A social robot’s influence on children’s figural creativity during gameplay. International Journal of Child-Computer Interaction 28 (2021), 100234.
[4] Patrícia Alves-Oliveira, Patrícia Arriaga, Ana Paiva, and Guy Hoffman. 2017. YOLO, a robot for creativity: A co-design study with children. In Proceedings of the 2017 Conference on Interaction Design and Children. 423–429.
[5] Ahmed Ansari, Illah Nourbakhsh, Marti Louw, and Chris Bartley. 2013. Exploring Gigapixel Image Environments for Science Communication and Learning in Museums. In Annual Conference of Museums and the Web, Portland, Oregon.
[6] Lindsey Arnold, Kung Jin Lee, and Jason C Yip. 2016. Co-designing with children: An approach to social robot design. ACM Human-Robot Interaction (HRI) (2016).
[7] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (2020), 82–115.
[8] Wilma A Bainbridge, Justin Hart, Elizabeth S Kim, and Brian Scassellati. 2008. The effect of presence on human-robot interaction. In RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 701–706.
[9] Christoph Bartneck, Tatsuya Nomura, Takayuki Kanda, Tomohiro Suzuki, and Kato Kennsuke. 2005. A cross-cultural study on attitudes towards robots. (2005).
[10] Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel. 2018. Robots and racism. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 196–204.
[11] Fran Baum, Colin MacDougall, and Danielle Smith. 2006. Participatory action research. Journal of Epidemiology and Community Health 60, 10 (2006), 854.
[12] Jenay M Beer, Arthur D Fisk, and Wendy A Rogers. 2014. Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction 3, 2 (2014), 74.
[13] George A Bekey. 2005. Autonomous robots: From biological inspiration to implementation and control. MIT Press.
[14] Tony Belpaeme and Fumihide Tanaka. 2021. Social Robots as Educators. OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots (2021), 143.
[15] Cindy L Bethel, Zachary Henkel, Kristen Stives, David C May, Deborah K Eakin, Melinda Pilkinton, Alexis Jones, and Megan Stubbs-Richardson. 2016. Using robots to interview children about bullying: Lessons learned from an exploratory study. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 712–717.
[16] Cindy L Bethel, Matthew R Stevenson, and Brian Scassellati. 2011. Secret-sharing: Interactions between a child, robot, and adult. In 2011 IEEE International Conference on Systems, Man, and Cybernetics. IEEE, 2489–2494.
[17] Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ’It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[18] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. PMLR, 77–91.
[19] Bengisu Cagiltay, Hui-Ru Ho, Joseph E Michaelis, and Bilge Mutlu. 2020. Investigating family perceptions and design preferences for an in-home robot. In Proceedings of the Interaction Design and Children Conference. 229–242.
[20] Alejandro Catala, Cristina Maria Sylla, and Arzu Guneysu Ozgur. 2021. Smart Toys++: Exploiting the Social Connectedness for Playing and Learning. In Interaction Design and Children. 679–681.
[21] Vicky Charisi, Louise Dennis, Michael Fisher, Robert Lieck, Andreas Matthias, Marija Slavkovik, Janina Sombetzki, Alan FT Winfield, and Roman Yampolskiy. 2017. Towards moral autonomous systems. arXiv preprint arXiv:1703.04741 (2017).
[22] Vicky Charisi, Emilia Gomez, Gonzalo Mier, Luis Merino, and Randy Gomez. 2020. Child-robot collaborative problem-solving and the importance of child’s voluntary interaction: A developmental perspective. Frontiers in Robotics and AI 7 (2020), 15.
[23] Vicky Charisi, Azra Habibovic, Jonas Andersson, Jamy Li, and Vanessa Evers. 2017. Children’s views on identification and intention communication of self-driving vehicles. In Proceedings of the 2017 Conference on Interaction Design and Children. 399–404.
[24] Vicky Charisi, Tomoko Imai, Tiija Rinta, Joy Maliza Nakhayenze, and Randy Gomez. 2021. Exploring the Concept of Fairness in Everyday, Imaginary and Robot Scenarios: A Cross-Cultural Study With Children in Japan and Uganda. In Interaction Design and Children (Athens, Greece) (IDC ’21). Association for Computing Machinery, New York, NY, USA, 532–536. https://doi.org/10.1145/3459990.3465184
[25] Vicky Charisi, Selma Šabanović, Angelo Cangelosi, and Randy Gomez. 2021. Designing and Developing Better Robots for Children: A Fundamental Human Rights Perspective. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 712–714.
[26] Raja Chatila and John C Havens. 2019. The IEEE global initiative on ethics of autonomous and intelligent systems. In Robotics and Well-Being. Springer, 11–16.
[27] Pauline Chevalier, Jamy J Li, Eloise Ainger, Alyssa M Alcorn, Snezana Babovic, Vicky Charisi, Suncica Petrovic, Bob R Schadenberg, Elizabeth Pellicano, and Vanessa Evers. 2017. Dialogue design for a robot-based face-mirroring game to engage autistic children with emotional expressions. In International Conference on Social Robotics. Springer, 546–555.
[28] Sandra Cortesi, Alexa Hasse, and Urs Gasser. 2021. Youth Participation in a Digital World: Designing and Implementing Spaces, Programs, and Methodologies. Berkman Center Research Publication 2021-5 (2021).
[29] Kerstin Dautenhahn. 2007. Socially intelligent robots: Dimensions of human–robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences 362, 1480 (2007), 679–704.
[30] Daniel P Davison, Frances M Wijnen, Vicky Charisi, Jan van der Meij, Vanessa Evers, and Dennis Reidsma. 2020. Working with a social robot in school: A long-term real-world unsupervised deployment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 63–72.
[31] Daniel P Davison, Frances M Wijnen, Vicky Charisi, Jan van der Meij, Dennis Reidsma, and Vanessa Evers. 2021. Words of encouragement: How praise delivered by a social robot changes children’s mindset for learning. Journal on Multimodal User Interfaces 15, 1 (2021), 61–76.
[32] Virginia Dignum, Melanie Penagos, Klara Pigmans, and Steven Vosloo. 2020. Policy guidance on AI for children (draft). Technical Report. UNICEF. https://www.unicef.org/globalinsight/reports/policy-guidance-ai-children
[33] European Commission. 2020. White Paper on Artificial Intelligence: A European approach to excellence and trust. White Paper COM(2020) 65 final. European Commission, Brussels. https://ec.europa.eu/info/files/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
[34] Friederike Eyssel and Frank Hegel. 2012. (S)he’s got the look: Gender stereotyping of robots. Journal of Applied Social Psychology 42, 9 (2012), 2213–2230.
[35] Friederike Eyssel and Dieta Kuchenbrandt. 2012. Social categorization of social robots: Anthropomorphism as a function of robot group membership. British Journal of Social Psychology 51, 4 (2012), 724–731.
[36] Naomi T Fitter, Yasmin Chowdhury, Elizabeth Cha, Leila Takayama, and Maja J Matarić. 2018. Evaluating the effects of personalized appearance on telepresence robots for education. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 109–110.
[37] Naomi T Fitter, Megan Strait, Eloise Bisbee, Maja J Matarić, and Leila Takayama. 2021. You’re Wigging Me Out! Is Personalization of Telepresence Robots Strictly Positive? In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 168–176.
[38] Thomas Gargot, Thibault Asselborn, Ingrid Zammouri, Julie Brunelle, Wafa Johal, Pierre Dillenbourg, Dominique Archambault, Mohamed Chetouani, David Cohen, and Salvatore M Anzalone. 2021. “It Is Not the Robot Who Learns, It Is Me.” Treating Severe Dysgraphia Using Child–Robot Interaction. Frontiers in Psychiatry 12 (2021).
[39] James J Gibson. 1977. The theory of affordances. Hilldale, USA 1, 2 (1977), 67–82.
[40] Sarah Gillet, Wouter van den Bos, and Iolanda Leite. 2020. A social robot mediator to foster collaboration and inclusion among children. In Proceedings of Robotics: Science and Systems. Corvallis, Oregon, USA. https://doi.org/10.15607/RSS (2020).
[41] Alvin Goldman and Frederique de Vignemont. 2009. Is social cognition embodied? Trends in Cognitive Sciences 13, 4 (2009), 154–159.
[42] Randy Gomez, Deborah Szapiro, Kerl Galindo, and Keisuke Nakamura. 2018. Haru: Hardware design of an experimental tabletop robot assistant. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 233–240.
[43] Andrea Gomoll, Rebecca Hillenburg, Cindy Hmelo-Silver, and Selma Šabanović. [n.d.]. Designing Robots for Social Change: Exploring a Jr. High Human-Centered Robotics Curriculum. ([n.d.]).
[44] Andrea Gomoll, Cindy E Hmelo-Silver, Selma Šabanović, and Matthew Francisco. 2016. Dragons, ladybugs, and softballs: Girls’ STEM engagement with human-centered robotics. Journal of Science Education and Technology 25, 6 (2016), 899–914.
[45] Andrea Gomoll, Selma Šabanović, Erin Tolar, Cindy E Hmelo-Silver, Matthew Francisco, and Orion Lawlor. 2018. Between the social and the technical: Negotiation of human-centered robotics design in a middle school classroom. International Journal of Social Robotics 10, 3 (2018), 309–324.
[46] Goren Gordon, Cynthia Breazeal, and Susan Engel. 2015. Can children catch curiosity from a social robot? In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. 91–98.
[47] Fritz Heider and Marianne Simmel. 1944. An experimental study of apparent behavior. The American Journal of Psychology 57, 2 (1944), 243–259.
[48] Flannery Hope Currin, Kyle Diederich, Kaitlyn Blasi, Allyson Dale Schmidt, Holly David, Kerry Peterman, and Juan Pablo Hourcade. 2021. Supporting Shy Preschool Children in Joining Social Play. In Interaction Design and Children (Athens, Greece) (IDC ’21). Association for Computing Machinery, New York, NY, USA, 396–407. https://doi.org/10.1145/3459990.3460729
[49] Juana Valeria Hurtado, Laura Londoño, and Abhinav Valada. 2021. From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation. Frontiers in Robotics and AI 8 (2021), 69.
[50] Masayuki Inaba. 1997. Remote-brained robots. In IJCAI. Citeseer, 1593–1606.
[51] Luth Idzhar Ismail, Thibault Verhoeven, Joni Dambre, and Francis Wyffels. 2019. Leveraging robotics research for children with autism: A review. International Journal of Social Robotics 11, 3 (2019), 389–410.
[52] Wendy Ju and Larry Leifer. 2008. The design of implicit interactions: Making interactive systems less obnoxious. Design Issues 24, 3 (2008), 72–84.
[53] Peter H Kahn, Batya Friedman, Deanne R Perez-Granados, and Nathan G Freier. 2006. Robotic pets in the lives of preschool children. Interaction Studies 7, 3 (2006), 405–436.
[54] Peter H Kahn Jr, Takayuki Kanda, Hiroshi Ishiguro, Nathan G Freier, Rachel L Severson, Brian T Gill, Jolina H Ruckert, and Solace Shen. 2012. “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental Psychology 48, 2 (2012), 303.
[55] Peter H Kahn Jr and Solace Shen. 2017. NOC NOC, who’s there? A New Ontological Category (NOC) for social robots. New Perspectives on Human Development (2017), 13–142.
[56] Takayuki Kanda, Rumi Sato, Naoki Saiwaki, and Hiroshi Ishiguro. 2007. A two-month field trial in an elementary school for long-term human–robot interaction. IEEE Transactions on Robotics 23, 5 (2007), 962–971.
[57] James Kennedy, Séverin Lemaignan, Caroline Montassier, Pauline Lavalade, Bahar Irfan, Fotios Papadopoulos, Emmanuel Senft, and Tony Belpaeme. 2017. Child speech recognition in human-robot interaction: Evaluations and recommendations. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. 82–90.
[58] Minkyung Kim, Soonhyung Yi, and Donghun Lee. 2019. Between living and nonliving: Young children’s animacy judgments and reasoning about humanoid robots. PLoS ONE 14, 6 (2019), e0216869.
[59] Bryan Kolb and Robbin Gibb. 2011. Brain plasticity and behaviour in the developing brain. Journal of the Canadian Academy of Child and Adolescent Psychiatry 20, 4 (2011), 265.
[60] Jacqueline M Kory-Westlund and Cynthia Breazeal. 2019. A long-term study of young children’s rapport, social emulation, and language learning with a peer-like robot playmate in preschool. Frontiers in Robotics and AI 6 (2019), 81.
[61] Daniel Leyzberg, Samuel Spaulding, Mariya Toneva, and Brian Scassellati. 2012. The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 34.
[62] Federico Manzi, Giulia Peretti, Cinzia Di Dio, Angelo Cangelosi, Shoji Itakura, Takayuki Kanda, Hiroshi Ishiguro, Davide Massaro, and Antonella Marchetti. 2020. A robot is not worth another: Exploring children’s mental state attribution to different humanoid robots. Frontiers in Psychology 11 (2020), 2011.
[63] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54, 6 (2021), 1–35.
[64] Joseph E Michaelis and Bilge Mutlu. 2018. Reading socially: Transforming the in-home reading experience with a learning-companion robot. Science Robotics 3, 21 (2018).
[65] Tim Miller. 2018. Contrastive explanation: A structural-model approach. arXiv preprint arXiv:1811.03163 (2018).
[66] Mako Okanda and Kosuke Taniguchi. 2021. Is a robot a boy? Japanese children’s and adults’ gender-attribute bias toward robots and its implications for education on gender stereotypes. Cognitive Development 58 (2021), 101044.
[67] Osazuwa Okundaye, Francis Quek, and Sharon Chu. 2019. Broadening participation for remote communities: Situated distance telepresence. In Proceedings of the 18th ACM International Conference on Interaction Design and Children. 494–500.
[68] Rikke Berggreen Paaskesen. 2020. Play-based strategies and using robot technologies across the curriculum. International Journal of Play 9, 2 (2020), 230–254.
[69] Angela Page, Jennifer Charteris, and Jeanette Berman. 2020. Telepresence Robot Use for Children with Chronic Illness in Australian Schools: A Scoping Review and Thematic Analysis. International Journal of Social Robotics (2020), 1–13.
[70] Seymour Papert. 1993. The children’s machine: Rethinking school in the age of the computer. ERIC.
[71] Yvette Pearson. 2020. Child-Robot Interaction: What concerns about privacy and well-being arise when children play with, use, and learn from robots? American Scientist 108, 1 (2020), 16–22.
[72] Jean Piaget. 1929. The child’s concept of the world. London: Routledge & Kegan Paul (1929).
[73] Tamie Salter, François Michaud, and Hélène Larouche. 2010. How wild is wild? A taxonomy to characterize the ‘wildness’ of child-robot interaction. International Journal of Social Robotics 2, 4 (2010), 405–415.
[74] Giulio Sandini, Vishwanathan Mohan, Alessandra Sciutti, and Pietro Morasso. 2018. Social cognition for human-robot symbiosis—challenges and building blocks. Frontiers in Neurorobotics 12 (2018), 34.
[75] Brian Scassellati, Henny Admoni, and Maja Matarić. 2012. Robots for use in autism research. Annual Review of Biomedical Engineering 14 (2012), 275–294.
[76] Brian Scassellati, Jake Brawer, Katherine Tsui, Setareh Nasihati Gilani, Melissa Malzkuhn, Barbara Manini, Adam Stone, Geo Kartheiser, Arcangelo Merla, Ari Shapiro, et al. 2018. Teaching language to deaf infants with a robot and a virtual human. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[77] Marie Schäfer, Daniel BM Haun, and Michael Tomasello. 2015. Fair is not fair everywhere. Psychological Science 26, 8 (2015), 1252–1260.
[78] Sarah Sebo, Brett Stoll, Brian Scassellati, and Malte F Jung. 2020. Robots in groups and teams: A literature review. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–36.
[79] Sofia Serholt, Lena Pareto, Sara Ekström, and Sara Ljungblad. 2020. Trouble and Repair in Child–Robot Interaction: A study of complex interactions with a robot tutee in a primary school classroom. Frontiers in Robotics and AI 7 (2020), 46.
[80] Suleman Shahid, Emiel Krahmer, and Marc Swerts. 2014. Child–robot interaction across cultures: How does playing a game with a social robot compare to playing a game alone or with a friend? Computers in Human Behavior 40 (2014), 86–100.
[81] Noel Sharkey and Amanda Sharkey. 2010. The crying shame of robot nannies: An ethical appraisal. Interaction Studies 11, 2 (2010), 161–190.
[82] Solace Shen, Petr Slovak, and Malte F Jung. 2018. “Stop. I See a Conflict Happening.” A Robot Mediator for Young Children’s Interpersonal Conflict Resolution. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 69–77.
[83] Kallyn Song-Nichols and Andrew Young. 2020. Gendered Robots Can Change Children’s Gender Stereotyping. In CogSci.
[84] Katta Spiel, Emeline Brulé, Christopher Frauenberger, Gilles Bailly, and Geraldine Fitzpatrick. 2018. Micro-Ethics for Participatory Design with Marginalised Children. In Proceedings of the 15th Participatory Design Conference: Full Papers - Volume 1 (Hasselt and Genk, Belgium) (PDC ’18). Association for Computing Machinery, New York, NY, USA, Article 17, 12 pages. https://doi.org/10.1145/3210586.3210603
[85] Megan Strait, Heather L Urry, and Paul Muentener. 2019. Children’s responding to humanlike agents reflects an uncanny valley. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 506–515.
[86] Amanda A Sullivan, Marina Umaschi Bers, and Claudia Mihm. 2017. Imagining, playing, and coding with KIBO: Using robotics to foster computational thinking in young children. Siu-cheung KONG The Education University of Hong Kong, Hong Kong 110 (2017).
[87] Hamish Tennent, Solace Shen, and Malte Jung. 2019. Micbot: A peripheral robotic object to shape conversational dynamics and team performance. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 133–142.
[88] Michael Tomasello and Hannes Rakoczy. 2007. What makes human cognition unique? From individual to shared to collective intentionality. Intellectica 46, 2 (2007), 25–48.
[89] Sherry Turkle. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
[90] Committee on the Rights of the Child, UN. 2021. General comment No. 25: Children’s rights in relation to the digital environment. United Nations. https://www.ohchr.org/EN/HRBodies/CRC/Pages/GCChildrensRightsRelationDigitalEnvironment.aspx
[91] General Assembly, UN. 1989. Convention on the Rights of the Child. United Nations, vol. 1577, p. 3. https://www.refworld.org/docid/3ae6b38f0.html
[92] UNICEF. 2019. Four principles of the convention on the rights of the child. https://www.unicef.org/armenia/en/stories/four-principles-convention-rights-child
[93] Caroline L van Straten, Jochen Peter, and Rinaldo Kühne. 2020. Child–robot relationship formation: A narrative review of empirical research. International Journal of Social Robotics 12, 2 (2020), 325–344.
[94] Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596 (2020).
[95] Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme. 2018. Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Science Robotics 3, 21 (2018).
[96] Lev Semenovich Vygotsky. 1980. Mind in society: The development of higher psychological processes. Harvard University Press.
[97] Mette Weibel, Martin Kaj Fridh Nielsen, Martha Krogh Topperzer, Nanna Maria Hammer, Sarah Wagn Møller, Kjeld Schmiegelow, and Hanne Bækgaard Larsen. 2020. Back to school with telepresence robot technology: A qualitative pilot study about how telepresence robots help school-aged children and adolescents with cancer to remain socially and academically connected with their school classes during treatment. Nursing Open 7, 4 (2020), 988–997.
[98] J Kory Westlund, Leah Dickens, Sooyeon Jeong, Paul Harris, David DeSteno, and Cynthia Breazeal. 2015. A comparison of children learning new words from robots, tablets, & people. In Proceedings of the 1st International Conference on Social Robots in Therapy and Education.
[99] Eva Wiese, Giorgio Metta, and Agnieszka Wykowska. 2017. Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology 8 (2017), 1663.
[100] Randi Williams, Christian Vázquez Machado, Stefania Druga, Cynthia Breazeal, and Pattie Maes. 2018. “My doll says it’s ok”: A study of children’s conformity to a talking doll. In Proceedings of the 17th ACM Conference on Interaction Design and Children. 625–631.
[101] Alan FT Winfield, Christian Blum, and Wenguo Liu. 2014. Towards an ethical robot: Internal models, consequences and ethical action selection. In Conference Towards Autonomous Robotic Systems. Springer, 85–96.
[102] Elmira Yadollahi, Marta Couto, Pierre Dillenbourg, and Ana Paiva. 2020. Can you guide me? Supporting children’s spatial perspective taking through games with robots. In Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts. 217–222.
[103] Cristina Zaga. 2021. The Design of Robothings: Non-Anthropomorphic and Non-Verbal Robots to Promote Children’s Collaboration Through Play. Ph.D. Dissertation. University of Twente.
[104] Jakub Złotowski, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck. 2015. Anthropomorphism: Opportunities and challenges in human–robot interaction. International Journal of Social Robotics 7, 3 (2015), 347–360.