Fig 1 - uploaded by Anind K. Dey
The Robosapien V2 robot watches participants build structures and asks questions about the blocks it cannot recognize.

Source publication
Conference Paper
Full-text available
Asking questions is an inevitable part of collaborative interactions between humans and robots. However, robotics novices may have difficulty answering the robots' questions if they do not understand what the robot is asking. We are particularly interested in whether robots can supplement their questions with information about their state in a mann...

Contexts in source publication

Context 1
... sit in front of the robot and are given a set of 50 wooden blocks, containing 6 different block shapes in 5 different colors. The subjects are given 4 pictures of block structures (Fig 1(a)), each composed of 20-35 blocks, to build in 12 minutes. The subject is told the robot is learning to recognize block shapes. ...
Context 2
... Wizard-of-Oz'ed RoboSapien V2 (Fig 1(b)) robot watches subjects build with blocks. The robot was preprogrammed to follow faces and red, green, and blue colored objects with a built-in color camera. ...

Similar publications

Conference Paper
Full-text available
Feeling and emotion are important to humans during the learning process, and are also valuable to adopt in intelligent machines. This research presents a system that forms and expresses the feelings of a robot. The robot's vision information is used, and the environment features are categorized by a hierarchical SOM (Self-Organizing Map). Th...
Conference Paper
Full-text available
Language is special, yet its power to facilitate communication may have distracted researchers from the power of another, potential precursor ability: the ability to label things, and the effect this can have in transforming or extending cognitive abilities. In this paper we present a simple robotic model, using the iCub robot, demonstrating the ef...
Article
Full-text available
The goal of the CoSy project is to create cognitive robots to serve as a testbed of theories on how humans work (13), and to identify problems and techniques relevant to producing general-purpose human-like domestic robots. Given the constraints on the resources available at the robot's disposal and the complexity of the tasks that the robot has t...
Conference Paper
Full-text available
In teleoperation tasks, information about the relative posture between the end-effector and the object to be grasped is of key importance for human operators. Although visual information plays a major role in monitoring collision and fine-tuning the end-effector towards the object, the operator should make strict observations about the video images to t...

Citations

... Studies analyzing question types and responses in human-robot interaction to provide guidelines for building socially acceptable robots are steadily being conducted [7][8][9]. Rosenthal et al. attempted to provide dialogue guidelines in human-robot interaction; they investigated how the content of robots' questions affects the accuracy of human responses in human-robot collaborative tasks. In their block-shape recognition task, the robot observed the experiment participants building structures out of blocks. ...
... Linguistic behaviors such as question types, topic of conversation, and sentence form, as well as non-verbal elements such as facial expressions, gestures, and eye-gaze of the conversational partner, are important factors to effectively initiate and maintain conversation through turn-taking between humans [7][8][9][10]. Lee et al. analyzed nine social cues according to their function in social behaviors such as greeting, self-disclosure elicitation, self-disclosure, and suggestions in counselor-client chat data to build an intimate social dialogue model in human-robot interaction [13]. They reported that counselors used "self-disclosure elicitation" more than other social cues, and that clients correspondingly used "self-disclosure" most often in interactions. ...
Article
Full-text available
In recent years, robots have been playing the role of counselor or conversational partner in everyday dialogues and interactions with humans. For successful human–robot communication, it is very important to identify the best conversational strategies that can influence the responses of the human client in human–robot interactions. The purpose of the present study is to examine linguistic behaviors in human–human conversation using chatting data to provide the best model for effective conversation in human–robot interaction. We analyzed conversational data by categorizing them into question types, namely Wh-questions and "yes" or "no" (YN) questions, and their corresponding linguistic behaviors (self-disclosure elicitation, self-disclosure, simple "yes" or "no" answers, and acknowledgment). We also compared the utterance length of clients depending on the question type. In terms of linguistic behaviors, the results reveal that the Wh-question type elicited significantly higher rates of self-disclosure elicitation and acknowledgment than YN-questions. Among the Wh-subtypes, how was found to promote more linguistic behaviors such as self-disclosure elicitation, self-disclosure, and acknowledgment than other Wh-subtypes. On the other hand, YN-questions generated significantly higher rates of simple "yes" or "no" answers compared to the Wh-question. In addition, the Wh-question type elicited longer utterances than the YN-question type. We suggest that the question types used by a robot counselor must be considered to elicit various linguistic behaviors and utterances from humans. Our research is meaningful in providing efficient conversation strategies for robot utterances that conform to humans' linguistic behaviors.
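The Wh/YN categorization described in this abstract can be approximated with a rough first-word heuristic. This is purely an illustration, not the coding scheme used in the study; the function name and word list are our assumptions:

```python
import re

WH_WORDS = ("what", "why", "how", "when", "where", "who", "which")

def question_type(utterance):
    """Label an English question as 'WH' or 'YN' using a crude
    first-word heuristic (illustrative only, not the study's scheme)."""
    words = re.findall(r"[a-z']+", utterance.lower())
    if not words:
        return None
    # Allow one leading word (e.g. "So, how ...") before the Wh-word.
    if words[0] in WH_WORDS or (len(words) > 1 and words[1] in WH_WORDS):
        return "WH"
    return "YN"

print(question_type("How did that make you feel?"))  # -> WH
print(question_type("Do you like your job?"))        # -> YN
```

A real coding pass would need human annotation or a parser, since utterances like "Tell me what happened" blur the boundary between the two types.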
... Ten years later, (Cameron et al., 2016) examined in particular the influence of the robot's "personality" on users' willingness to help, using the example of a remote-controlled guide robot that needs help operating an elevator and opening doors (Hüttenrauch et al., 2006; Rosenthal, 2012b). The effect of concrete helping situations on the helpers was also studied (Bajones et al., 2016), and in the context of teleoperation tasks it was found that human assistance should be regarded as a limited resource and used with care (Fong, 2003; Rosenthal, Dey & Veloso, 2009). Other studies showed how important the robot's appearance and behavior are for people's willingness to help and their attachment to the robot (Goetz, 2003). ...
Book
Full-text available
This edited volume gives an overview of the research conducted in the funding campaign "Autonomous Robots for Assistance Functions: Interactive Basic Abilities" of the Federal Ministry of Education and Research (BMBF). One accompanying research project and eight research projects are funded, which address the development of interactive basic abilities for applying robots in assistance contexts. This volume surveys the research content and results, and thereby offers a manifold and varied overview of technical developments as well as contextual factors vital for the use of service robots. In this volume, the accompanying research project ARAIG participated together with the projects ASARob, AuRorA, FRAME, MobILe, RoKoRa, RoPha, RoSylerNT and SINA. The publication was compiled by the Federal Institute for Occupational Safety and Health (BAuA) as a project partner in ARAIG.
... Furthermore, feature queries seem to be harder to interpret than label queries. Rosenthal et al. (2009) also investigate participants' answers to a robot's questions (see Table 7). They present a method and guidelines for developing robot questions. ...
... monitoring progress by testing throughout.
Kim et al. (2009): in theory; adaptation (history of interaction).
Cakmak and Thomaz (2012): in theory; feature queries perceived smartest (room for manipulation).
Rosenthal et al. (2009): in theory; adaptation (transparency).
Muhl and Nagai (2007): in theory; adaptation (attention monitoring, reorientation/repair strategies).
Vollmer et al. (2009a): in theory; adaptation.
Lohan et al. (2010): in theory; adaptation.
Vollmer et al. (2014): in theory; adaptation.
Nagai et al. (2010): in theory; adaptation.
de Greeff and Belpaeme (2015): label learning in language game; adaptation (learning preference) ...
... Consequently, either ML algorithms have to be adapted according to study outcomes (which is rather difficult), or the adaptability of human behavior could be exploited using transparency mechanisms. Transparency mechanisms aim at enabling the human teacher to understand the current state of the robotic system, for instance by signaling uncertainty or attention with gaze (Thomaz and Breazeal 2008; Vollmer et al. 2014), or by including robot state information when asking questions (Rosenthal et al. 2009). More straightforward versions of this mechanism include the explicit signaling of preference for a certain learning input (de Greeff and Belpaeme 2015; Lütkebohle et al. 2009), explicitly asking the teacher questions, or requesting actions or feedback (Cakmak and Thomaz 2012). ...
Article
Full-text available
Studying teaching behavior in controlled conditions is difficult. It seems intuitive that a human learner might have trouble reliably recreating response patterns over and over in interaction. A robot would be the perfect tool to study teaching behavior because its actions can be well controlled and described. However, due to the interactive nature of teaching, developing such a robot is not an easy task. As we will show in this review, respective studies require certain robot appearances and behaviors. These mainly should induce teaching behavior in humans, be interactive, match the study design, and be realizable in terms of effort. We discuss how remote controlling of the robot or simulating robot capabilities is used as an option. With this review, we introduce the field of research on studying human teaching behavior with robots as a tool in the experimental design. We will provide a structured overview of existing work, and identify main challenges of employing robots in such studies.
... This work introduced three main types of queries the robot can ask when learning: Label Queries, Demonstration Queries, and Feature Queries. There is also literature that evaluated how to ask these questions to maximise the accuracy of the user answers [15,16]. ...
... Cakmak studied how different queries are perceived by users, but did not study how they affect the robot's learning capabilities. This paper combines Cakmak's Feature Queries [6], with Rosenthal's ideas on how different questions can lead to inaccuracies in the user's responses [15]. We study how these user inaccuracies can lead to a decrease in the robot learning and propose a method to reduce the impact of this problem. ...
... Regarding the questions themselves, we have taken into account that the way a question is asked can affect the user's responses [15]. For that reason, we considered three types of questions to the user. ...
Article
Full-text available
In recent years, the role of social robots is gaining popularity in our society, but learning from humans is still a challenging problem that needs to be addressed. This paper presents an experiment where, after teaching poses to a robot, a group of users are asked several questions whose answers are used to create feature filters in the robot's learning space. We study how the answers to different types of questions affect the learning accuracy of a social robot when it is trained to recognize human poses. We considered three types of questions: "Free Speech Queries", "Yes/No Queries", and "Rank Queries", building a feature filter for each type of question. Besides, we provide another filter to help the robot to reduce the effects of inaccurate answers: the Extended Filter. We compare the performance of a robot that learned the same poses with Active Learning (using the four feature filters) versus Passive Learning (without filters). Our results show that, despite the fact that Active Learning can improve the robot's learning accuracy, there are some cases where this approach, using the feature filters, achieves significantly worse results than Passive Learning if the user provides inaccurate feedback when asked. However, the Extended Filter has proven to maintain the benefits of Active Learning even when the user answers are not accurate.
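The feature-filter idea in this abstract can be sketched minimally: user answers about feature relevance are turned into per-feature weights that reshape the distance metric used for pose classification. This is our simplified illustration under assumed names, not the paper's actual filter construction:

```python
import numpy as np

def build_feature_filter(answers, n_features):
    """Build a per-feature weight vector from yes/no relevance answers.

    answers: dict mapping feature index -> True (relevant) / False (not).
    Features the user was never asked about keep a neutral weight of 1.0.
    """
    weights = np.ones(n_features)
    for idx, relevant in answers.items():
        weights[idx] = 1.0 if relevant else 0.0
    return weights

def classify_pose(query, prototypes, labels, weights):
    """Nearest-prototype classification with the feature filter applied."""
    diffs = (prototypes - query) * weights
    dists = np.sqrt((diffs ** 2).sum(axis=1))
    return labels[int(np.argmin(dists))]

# Toy example: 4-feature "poses"; the user says feature 3 is irrelevant.
prototypes = np.array([[1.0, 0.0, 0.0, 9.0],
                       [0.0, 1.0, 0.0, 0.0]])
labels = ["pointing_left", "pointing_right"]
w = build_feature_filter({3: False}, n_features=4)
print(classify_pose(np.array([0.9, 0.1, 0.0, 0.0]),
                    prototypes, labels, w))  # -> pointing_left
```

Without the filter, the large irrelevant value in feature 3 would dominate the distance and flip the classification; an inaccurate "not relevant" answer would have the opposite, harmful effect, which is the failure mode the paper's Extended Filter is designed to dampen.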
... Although interactive robots can outperform passive ones in both performance and quality of the interaction, a careful design of the Human-Robot interaction (HRI) is needed. Aspects like the transparency of the robot learning process [11,33], the ability of the user to be a good teacher [9], the timing of the queries or the balance in control over the interaction [7] must be taken into account. Furthermore, efficient ways to mediate between the robot's internal skill representation and the user need to be crafted. ...
... Rosenthal et. al [33] investigated how the information included in the robot's questions affects the quality of the user's responses, showing that transparent learners help users focus their teaching efforts. Chao et al. [11] investigated the same issue in a concept learning task by tuning the robot's non-verbal behaviours to explain uncertainty about the target concept. ...
Conference Paper
Full-text available
With the goal of having robots learn new skills after deployment, we propose an active learning framework for modelling user preferences about task execution. The proposed approach interactively gathers information by asking questions expressed in natural language. We study the validity and the learning performance of the proposed approach and two of its variants compared to a passive learning strategy. We further investigate the human-robot-interaction nature of the framework conducting a usability study with 18 subjects. The results show that active strategies are applicable for learning preferences in temporal tasks from non-expert users. Furthermore, the results provide insights in the interaction design of active learning robots.
... A number of current dialog systems also incorporate feedback from the robot learner [11], [12], [13]. For instance, for compound symbol learning, the authors of [11] employed nonverbal robot feedback in the form of a fixed sequence of behaviors and a set of animations to communicate a certain object and the confidence in an answer, respectively. ...

... In this setup, the tutor presented the symbols and provided information or queried the system by saying three possible predefined sentences. In a study presented in [12], the content of questions posed by a robot was varied to investigate its influence on responses from the human partner for object recognition. Another study implemented verbal and non-verbal feedback in a robot to investigate its influence on itinerary requests [13]. ...
Article
Full-text available
Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.
... The advantages of AL are (i) obtaining better accuracy of the learned concepts, (ii) reducing the number of training examples required to acquire a concept, and (iii) being preferred by the humans who had to teach the robot [2]. But, despite its advantages, some studies have found that the accuracy of the users' responses might be affected by the robot's queries [5]. In a similar work, Cakmak et al. evaluate how to ask questions that maximize the accuracy of the users' answers [3]. ...
Conference Paper
Full-text available
This paper presents a system in which a robot uses Active Learning (AL) to improve its learning capabilities for pose recognition. We propose a sub-type of Feature Queries, Rank Queries (RQ), in which the user states the relevance of a characteristic of the learning space. In the case of pose learning, these queries refer to the relevance of a single limb for a certain pose. We test the use of RQ with 24 users to learn 3 pointing poses and compare the learning accuracy against a passive learning approach. Our results show that RQ can increase the robot's learning accuracy.
... Our earlier work compares passive and active task learning [5] and addresses the question of when to ask questions in a mixed-initiative AL setting [2]. Rosenthal et al. investigate how augmenting questions with different types of additional information improves the accuracy of human teachers' answers [21]. In later work, they explore the use of humans as information providers in a real-world navigation scenario [22]. ...
Conference Paper
Programming new skills on a robot should take minimal time and effort. One approach to achieve this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently caught a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether or not there is a general preference of one type of question over another. Based on our findings from both experiments we provide guidelines for designing question asking behaviors on a robot learner.
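The label-query type from this abstract can be illustrated with a standard active-learning heuristic: ask the teacher to label the sample the learner is least sure about. Margin-based uncertainty sampling below is our illustrative assumption, not the selection rule used in the paper (demonstration and feature queries would need their own selection logic):

```python
import numpy as np

def label_query_candidate(probs):
    """Pick the unlabeled sample with the smallest margin between its
    two most likely classes, i.e. the most ambiguous one, as the
    target of the next label query.

    probs: array of shape (n_samples, n_classes) with class
    probabilities from the learner's current model.
    """
    sorted_p = np.sort(probs, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]  # top-1 minus top-2
    return int(np.argmin(margins))

probs = np.array([[0.90, 0.10],   # confident
                  [0.55, 0.45],   # ambiguous -> best label query
                  [0.80, 0.20]])
print(label_query_candidate(probs))  # -> 1
```

The paper's finding that people prefer some query types over others suggests a robot should not rely on such a heuristic alone, but weigh informativeness against the interaction cost of each query type.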
... The robot executes the best non-asking-for-help policy (the QMDP policy (Littman, Cassandra, and Kaelbling 1995)) unless the cost of asking is lower than the cost of executing under uncertainty. However, actual humans in the environment are not always available or interruptible (Fogarty et al. 2005; Shiomi et al. 2008), may not be accurate (Rosenthal, Dey, and Veloso 2009), and they may have variable costs of asking or interruption (Cohn, Atlas, and Ladner 1994; Rosenthal, Biswas, and Veloso 2010). ...
Article
When mobile robots perform tasks in environments with humans, it seems appropriate for the robots to rely on such humans for help instead of dedicated human oracles or supervisors. However, these humans are not always available nor always accurate. In this work, we consider human help to a robot as concretely providing observations about the robot's state to reduce state uncertainty as it executes its policy autonomously. We model the probability of receiving an observation from a human in terms of their availability and accuracy by introducing Human Observation Providers POMDPs (HOP-POMDPs). We contribute an algorithm to learn human availability and accuracy online while the robot is executing its current task policy. We demonstrate that our algorithm is effective in approximating the true availability and accuracy of humans without depending on oracles to learn, thus increasing the tractability of deploying a robot that can occasionally ask for help.
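The online estimation of availability and accuracy described in this abstract can be sketched with a simple Beta-count model: each ask either gets an answer or not (availability), and each answer is later judged correct or not (accuracy). This is a deliberately simplified stand-in for the HOP-POMDP learning algorithm, with class and method names of our own invention:

```python
class HumanHelpModel:
    """Running estimate of one helper's availability and accuracy,
    using Beta(1, 1) priors over both (a simplification, not the
    HOP-POMDP algorithm itself)."""

    def __init__(self):
        self.avail = [1, 1]  # (answered, unanswered) pseudo-counts
        self.acc = [1, 1]    # (correct, incorrect) pseudo-counts

    def record_ask(self, answered, correct=None):
        """Update counts after one question; `correct` only applies
        when the helper actually answered."""
        self.avail[0 if answered else 1] += 1
        if answered and correct is not None:
            self.acc[0 if correct else 1] += 1

    @property
    def p_available(self):
        return self.avail[0] / sum(self.avail)

    @property
    def p_accurate(self):
        return self.acc[0] / sum(self.acc)

model = HumanHelpModel()
model.record_ask(answered=True, correct=True)
model.record_ask(answered=False)
model.record_ask(answered=True, correct=False)
print(round(model.p_available, 2), round(model.p_accurate, 2))  # -> 0.6 0.5
```

A robot could then weigh these two estimates against the cost of interrupting someone when deciding whether (and whom) to ask, which is the trade-off the HOP-POMDP formulation makes explicit.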
... Supervisors are often assumed, with few exceptions (e.g., [18]), to have in-depth knowledge about how robots work so that they can help them appropriately. However, even without this assumption, robots with knowledge of supervisors can 1) take their expertise into account to determine the type of question the robot should ask [7, 17, 18], 2) ground or familiarize the helper with the robot's current state to increase the likelihood of accurate responses [9, 34], and 3) model the helper's interruptibility or availability to answer questions [16, 39]. While supervision effectively reduces the amount of human help and monitoring that robots require compared to teleoperation and provides benefits of incentives and long-term interaction, it still assumes that a human will always be in contact to provide accurate and timely help. ...
Article
Full-text available
Robots are increasingly autonomous in our environments, but they still must overcome limited sensing, reasoning, and actuating capabilities while completing services for humans. While some work has focused on robots that proactively request help from humans to reduce their limitations, the work often assumes that humans are supervising the robot and always available to help. In this work, we instead investigate the feasibility of asking for help from humans in the environment who benefit from its services. Unlike other human helpers that constantly monitor a robot's progress, humans in the environment are not supervisors and a robot must proactively navigate to them to receive help. We contribute a study that shows that several of our environment occupants are willing to help our robot, but, as expected, they have constraints that limit their availability due to their own work schedules. Interestingly, the study further shows that an available human is not always in close proximity to the robot. We present an extended model that includes the availability of humans in the environment, and demonstrate how a navigation planner can incorporate this information to plan paths that increase the likelihood that a robot can find an available helper when it needs one. Finally, we discuss further opportunities for the robot to adapt and learn from the occupants over time.
Keywords: Human–robot interaction, User study, Asking for help, Planning