Fig 1
To the left is the manipulator in our experimental setup, an ABB IRB140 manipulator equipped with a vacuum gripper. A video of the task performed is available at the author's homepage: www.aass.oru.se/~asd/. To the right is the 6D tracker, mounted on a data glove, that was used to capture the human demonstration.

Source publication
Conference Paper
Full-text available
This article presents an approach to Programming by Demonstration (PbD) to simplify programming of industrial manipulators. By using a set of task primitives for a known task type, the demonstration is interpreted and a manipulator program is automatically generated. A pick-and-place task is analyzed, based on the velocity profile, and decomposed i...

Contexts in source publication

Context 1
... projected cluster centers t_i and the induced matrices M_i^pro define the input clusters C_i (i = 1...c). The parameter m̃_pro > 1 determines the fuzziness of an individual cluster [8]. Before performing a grasp or release operation, the approach phase of the gripper towards the object is of high importance. In our current work we only consider approach motions perpendicular to the surface of the table (which also implies objects with flat top surfaces), a simplification derived from the design of the 1-DOF vacuum gripper. The position and orientation of the table are known, since the manipulator is mounted on the table (see Fig. 1). However, the height of the object to be picked is unknown due to the uncertainty of the sensor, so the approach primitive has to deal with this uncertainty. The manipulator's gripper is therefore equipped with a switch that detects when a spring is compressed to a certain extent (illustrated in Fig. 4). When the switch detects contact, the downward motion is immediately stopped and the appropriate action (grasp or release) is performed. A grasp operation is distinguished from a release operation by two discrete states, internally represented in the MoveZ primitive. When performing a grasp or release operation, the manipulator is given a set coordinate to move towards the object and search for contact with it. When the switch detects a certain resistance (that is, the spring is compressed by a length l), the motion stops. The starting point is determined by the distance d, which is derived from the inaccuracy of the motion-capture sensor, typically scaled by a factor of 1.1 to 2.0. The approach task primitive is implemented with the SearchL command in RAPID, the ABB-specific programming language, which provides instructions for manipulator motions. III. EXPERIMENTAL RESULTS FOR A ROBOT MANIPULATOR The experimental setup consists of a 6D tracker, a Polhemus Fastrak, for the demonstrator and a 6-DOF industrial manipulator, the IRB 140 from ABB Robotics. A vacuum gripper is mounted on the manipulator together with a magnetic switch and a spring. We have selected a pick-and-place task ...
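The context above names SearchL as the RAPID instruction behind the approach primitive, but the generated code itself is not shown. Below is a minimal, hypothetical Python sketch of translating an approach primitive into robot-specific code; the helper name emit_approach and the tool/signal identifiers (tVacuum, diSpring, pFound) are assumptions, and only SearchL and the 1.1 to 2.0 starting-distance factor come from the text.

```python
# Hedged sketch: emit the approach primitive as RAPID source text. The paper
# states the primitive uses SearchL and starts the search a distance d above
# the object, with d scaling the sensor inaccuracy by a factor of 1.1-2.0.
# All identifiers below (tVacuum, diSpring, pFound) are illustrative only.

def emit_approach(target: str, sensor_uncertainty_mm: float,
                  safety_factor: float = 1.5) -> str:
    """Return RAPID code that searches downward for contact from d mm above."""
    assert 1.1 <= safety_factor <= 2.0, "range suggested by the paper"
    d = safety_factor * sensor_uncertainty_mm  # starting height above the object
    return (
        # Move to the starting point d mm above the expected pick point.
        f"MoveL Offs({target}, 0, 0, {d:.1f}), v100, fine, tVacuum;\n"
        # SearchL stops as soon as the spring-loaded switch (diSpring) signals
        # contact, storing the contact pose in pFound.
        f"SearchL \\Stop, diSpring, pFound, Offs({target}, 0, 0, -{d:.1f}), "
        f"v50, tVacuum;"
    )

print(emit_approach("pPick", sensor_uncertainty_mm=10.0))
```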
Context 2
... approach we present in this article can extract a set of instructions corresponding to a demonstration, without writing a single line of code, given that the task type is known (that is, a pick-and-place task). One important benefit derived from the PbD method is the human-like appearance of the motion, which also implicitly increases safety since the motion is predictable to humans (in contrast to, e.g., time-optimal motions). The scenario we consider is to teach an industrial robot equipped with a vacuum gripper, shown to the left in Fig. 1, how to execute a pick-and-place task. The demonstration is done under the assumption that the teacher's index finger is associated with the suction cup of the gripper. During demonstration, the fingertip is tracked by a motion-capture device, shown to the right in Fig. 1. Initially, the demonstrator moves from a starting point to the desired pick-point, P_pick. Then, he/she moves along a certain path towards the desired place-point, P_place, and finally back to the starting position. The collected data consists of position coordinates P, which are used for the following purposes: - Detect the pick- and place-positions, P_pick and P_place - Reconstruct the desired trajectory that the robot should travel from P_pick to P_place (FollowTaj). In summary, the steps from the demonstration to the compilation of instructions are: 1) A human demonstration is captured and transformed into the robot's reference frame. 2) Trajectories are smoothed to reduce noise from the sensors (see Section II-D). 3) Trajectories are segmented to extract the points where the motions start and end (see Section II-B). 4) Extracted motions are decomposed into task primitives such as those described in Section II-C. 5) Each task primitive is automatically translated into robot-specific code. 6) The complete task is executed on the real ABB IRB140 manipulator. For step 4 in the above list it is important to note that the task is known in advance, which makes it possible to describe the task as a sequence of task primitives. These task primitives are designed specifically for the task, and can be executed on most 6-DOF serial manipulators. The primitives controlling grasp and release are specific to the type of gripper used. In the scenario, the demonstrator and the robot share the same workspace. Our assumption is that the demonstrator knows the manipulator's structure, such as its workspace and possible motions, which makes complicated data preprocessing unnecessary. To distinguish the different segments of a demonstrated motion, the captured data needs to be segmented. This is done by measuring the mean squared velocity of the demonstrator's end effector. A "semi-atomized" segmentation technique is used, equivalent to the one developed by Matarić et al. [7], but instead of using the mean squared velocity of the joint angles we use the end effector's velocity. The mean squared velocity is given ...
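The mean-squared-velocity formula is cut off above, so the following Python sketch only illustrates the kind of end-effector velocity segmentation the context describes: compute a smoothed mean squared velocity from the captured positions and threshold it to find where motions start and stop. The window length and threshold are assumed parameters, not values from the paper.

```python
import numpy as np

def segment_demonstration(positions: np.ndarray, dt: float,
                          window: int = 5, threshold: float = 1e-3):
    """Split a captured trajectory into motion segments.

    positions: (N, 3) array of end-effector coordinates sampled at period dt.
    Returns (start, end) index pairs where the mean squared velocity stays
    above the threshold, i.e., where the demonstrator's hand is moving.
    """
    v = np.diff(positions, axis=0) / dt          # finite-difference velocity
    msv = np.convolve(np.sum(v**2, axis=1),      # squared speed, averaged
                      np.ones(window) / window,  # over a short sliding window
                      mode="same")
    moving = msv > threshold
    # Rising/falling edges of the boolean mask mark segment boundaries.
    edges = np.flatnonzero(np.diff(moving.astype(int))) + 1
    bounds = np.concatenate(([0], edges, [len(moving)]))
    return [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if moving[a]]
```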

Citations

... In this context, robots have been widely used for pick-and-place operations for autonomous assembly or testing purposes [1]-[5]. Pick-and-place operations are normally implemented as a programmed sequence without a vision system in order to maximize throughput [5], [6]. However, for testing complex components like the camera modules used in smartphones, which include optical components, electrical parts, and connectors, aligning them to the right place to make electrical connections for performance testing may not always succeed. ...
Preprint
Full-text available
Pick-and-place robots are commonly used in modern industrial manufacturing. For complex devices/parts like the camera modules used in smartphones, which contain optical parts, electrical components and interfacing connectors, the placement operation may not be absolutely accurate, which may cause damage to the device under test during the mechanical movement made to establish good contact for electrical function inspection. In this paper, we propose an effective vision system, including hardware and algorithm, to enhance the reliability of the pick-and-place robot for autonomously testing the memory of camera modules. With limited hardware based on a camera and a Raspberry Pi, and using a simplified image processing algorithm based on histogram information, the vision system can confirm the presence of the camera modules in the feeding tray and the placement accuracy of the camera module in the test socket. Through that, the system can work with more flexibility and avoid damaging the device under test. The system was experimentally quantified by testing approximately 2000 camera modules under stable light conditions. Experimental results demonstrate that the system achieves an accuracy of more than 99.92%. With its simplicity and effectiveness, the proposed vision system can be considered a useful solution for use in pick-and-place systems in industry.
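The abstract describes a histogram-based presence check without giving the algorithm's details, so the sketch below is purely illustrative of the idea: compare the grayscale histogram of a tray slot against an empty-slot reference and flag a module when the distributions differ enough. The bin count and threshold are assumptions, not values from the cited paper.

```python
import numpy as np

def module_present(slot_image: np.ndarray, empty_reference: np.ndarray,
                   bins: int = 32, threshold: float = 0.25) -> bool:
    """Decide whether a camera module occupies a tray slot.

    Both inputs are 2D uint8 grayscale images. Normalized histograms are
    compared with the total variation distance (0 = identical, 1 = disjoint);
    a large distance means the slot no longer looks like the empty reference.
    """
    h1, _ = np.histogram(slot_image, bins=bins, range=(0, 256))
    h0, _ = np.histogram(empty_reference, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h0 = h0 / h0.sum()
    distance = 0.5 * np.abs(h1 - h0).sum()
    return bool(distance > threshold)
```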
... This method considerably reduces programming time. Other methods based on programming by demonstration are presented in the works of (Lin et al., 2013; Maeda et al., 2002; Skoglund et al., 2007). ...
... The next chapter aims to present the different approaches selected, and in particular to compare them on the basis of the performance indicators we have defined. [Residue of a flattened table in the original, which paired solution approaches with their references: Petri nets (Grotzinger and Sciomachen, 1988; Hanna et al., 1994; Sciomachen et al., 1990; Yasuda, 2012, 1999; Zhou and Leu, 1991); programming by demonstration (De Rengervé et al., 2011; Dimeas et al., 2019; Lin et al., 2013; Maeda et al., 2002; Skoglund et al., 2007); fuzzy logic (Mattone et al., 2000, 1998; Yu et al., 2017); ant colony algorithms (Bonert et al., 2000; Daoud et al., 2014, 2012; Garcia-Najera and Brizuela, 2005; Huang et al., 2015; Peng and Zeng, 2013; Premachandra et al., 2020; Zhu and Chen, 2009).] Chapter 3 ...
Thesis
Pick & Place (PAP) problems are widely studied in the literature, but, to our knowledge, very few works study PAP systems in an industrial context. The objective of this thesis is the resolution of an industrial PAP problem within a postal sorting center, where bins filled with mail arrive dynamically and in an unknown order, and where operators place these bins in trolleys according to their destination. Given the large diversity of daily destinations, a balance must be found in real time between the flows handled by humans and by the robot. This problem was solved in four phases. First, knowledge-based models were proposed based on the experience of the logistics operator. The result of applying these models to a simulation of the real system is considered a lower bound on the system's performance. Second, a mathematical model of the system was established, the relaxation of several constraints making it possible to treat the problem as a classical scheduling problem. The results of this scheduling, inapplicable in the field, led us to investigate the use of online heuristics. A third step was to propose a heuristic model, based on dynamic rules, evaluated in simulation. Finally, a multi-agent model integrating these decision rules was developed in order to validate the applicability of such a control system on the real system.
... Due to the complex nature of programming robotic systems, many works have explored how to create intuitive robot programming interfaces. Early approaches relied on user input through auxiliary devices, such as computers or hand-held devices, for defining task primitives [4], [22], [23]. However, these approaches have not had much uptake. ...
Preprint
An appropriate user interface to collect human demonstration data for deformable object manipulation has been mostly overlooked in the literature. We present an interaction design for demonstrating cloth folding to robots. Users choose pick and place points on the cloth and can preview a visualization of a simulated cloth before real-robot execution. Two interfaces are proposed: A 2D display-and-mouse interface where points are placed by clicking on an image of the cloth, and a 3D Augmented Reality interface where the chosen points are placed by hand gestures. We conduct a user study with 18 participants, in which each user completed two sequential folds to achieve a cloth goal shape. Results show that while both interfaces were acceptable, the 3D interface was found to be more suitable for understanding the task, and the 2D interface suitable for repetition. Results also found that fold previews improve three key metrics: task efficiency, the ability to predict the final shape of the cloth and overall user satisfaction.
... Approaches also vary with respect to the level of organization at which they are applied. Indeed, learning by demonstration can be applied to train the robot to display low-level behaviors capable of achieving a desired function, as in the examples illustrated above, or to learn to combine pre-existing behaviors to achieve new functions (Dillmann, 2004; Skoglund et al., 2007). ...
... During the assignment step (5), a given robot cell definition is used to calculate how the individual steps of a process workflow can be realised with the available skills of the robot cell. For the parametrisation of skills, techniques for including expert knowledge [10], [11], programming by demonstration [12], [13] or constraint-based programming [14], [15] can be applied. The resulting possible automation workflows are examined in a subsequent validation step (6) in order to filter out invalid workflows, e.g., those with colliding robots or impossible intermediate product situations. ...
Conference Paper
Full-text available
Multi-functional cells with cooperating teams of robots promise to be flexible, robust, and efficient and are thus a key to future factories. However, their programming is tedious and AI-based planning for multiple robots is computationally expensive. In this work, we present a modular and efficient two-layer planning approach for multi-robot assembly. The goal is to generate the program for coordinated teams of robots from an (enriched) 3D model of the target assembly. Although the approach is both motivated and evaluated with LEGO, which is a challenging variant of blocks world, it can be customized to different kinds of assembly domains.
... How can the user know that the plan best utilizes the skills of human and robot workers? Although robot programming tools enable users to quickly program collaborative robots through demonstration, e.g., the programming by demonstration (PbD) approach developed by Skoglund et al. [40], or using visual programming environments (VPEs), e.g., CoSTAR [34], no tools exist to support users in the entire process of translating existing human tasks to those that human-robot teams can perform within the manufacturing context. In this paper, we outline the technical challenges involved in authoring human-robot plans and present our authoring environment, Authr, as a solution. ...
Conference Paper
Collaborative robots promise to transform work across many industries and promote human-robot teaming as a novel paradigm. However, realizing this promise requires the understanding of how existing tasks, developed for and performed by humans, can be effectively translated into tasks that robots can singularly or human-robot teams can collaboratively perform. In the interest of developing tools that facilitate this process, we present Authr, an end-to-end task authoring environment that assists engineers at manufacturing facilities in translating existing manual tasks into plans applicable for human-robot teams and simulates these plans as they would be performed by the human and robot. We evaluated Authr with two user studies, which demonstrate the usability and effectiveness of Authr as an interface and the benefits of assistive task allocation methods for designing complex tasks for human-robot teams. We discuss the implications of these findings for the design of software tools for authoring human-robot collaborative plans.
... In this thesis, an industrial task is defined by the off-line generated robot paths (task primitives), which can be programmed using Computer-Aided Design (CAD) software or more sophisticated methods, including Programming by Demonstration (PbD) [92]. Consequently, safe human-robot interaction is achieved through real-time modification of the off-line generated paths, Figure 3.14. ...
... PbD algorithms are able to function in increasingly complex environments, such as recognizing the relative positioning of objects in the environment [38], breaking the task into discrete segments [38], recognizing partial ordering of those segments [37], and understanding task constraints [41]. While PbD has been successfully applied to tasks in both in-home [1] and commercial [54] settings, the limitations of current PbD algorithms include their focus on tasks comprised of goals that require physical manipulation and the requirement to visually confirm completion. ...
Conference Paper
Designing and implementing human-robot interactions requires numerous skills, from having a rich understanding of social interactions and the capacity to articulate their subtle requirements, to the ability to then program a social robot with the many facets of such a complex interaction. Although designers are best suited to develop and implement these interactions due to their inherent understanding of the context and its requirements, these skills are a barrier to enabling designers to rapidly explore and prototype ideas: it is impractical for designers to also be experts on social interaction behaviors, and the technical challenges associated with programming a social robot are prohibitive. In this work, we introduce Synthé, which allows designers to act out, or bodystorm, multiple demonstrations of an interaction. These demonstrations are automatically captured and translated into prototypes for the design team using program synthesis. We evaluate Synthé in multiple design sessions involving pairs of designers bodystorming interactions and observing the resulting models on a robot. We build on the findings from these sessions to improve the capabilities of Synthé and demonstrate the use of these capabilities in a second design session.
... In our study, an industrial task is defined by the off-line generated robot paths (task primitives), which can be programmed using Computer-Aided Design (CAD) software or more sophisticated methods, including Programming by Demonstration (PbD) [30]. Consequently, safe human-robot interaction is achieved through real-time modification of the off-line generated paths, Fig. 1. ...
Preprint
Human-robot collision avoidance is key in collaborative robotics and in the framework of Industry 4.0. It plays an important role in achieving safety criteria while humans and machines work side-by-side in unstructured and time-varying environments. This study introduces on-line manipulator collision avoidance into a real industrial application implementing typical sensors and a commonly used collaborative industrial manipulator, the KUKA iiwa. In the proposed methodology, the human co-worker and the robot are represented by geometric primitives (capsules). The minimum distance and relative velocity between them are calculated; when humans or obstacles are nearby, the concept of hypothetical repulsion and attraction vectors is used. By coupling this concept with a mathematical representation of the robot's kinematics, task-level control with collision avoidance capability is achieved. Consequently, the off-line generated nominal path of the industrial task is modified on the fly so the robot is able to avoid collision with the co-worker safely while still fulfilling the industrial operation. To guarantee motion continuity when switching between different tasks, the notion of repulsion-vector reshaping is introduced. Tests on an assembly robotic cell in the automotive industry show that the robot moves smoothly and avoids collisions successfully by adjusting the off-line generated nominal paths.
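The abstract gives the method only at a high level; as a hedged sketch of the repulsion/attraction idea (the capsule distance computation and the repulsion-vector-reshaping step are omitted, and all gains and names are assumptions), the commanded end-effector velocity could blend a goal-attraction term with a repulsion term that grows as the minimum distance to the human shrinks:

```python
import numpy as np

def commanded_velocity(ee_pos, goal_pos, closest_obstacle_pos,
                       d_safe=0.5, k_attr=1.0, k_rep=1.5):
    """Blend attraction to the goal with repulsion from the nearest obstacle.

    The three positions are 3-vectors, e.g., closest points between the
    human and robot capsules. Inside the safety distance d_safe the
    repulsion ramps up and pushes the end effector off its nominal path;
    farther away it vanishes, so the off-line generated path is followed
    unchanged. Gains and d_safe are illustrative values.
    """
    ee = np.asarray(ee_pos, dtype=float)
    attraction = k_attr * (np.asarray(goal_pos, dtype=float) - ee)
    away = ee - np.asarray(closest_obstacle_pos, dtype=float)
    d = np.linalg.norm(away)
    if 1e-9 < d < d_safe:
        # Repulsion magnitude grows linearly as the minimum distance shrinks.
        repulsion = k_rep * (d_safe - d) / d_safe * (away / d)
    else:
        repulsion = np.zeros(3)
    return attraction + repulsion
```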