Figure 1: Our research platform, the iCub humanoid robot.

Source publication
Conference Paper
Full-text available
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles, i.e., other objects detected in the visual...

Similar publications

Article
Full-text available
Children with autism spectrum disorder (ASD) demonstrate a host of motor impairments that may share a common developmental basis with ASD core symptoms. School-age children with ASD exhibit particular difficulty with hand-eye coordination and appear to be less sensitive to visual feedback during motor learning. Sensorimotor deficits are observable...
Article
Full-text available
We describe our software system enabling a tight integration between vision and control modules on complex, high-DOF humanoid robots. This is demonstrated with the iCub humanoid robot performing visual object detection, reaching and grasping actions. A key capability of this system is reactive avoidance of obstacle objects detected from the video stream...
Article
Full-text available
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (3...
Article
Full-text available
Humanoid robots are resourceful platforms and can be used in diverse application scenarios. However, their high number of degrees of freedom (i.e., moving arms, head and eyes) degrades the precision of eye-hand coordination. A good kinematic calibration is often difficult to achieve, due to several factors, e.g., unmodeled deformations of the s...
Article
Full-text available
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task. Young adults made aiming movements to targets on a horizontal plane, while looking at the rotated feedback (cursor) of hand movements on a monitor. To vary the task difficulty, three rotation angles (30°, 75°, and 150°) were tested in three groups. All...

Citations

... Closed-loop approaches have been previously applied to other aspects of robotic manipulation. For example, Leitner et al. (2014) presented a method employing trained visual detectors for closed-loop, real-time reaching and obstacle avoidance on the iCub robot. More recently, CNN-based controllers for grasping have been proposed to combine deep learning with closed-loop grasping (Kalashnikov et al., 2018; Levine et al., 2016; Viereck et al., 2017). ...
Article
Full-text available
We present a novel approach to perform object-independent grasp synthesis from depth images via deep neural networks. Our generative grasping convolutional neural network (GG-CNN) predicts a pixel-wise grasp quality that can be deployed in closed-loop grasping scenarios. GG-CNN overcomes shortcomings in existing techniques, namely discrete sampling of grasp candidates and long computation times. The network is orders of magnitude smaller than other state-of-the-art approaches while achieving better performance, particularly in clutter. We run a suite of real-world tests, during which we achieve an 84% grasp success rate on a set of previously unseen objects with adversarial geometry and 94% on household items. The lightweight nature enables closed-loop control at up to 50 Hz, with which we observed 88% grasp success on a set of household objects that are moved during the grasp attempt. We further propose a method combining our GG-CNN with a multi-view approach, which improves overall grasp success rate in clutter by 10%. Code is provided at https://github.com/dougsm/ggcnn
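To make the pixel-wise prediction idea concrete, here is a minimal sketch of a GG-CNN-style fully convolutional network that maps a depth image to per-pixel grasp quality, angle, and width maps. The layer sizes, names, and the class `GraspNetSketch` are invented for illustration and are not the published architecture; see the linked repository for the authors' implementation.

```python
# Sketch of a GG-CNN-style fully convolutional grasp predictor (PyTorch).
# Layer sizes are illustrative only; see https://github.com/dougsm/ggcnn
# for the authors' released network.
import torch
import torch.nn as nn

class GraspNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 9, stride=3, padding=4, output_padding=2), nn.ReLU(),
        )
        # One output map per quantity: grasp quality, angle (sin/cos), width.
        self.quality = nn.Conv2d(16, 1, 1)
        self.angle_sin = nn.Conv2d(16, 1, 1)
        self.angle_cos = nn.Conv2d(16, 1, 1)
        self.width = nn.Conv2d(16, 1, 1)

    def forward(self, depth):
        feat = self.decoder(self.encoder(depth))
        return (self.quality(feat), self.angle_sin(feat),
                self.angle_cos(feat), self.width(feat))

# A single forward pass yields dense maps; the best grasp is simply the
# argmax of the quality map, which is what makes closed-loop use cheap.
net = GraspNetSketch()
depth = torch.randn(1, 1, 300, 300)  # stand-in for a normalized depth image
q, s, c, w = net(depth)
best = torch.nonzero(q[0, 0] == q[0, 0].max())[0]
print("best grasp pixel:", best.tolist())
```

Because the whole image is evaluated in one pass, there is no discrete sampling of grasp candidates, which is the property the abstract credits for the high control rate.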
... This approach relies on reactive grasp behaviors based on tactile sensory data to deal with problems caused by inaccurate perception, and on active monitoring during motion execution to deal with unforeseen obstacles. In another work, a reactive reaching and grasping system is proposed as a step towards a vision-guided system for object manipulation in unstructured environments [66]. The proposed system tightly integrates perception and control to realize basic eye-hand coordination on an iCub humanoid robot [68], enabling it to adapt its behavior to changes in its environment. ...
... In this architecture, a knowledge-based controller is used for guiding the other components. While most robot systems rely on similar deliberative control strategies, some systems also exhibit reactive behaviors by directly mapping percepts to actions (e.g., [65], [66]) to enable fast adaptation and ensure safety in dynamic human environments. It is also important to mention that today's robot systems generally benefit from robot software frameworks [124] such as ROS [125] as an underlying infrastructure for communication of heterogeneous components operating at different speeds, and there is a growing interest in Cloud Robotics [126] for enabling intensive computations and sharing large amounts of data. ...
Article
Service robots are expected to play an important role in our daily lives as our companions in home and work environments in the near future. An important requirement for fulfilling this expectation is to equip robots with skills to perform everyday manipulation tasks, the success of which is crucial for most home chores, such as cooking, cleaning, and shopping. Robots have been used successfully for manipulation tasks in well-structured and controlled factory environments for decades. Designing skills for robots working in uncontrolled human environments raises many potential challenges in various subdisciplines, such as computer vision, automated planning, and human-robot interaction. In spite of the recent progress in these fields, there are still challenges to tackle. This article outlines problems in different research areas related to mobile manipulation from the cognitive perspective, reviews recently published works and the state-of-the-art approaches to address these problems, and discusses open problems to be solved to realize robot assistants that can be used in manipulation tasks in unstructured human environments.
Article
Humans perform object manipulation in order to execute a specific task. Seldom is such action started with no goal in mind. In contrast, traditional robotic grasping (first stage for object manipulation) seems to focus purely on getting hold of the object—neglecting the goal of the manipulation. Most metrics used in robotic grasping do not account for the final task in their judgement of quality and success. In this Perspective we suggest a change of view. Since the overall goal of a manipulation task shapes the actions of humans and their grasps, we advocate that the task itself should shape the metric of success. To this end, we propose a new metric centred on the task. Finally, we call for action to support the conversation and discussion on such an important topic for the community. Traditional robotic grasping focuses on manipulating an object, often without considering the goal or task involved in the movement. The authors propose a new metric for success in manipulation that is based on the task itself.
Article
Full-text available
We describe our software system enabling a tight integration between vision and control modules on complex, high-DOF humanoid robots. This is demonstrated with the iCub humanoid robot performing visual object detection, reaching and grasping actions. A key capability of this system is reactive avoidance of obstacle objects detected from the video stream while carrying out reach-and-grasp tasks. The subsystems of our architecture can independently be improved and updated, for example, we show that by using machine learning techniques we can improve visual perception by collecting images during the robot’s interaction with the environment. We describe the task and software design constraints that led to the layered modular system architecture.
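The reactive-avoidance capability described above can be illustrated with a small control-loop sketch. Everything here is hypothetical: the obstacle list standing in for the vision module, the step sizes, and the potential-field heuristic are illustrative choices, not the paper's actual modules or avoidance method; the point is only that obstacles are re-read from perception on every control cycle.

```python
# Hedged sketch of a reactive reach-with-avoidance loop in the spirit of a
# tightly integrated vision/control architecture. The avoidance rule is a
# simple potential-field heuristic, not necessarily the paper's method.
import numpy as np

def avoidance_step(hand, target, obstacles, gain=1.0, rep=0.05):
    """One velocity command: attraction to the target plus repulsion
    from every detected obstacle inside a safety radius."""
    v = gain * (target - hand)                  # attractive term
    for obs in obstacles:
        d = hand - obs
        dist = np.linalg.norm(d)
        if dist < 0.15:                         # 15 cm safety radius
            v += rep * d / (dist ** 2 + 1e-6)   # repulsive term
    return v

# Control loop: obstacles are re-read every cycle, so the arm reacts to
# objects that appear or move while the reach is under way.
hand = np.array([0.0, 0.3, 0.1])
target = np.array([0.3, 0.0, 0.1])
obstacles = [np.array([0.15, 0.15, 0.1])]      # stand-in for the vision module
for _ in range(200):
    hand = hand + 0.02 * avoidance_step(hand, target, obstacles)
    if np.linalg.norm(target - hand) < 0.01:
        break
print("final hand position:", hand.round(3))
```

The layered, modular design the abstract describes is what makes this kind of loop possible: the detector that fills the obstacle list can be retrained or replaced without touching the controller.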
Article
The letter reports an evaluation of the iCub grasping capabilities, performed using the YCB Object and Model Set. The goal is to understand what kind of objects the iCub dexterous hand can grasp, and with what degree of robustness and flexibility, given the best possible control strategy. Therefore, the robot fingers are directly controlled by a human expert using a dataglove: in other words, the human brain is employed as the best possible controller. Through this technique, we provide a baseline for researchers who want to evaluate the performance of their grasping controller. By using a widespread robotic platform and a publicly available set of objects, we believe that many researchers can directly benefit from this resource; moreover, what we propose is a general methodology for benchmarking of grasping and manipulation that can be applied to any dexterous robotic hand.
Article
We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named the stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotics learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under the condition of uncertainty about the other's behavior. The experimental results show that self-organized, sensory reflex behavior, based on probabilistic prediction, emerges when learning proceeds without a precise specification of initial conditions. In contrast, intentional proactive behavior with deterministic predictions emerges when precise initial conditions are available. The results also show that, in situations where unanticipated behavior of the other robot was perceived, the behavioral context was revised adequately by adaptation of the internal neural dynamics to respond to sensory inputs during sensory reflex behavior generation. On the other hand, during intentional proactive behavior generation, an error regression scheme, by which the internal neural activity was modified in the direction of minimizing prediction errors, was needed for adequately revising the behavioral context. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two types of behavior generation schemes.
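The core mechanism, leaky-integrator units with different time constants predicting both a mean and a variance for the next sensory input, can be sketched compactly. The class name `MTRNNSketch`, the layer sizes, and the time constants below are invented for illustration and do not reproduce the published S-MTRNN.

```python
# Minimal sketch of an S-MTRNN-style predictor: fast and slow
# leaky-integrator layers, outputting a mean and a log-variance for the
# next sensory input. Sizes and time constants are illustrative only.
import torch
import torch.nn as nn

class MTRNNSketch(nn.Module):
    def __init__(self, n_in=4, n_fast=20, n_slow=10, tau_fast=2.0, tau_slow=10.0):
        super().__init__()
        self.tau_fast, self.tau_slow = tau_fast, tau_slow
        self.w_fast = nn.Linear(n_in + n_fast + n_slow, n_fast)
        self.w_slow = nn.Linear(n_fast + n_slow, n_slow)
        self.mean = nn.Linear(n_fast, n_in)
        self.logvar = nn.Linear(n_fast, n_in)  # per-dimension uncertainty

    def step(self, x, u_fast, u_slow):
        # Leaky integration: each layer updates at its own timescale,
        # so slow units carry context while fast units track the input.
        h_fast, h_slow = torch.tanh(u_fast), torch.tanh(u_slow)
        u_fast = (1 - 1 / self.tau_fast) * u_fast + \
                 (1 / self.tau_fast) * self.w_fast(torch.cat([x, h_fast, h_slow], -1))
        u_slow = (1 - 1 / self.tau_slow) * u_slow + \
                 (1 / self.tau_slow) * self.w_slow(torch.cat([h_fast, h_slow], -1))
        h_fast = torch.tanh(u_fast)
        return self.mean(h_fast), self.logvar(h_fast), u_fast, u_slow

# Training would minimize the Gaussian negative log-likelihood of the next
# sensory input, so the network learns a prediction and its precision.
net = MTRNNSketch()
x = torch.zeros(1, 4)
u_fast, u_slow = torch.zeros(1, 20), torch.zeros(1, 10)
mu, logvar, u_fast, u_slow = net.step(x, u_fast, u_slow)
nll = 0.5 * (logvar + (x - mu) ** 2 / logvar.exp()).sum()
print("one-step NLL:", float(nll))
```

The learned variance is what lets the model distinguish the two regimes the abstract contrasts: high predicted variance supports probabilistic, reflex-like responses, while low variance supports deterministic, proactive behavior.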