Figure 1 - uploaded by Luigi Penco
Content may be subject to copyright.
First evidence of the realization of the concept of a humanoid robot. Left: the first recorded designs of humanoid robots, by Leonardo da Vinci (1495), and a reconstruction of Leonardo's robot from the original drawings (credits: lucasliso i.pinimg.com). Right: supposedly the first humanoid robot ever built, a soldier with a trumpet made by Friedrich Kaufmann (1810) (credits: AP Photo/Heinrich Sanden).

Source publication
Thesis
Full-text available
This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial to send and control robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct an...

Contexts in source publication

Context 1
... one of the first recorded designs of a humanoid robot was made by Leonardo da Vinci around 1495. His notebooks, rediscovered in the 1950s, contain detailed drawings of a mechanical knight in armour (Figure 1 left) that was supposed to sit up, wave its arms, and move its head and jaw. Evidence of built humanoid robots only appears three centuries later. ...
Context 2
... of the first recorded designs of a humanoid robot was made by Leonardo da Vinci in around 1495. His notebooks, rediscovered in the 1950s, contain detailed drawings of a mechanical knight in armour (Figure 1 left), which was supposed to sit up, wave its arms and move its head and jaw. Evidence of built humanoid robots only surfaces three centuries later. ...
Context 3
... of built humanoid robots only surfaces three centuries later. The first built humanoid robot in history is in fact said to be a soldier with automatic bellows blowing a trumpet (Figure 1 right), made in 1810 by Friedrich Kaufmann in Dresden, Germany. In 1928, a more advanced humanoid robot was exhibited at the annual exhibition of the Model Engineers Society in London. ...
Context 4
... teleoperation system consists of several parts, including the perception of human states, their mapping to the robot, the robot controller, and the perception of the robot states and the remote environment, essential to give feedback to the user. Figure 1.1 is a schematic view of the architecture for teleoperating the robot. First, human kinematic and dynamic information is measured in order to teleoperate the robot. ...
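The pipeline described here can be sketched as a single control cycle; all function names below are illustrative placeholders for the four stages, not the thesis's actual API:

```python
def teleoperation_step(measure_human, retarget, robot_control, send_feedback):
    """One cycle of a generic teleoperation loop: perceive the human,
    map the motion to the robot, control the robot, feed state back."""
    human_state = measure_human()            # kinematics/dynamics from wearables
    robot_refs = retarget(human_state)       # map human motion to robot references
    robot_state = robot_control(robot_refs)  # whole-body controller tracks the refs
    send_feedback(robot_state)               # visual/haptic feedback to the operator
    return robot_state

# Minimal stub run with placeholder stages
state = teleoperation_step(
    measure_human=lambda: {"com": [0.0, 0.0, 0.9]},
    retarget=lambda h: {"com_ref": h["com"]},
    robot_control=lambda r: {"com": r["com_ref"]},
    send_feedback=lambda s: None,
)
print(state)  # {'com': [0.0, 0.0, 0.9]}
```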
Context 5
... technologies have been employed in the literature to measure the motion of the main human limbs, such as the legs, torso, arms, head, and hands (Figure 1.2). A ubiquitous option is Inertial Measurement Unit (IMU)-based wearable technology. ...
Context 6
... conventional way to provide situation awareness to the human operator is through visual feedback (Figure 1.3). Visual information allows the user to localize themselves and other humans or objects in the remote environment. ...
Context 7
... are different technologies available in the literature to provide haptic feedback to the human (Figure 1.4). Force feedback, tactile and vibro-tactile feedback are the most used in teleoperation scenarios. ...
Context 8
... MVN solutions are based on inertial technology, using gyroscopes, accelerometers, and magnetometers rather than digital cameras, volumes and markers to capture movement. The hardware version of the Xsens MVN product line we used in all our experiments was the MVN Link (Figure 1.5). It consists of a lycra suit with 17 wired trackers, a wireless data link and a battery with 9.5 h of life. ...
Context 9
... recording session saves the tracked data containing the information of the tracked body segments (their pose, velocity and acceleration) together with the joint angles values and the center of mass position; all the information that is generally used in a retargeting problem. Figure 1.6 contains a list of the tracked body segments and joints. Further details about the system can be found in [156]. ...
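A frame of such tracked data (segment poses, velocities, joint angles, center of mass) might be organized as below; this is a hypothetical schema for illustration only, not the actual Xsens MVN export format:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RetargetingFrame:
    """One time-step of motion-capture data used as input to retargeting.
    Field and joint names here are illustrative, not the real MVN schema."""
    segment_poses: Dict[str, List[float]]       # segment -> [x, y, z, qw, qx, qy, qz]
    segment_velocities: Dict[str, List[float]]  # segment -> [vx, vy, vz, wx, wy, wz]
    joint_angles: Dict[str, float]              # joint name -> angle in radians
    com_position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])

frame = RetargetingFrame(
    segment_poses={"Pelvis": [0.0, 0.0, 0.9, 1.0, 0.0, 0.0, 0.0]},
    segment_velocities={"Pelvis": [0.0] * 6},
    joint_angles={"jRightKnee": 0.3},
)
print(frame.joint_angles["jRightKnee"])  # 0.3
```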
Context 10
... robotic platform we used in our teleoperation system is the iCub robot (Figure 1.7). iCub [126] is a research-grade open-source humanoid robot designed by the Italian Institute of Technology (IIT) to experiment with embodied artificial intelligence. ...
Context 11
... during the teleoperation the ZMP position of the robot always lies inside the support polygon, as expected (see Figure 2 ...). The legs are not taken into consideration in the stack, and the robot is free to adapt the posture of the legs according to the retargeted center of mass and waist height. Hence, the resulting leg trajectories do not follow the human reference precisely (Figure 2.11), except for the knees and the pitch of the ankles, which are close to those of the human because of the waist height tracking. Even if the human performs motions that are very challenging for balance, the ZMP constraints introduced by our framework make the robot maintain its balance. ...
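The balance criterion mentioned here reduces, in the plane, to checking that the 2D ZMP lies inside the support polygon. A minimal sketch for a convex polygon with counter-clockwise vertices (the vertex values below are made-up foot dimensions, not the iCub's):

```python
def zmp_in_support_polygon(zmp, polygon):
    """Check that a 2D ZMP point lies inside a convex support polygon.
    `polygon` is a list of (x, y) vertices in counter-clockwise order.
    Returns True if the point is inside or on the boundary."""
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Cross product of the edge vector and the vector to the point:
        # a negative value means the point is to the right of this CCW edge,
        # i.e. outside the polygon.
        cross = (x2 - x1) * (zmp[1] - y1) - (y2 - y1) * (zmp[0] - x1)
        if cross < 0.0:
            return False
    return True

# Illustrative rectangular double-support polygon (both feet on the ground)
support = [(-0.05, -0.15), (0.15, -0.15), (0.15, 0.15), (-0.05, 0.15)]
print(zmp_in_support_polygon((0.05, 0.0), support))  # True
print(zmp_in_support_polygon((0.30, 0.0), support))  # False
```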
Context 12
... validate our optimization process on the humanoid robot iCub, for double support motions. We compare controllers that were automatically optimized on a training sequence of motions against a baseline hand-tuned controller, over different motion sequences (see Figure 3.1). We further show that our optimized controller can be successfully employed to execute a variety of motions, other than those used in the training, for example for teleoperation, where reference trajectories (unknown a priori) are generated in real-time by the human operator. ...
Context 13
... retargeting of the upper-body joints, which is important for manipulation, has often been done independently from the motion generation of the legs, crucial for balancing and locomotion. In [109] for example, the authors employ the mobile manipulator Justin to retarget upper-body motions with haptic feedback at the hands, without considering leg motions (Figure 4.1). Kim et al. [97] were among the first to extend the robot teleoperation to walking motions. ...
Context 14
... [55], the authors teleoperate the iCub robot in an immersive scenario using a VR headset and a walking platform (Figure 4.1). The robot starts and stops walking whenever the operator does, but the retargetable double-support motions are only limited arm movements. ...
Context 15
... retargeting can be performed also at the whole-body level. Ishiguro et al. [79] conducted some experiments retargeting highly dynamic upper-body and leg motions onto the humanoid robot JAXON (Figure 4.1). Although suitable for executing challenging movements, such as kicking or hitting a tennis ball with a racket, their technique cannot be used for extended walking due to the great mismatch between the human and robot dynamics, which forces the human to walk in a ... Figure 4.2: Overview of the proposed teleoperation system. ...
Context 16
... key idea is that if the robot executes the desired movement before the operator performs it, then the operator will watch a delayed video feed that will be almost indistinguishable from a real-time feed (Figure 5.1). At each time-step, the robot analyzes the data that it has received so far, measures the communication time, estimates the communication time to send the feedback, and predicts what the operator is most likely to do in the next seconds. ...
Context 17
... Goals. The training set is composed of 7 demonstrations of bottle-reaching motions (Figure 5.10), with the goal located in 7 different positions. The average duration of the demonstrations is 6.1s. ...
Context 18
... motion recognition, we assumed that the duration of the observed trajectories is equal to the mean duration of the demonstrated trajectories, which might not be true. To match as closely as possible the exact speed at which the movement is being executed by the human operator, we have to estimate the actual trajectory duration (Figure 5.12). More specifically, we want to find the time modulation α, that maps the actual duration of a given (observed) trajectory to the mean duration of the associated demonstrated trajectories. ...
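One simple way to estimate the time modulation α is a grid search that rescales the observed timestamps into the demonstrations' time base and picks the scaling that best matches the mean demonstrated trajectory. This is only a sketch under that assumption, not the thesis's exact estimator:

```python
import numpy as np

def estimate_time_modulation(observed, demo_mean, alphas=np.linspace(0.5, 2.0, 31)):
    """Grid-search the time modulation alpha aligning a partial observation
    with the mean demonstrated trajectory. alpha > 1 means the operator is
    moving slower than the demonstrations (our convention, for illustration)."""
    t_obs = np.arange(len(observed))
    best_alpha, best_err = 1.0, np.inf
    for a in alphas:
        # Rescale observation timestamps into the demonstration's time base
        t_demo = t_obs / a
        ref = np.interp(t_demo, np.arange(len(demo_mean)), demo_mean)
        err = np.mean((observed - ref) ** 2)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha

# A demonstration and the same motion performed twice as slowly
demo = np.sin(np.linspace(0, np.pi, 100))
slow = np.sin(np.linspace(0, np.pi / 2, 100))  # first half of the motion, stretched 2x
print(estimate_time_modulation(slow, demo))  # 2.0
```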
Context 19
... generating a first prediction, the transition from delayed to predicted references can be discontinuous (Figure 5.13). To smooth the transition, a policy blending arbitrates the delayed received references y(t − τ_f(t)) and the predicted ones ŷ(t + τ̂_b(t) | t − τ_f(t)), determining the adjusted reference (Figure 5.13): ...
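A minimal sketch of such policy blending, assuming a linear weight ramp over a fixed transition window (the thesis's actual arbitration law may differ):

```python
def blended_reference(delayed, predicted, t, t_start, T_blend):
    """Arbitrate between the delayed reference and the predicted one with a
    blending weight that ramps linearly from 0 to 1 over a transition window.

    delayed, predicted: reference values at time t
    t_start: time at which the first prediction became available
    T_blend: duration of the transition window
    """
    w = min(max((t - t_start) / T_blend, 0.0), 1.0)  # clamp weight to [0, 1]
    return (1.0 - w) * delayed + w * predicted

# Before the window the delayed reference dominates; after it, the prediction does
print(blended_reference(0.0, 1.0, t=0.0, t_start=0.0, T_blend=1.0))  # 0.0
print(blended_reference(0.0, 1.0, t=0.5, t_start=0.0, T_blend=1.0))  # 0.5
print(blended_reference(0.0, 1.0, t=2.0, t_start=0.0, T_blend=1.0))  # 1.0
```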
Context 21
... compensate for the delay, once the robot has identified the best ProMP that matches the observations, it has to select the predictions that correspond to the right time-step from the mean of the ProMP. By computing the round-trip delay, the robot chooses the right samples from the current movement prediction, i.e., the posterior ProMP distributions (Figure 5.13). At the beginning of the motion, before any ProMP is recognized, the robot uses the delayed commands; however, once a ProMP is recognized, the robot can start compensating for the delays, but first the delayed trajectory needs to catch up with the ProMP. ...
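Selecting the delay-compensating sample can be sketched as an index shift into the ProMP mean, assuming a fixed sampling rate so that the round-trip delay converts to an integer number of steps (a simplification for illustration):

```python
def compensated_sample(promp_mean, t_index, round_trip_steps):
    """Pick the ProMP mean sample that compensates the round-trip delay:
    instead of replaying the delayed sample at t_index, look ahead by
    round_trip_steps, clamping at the end of the trajectory."""
    idx = min(t_index + round_trip_steps, len(promp_mean) - 1)
    return promp_mean[idx]

promp = [0.0, 0.1, 0.2, 0.3, 0.4]   # toy 1-D ProMP mean trajectory
print(compensated_sample(promp, 1, 2))  # 0.3: two steps ahead of index 1
print(compensated_sample(promp, 4, 3))  # 0.4: clamped at the last sample
```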
Context 22
... evaluated our prescient whole-body teleoperation system on the iCub robot with a time-varying round-trip delay of around 1.5s, given by a stochastic forward delay following a normal distribution with 750ms as mean and 100ms as standard deviation, and a constant backward delay of 750ms (Figure 5.16 and Figure A.1). We used a constant backward delay in these experiments because we relied on an existing video streaming system that cannot artificially delay images randomly. ...
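This delay model is easy to reproduce in simulation; clipping negative forward samples to zero is our assumption (the thesis does not specify how out-of-range draws are handled):

```python
import random

def sample_round_trip_delay(fwd_mean=0.750, fwd_std=0.100, backward=0.750):
    """Sample one round-trip delay in seconds, as in the experiments above:
    a stochastic forward delay ~ N(750 ms, 100 ms) plus a constant 750 ms
    backward delay. Negative forward samples are clipped to zero."""
    forward = max(random.gauss(fwd_mean, fwd_std), 0.0)
    return forward + backward

random.seed(0)
delays = [sample_round_trip_delay() for _ in range(10000)]
print(sum(delays) / len(delays))  # close to 1.5 s on average
```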
Context 23
... error is computed for the 20 testing motions from the bottle reaching scenario of the dataset Multiple Tasks (including those from Figure 5.7), and for the 21 testing motions from the box handling scenario of the dataset Multiple Tasks (including those from Figure 5.5). The compensated trajectories are temporally realigned with the non-delayed trajectories for computing the error, which is considered only once the prediction starts, and once the blended transition from delay to compensation is over (Figure 5.13). The time-varying forward delay follows a normal distribution with 750ms as mean and 100ms as standard deviation. ...
Context 24
... the experiments can be seen in the video youtu.be/N3u4ot3aIyQ. If the operator decides to stop or to perform a different movement from the one that has begun, then the predicted trajectories are blended into the delayed trajectories in a way similar to that adopted to switch from delayed to predicted references (as shown in Figure 5.14), hence avoiding any undesired prolonged mismatch. ...
Context 25
... then evaluated the performance of the compensation when the communication delay increases (Figure 5.18A). To do so, we compared the compensated trajectory to the non-delayed trajectory, for the right hand, in the task of reaching the bottle on the table of the dataset Multiple Tasks (Figure 5.7). ...
Context 26
... do so, we compared the compensated trajectory to the non-delayed trajectory, for the right hand, in the task of reaching the bottle on the table of the dataset Multiple Tasks (Figure 5.7). During the synchronization, the error is roughly proportional to the delay (Figure 5.13), which adds to the prediction errors. In that case, we observe a mean error of about 2.5cm for a 1s delay, but more than 10cm for a 3s delay, because the transition takes up a significant fraction of a short trajectory (30% for a delay of 3s and a trajectory of 10s). ...
Context 27
... Tracking error of the compensated trajectories for the right hand position with respect to the non-delayed ones, compared to the tracking error of the corresponding non-compensated (delayed) trajectories with respect to the same non-delayed ones. The tracking error of the compensated trajectories is considered both including the transition from the delayed phase to the synchronization phase (Figure 5.13), which adds a non-compensable error, and without the transition. The RMS of the error is computed from the 10 testing motions of the task of reaching the bottle on the table from dataset Multiple Tasks (Figure 5.7) and its median value is reported as a bold line. ...
Context 28
... to how we tested the adaptability of the approach to the specific way the operator performs a given task, we also investigated its ability to adjust to new object locations. To do that, we recorded a third dataset (data "Goals") by teleoperating the robot in simulation to reach a bottle located on the table in several positions (Figure 5.10), then we tested the approach while reaching the same bottle but at different locations (Figure 5.11). ...

Citations

Article
We present a tele-operation control framework that (i) enhances upper-body motion synchrony between a user and a robot using the minimum-jerk model coupled with a recursive least-squares filter, and (ii) synchronizes the walking pace by predicting the user's stepping frequency using motion capture data and a deep learning model. By integrating (i) and (ii) in a task-space whole-body controller, we achieve full-body synchronization. We assess our humanoid-to-human whole-body synchronized motion model on the HRP-4 humanoid robot in experiments with forward, lateral and backward walks and concurrent upper-limb motions.