Two WAM 7-DoF robots performing a bimanual sweeping task.

Source publication
Conference Paper
Full-text available
Very often, when addressing the problem of human-robot skill transfer in task space, only the Cartesian position of the end-effector is encoded by the learning algorithms, instead of the full pose. However, orientation is just as important as position, if not more, when it comes to successfully performing a manipulation task. In this paper, we pre...

Contexts in source publication

Context 1
... robotic manipulation (Fig. 1) is a good example of a scenario where complex movements at the level of the end-effectors are needed for performing successfully [6]. In this context, learning generalized motions (i.e., position and orientation) is crucial for achieving dexterous and autonomous dual-arm skills. In a PbD framework, we posit ...
Context 2
... apply the proposed framework to the learning of a bimanual sweeping task, a particular case where bimanual coordination patterns that encompass both position and orientation constraints arise. For this task, we employed two torque-controlled 7-DoF WAM robots (see Fig. 1). A broom is attached to the tool plate of the right arm using a Cardan joint, while the left arm uses a Barrett robotic hand to hold the broom. Since the broom is passively attached to the right arm, the sweeping movement consists of a rotation between the two end-effectors, with the hand grabbing the broom and describing a ...
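The coordination constraint in this task is most naturally expressed as the relative orientation between the two end-effectors. A minimal sketch of how that quantity can be computed from the two end-effector orientations (not the authors' implementation; it assumes unit quaternions in Hamilton [w, x, y, z] convention and hypothetical function names):

import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion q = [w, x, y, z].
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(q1, q2):
    # Hamilton product of two quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def relative_orientation(q_left, q_right):
    # Orientation of the left end-effector (hand on the broom) expressed
    # in the frame of the right end-effector (broom attachment).
    return quat_mul(quat_conj(q_right), q_left)

Encoding this relative quantity, rather than the two absolute orientations separately, is one way to capture a coordination pattern such as the sweeping rotation.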

Similar publications

Preprint
Full-text available
A deep generative model such as a GAN learns to model a rich set of semantic and physical rules about the target distribution, but until now it has remained unclear how such rules are encoded in the network, or how a rule could be changed. In this paper, we introduce a new problem setting: manipulation of specific rules encoded by a deep generative mo...
Conference Paper
Full-text available
Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every s...
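The simplest way to see the stability constraint at work is to compare an unconstrained least-squares estimate of the transition matrix with a crude post-hoc projection onto the stable set. The sketch below (plain NumPy, illustrative only, not the algorithm proposed in the paper) rescales the estimate so its spectral radius stays below one:

import numpy as np

def fit_stable_lds(X, eps=1e-3):
    # X: (d, T) array of state snapshots for the model x_{t+1} = A x_t.
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)                # unconstrained least squares
    rho = max(abs(np.linalg.eigvals(A)))       # spectral radius of the estimate
    if rho >= 1.0:
        A = A * (1.0 - eps) / rho              # crude rescaling onto the stable set
    return A

Methods like the one described above instead enforce stability inside the optimization, avoiding the accuracy loss that such naive rescaling incurs.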
Conference Paper
Full-text available
This paper introduces a class of non-singular manifolds with predefined-time stability. That is, for a given dynamical system with its trajectories constrained to this manifold, predefined-time stability to the origin can be shown. In addition, the function that defines the manifold and its derivative along the system trajectories are co...
Article
Full-text available
Let $\Omega \subset \mathbb{R}^M$ be a compact connected $M$-dimensional real analytic domain with boundary, and let $\phi$ be a primal navigation function, i.e., a real analytic Morse function on $\Omega$ with a unique minimum and whose negative gradient vector field $G$ points inwards along each coordinate on the boundary of $\Omega$. Related to a robotics...

Citations

... Yi et al. (2022) developed an autonomous robotic grasping system using an imitation learning algorithm consisting of K-means clustering and DMP, which could be finely manipulated using a variety of machine learning methods, and proved its reliability through evaluation. There are also studies on improving individual algorithms or combining multiple algorithms to improve iterative efficiency and reproduction accuracy; for example, a task-parameterized GMM has been used to learn demonstrated trajectories and extract motion characteristics, enabling the robot to perform the dual-arm sweeping task smoothly (Silvério et al., 2015). However, the reference movement for demonstration learning relies on the richness of the experimental data. ...
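The task-parameterized GMM mentioned in this context encodes each mixture component in several local frames and fuses them at reproduction time. A minimal sketch of that fusion step for a single component (illustrative only, not the cited implementation; frames are given as affine pairs (A_j, b_j)):

import numpy as np

def tp_gaussian_fusion(local_mus, local_Sigmas, frames):
    # Map each local Gaussian into the world frame and fuse them by a
    # product of Gaussians (single mixture component, for brevity).
    Lambda_sum, eta_sum = 0.0, 0.0
    for (A, b), mu, Sigma in zip(frames, local_mus, local_Sigmas):
        mu_w = A @ mu + b                      # local -> world mean
        Sigma_w = A @ Sigma @ A.T              # local -> world covariance
        P = np.linalg.inv(Sigma_w)             # precision of the mapped Gaussian
        Lambda_sum = Lambda_sum + P
        eta_sum = eta_sum + P @ mu_w
    Sigma_hat = np.linalg.inv(Lambda_sum)      # fused covariance
    return Sigma_hat @ eta_sum, Sigma_hat      # fused mean, fused covariance

Because the frames (A_j, b_j) change with the task situation, the same learned component generalizes to new object or end-effector poses.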
Article
Full-text available
With the development of technology, the humanoid robot is no longer a concept, but a practical partner with the potential to assist people in industry, healthcare and other daily scenarios. The basis for the success of humanoid robots is not only their appearance, but more importantly their anthropomorphic behaviors, which are crucial for human-robot interaction. Conventionally, robots are designed to follow meticulously calculated and planned trajectories, which typically rely on predefined algorithms and models, resulting in poor adaptability to unknown environments. Especially when faced with the increasing demand for personalized and customized services, predefined motion planning cannot adapt in time to personal behavior. To solve this problem, anthropomorphic motion planning has become the focus of recent research, with advances in biomechanics, neurophysiology, and exercise physiology deepening the understanding of how the body generates and controls movement. However, there is still no consensus on the criteria by which anthropomorphic motion is accurately generated, or on how to generate it. Although there are articles that provide an overview of anthropomorphic motion planning methods such as sampling-based, optimization-based, mimicry-based, and others, these methods differ only in the nature of the planning algorithms and have not yet been systematically discussed in terms of the basis for extracting upper-limb motion characteristics. To better address the problem of anthropomorphic motion planning, the key milestones and most recent literature have been collated and summarized, and three crucial topics are proposed for achieving anthropomorphic motion: motion redundancy, motion variation, and motion coordination. The three characteristics are interrelated and interdependent, posing a challenge for anthropomorphic motion planning systems. To provide some insights for research on anthropomorphic motion planning and improve anthropomorphic motion ability, this article proposes a new taxonomy based on physiology and a more complete system of anthropomorphic motion planning, providing a detailed overview of the existing methods and their contributions.
... Since the ADAM robot is geared towards the care of elderly individuals or those with a certain degree of physical disability, this task has been established as relevant and primary in people's daily lives. Various works present the sweeping task as easily applicable within the standards of learning by demonstration (Silvério et al., 2015; Sasatake et al., 2021; Liu et al., 2018). ...
Article
Full-text available
In the field of robotics, the demand for adaptable skills capable of effectively handling diverse situations has surpassed the reliance on repetitive tasks. To enhance the generalization of motion policies, task-parameterized learning has emerged as a valuable approach, as it encodes pertinent contextual information in task parameters, facilitating the flexible execution of tasks. This process requires the collection of multiple demonstrations in various situations. Generating a set of diverse situations covering all possible cases is a complex task; for this reason, training with fewer demonstrations is highly desirable. In this article, we present a novel algorithm focused on generating synthetic information to facilitate the generalization of parameterized tasks. The algorithm makes use of Kinesthetic Fast Marching Learning, a Learning from Demonstration (LfD) algorithm that obtains optimal movement paths based on velocity fields. It enables autonomous data generation, producing demonstrations that are as good as or better than those generated by users themselves. Evaluation is done through a metric based on the Wasserstein distance that takes into account the probabilistic data of the generated paths. The algorithm has been evaluated through tests in simulated environments, comparing its efficiency against two widely used algorithms (where it has shown greater efficiency in generalization and the generation of more optimal paths), and with real-world tests in a task-oriented environment (sweeping) carried out by the ADAM robot.
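For reference, when paths are summarized by Gaussian distributions, the 2-Wasserstein distance between two of them has a well-known closed form; the sketch below shows that general formula (not necessarily the exact metric used in the article):

import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_sq_gaussian(m1, S1, m2, S2):
    # Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    # ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
    S2_half = sqrtm(S2)
    cross = sqrtm(S2_half @ S1 @ S2_half)
    return float(np.sum((m1 - m2) ** 2)
                 + np.trace(S1 + S2 - 2.0 * np.real(cross)))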
... Operations such as the Gaussian product and conditioning are defined using Riemannian statistics and the exponential mapping. It has also been applied to bimanual manipulation [42] and grasping [8] tasks. ProMP operations can also be carried out on Riemannian manifolds, including trajectory modulation, blending, and task parameterization [43]. ...
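The exponential and logarithmic maps referred to here are the basic tools of such Riemannian statistics. A minimal sketch for unit quaternions on S^3 at the identity (conventions vary; some works include an extra factor of 2 to obtain rotation vectors):

import numpy as np

def quat_log(q, eps=1e-12):
    # Log map at the identity: unit quaternion [w, x, y, z] -> 3-vector
    # in the tangent space.
    w, v = q[0], np.asarray(q[1:])
    nv = np.linalg.norm(v)
    if nv < eps:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / nv

def quat_exp(r, eps=1e-12):
    # Exp map at the identity: 3-vector in the tangent space -> unit quaternion.
    nr = np.linalg.norm(r)
    if nr < eps:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(nr)], np.sin(nr) * np.asarray(r) / nr))

Gaussian products and conditioning are then carried out on the tangent-space images of the data and mapped back with the exp map.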
Article
Full-text available
This paper proposes a learning-from-demonstration (LfD) method using probability densities on the workspaces of robot manipulators. The method, named PRobabilistically-Informed Motion Primitives (PRIMP), learns the probability distribution of end-effector trajectories in the 6D workspace that includes both positions and orientations. It is able to adapt to new situations such as novel via points with uncertainty and a change of viewing frame. The method itself is robot-agnostic, in that the learned distribution can be transferred to another robot with adaptation to its workspace density. Workspace-STOMP, a new version of the existing STOMP motion planner, is also introduced, which can be used as a post-process to improve the performance of PRIMP and any other reachability-based LfD method. The combination of PRIMP and Workspace-STOMP can further help the robot avoid novel obstacles that are not present during the demonstration process. The proposed methods are evaluated with several sets of benchmark experiments. PRIMP runs more than 5 times faster than existing state-of-the-art methods while generalizing trajectories more than twice as close to both the demonstrations and novel desired poses. The methods are then combined with our lab's robot imagination method, which learns object affordances, illustrating their applicability to learning tool use through physical experiments.
... With the help of an additionally equipped arm, a dual-arm robotic system possesses many merits compared with a single robot arm, such as flexible distribution of payload, adjustable contact support, and efficient task execution, among others [1]. It has been observed that bimanual robots can accomplish a wide range of complicated tasks, such as deformable object shaping [2], [3], [4], stir-fry cooking [5], electric cable routing [6], floor sweeping [7], clothes folding [8], component screwing [9], and wrench balancing [10], just to name a few. ...
Article
Full-text available
Robots with bimanual morphology usually possess higher flexibility, dexterity, and efficiency than those only equipped with a single arm. The dual-arm structure has enabled robots to perform various intricate tasks that are difficult or even impossible to achieve by unimanipulation. In this article, we aim to achieve robust bimanual grasping for object transportation. In particular, provided that stable contact is the key to the success of the transportation task, our focus lies on stabilizing the contact between the object and the robot end-effectors by employing the contact servoing strategy. To ensure that the contact is stable, the contact wrenches are required to evolve within the so-called friction cones all the time throughout the transportation task. To this end, we propose stabilizing the contact by leveraging a novel contact parameterization model. Parameterization expresses the contact stability manifold with a set of constraint-free exogenous parameters where the mapping is bijective. Notably, such parameterization can guarantee that the contact stability constraints can always be satisfied. We also show that many commonly used contact models can be parameterized out of a similar principle. Furthermore, to exploit the parameterized contact models in the control law, we devise a contact servoing strategy for the bimanual robotic system such that the force feedback signals from the force/torque sensors are incorporated into the control loop. The effectiveness of the proposed approach is well demonstrated with the experiments on several representative bimanual transportation tasks.
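The contact stability constraint discussed in this abstract reduces, per contact, to keeping the force inside the friction cone. A minimal check of that condition (illustrative only; the article's parameterization goes further by making the constraint hold by construction):

import numpy as np

def in_friction_cone(f, n, mu):
    # True if contact force f (3-vector) lies inside the friction cone
    # defined by the unit inward normal n and friction coefficient mu.
    n = np.asarray(n) / np.linalg.norm(n)
    f = np.asarray(f)
    f_n = float(np.dot(f, n))                  # normal component (must press in)
    f_t = np.linalg.norm(f - f_n * n)          # tangential magnitude
    return f_n > 0.0 and f_t <= mu * f_n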
... Operations such as the Gaussian product and conditioning are defined using Riemannian statistics and the exponential mapping. It has also been applied to bimanual manipulation [35] and grasping [8] tasks. ProMP operations can also be carried out on Riemannian manifolds, including trajectory modulation, blending, and task parameterization [36]. ...
Preprint
Full-text available
This paper proposes a learning-from-demonstration method using probability densities on the workspaces of robot manipulators. The method, named "PRobabilistically-Informed Motion Primitives (PRIMP)", learns the probability distribution of end-effector trajectories in the 6D workspace that includes both positions and orientations. It is able to adapt to new situations such as novel via poses with uncertainty and a change of viewing frame. The method itself is robot-agnostic, in that the learned distribution can be transferred to another robot with adaptation to its workspace density. The learned trajectory distribution is then used to guide an optimization-based motion planning algorithm to further help the robot avoid novel obstacles that are unseen during the demonstration process. The proposed methods are evaluated with several sets of benchmark experiments. PRIMP runs more than 5 times faster while generalizing trajectories more than twice as close to both the demonstrations and novel desired poses. It is then combined with our robot imagination method, which learns object affordances, illustrating the applicability of PRIMP to learning tool use through physical experiments.
... For the orientation, early work [13] did not consider geometric constraints (unit norm for UQs or orthogonality for rotation matrices) when learning the orientation data. Instead, they modified the generated trajectory at runtime to fulfill the constraints, causing deviations from the demonstrated motion. ...
Article
Full-text available
In this paper, we propose RiemannianFlow, a deep generative model that allows robots to learn complex and stable skills evolving on Riemannian manifolds. Examples of Riemannian data in robotics include stiffness (symmetric positive definite (SPD) matrices) and orientation (unit quaternion (UQ)) trajectories. For Riemannian data, unlike Euclidean data, different dimensions are interconnected by geometric constraints which have to be properly considered during the learning process. Using distance-preserving mappings, our approach transfers the data between their original manifold and the tangent space, removing and then re-imposing the constraints. This allows existing frameworks to be extended to learn stable skills from Riemannian data while guaranteeing the stability of the learning results. The ability of RiemannianFlow to learn various data patterns and the stability of the learned models are experimentally shown on a dataset of manifold motions. Further, we analyze from different perspectives the robustness of the model with different hyperparameter combinations. While stability is not affected by different hyperparameters, a proper choice of the hyperparameters leads to a significant improvement (up to 27.6%) in the model accuracy. Last, we show the effectiveness of RiemannianFlow in a real peg-in-hole (PiH) task, where we need to generate stable and consistent position and orientation trajectories for the robot starting from different initial poses.
... For the orientation, early work [13] did not consider geometric constraints (unit norm for UQs or orthogonality for rotation matrices) when learning the orientation data. Instead, they modified the generated trajectory at runtime to fulfill the constraints, causing deviations from the demonstrated motion. ...
Preprint
Full-text available
In this paper, we propose RiemannianFlow, a deep generative model that allows robots to learn complex and stable skills evolving on Riemannian manifolds. Examples of Riemannian data in robotics include stiffness (symmetric positive definite (SPD) matrices) and orientation (unit quaternion (UQ)) trajectories. For Riemannian data, unlike Euclidean data, different dimensions are interconnected by geometric constraints which have to be properly considered during the learning process. Using distance-preserving mappings, our approach transfers the data between their original manifold and the tangent space, removing and then re-imposing the geometric constraints. This allows existing frameworks to be extended to learn stable skills from Riemannian data while guaranteeing the stability of the learning results. The ability of RiemannianFlow to learn various data patterns and the stability of the learned models are experimentally shown on a dataset of manifold motions. Further, we analyze from different perspectives the robustness of the model with different hyperparameter combinations. It turns out that the model's stability is not affected by different hyperparameters, while a proper combination of the hyperparameters leads to a significant improvement (up to 27.6%) in the model accuracy. Last, we show the effectiveness of RiemannianFlow in a real peg-in-hole (PiH) task, where we need to generate stable and consistent position and orientation trajectories for the robot starting from different initial poses.
... In many previous works, training data are simply treated as time series of Euclidean vectors. Other approaches, such as Pastor et al. (2009) and Silvério et al. (2015), learn and adapt quaternion trajectories without enforcing the unit norm constraint, which leads to non-unit quaternions and hence requires an additional re-normalization step. Nevertheless, several works in the literature have investigated to some degree the problem of learning manipulation skills with specific geometric constraints. ...
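The re-normalization step mentioned here is simply a projection back onto the unit sphere, applied after the learner produces a generally non-unit quaternion; it is this projection that introduces the deviations from the demonstration. A minimal sketch:

import numpy as np

def renormalize_quaternion(q, eps=1e-12):
    # Project a possibly non-unit quaternion [w, x, y, z] back onto S^3.
    q = np.asarray(q, dtype=float)
    n = np.linalg.norm(q)
    return q / n if n > eps else np.array([1.0, 0.0, 0.0, 0.0])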
Preprint
Full-text available
Learning from demonstration (LfD) is considered an efficient way to transfer skills from humans to robots. Traditionally, LfD has been used to transfer Cartesian and joint positions and forces from human demonstrations. The traditional approach works well for some robotic tasks, but for many tasks of interest it is necessary to learn skills such as orientation, impedance, and/or manipulability that have specific geometric characteristics. An effective encoding of such skills can only be achieved if the underlying geometric structure of the skill manifold is considered and the constraints arising from this structure are fulfilled during both learning and execution. However, typical learned skill models such as dynamic movement primitives (DMPs) are limited to Euclidean data and fail to correctly embed quantities with geometric constraints. In this paper, we propose a novel and mathematically principled framework that uses concepts from Riemannian geometry to allow DMPs to properly embed geometric constraints. The resulting DMP formulation can deal with data sampled from any Riemannian manifold including, but not limited to, unit quaternions and symmetric and positive definite matrices. The proposed approach has been extensively evaluated both on simulated data and in real robot experiments. The performed evaluation demonstrates that beneficial properties of DMPs, such as convergence to a given goal and the possibility to change the goal during operation, also apply to the proposed formulation.
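For reference, the classic Euclidean DMP that such Riemannian formulations generalize consists of a spring-damper transformation system driven by a learned forcing term and a phase variable (standard formulation after Ijspeert et al.; symbols follow the usual convention):

$$\tau\,\dot{z} = \alpha_z\bigl(\beta_z\,(g - y) - z\bigr) + f(x), \qquad \tau\,\dot{y} = z, \qquad \tau\,\dot{x} = -\alpha_x\, x$$

Riemannian variants replace the Euclidean goal error $g - y$ (and the integration step) with the manifold's logarithmic and exponential maps, so that quantities such as unit quaternions or SPD matrices never leave their manifold.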
... One approach is to construct a probabilistic model of the demonstration characteristics using statistical methods. Calinon et al. [2][3][4][5][6] proposed a Gaussian mixture model-Gaussian mixture regression (GMM-GMR) learning framework to extract the feature distribution of trajectories in task space as well as in manipulator joint space, and applied the learned model to new task scenarios. Paraschos et al. [7][8][9][10] proposed probabilistic movement primitives, which introduce linearly weighted Gaussian basis functions to construct the probabilistic model. ...
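The GMR step of the GMM-GMR framework mentioned here boils down to Gaussian conditioning. A minimal per-component sketch (illustrative only; the full regression additionally weights components by their posterior responsibilities):

import numpy as np

def gaussian_condition(mu, Sigma, t, idx_in, idx_out):
    # Condition the joint Gaussian N(mu, Sigma) over (input, output)
    # dimensions on the input value t; returns conditional mean and covariance.
    mu_i, mu_o = mu[idx_in], mu[idx_out]
    S_ii = Sigma[np.ix_(idx_in, idx_in)]
    S_oi = Sigma[np.ix_(idx_out, idx_in)]
    S_oo = Sigma[np.ix_(idx_out, idx_out)]
    K = S_oi @ np.linalg.inv(S_ii)             # regression gain
    mean = mu_o + K @ (np.atleast_1d(t) - mu_i)
    cov = S_oo - K @ S_oi.T
    return mean, cov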
Article
Full-text available
Robots need the ability to tackle problems of movement generalization in variable task states and complex environments. Dynamical movement primitives can effectively endow robots with humanoid characteristics. However, when the initial state of a task changes, the trajectories generalized by dynamical movement primitives cannot retain the shape features of the demonstration, resulting in a loss of imitation quality. In this article, a modified dynamical movement primitive based on Euclidean transformation is proposed to solve this problem. It transforms the initial task state to a virtual situation similar to the demonstration, utilizes the dynamical movement primitive method to realize movement generalization, and finally transforms the movement back to the real situation. In addition, obstacle information is added to the Euclidean-transformation-based dynamical movement primitives framework to endow robots with the ability of obstacle avoidance. The normalized root-mean-square error is proposed as the criterion to evaluate imitation similarity. The feasibility of this method is verified through writing letters and wiping a whiteboard in two-dimensional tasks, and stirring a mixture in a three-dimensional task. The results show that the similarity of movement imitation with the proposed method is higher than with dynamical movement primitives when the initial state changes. Meanwhile, Euclidean-transformation-based dynamical movement primitives can still largely retain the shape features of the demonstration while avoiding obstacles in an unstructured environment.
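A minimal two-dimensional sketch of the Euclidean-transformation idea described in this abstract (illustrative only; dmp_rollout is an assumed black-box DMP generator returning an (N, 2) trajectory, and all poses are NumPy arrays):

import numpy as np

def generalize_with_euclidean_transform(dmp_rollout, start_new, goal_new,
                                        start_demo, goal_demo):
    # 1) Rotate/translate the new start-goal pair into a virtual situation
    #    aligned with the demonstration, 2) roll out the DMP there,
    #    3) map the resulting trajectory back to the real situation.
    v_new, v_demo = goal_new - start_new, goal_demo - start_demo
    ang = np.arctan2(v_demo[1], v_demo[0]) - np.arctan2(v_new[1], v_new[0])
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])        # new frame -> demo frame
    goal_virtual = start_demo + R @ v_new
    traj_virtual = dmp_rollout(start_demo, goal_virtual)  # (N, 2) trajectory
    # Inverse map: p_real = start_new + R^T (p_virtual - start_demo)
    return (R.T @ (traj_virtual - start_demo).T).T + start_new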
... In the past decades, several LfD-based approaches have been developed, such as DMPs [12], [9], Probabilistic Movement Primitives (ProMP) [13], Stable Dynamical Systems (SDS) [14], [15], Gaussian Mixture Models (GMM) and Task-Parameterized GMM (TP-GMM) [16], and Kernelized Movement Primitives (KMP) [17]. In many previous works, quaternion trajectories are learned and adapted without considering the unit norm constraint (e.g., orientation DMP [18] and TP-GMM [19]), leading to improper quaternions and hence requiring an additional re-normalization. ...
Conference Paper
Full-text available
Imitation learning techniques have been used as a way to transfer skills to robots. Among them, dynamic movement primitives (DMPs) have been widely exploited as an effective and efficient technique to learn and reproduce complex discrete and periodic skills. While DMPs have been properly formulated for learning point-to-point movements for both translation and orientation, periodic DMPs lack a formulation for learning orientation. To address this gap, we propose a novel DMP formulation that enables encoding of periodic orientation trajectories. Within this formulation we develop two approaches: a Riemannian metric-based projection approach and a unit quaternion-based periodic DMP. Both formulations exploit unit quaternions to represent orientation. The first exploits the properties of Riemannian manifolds to work in the tangent space of the unit sphere, while the second directly encodes the unit quaternion trajectory while guaranteeing the unitary norm of the generated quaternions. We validated the technical aspects of the proposed methods in simulation. Then we performed experiments on a real robot to execute daily tasks that involve periodic orientation changes (i.e., surface polishing/wiping and liquid mixing by shaking).