Fig 1 - uploaded by Rachid Alami
The costs of the Safety function mapped around the human at 0.05 m resolution. This function returns decreasing costs as the distance to the human increases.

Source publication
Conference Paper
Full-text available
Robots' interaction with humans raises new issues for geometrical reasoning where the humans must be taken explicitly into account. We claim that a human-aware motion system must not only elaborate safe robot motions, but also synthesize good, socially acceptable and legible movement. This paper focuses on a manipulation planner and a placement mec...

Contexts in source publication

Context 1
... reaching its destination, the robot uses the Human Aware Manipulation Planner to plan its final body motion (figure 11). The produced path is not only collision-free but also comfortable for the human: his gaze direction, his arm comfort and his safety are taken into account in the planning loop. ...
Context 2
... reaching its destination, the robot uses the Human Aware Manipulation Planner to plan its final body motion (figure 11). The produced path is not only collision-free but also comfortable for the human: his gaze direction, his arm comfort and his safety are taken into account in the planning loop. Fig. 10. After seeing where the human is, the Perspective Placement mechanism finds another configuration that will be suitable for ...
Context 3
... perspective taking to make decisions about human reasoning; they probe how human-human interaction helps when applied to human-robot interaction. [31] and [30] also work on perspective taking in a 3D simulated environment in order to help the robot with its learning process. Although the correct placement of the robot relative to the human is very important, it is not enough to maintain safety and comfort in manipulation scenarios where the robot must move its structures very close to the human. Besides new designs [3][9] that ensure safety at the physical level, at the software level most approaches are based on the minimization of an index related to various safety metrics. As examples of these methods, we can cite Kulic [7] and Ikuta [8], where the level of danger is estimated and minimized based on factors influencing the impact force during a human-robot collision, such as the effective robot inertia, the relative velocity and the distance between the robot and the human. With these approaches, physical safety is assured by avoiding collisions with the human and by minimizing the intensity of a possible impact in case of a collision. Although several authors propose motion planning or reactive schemes considering humans, no contribution tackles the problem globally, as we propose to do. III. HUMAN AWARE MANIPULATION PLANNER User studies with humans and robots [16][1][20] provide a number of properties and unwritten rules/protocols [15] of human-robot and human-human interactions. Only very limited work considers such properties, and often in an ad hoc manner. We describe below a new technique that integrates such additional constraints in a more generic way. Our approach is based on separating the whole problem of manipulation, e.g. a robot giving an object to the human, into 3 stages.
Each of these stages produces a result and passes it to the next stage: • the spatial coordinates of the point where the object will be handed to the human, • the path that the object will follow from its resting position to the human's hand, as if it were a free-flying object, • the path of the whole body of the robot, along with its posture, for the manipulation. All these items must be calculated by taking the human partner explicitly into account to maintain his safety and his comfort. Not only the kinematic structure of the human, but also his field of view, his accessibility, his preferences and his state must be reasoned about in the planning loop in order to obtain a safe and comfortable interaction. At each of the steps stated above, the planner ensures the human's safety by avoiding any possible collision between the robot and the human. One of the key points in manipulation planning is deciding where the robot, the human and the object meet. In classical motion planners, this decision is made implicitly by reasoning only about the robot's and the object's structure. The absence of the human is compensated by letting him adapt himself to the robot's motion, thus placing a heavier burden on the human and making the motions of the robot less predictable. We present 3 properties of the interaction that help us find safe and comfortable coordinates at which the robot will hand the object to the human. Each property is represented by a cost function f(x, y, z, C_H, Pref_H) for spatial coordinates (x, y, z), a given human configuration C_H and his preferences Pref_H when handling an object (e.g. left/right-handedness, sitting/standing, etc.). This function calculates the cost of a given point around the human by taking into account his preferences, his accessibility, his field of view and his state.
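As a concrete illustration of this grid-based reasoning, the sketch below enumerates candidate points around the human at the 0.05 m resolution used in the figures and keeps the lowest-cost one. All names and parameters here are illustrative assumptions, not the paper's implementation, and a single distance-based placeholder stands in for the weighted combination of the property costs.

```python
import math

# Illustrative sketch (names and parameters are assumptions, not from the
# paper): search a cubic grid centred on the human, sampled at the 0.05 m
# resolution of figures 1 and 2, for the lowest-cost object-exchange point.

RESOLUTION = 0.05  # metres between neighbouring grid points


def placeholder_cost(point, human):
    """Stand-in combined cost: lowest on a 0.5 m shell around the human.

    In the paper this would be a weighted sum of the Safety, Visibility
    and Arm Comfort property costs evaluated at `point`.
    """
    dist = math.sqrt(sum((p - h) ** 2 for p, h in zip(point, human)))
    return abs(dist - 0.5)


def best_exchange_point(human, half_extent=1.0, cost=placeholder_cost):
    """Enumerate grid points around the human; return the argmin point."""
    steps = round(half_extent / RESOLUTION)
    best_point, best_cost = None, float("inf")
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            for k in range(-steps, steps + 1):
                p = (human[0] + i * RESOLUTION,
                     human[1] + j * RESOLUTION,
                     human[2] + k * RESOLUTION)
                c = cost(p, human)
                if c < best_cost:
                    best_point, best_cost = p, c
    return best_point, best_cost


# Human standing at the origin with chest height ~1 m (arbitrary example).
point, cost = best_exchange_point((0.0, 0.0, 1.0))
```

A real planner would prune points in collision and combine the three property costs with task-dependent weights; the exhaustive loop above is only meant to show the shape of the search.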
We will now explain the structure of the “Safety”, “Visibility” and “Arm Comfort” properties, their attached functions and how we use them to find the object-exchange coordinates. Safety: The first of the 3 properties is “safety”. The notion of safety is an absolute need of any human-robot interaction scenario. It gains even higher importance in manipulation scenarios, where the robot places itself in close proximity to the human. Since the farther the robot is from the human, the safer the interaction is, the safety cost function f_safety(x, y, z, C_H, Pref_H) is a decreasing function of the distance between the human H and the object coordinates (x, y, z). Pref_H contains preferences for the function's behavior according to human states such as sitting, standing, etc. The cost of each coordinate (x, y, z) around the human is inversely proportional to its distance to the human. When the distance between the human and a point, D(H, (x_i, y_j, z_k)), is greater than the distance to another point, D(H, (x_l, y_m, z_n)), we have f(x_i, y_j, z_k) < f(x_l, y_m, z_n). Since safety concerns lose their importance when the object-exchange point is far away from the human, the cost becomes null beyond some maximal distance. The values of the Safety function are illustrated in figure 1, with 0.05 m between neighboring points. It is clear that, from a safety point of view, the farther the object is placed, the farther the robot will be placed, and so the safer the interaction becomes. Visibility: The visibility of the object is an important property of HR manipulation scenarios. The robot has to choose a place for the object where it will be as visible as possible to the human. We represent this property with a visibility cost function f_visibility(x, y, z, C_H, Pref_H). This function alone represents the effort required by the human head and body to get the object into the human's field of view.
With a given eye-motion tolerance, a point (x, y, z) of minimum cost is situated in the cone directly in front of the human's gaze direction. For this property, Pref_H can contain the eye-motion tolerance of the human as well as any preferences or disabilities that he may have. The values of the Visibility function are shown in figure 2, with 0.05 m between neighboring points. We can see that points in the direction of the human's gaze have lower costs. The more the human has to turn his head to see a point, the higher the cost. Arm Comfort: The last property of the placement of the object is the human's arm configuration when he tries to reach the object. The human arm should be in a comfortable configuration when reaching for the object. This property is also reflected by a cost function f_armComfort(x, y, z, C_H, Pref_H), which returns costs representing how comfortable it is for the human arm to reach a given point (x, y, z). In this case, Pref_H can contain left/right-handedness as well as any other preference about which arm the human favors. The inverse kinematics of the human arm is solved by the IKAN [12] algorithm, which returns a comfortable configuration among the other possible ones arising from the redundancy of the arm structure. For a given arm configuration, the cost of the Arm Comfort property is calculated by a formula where θ_j is the joint angle of the jth joint, n is the number of arm joints, and θ is the angle of the joint in the rest ...
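A minimal sketch of the three property costs follows. The paper's exact functions are not reproduced here, so the linear safety fall-off, the angular visibility cost, and the squared-deviation arm-comfort formula (a plausible reading of the truncated equation above) are stated assumptions, as are all names, parameters and the cutoff distance.

```python
import math

# Hedged sketch, not the paper's implementation: the fall-off shapes,
# names and parameters below are illustrative assumptions.

MAX_SAFETY_DIST = 2.0  # assumed cutoff beyond which the safety cost is null


def safety_cost(point, human_pos, max_dist=MAX_SAFETY_DIST):
    """Decreasing with distance to the human; zero beyond max_dist."""
    d = math.dist(point, human_pos)
    if d >= max_dist:
        return 0.0
    return (max_dist - d) / max_dist  # 1 at the human, 0 at the cutoff


def visibility_cost(point, eye_pos, gaze_dir):
    """Head-turning effort: angle between the gaze direction and the point."""
    v = [p - e for p, e in zip(point, eye_pos)]
    v_norm = math.sqrt(sum(x * x for x in v)) or 1.0
    g_norm = math.sqrt(sum(x * x for x in gaze_dir))
    cos_a = sum(a * b for a, b in zip(v, gaze_dir)) / (v_norm * g_norm)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # radians of head rotation


def arm_comfort_cost(joint_angles, rest_angles):
    """Sum of squared deviations of each arm joint from its rest angle
    (a plausible reading of the truncated formula in the text)."""
    return sum((t - r) ** 2 for t, r in zip(joint_angles, rest_angles))
```

In this reading, points at the gaze direction get a visibility cost of zero, safety costs vanish beyond the cutoff, and a fully rested arm has zero arm-comfort cost; the planner would then minimize a weighted combination of the three.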

Similar publications

Article
Full-text available
A new motion planning approach is proposed for a holonomic robot (with 2 degrees of freedom in translation and 1 in rotation) under geometric uncertainty constraints. The constraints take into account the position, orientation and motion of the robot. The original contribution of the approach is the resolution method and the modelling of uncertaint...
Article
Full-text available
In a collaborative scenario, the communication between humans and robots is a fundamental aspect to achieve good efficiency and ergonomics in the task execution. A lot of research has been made related to enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers....
Conference Paper
Full-text available
This paper focuses on development of a motion planning strategy for car-like vehicles in dynamic urban-like scenarios. The strategy can be summarized as a search for a collision-free trajectory among linearly moving obstacles applying rapidly-exploring random trees (RRT) and B-splines. Collision avoidance is based on geometric search in transformed...
Article
Full-text available
In this paper, a new fast algorithm for path planning and a collision prediction framework for two dimensional dynamically changing environments are introduced. The method is called Time Distance (TD) and benefits from the space-time space idea. First, the TD concept is defined as the time interval that must be spent in order for an object to reach...
Article
Full-text available
The objective of this paper is to present recent developments in nonholonomic control systems, mainly mobile robots with Pfaffians constraints, so as to give an overview to the beginners in the area. The main tools used to model, to analyze, to make motion planning and trajectory tracking and to guarantee stabilization in an equilibrium point are p...

Citations

... We emphasize at this point that κ_dist is inversely proportional to the so-called safety cost function introduced in [25]. The definition of κ_danger is based on the collision danger introduced in [28]. ...
Preprint
Full-text available
We present our approach for the development, validation and deployment of a data-driven decision-making function for the automated control of a vehicle. The decision-making function, based on an artificial neural network, is trained to steer the mobile robot SPIDER towards a predefined, static path to a target point while avoiding collisions with obstacles along the path. The training is conducted by means of proximal policy optimisation (PPO), a state-of-the-art algorithm from the field of reinforcement learning. The resulting controller is validated using KPIs quantifying its capability to follow a given path and its reactivity to perceived obstacles along the path. The corresponding tests are carried out in the training environment. Additionally, the tests are also to be performed in the robotics simulator Gazebo and in real-world scenarios. For the latter, the controller is deployed on an FPGA-based development platform, the FRACTAL platform, and integrated into the SPIDER software stack.
... The proposed framework was tested in several scenarios [Sisbot 2007a] to show its effectiveness. This framework was later extended to develop a path planner for complete motion planning tasks, including human-aware manipulation [Sisbot 2007b]. Similar to the case of navigation, human-aware manipulation takes safety, visibility and human comfort into account while planning the path. ...
... Similar to the case of navigation, human-aware manipulation takes safety, visibility and human comfort into account while planning the path. A placement position is found based on the spatial location of the human using perspective placement [Sisbot 2007b], and then the HAN system plans a path to reach this position. Then the robot moves and places itself there before finally planning the manipulation to hand over the object. ...
... The mutual enhancement between multiple modalities inspires our multimodal learning [22]. Multimodal learning in robotics does not only provide more information to spatial learning [23], but also explores the association between different perceptions [18]. Spatial relational learning falls under the multimodal task of translation, namely, using vision, haptic, and/or audio data to ground the natural language which describes the spatial relations. ...
Preprint
Full-text available
For robots to operate in a three-dimensional world and interact with humans, learning spatial relationships among objects in the surroundings is necessary. Reasoning about the state of the world requires inputs from many different sensory modalities, including vision ($V$) and haptics ($H$). We examine the problem of desk organization: learning how humans spatially position different objects on a planar surface according to organizational ''preference''. We model this problem by examining how humans position objects given multiple features received from the vision and haptic modalities. However, organizational habits vary greatly between people, both in structure and adherence. To deal with user organizational preferences, we add an additional modality, ''utility'' ($U$), which informs on a particular human's perceived usefulness of a given object. Models were trained as generalized (over many different people) or tailored (per person). We use two types of models: random forests, which focus on precise multi-task classification, and Markov logic networks, which provide an easily interpretable insight into organizational habits. The models were applied to both synthetic data, which proved to be learnable when using fixed organizational constraints, and human-study data, on which the random forest achieved over 90% accuracy. Over all combinations of $\{H, U, V\}$ modalities, $UV$ and $HUV$ were the most informative for organization. In a follow-up study, we gauged participants' preference for desk organizations produced by a generalized random forest model vs. by a random model. On average, participants rated the random forest models as 4.15 on a 5-point Likert scale, compared to 1.84 for the random model.
... This cost takes into account the worker's arm length and posture to determine the comfortable distance of the delivery position from the centroid of the human body. The concept was first introduced by Sisbot et al. (2007b) and was then refined by Mainprice et al. (2011). We define the arm comfort as a function of joint angles of the human arm. ...
... Frontal approach versus lateral approach by the robot towards the human receiver is discussed with some contrast in [193], [194]. Such considerations are further used to develop the planner in [191], which is composed of three components: spatial reasoning to account for the human receiver (perspective placement [195]), path planning optimising over costs that account for safety, visibility and human arm comfort (humanaware manipulation planner [196]), and trajectory control to ensure minimum-jerk motions at the end effector (soft motion trajectory planning [197]). Humans minimise jerk in order to realise well-behaved trajectories for arm movements [198]. ...
Preprint
Full-text available
This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action where an agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including initiating implicit agreement with respect to the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by the aforementioned events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and the robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we briefly discuss also the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers to the state of the art of robotic assistants, and identify the major areas of improvement for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used in order to enable a fair comparison among the approaches.
... During trajectory execution, visibility of the robot's end effector is an essential factor for human comfort [18]. If the robot is out of the field of view of the human, the human may be distracted and try to locate it, thus decreasing both human comfort and task efficiency. ...
Preprint
We address the problem of adapting robot trajectories to improve safety, comfort, and efficiency in human-robot collaborative tasks. To this end, we propose CoMOTO, a trajectory optimization framework that utilizes stochastic motion prediction models to anticipate the human's motion and adapt the robot's joint trajectory accordingly. We design a multi-objective cost function that simultaneously optimizes for i) separation distance, ii) visibility of the end-effector, iii) legibility, iv) efficiency, and v) smoothness. We evaluate CoMOTO against three existing methods for robot trajectory generation when in close proximity to humans. Our experimental results indicate that our approach consistently outperforms existing methods over a combined set of safety, comfort, and efficiency metrics.
... Cakmak et al. [18] presented a user study on human preferences of robot hand-over configurations, using a simulated kinematics model of humans to collect information on user preferences. Moreover, the spatial reasoning of users, such as user visibility and arm comfort, was considered to be an important factor in [19] for object hand-over tasks. Understanding human behaviours is also helpful in learning user models. ...
Article
Full-text available
Assistive robots in home environments are steadily increasing in popularity. Due to significant variabilities in human behaviour, as well as physical characteristics and individual preferences, personalising assistance poses a challenging problem. In this paper, we focus on an assistive dressing task that involves physical contact with a human’s upper body, in which the goal is to improve the comfort level of the individual. Two aspects are considered to be significant in improving a user’s comfort level: having more natural postures and exerting less effort. However, a dressing path that fulfils these two criteria may not be found at one time. Therefore, we propose a user modelling method that combines vision and force data to enable the robot to search for an optimised dressing path for each user and improve as the human-robot interaction progresses. We compare the proposed method against two single-modality state-of-the-art user modelling methods designed for personalised assistive dressing by user studies (31 subjects). Experimental results show that the proposed method provides personalised assistance that results in more natural postures and less effort for human users.
... Although solution optimization is not a direct objective of this work, XXL often arrives at solution paths that do not exhibit the superfluity seen in Figure 1. These paths are beneficial not only for R2 operating autonomously onboard the ISS, but in a much broader class of situations where robots operate near or even with people and where the paths executed by the robot must be intuitive (Sisbot et al., 2007; Dragan et al., 2013). ...
Article
Full-text available
Sampling-based algorithms are known for their ability to effectively compute paths for high-dimensional robots in relatively short times. The same algorithms, however, are also notorious for poor-quality solution paths, particularly as the dimensionality of the system grows. This work proposes a new probabilistically complete sampling-based algorithm, XXL, specially designed to plan the motions of high-dimensional mobile manipulators and related platforms. Using a novel sampling and connection strategy that guides a set of points mapped on the robot through the workspace, XXL scales to realistic manipulator platforms with dozens of joints by focusing the search of the robot’s configuration space to specific degrees of freedom that affect motion in particular portions of the workspace. Simulated planning scenarios with the Robonaut2 platform and planar kinematic chains confirm that XXL exhibits competitive solution times relative to many existing works while obtaining execution-quality solution paths. Solutions from XXL are of comparable quality to cost-aware methods even though XXL does not explicitly optimize over any particular criteria, and are computed in an order of magnitude less time. Furthermore, observations about the performance of sampling-based algorithms on high-dimensional manipulator planning problems are presented that reveal a cautionary tale regarding two popular guiding heuristics used in these algorithms, indicating that a nearly random search may outperform the state-of-the-art when defining such heuristics is known to be difficult.
... We break down the main problem into problems of lower dimensionality: we distinguish two subsets of the main problem, (1) navigation between handover places, and (2) the handovers themselves. However, we take into consideration the influence of the decision of where to go to perform the handover on the quality of the handover itself, as pointed out by [Sisbot 2007a]; indeed, we search for a solution optimizing the quality of the whole solution to the transport problem. ...
Thesis
Full-text available
When interacting with humans, robotic systems shall behave in compliance to some of our socio-cultural rules, and every component of the robot have to take them into account. When deciding an action to perform and how to perform it, the system then needs to communicate pertinent contextual information to its components so they can plan respecting these rules. It is also essential for such robot to ensure a smooth coordination with its human partners. We humans use many cues for synchronization like gaze, legible motions or speech. We are good at inferring what actions are available to our partner, helping us to get an idea of what others are going to do (or what they should do) to better plan for our own actions. Enabling the robot with such capacities is key in the domain of human-robot interaction. This thesis presents our approach to solve two tasks where humans and robots collaborate deeply: a transport problem where multiple robots and humans need to or can handover an object to bring it from one place to another, and a guiding task where the robot helps the humans to orient themselves using speech, navigation and deictic gestures (pointing). We present our implementation of components and their articulation in a architecture where contextual information is transmitted from higher levels decision components to lower ones, which use it to adapt. Our planners also plan for the human actions, as in a multi-robot system: this allows to not be waiting for humans to act, but rather be proactive in the proposal of a solution, and try to predict the actions they will take.