Figure - available from: Journal of Sensors
The vision system of the real robot (a) and the vision system on the control software (b).

Source publication
One of the persistent problems with industrial robots is accurately locating the pose of the end effector. Over the years, many solutions have been studied, including static calibration and dynamic positioning. This paper presents a novel approach to pose estimation for a Hexa parallel robot. The vision system uses three simple color featu...

Similar publications

In the field of high-accuracy dual-axis rotational inertial navigation systems (RINS), the calibration accuracy of the gyroscopes and accelerometers is of great importance. Although rotation modulation can suppress the navigation error caused by scale factor error and bias error in a static condition, it cannot suppress the scale factor errors thoro...

Citations

... A practical way to accomplish this is by placing visual markers on the robot's end effector and using the geometric model of the camera to associate the coordinates of the markers with their corresponding coordinates in camera space. This circumvents the need to know a priori the Jacobian transformation of the manipulator, which can be a complex task [29,30]. With the data set and the camera model, the estimation of the so-called vision parameters is performed. ...
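The marker-based association described in this excerpt — relating a marker's 3D coordinates to its camera-space coordinates through the camera's geometric model — can be sketched with a standard pinhole projection. All numeric values below (intrinsics, camera pose, marker position) are illustrative assumptions, not parameters from the cited works:

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, R, t, p_world):
    """Project a 3D marker point into pixel coordinates with a pinhole model."""
    p_cam = R @ p_world + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection (homogeneous)
    return np.array([u / w, v / w])  # normalize by depth

R = np.eye(3)                        # assume camera axes aligned with world axes
t = np.array([0.0, 0.0, 0.5])        # assumed camera offset along the optical axis (m)
marker = np.array([0.1, -0.05, 1.0]) # assumed marker position on the end effector (m)

pixel = project(K, R, t, marker)     # camera-space coordinates of the marker
```

Collecting many such (3D position, pixel) pairs is what provides the data set from which the vision parameters mentioned in the excerpt can be estimated, without requiring the manipulator's Jacobian.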
... Some vision techniques have been reported in the literature to guide parallel robots through various tasks. In [29], a vision system is used to estimate the pose of the Hexa parallel robot, while [31] uses a computer vision system to develop an obstacle collision detection method for the same robot; no control description is included in either work. In [32], computer vision is used to identify the geometric parameters of a Delta parallel architecture. ...
In this paper, a control approach for reconfigurable parallel robots is designed. Based on it, controls in the vision-sensor, 3D and joint spaces are designed and implemented in target tracking tasks on a novel reconfigurable delta-type parallel robot. No a priori information about the target trajectory is required. Robot reconfiguration can be used to overcome some of the limitations of parallel robots, such as a small relative workspace or multiple singularities, at the cost of increasing the complexity of the manipulator, making its control design even more challenging. No general control methodology exists for reconfigurable parallel robots. Tracking objects with unknown trajectories is a challenging task required in many applications. Sensor-based robot control has been actively used for this type of task. However, it cannot be straightforwardly extended to reconfigurable parallel manipulators. The developed vision-sensor space control is inspired by, and can be seen as an extension of, the Velocity Linear Camera Model–Camera Space Manipulation (VLCM-CSM) methodology. Several experiments were carried out on a reconfigurable delta-type parallel robot. An average positioning error of 0.6 mm was obtained for static objectives. Tracking errors of 2.5 mm, 3.9 mm and 11.5 mm were obtained for targets moving along a linear trajectory at speeds of 6.5, 9.3 and 12.7 cm/s, respectively. The control cycle time was 16 ms. These results validate the proposed approach and improve upon previous works for non-reconfigurable robots.
... Traditional control strategies are hard to implement on parallel robots due to the multiple solutions of the robot's direct kinematic problem and to modeling errors in the Jacobian resulting from inaccurate geometric calibration. Research on parallel robots has focused on mitigating the effects of these drawbacks on their precision and ease of control [5][6][7][8]. ...
It is a challenging task to track objects moving along an unknown trajectory. Conventional model-based controllers require detailed knowledge of a robot’s kinematics and the target’s trajectory. Tracking precision heavily relies on kinematics to infer the trajectory. Control implementation in parallel robots is especially difficult due to their complex kinematics. Vision-based controllers are robust to uncertainties of a robot’s kinematic model since they can correct end-point trajectories as error estimates become available. Robustness is guaranteed by taking the vision sensor’s model into account when designing the control law. All camera space manipulation (CSM) models in the literature are position-based, where the mapping between the end effector position in the Cartesian space and sensor space is established. Such models are not appropriate for tracking moving targets because the relationship between the target and the end effector is a fixed point. The present work builds upon the literature by presenting a novel CSM velocity-based control that establishes a relationship between a movable trajectory and the end effector position. Its efficacy is shown on a Delta-type parallel robot. Three types of experiments were performed: (a) static tracking (average error of 1.09 mm); (b) constant speed linear trajectory tracking—speeds of 7, 9.5, and 12 cm/s—(tracking errors of 8.89, 11.76, and 18.65 mm, respectively); (c) freehand trajectory tracking (max tracking errors of 11.79 mm during motion and max static positioning errors of 1.44 mm once the object stopped). The resulting control cycle time was 48 ms. The results obtained show a reduction in the tracking errors for this robot with respect to previously published control strategies.
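The camera-space manipulation idea running through these abstracts — fitting "vision parameters" that map end-effector positions to camera-space coordinates from observed samples — can be illustrated with an ordinary least-squares fit of an affine 3D-to-2D map. The affine form and the synthetic, noiseless data below are assumptions for the sketch, not the actual VLCM-CSM model from the papers:

```python
import numpy as np

# Synthetic training data: end-effector positions and the camera-space
# coordinates they would produce under an assumed ground-truth affine map.
rng = np.random.default_rng(0)
P = rng.uniform(-0.2, 0.2, size=(50, 3))      # end-effector positions (m)
A_true = np.array([[900.0,  10.0,  -5.0],
                   [ -8.0, 880.0,  12.0]])    # assumed 2x3 linear part
b_true = np.array([320.0, 240.0])             # assumed pixel offset
U = P @ A_true.T + b_true                     # noiseless camera-space samples

# Estimate the vision parameters by least squares over homogeneous inputs.
X = np.hstack([P, np.ones((len(P), 1))])      # design matrix [p, 1]
theta, *_ = np.linalg.lstsq(X, U, rcond=None) # 4x2 parameter matrix
A_est, b_est = theta[:3].T, theta[3]          # recovered linear part and offset
```

With real data, the samples would be noisy marker detections rather than exact projections, and the fit would be refreshed online as new error estimates become available, which is what gives the sensor-based controllers described above their robustness to kinematic model uncertainty.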