The coordinate systems of the robotic intelligent grasping system consist of robot frame (RF), parts frame (PF), robot gripper frame (GF) and camera frame (CF).

Source publication
Article
Full-text available
Pose estimation is a classic problem in image processing, whose purpose is to compare or fuse images acquired under different conditions. In recent years, many studies have focused on pose estimation algorithms, but many challenges remain, such as efficiency, complexity, and accuracy across various targets and condit...

Contexts in source publication

Context 1
... shown in Figure 1, before the robot grasps the part, it moves the camera to measure several feature points on the part and then determines the part's pose. Parts vary in size and shape: small parts may be less than 0.5 m × 0.5 m, while large parts may exceed 1.5 m × 1.5 m. ...
Context 2
... When the target rests at the initial position, the grasping path is obtained through robot teaching. As shown in Figure 1, the transformation from the part frame at the initial position (PF0) to the camera frame at the first measurement position (CF1), denoted ^CF1_PF0 T, can be given as follows: ...
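The chained frame transformations described in this excerpt can be sketched as homogeneous-matrix composition. The frame names (RF, PF0, CF1) follow the excerpt, but the numeric poses below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_transform(T):
    """Closed-form inverse of a rigid transform: [R^T | -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    return make_transform(R.T, -R.T @ t)

# Illustrative poses (identity rotations, made-up translations):
# camera pose at the first measurement position, and part pose at the
# initial position, both expressed in the robot frame (RF).
T_RF_CF1 = make_transform(np.eye(3), np.array([1.0, 0.2, 0.5]))
T_RF_PF0 = make_transform(np.eye(3), np.array([1.5, 0.0, 0.0]))

# Transform from the part frame at the initial position (PF0)
# to the camera frame at the first measurement position (CF1).
T_CF1_PF0 = invert_transform(T_RF_CF1) @ T_RF_PF0
```

Composing the inverse of the camera's pose with the part's pose is the standard way to re-express one frame in another; with real data the rotations would come from the robot controller and the camera calibration.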
Context 3
... front floor of automobile is used as the test part. The floor part is a large-size automobile part and the cameras have to move to two different positions to cover it ( Figure 10). There are four feature points, two of which are measured in each view (Figure 11). ...
Context 4
... floor part is a large-size automobile part and the cameras have to move to two different positions to cover it ( Figure 10). There are four feature points, two of which are measured in each view (Figure 11). ...
Context 5
... this basis, the vision-based grasping system has been used in intelligent automobile manufacturing by reconfiguring the traditional production line. Through integration with the production line (see Figure 12), manufacturing can be accomplished intelligently, quickly, and accurately with the grasping system. Table 1. ...
Context 6
... estimation errors (rotation and translation) compared between the proposed method and robot teaching (taken as ground truth). Figure 12. Two different stations in the automobile production line with the intelligent grasping system: (a) the first station; (b) the second station. ...

Similar publications

Article
Full-text available
Nowadays, image processing and computer vision detection technology offer clear advantages and are applied across many industries; they are also intelligent and fast. With the rapid development of science and technology, computer vision detection technology is widely used, and at the same time people are paying mo...

Citations

... In addition, among robotic assembly studies, there are many assembly scenarios, such as peg-in-hole assembly (Pauli et al., 2001; Yang et al., 2020), chute assembly (Peternel et al., 2018), bolt assembly (Laursen et al., 2015), etc. Peg-in-hole assembly is the most common of these tasks and the most widely studied. ...
Article
Full-text available
Accurately estimating the 6DoF pose of objects during robot grasping is a common problem in robotics. However, the accuracy of the estimated pose can be compromised during or after grasping, when the gripper collides with other parts or occludes the view. Many approaches to improving pose estimation use multi-view methods that capture RGB images from multiple cameras and fuse the data. While effective, these methods can be complex and costly to implement. In this paper, we present a Single-Camera Multi-View (SCMV) method that utilizes just one fixed monocular camera and the active motion of the robotic manipulator to capture multi-view RGB image sequences. Our method achieves more accurate 6DoF pose estimation results. We further create a new T-LESS-GRASP-MV dataset specifically for validating the robustness of our approach. Experiments show that the proposed approach outperforms many other public algorithms by a large margin. Quantitative experiments on a real robot manipulator demonstrate the high pose estimation accuracy of our method. Finally, the robustness of the proposed approach is demonstrated by successfully completing an assembly task on a real robot platform, achieving an assembly success rate of 80%.
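Multi-view approaches like the one above ultimately need to fuse several per-view pose estimates into one. A common generic recipe (not the SCMV paper's specific algorithm) is to average the translations and take a chordal mean of the rotations; the sketch below assumes all poses have already been mapped into a common base frame:

```python
import numpy as np

def average_rotations(Rs):
    """Chordal mean of rotation matrices: project the element-wise
    mean back onto SO(3) via SVD."""
    M = np.mean(Rs, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

def fuse_poses(poses):
    """Fuse a list of (R, t) object poses expressed in a common base frame."""
    Rs = np.stack([R for R, _ in poses])
    ts = np.stack([t for _, t in poses])
    return average_rotations(Rs), ts.mean(axis=0)
```

The chordal mean is a standard closed-form rotation average; for views with very different reliabilities, a weighted mean or outlier rejection step would be layered on top.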
... The assembly of electronic components, such as automotive electrical parts, as well as assembly surveillance for educational purposes, presents important challenges. Many assembly-surveillance approaches identify parts using features such as colour and specific shape, or use artificial-intelligence algorithms such as CNNs (convolutional neural networks), in contrast to manual assembly tasks on a production line [11][12][13]. ...
Article
Full-text available
This research proposes a computer vision algorithm to track a manual assembly task. Manual assembly of electronic parts is used widely in the automotive industry. Tracking the assembly phases can also serve learning purposes, as shown in this research by checking the assembly of an electronic educational board. The algorithms used to detect the different components are a CNN (Convolutional Neural Network) and blob detection.
... Common 2D image segmentation algorithms include the binarisation method, watershed method [19], and edge method [20]. Figure 6 shows the results of the test image segmentation process using the edge method. ...
Article
Full-text available
At present, most handling industrial robots on production lines are operated by teaching or preprogramming, which limits the flexibility of the production line and does not meet the flexible production requirements of material handling systems. This study proposes a solution based on adding computer binocular vision to a five-axis industrial robot system. A simple, high-precision binocular camera calibration method is proposed, the kinematics of the five-axis robot are analyzed, and target positioning is realized; communication between the upper and lower computers is realized through Ethernet. According to the specific target, a grasping scheme for the gripper was designed, and the control software was developed with two schemes: visual control by operating specific buttons on the control panel, and visual control by executing a macro-variable program, finally realizing the joint fusion of multisensor data and binocular vision.
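Target positioning with a calibrated binocular (stereo) camera pair, as in the abstract above, reduces to triangulating a matched pixel pair from two views. A minimal linear (DLT) triangulation sketch, with illustrative projection matrices rather than any calibration from that paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized pixel coords (u, v)."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize

# Illustrative setup: unit focal length, second camera shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
```

With a real rig, P1 and P2 would come from the stereo calibration (intrinsics times extrinsics), and the pixel pair from feature matching; OpenCV's `cv2.triangulatePoints` implements the same idea.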
... For instance, [29] proposed a visual guidance system for a robotic peg-in-hole application consisting of four cameras: two in an eye-to-hand configuration for localization of the robotic tool, while the other two are in an eye-in-hand configuration and are used to align the tool with reference holes. A multi-view approach was presented in [30] for localizing target objects in a pick-and-place framework with sub-millimeter accuracy. The versatility of vision systems has also enabled other uses in navigation, guidance, and calibration systems for mobile industrial robots [31,32,33]. ...
Preprint
Full-text available
The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments. However, the sensitivity of conventional vision sensors to lighting conditions and high-speed motion limits the reliability and work-rate of production lines. Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency, and wide dynamic range. In this paper, and for the first time, we propose a novel neuromorphic-vision-based controller for faster and more reliable machining operations, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy. Our proposed system localizes the target workpiece in 3D using two perception stages that we developed specifically for the asynchronous output of neuromorphic cameras. The first stage performs multi-view reconstruction for an initial estimate of the workpiece's pose, and the second stage refines this estimate for a local region of the workpiece using circular hole detection. The robot then precisely positions the drilling end-effector and drills the target holes on the workpiece using a combined position-based and image-based visual servoing approach. The proposed solution is validated experimentally for drilling nutplate holes on workpieces placed arbitrarily in an unstructured environment with uncontrolled lighting. Experimental results prove the effectiveness of our solution, with average positional errors of less than 0.1 mm, and demonstrate that the use of neuromorphic vision overcomes the lighting and speed limitations of conventional cameras.
Article
The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments. However, the sensitivity of conventional vision sensors to lighting conditions and high-speed motion sets a limitation on the reliability and work-rate of production lines. Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency, and wide dynamic range. In this paper and for the first time, we propose a novel neuromorphic vision based controller for robotic machining applications to enable faster and more reliable operation, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy. Our proposed system localizes the target workpiece in 3D using two perception stages that we developed specifically for the asynchronous output of neuromorphic cameras. The first stage performs multi-view reconstruction for an initial estimate of the workpiece’s pose, and the second stage refines this estimate for a local region of the workpiece using circular hole detection. The robot then precisely positions the drilling end-effector and drills the target holes on the workpiece using a combined position-based and image-based visual servoing approach. The proposed solution is validated experimentally for drilling nutplate holes on workpieces placed arbitrarily in an unstructured environment with uncontrolled lighting. Experimental results prove the effectiveness of our solution with maximum positional errors of less than 0.2 mm, and demonstrate that the use of neuromorphic vision overcomes the lighting and speed limitations of conventional cameras. 
The findings of this paper identify neuromorphic vision as a promising technology that can expedite and robustify robotic manufacturing processes in line with the requirements of the fourth industrial revolution.