Visual positioning error without depth estimation: (a) translation error; (b) rotation error around the Z axis.

Source publication
Article
Full-text available
To solve the view visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth-adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. Firstly, a zoom control mechanism is introduced into the robot visual servoing system. It can dynamically adjust the cam...
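The abstract is truncated here, but the core idea it describes, using estimated depth to drive the zoom so the object's image size stays roughly constant, can be sketched from the pinhole model. Everything below (the function name, the proportional zoom law, the numbers) is an illustrative assumption, not the paper's actual mechanism:

```python
# Minimal sketch of a depth-adaptive zoom rule (an assumption, not the
# paper's actual law). Pinhole model: image size s ~ f * S / Z, so keeping
# s constant as the depth Z changes suggests scaling focal length with Z.

def adapt_focal_length(f_min, f_max, s_target, object_size, depth):
    """Return a focal length keeping the object's image size near s_target."""
    f = s_target * depth / object_size   # f = s * Z / S from the pinhole model
    return min(max(f, f_min), f_max)     # respect the zoom lens limits

# Example: a 0.2 m object at 1.5 m, desired 8 mm image size on the sensor.
print(adapt_focal_length(4e-3, 50e-3, 8e-3, 0.2, 1.5))  # -> 0.05 (saturated)
```

Clamping to the lens limits matters in practice: once the zoom saturates, only robot motion can restore the desired image size.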

Similar publications

Preprint
Full-text available
Most deep-learning-based depth and ego-motion networks have been designed for visible cameras. However, visible cameras heavily rely on the presence of an external light source, so it is challenging to use them under low-light conditions such as night scenes, tunnels, and other harsh conditions. A thermal camera is one solution to co...
Preprint
Full-text available
In the research area of human-robot interaction, automatically estimating the mass of a container manipulated by a person from visual information alone is a challenging task. The main challenges are occlusions, different filling materials, and lighting conditions. The mass of an object constitutes key information for the robot to cor...
Preprint
Full-text available
Estimating heart rate from video allows non-contact health monitoring with applications in patient care, human interaction, and sports. Existing work can robustly measure heart rate under some degree of motion by face tracking. However, this is not always possible in unconstrained settings, as the face might be occluded or even outside the camera....
Preprint
Full-text available
Video depth estimation is crucial in various applications, such as scene reconstruction and augmented reality. In contrast to the naive approach of estimating depth independently for each frame, a more sophisticated approach uses temporal information, thereby eliminating flickering and geometrical inconsistencies. We propose a consistent method for dense video depth...
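As a toy illustration of why temporal information removes flicker (this is not the preprint's method, which the truncated abstract does not detail), even an exponential moving average over per-frame depth maps suppresses independent per-frame noise:

```python
import numpy as np

# Toy illustration only: an exponential moving average over per-frame depth
# maps suppresses independent per-frame noise (flicker) in a static scene.

def smooth_depth_stream(depth_frames, alpha=0.8):
    """Blend each incoming depth map with the running estimate."""
    state = None
    for depth in depth_frames:
        state = depth if state is None else alpha * state + (1 - alpha) * depth
        yield state

# Noisy depth maps of a flat scene at 2.0 m: the smoothed error shrinks.
rng = np.random.default_rng(0)
noisy = 2.0 + 0.1 * rng.standard_normal((10, 4, 4))
for smoothed in smooth_depth_stream(noisy):
    pass
print(abs(smoothed - 2.0).mean())
```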
Article
Full-text available
Robot detection, recognition, positioning, and other applications require not only real-time video image information but also the distance from the target to the camera, that is, depth information. This paper proposes a method to automatically generate a depth map for any monocular camera based on RealSense camera data. By using this method, any current...
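A minimal sketch of the data-collection side such a method needs, aligned color/depth pairs from a RealSense that can then supervise depth prediction for an ordinary monocular camera, using the pyrealsense2 API; the paper's actual pipeline is not shown in the truncated abstract:

```python
import numpy as np
import pyrealsense2 as rs

# Sketch: capture one aligned RGB/depth pair from a RealSense camera. Such
# pairs can serve as supervision for a monocular depth predictor, which is
# the general idea the abstract describes.

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)   # map depth pixels onto the color image

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth
    color = np.asanyarray(frames.get_color_frame().get_data())  # HxWx3 BGR
    np.savez("rgbd_pair.npz", color=color, depth=depth)  # one training sample
finally:
    pipeline.stop()
```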

Citations

... The former employs, as visual features, the observed image parameters of geometric primitives (points, straight lines, ellipses, and cylinders) in image-based visual servoing (IBVS). [3][4][5] In position-based visual servoing (PBVS), [6][7][8] the geometric primitives are used to reconstruct the camera pose, which then serves as input for visual servoing. Both approaches subject the image stream to an ensemble of measurement processes, including image processing, image matching, and visual tracking steps, from which the visual features are determined. ...
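For concreteness, the point-feature case of IBVS referenced in [3]-[5] can be written down directly: the camera twist is computed from the image-feature error through the interaction matrix, in the classical formulation. The gain and point values below are illustrative:

```python
import numpy as np

# Classical IBVS for point features: the camera velocity is computed from
# the image-feature error via the interaction matrix. Depths Z assumed known.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Camera twist v = -lambda * L^+ * (s - s*), stacked over all points."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e   # 6-vector (vx, vy, vz, wx, wy, wz)

# Example: four points slightly offset from their desired positions.
pts = [(0.11, 0.1), (-0.1, 0.12), (-0.1, -0.1), (0.1, -0.09)]
des = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
print(ibvs_velocity(pts, des, depths=[1.0] * 4))
```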
Article
Full-text available
Image moments are global descriptors of an image and can be used to achieve control-decoupling properties in visual servoing. However, only a few methods completely decouple the control. This study introduces a novel closed-form camera pose estimation method based on the image moments of planar objects. Traditional position-based visual servoing estimates the pose of the camera relative to an object, whereas the proposed method directly estimates the pose of the initial camera relative to the desired camera. Because the estimation method relies on plane parameters, a plane-parameter estimation method based on 2D-rotation-, 2D-translation-, and scale-invariant moments is also proposed. A completely decoupled position-based visual servoing control scheme was then constructed from these two estimation methods. The new scheme exhibited asymptotic stability when the object plane was in the camera field of view. Simulation results demonstrated the effectiveness of the two estimation methods and the advantages of the visual servo control scheme compared with the classical method.
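A brief sketch of the moment quantities such methods build on: the centroid (2D translation), the area (scale), and the orientation angle from second-order central moments (2D rotation). The paper's closed-form pose and plane-parameter estimators themselves are not reproduced here:

```python
import numpy as np

# Standard image-moment features of a planar object: zeroth moment (area),
# centroid, and the in-plane orientation from second-order central moments.

def moment_features(img):
    """img: 2D array (e.g. a binary segmentation of the planar object)."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                                          # area
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00  # centroid
    mu20 = (img * (xs - cx) ** 2).sum()   # second-order central moments
    mu02 = (img * (ys - cy) ** 2).sum()
    mu11 = (img * (xs - cx) * (ys - cy)).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # in-plane rotation
    return (cx, cy), m00, theta

# Example: a 40x20 axis-aligned rectangle; orientation comes out ~0.
img = np.zeros((60, 60))
img[20:40, 10:50] = 1.0
print(moment_features(img))
```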
Article
Visual servoing technology has been widely employed in manufacturing because it is a flexible, realizable, and low-cost way to improve the intelligence of industrial robots. Nevertheless, a worrisome and often overlooked issue is that the loss of visual features from the camera's field of view may lead to failure of the visual servoing task. This article addresses the visual-feature escape problem by implementing a field-of-view constraint controller with an asymmetric barrier Lyapunov function. The asymmetric barrier Lyapunov function defines a tightly specified range for the feature coordinate errors, ensures the transient response of the tracking error, and enables arbitrary tracking accuracy. It is worth noting that the asymmetric barrier Lyapunov function directly handles the coupled visual-robot dynamics while guaranteeing system stability. In addition, to accommodate the uncertain dynamics of the high-dimensional coupled system, an adaptive controller is proposed that utilizes fuzzy neural networks, which offer computational efficiency and few training parameters, to enhance the control performance. Finally, the effectiveness of the proposed control strategy is demonstrated through both theoretical analysis and experimental verification.
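The barrier mechanism itself is compact enough to sketch: the asymmetric log-barrier Lyapunov term is finite only while the feature-coordinate error stays inside its bounds, and its gradient grows without bound near them, which is what keeps the features in the FOV. The bounds and values below are illustrative, not from the article:

```python
import numpy as np

# Sketch of an asymmetric barrier Lyapunov function (BLF): V is finite
# inside the error corridor (-k_a, k_b) and blows up at its edges, so any
# controller keeping V bounded keeps the feature error inside the corridor.

def blf(e, k_a, k_b):
    """Asymmetric log-barrier: V = 0.5 * log(k^2 / (k^2 - e^2))."""
    k = k_b if e >= 0 else k_a
    assert -k_a < e < k_b, "error left the allowed corridor"
    return 0.5 * np.log(k**2 / (k**2 - e**2))

def blf_gradient(e, k_a, k_b):
    """dV/de = e / (k^2 - e^2); grows unbounded near the active bound."""
    k = k_b if e >= 0 else k_a
    return e / (k**2 - e**2)

# The barrier gradient dominates a plain proportional term near the bound:
for e in (0.05, 0.25, 0.29):
    print(e, blf(e, k_a=0.2, k_b=0.3), blf_gradient(e, k_a=0.2, k_b=0.3))
```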
Article
This paper presents a new image-based control scheme for spacecraft rendezvous and synchronization with an uncooperative tumbling target, which is capable of autonomously adjusting the camera focal length in order to extend the working range of visual servoing and guarantee that the target remains within the camera field of view. Unlike conventional visual servoing, the new scheme is based on a system model that is invariant to changes in the camera intrinsic parameters. An active zooming strategy is proposed which ensures that the target remains in the image plane at a proper size during visual servoing. Using the image features as feedback, a finite-time controller is designed that is robust to the unknown target motion as well as external perturbations, with the ability to estimate and adapt to the upper bound of the uncertainties. Closed-loop stability is proved using Lyapunov theory. Simulation scenarios are studied for two different onboard cameras, namely a fixed-focal-length camera and a zooming camera. In addition, a comparative study with a conventional sliding-mode controller is performed to evaluate the convergence and accuracy of the proposed control scheme.
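The intrinsics-invariance idea can be made concrete with a small sketch: pixel coordinates change with the focal length, but normalized coordinates m = K^-1 p do not, so a servo law written in normalized coordinates is unaffected by zooming. The intrinsic values below are illustrative:

```python
import numpy as np

# Sketch of intrinsics-invariant features: the same viewing direction gives
# the same normalized coordinates m = K^-1 p regardless of the focal length.

def normalize(p_px, f, cx, cy):
    """Map pixel coordinates to normalized image coordinates via K^-1."""
    K = np.array([[f, 0, cx],
                  [0, f, cy],
                  [0, 0, 1.0]])
    m = np.linalg.solve(K, np.array([p_px[0], p_px[1], 1.0]))
    return m[:2]

# The same 3D direction observed at two focal lengths yields the same m:
ray = np.array([0.1, -0.05])                 # true normalized coordinates
for f in (800.0, 1600.0):                    # zooming doubles f
    p = np.array([f * ray[0] + 320, f * ray[1] + 240])
    print(f, normalize(p, f, 320, 240))      # identical outputs
```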
Article
Full-text available
Aiming at the problem of servoing task failure caused by the manipulated object deviating from the camera field of view (FOV) during robot manipulator visual servoing (VS), this article proposes a new VS method based on an improved tracking-learning-detection (TLD) algorithm, which allows the manipulated object to leave the camera FOV for several consecutive frames while maintaining smooth robot manipulator motion during VS. Firstly, to implement robust visual object tracking under weak FOV constraints, an improved TLD algorithm is proposed. The algorithm is then used to extract the image features of the manipulated object in the current frame (when the object is in the camera FOV) or to predict them (when the object is out of the camera FOV), after which the position of the manipulated object in the current image is estimated. Finally, a visual sliding-mode control law is designed from the image feature errors to control the motion of the robot manipulator, completing the visual tracking task in complex natural scenes with high robustness. Several robot manipulator VS experiments were conducted on a six-degrees-of-freedom MOTOMAN SV3 industrial manipulator in different natural scenes. The experimental results show that the proposed method relaxes the requirement of real-time visibility of the manipulated object and effectively solves the problem of servoing task failure caused by the object deviating from the camera FOV during VS.
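The "predict when the object is out of the FOV" behavior can be sketched with the simplest possible fallback, a constant-velocity extrapolation of the feature position when detection fails; the improved TLD tracker in the article is far richer, so treat this only as a sketch of the fallback logic:

```python
import numpy as np

# Fallback sketch: while the detector sees the object we maintain a
# constant-velocity estimate of its image position; when it leaves the FOV
# we extrapolate, so the servo loop keeps a feature estimate instead of
# failing outright.

class FeaturePredictor:
    def __init__(self):
        self.pos = None        # last feature position estimate (px)
        self.vel = np.zeros(2)

    def update(self, measurement):
        """measurement: (x, y) in pixels, or None when the object is unseen."""
        if measurement is not None:
            m = np.asarray(measurement, dtype=float)
            if self.pos is not None:
                self.vel = m - self.pos      # one-frame velocity estimate
            self.pos = m
        elif self.pos is not None:
            self.pos = self.pos + self.vel   # extrapolate while unseen
        return self.pos

pred = FeaturePredictor()
for z in [(100, 100), (110, 104), None, None, (132, 113)]:
    print(pred.update(z))
```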