Fig. 1. Camera-in-hand robotic system.

Source publication
Article
Full-text available
In this paper, the control problem of camera-in-hand robotic systems is considered. In this approach, a camera is mounted on the robot, usually at the hand, which provides an image of objects located in the robot environment. The aim of this approach is to move the robot arm in such a way that the image of the objects attains the desired locations....
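To make the stated objective concrete, the classical image-based visual-servoing step that drives a feature's image error to zero can be sketched as follows. This is the standard textbook formulation with an assumed depth estimate Z and gain lam, not the specific controller derived in the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Textbook interaction (image Jacobian) matrix for a single
    normalized image point (x, y) observed at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(s, s_star, Z, lam=0.5):
    """One classical IBVS step: v = -lam * pinv(L) @ (s - s*),
    where v is the 6-DOF camera twist (vx, vy, vz, wx, wy, wz)."""
    x, y = s
    e = np.array(s) - np.array(s_star)
    L = interaction_matrix(x, y, Z)
    return -lam * np.linalg.pinv(L) @ e

# Example: drive a feature at (0.10, -0.05) toward the image center.
print(ibvs_step((0.10, -0.05), (0.0, 0.0), Z=1.2))
```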

Contexts in source publication

Context 1
... robotic system considered in this paper is composed of a robot manipulator featuring a camera in its hand, as depicted in Fig. 1. The basic mathematical description of this system consists of the robot dynamics and differential kinematics and the camera ...
Context 2
... THE USE of visual information in the feedback loop is an attractive solution for the position and motion control of autonomous robot manipulators evolving in unstructured environments [1]. This robot control strategy, so-called visual servoing, can be classified into two approaches: fixed camera and camera in hand. In fixed-camera robotic systems, multiple cameras fixed in the world-coordinate frame capture images of both the robot and its environment. The objective of this approach is to make the robot move in such a way that its end effector reaches a desired object [2]-[8]. In the camera-in-hand configuration, a camera is mounted on the robot, which supplies visual information of the environment, as depicted in Fig. 1. The objective of this approach is to move the manipulator in such a way that the projection of either a moving or a static object is always at a desired location in the image captured by the camera [9]-[18]. This paper deals with the camera-in-hand approach to vision robot control. This control problem has attracted the attention of researchers in recent years (see [1] for an interesting historical review). A common characteristic of most previous works is ... Manuscript received February 25, 1997; revised May 15, 1998. Recommended by Technical Editor T. Fukuda. This work was supported in part by CONACyT (Mexico) and CONICET (Argentina), by the project Perception Systems for Robots-CYTED, and by CONACyT-NSF Grant 228050-5-C084A and Grant ...

Similar publications

Conference Paper
Full-text available
In this paper we consider the problem of stabilizing a mobile manipulator in a desired configuration; only the arm's joint-displacement information and the measurements provided by the camera mounted on the end-effector are used to stabilize the system. In particular, no knowledge about the position and orientation of the mobile base is supposed to be...
Conference Paper
Full-text available
Many researchers have turned to sensing, and in particular computer vision, to create more flexible robotic systems. Computer vision is often required to provide data for the grasping of a target. Using a vision system for grasping presents several issues with respect to sensing, control, and system configuration. This paper presents some of these...
Conference Paper
Full-text available
We present a novel algorithm for path planning that avoids occlusions of a visual target for an “eye-in-hand” sensor on an articulated robot arm. We compute paths using a probabilistic roadmap to avoid collisions between the robot and obstacles, while penalizing trajectories that do not maintain line-of-sight. The system determines the spac...
Article
Full-text available
This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Carte...
Conference Paper
Full-text available
Describes an approach to the problem of trajectory generation in a workspace by visual servoing. Visual servoing is based on an array of measurements taken from a set of images and used each time as an error function to compute a control vector. This is applied to the system (robot and camera) and enables it to move in order to reach a desired situ...

Citations

... In order to help the reader choose the right 3D vision technique for a particular project, sets of questions and advice are presented in [13]. Regarding the arrangement of the vision equipment in the workplace, one can further characterize it as: a) eye-in-hand systems, with the camera mounted on the robot end-effector [14,15]; b) cameras fixed in the workspace [16,17]; and c) active-head systems with limited flexibility in translating and rotating [18,19,20]. ...
Preprint
Full-text available
Industrial robots are extensively employed in a variety of industrial surface processes. To increase productivity, automated manufacturing systems perform operations on the physical product in the factory, such as processing, material handling, and polishing, sometimes accomplishing more than one of these operations in the same system. Objects or their features must be located as rapidly as possible so that further image-processing algorithms can extract features. Robotic pick-and-place applications can be achieved by determining the correct orientation of a part within 2D or 3D space. A machine-vision image-acquisition hardware platform was set up, and HALCON software was used for noise filtering, threshold segmentation, and edge detection of the workpiece image to meet the identification-accuracy requirements. Based on the preprocessing result, the workpiece feature information was extracted. Integration of the machine-vision system allows the polishing system to simultaneously process a variety of workpieces, identifying their shape, size, and color on the test bench. Each workpiece is classified by these characteristics, and the classification information is output to the controller. The recognition result is sent to the PLC to generate the corresponding polishing paths, reducing operation time and increasing the flexible processing capacity.
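As a rough illustration of the pipeline stages described above (the paper uses HALCON; OpenCV is substituted here purely as a stand-in, and all thresholds are placeholder values):

```python
import cv2
import numpy as np

def classify_workpieces(bgr):
    """Illustrative stand-in for the HALCON pipeline described above:
    denoise -> threshold -> contour extraction -> shape/size/color cues."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)                  # noise filtering
    _, mask = cv2.threshold(denoised, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)       # edge/contour step
    parts = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 100:                  # drop speckle noise
            continue
        peri = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (peri * peri)            # shape cue
        part_mask = np.zeros_like(mask)
        cv2.drawContours(part_mask, [c], -1, 255, -1)
        mean_bgr = cv2.mean(bgr, mask=part_mask)[:3]              # color cue
        parts.append({"area": area,
                      "shape": "round" if circularity > 0.8 else "polygonal",
                      "color": mean_bgr})
    return parts  # classification result, e.g. forwarded to a PLC
```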
... The consistency of features enables VS with large pose offsets, but IBVS has inherent drawbacks such as a small convergence region and local minima [12], [13]. ...
... PBVS uses the relative pose between the current and desired poses as the visual feature and plans a globally asymptotically stable straight-line trajectory in 3D Cartesian space. IBVS uses matched keypoints on the 2D image plane, which is insensitive to calibration error but suffers from a small convergence region due to its high non-linearity [12], [13]. It may also encounter the feature-loss problem [23] when dealing with large initial pose offsets. ...
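For reference, one common decoupled PBVS law the snippet alludes to can be sketched as follows, assuming the pose error has already been estimated as a translation t_err and an axis-angle rotation theta_u; the gain is illustrative.

```python
import numpy as np

def pbvs_camera_velocity(t_err, theta_u, lam=0.5):
    """Decoupled PBVS law: with the pose error expressed as a translation
    t_err and an axis-angle rotation theta_u, command the camera twist
    v = -lam * t_err, w = -lam * theta_u; under exact pose estimation
    the translation follows a straight line in Cartesian space."""
    v = -lam * np.asarray(t_err)
    w = -lam * np.asarray(theta_u)
    return np.concatenate([v, w])

# Example: camera 10 cm off along x and rotated 0.2 rad about z.
print(pbvs_camera_velocity([0.10, 0.0, 0.0], [0.0, 0.0, 0.2]))
```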
Preprint
Full-text available
Recently, several works have achieved end-to-end visual servoing (VS) for robotic manipulation by replacing the traditional controller with differentiable neural networks, but they lose the ability to servo arbitrary desired poses. This letter proposes a differentiable architecture for arbitrary-pose servoing: a hyper-network-based neural controller (HPN-NC). To achieve this, HPN-NC consists of a hyper net and a low-level controller, where the hyper net learns to generate the parameters of the low-level controller and the controller uses the 2D keypoint error for control, like traditional image-based visual servoing (IBVS). HPN-NC can complete 6-degree-of-freedom visual servoing with large initial offsets. Taking advantage of the fully differentiable nature of HPN-NC, we provide a three-stage training procedure to servo real-world objects. With self-supervised end-to-end training, the performance of the integrated model can be further improved in unseen scenes and the amount of manual annotation can be significantly reduced.
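A minimal sketch of the hyper-network idea in PyTorch, with illustrative dimensions (this is a toy, not the HPN-NC architecture itself): a hyper net maps the desired pose to the parameters of a small low-level controller that converts 2D keypoint error into a velocity command.

```python
import torch
import torch.nn as nn

class HyperNetController(nn.Module):
    """Toy hyper-network controller: the hyper net generates the weights
    of a one-layer low-level controller conditioned on the desired pose."""
    def __init__(self, pose_dim=6, err_dim=8, out_dim=6, hidden=64):
        super().__init__()
        self.err_dim, self.out_dim = err_dim, out_dim
        n_params = err_dim * out_dim + out_dim   # weight matrix + bias
        self.hyper = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, desired_pose, keypoint_err):
        p = self.hyper(desired_pose)             # generated controller params
        W = p[: self.err_dim * self.out_dim].view(self.out_dim, self.err_dim)
        b = p[self.err_dim * self.out_dim:]
        return keypoint_err @ W.T + b            # 6-DOF velocity command

ctrl = HyperNetController()
vel = ctrl(torch.randn(6), torch.randn(8))  # 4 keypoints = 8 error coordinates
```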
... In the article by Kelly et al. [22], a slightly different approach is observed, since only one camera is used, mounted on the end effector of the robotic arm. The robotic arm has only two joints. ...
Article
Full-text available
This paper presents a method for computing the position of a robotic arm in 2D and 3D space. The method is slightly different from well-known methods such as forward or inverse kinematics: it is an optical method that uses two video cameras in a stereo-vision configuration to locate and compute the next move of a robotic arm in space. The method recognizes the coordinates of markers placed at the joints of the robotic arm using the two video cameras. The coordinate points of these markers are connected with straight lines, circles are drawn around certain points, and from the tangents to the circles a non-Cartesian (orthogonal) coordinate system is drawn, which is enough to compute the target position of the robotic arm. All of these drawings are overlaid on the live video feed. The paper also presents a method for calculating the stereo distance using triangulation, as well as an alternative method in which a non-Cartesian (orthogonal) 3D coordinate system is created and used to compute the target position of the robotic arm in 3D space. Because the system runs in a loop, it can make micro-adjustments of the robotic arm in order to reach exactly the desired position. In this way, there is no need to calibrate the robotic arm, and in an industrial system there is no need to stop the production line, which can be a significant cost saving.
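The stereo-distance computation the abstract mentions reduces, for a rectified parallel-camera pair, to the standard triangulation relation Z = f·B/d; a minimal sketch under that assumption (focal length f in pixels, baseline B in meters, d the horizontal disparity in pixels):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from a rectified stereo pair via triangulation: Z = f * B / d,
    where d = x_left - x_right is the disparity of the same marker
    in the two images (in pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("marker must have positive disparity")
    return focal_px * baseline_m / disparity

# Example: a joint marker seen at x = 412 px (left) and x = 380 px (right)
# with f = 800 px and a 12 cm baseline lies about 3 m from the cameras.
print(stereo_depth(412, 380, focal_px=800, baseline_m=0.12))  # 3.0
```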
... In this work, the mathematical model proposed in (Hutchinson et al., 1996; Flandin et al., 2000; Kelly et al., 2000) is considered and generalized to a moving camera. Since a CCD camera is used to measure the target's position, in its projection model, shown in Fig. 6 (Hutchinson et al., 1996; Flandin et al., 2000; Kelly et al., 2000), the image point depends exclusively on the position of the object and on the position and orientation of the camera, and can be represented by ...
... (3) Since the camera's orientation is not a static quantity, its rotational motions affect the projection of the target's image. Thus, using the well-known general formula for the velocity of a point moving with a moving reference frame relative to a fixed one (Hutchinson et al., 1996; Sciavicco and Siciliano, 2000), it is possible to compute the time derivative of Eq. (3) in terms of the camera's translational and rotational velocities as (Kelly et al., 2000): ...
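The equations elided in these snippets are not recoverable from the page; for orientation, the standard perspective-projection model they refer to (Hutchinson et al., 1996) and its time derivative take the following textbook form (this is not a verbatim reproduction of the paper's Eq. (3)):

```latex
% Pinhole projection of a point p^c = (x_c, y_c, z_c) expressed in the
% camera frame, with focal length \lambda:
\begin{align}
  \begin{bmatrix} u \\ v \end{bmatrix}
    &= \frac{\lambda}{z_c} \begin{bmatrix} x_c \\ y_c \end{bmatrix}, \\
% Its time derivative links the image velocity to the camera's
% translational velocity v_c and angular velocity \omega_c:
  \begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix}
    &= L(u, v, z_c) \begin{bmatrix} v_c \\ \omega_c \end{bmatrix},
  \quad
  L = \begin{bmatrix}
        -\dfrac{\lambda}{z_c} & 0 & \dfrac{u}{z_c} & \dfrac{u v}{\lambda}
          & -\dfrac{\lambda^2 + u^2}{\lambda} & v \\[4pt]
        0 & -\dfrac{\lambda}{z_c} & \dfrac{v}{z_c}
          & \dfrac{\lambda^2 + v^2}{\lambda} & -\dfrac{u v}{\lambda} & -u
      \end{bmatrix}.
\end{align}
```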
Conference Paper
In this work, the orientation control problem of a pan-tilt camera for focusing on moving objects is considered. Because the plant is a nonlinear multivariable control system with coupled control inputs, this work presents and discusses the Active Disturbance Rejection Control (ADRC) method. One of the main advantages of ADRC is that the control-law design requires neither exact knowledge of the plant parameters nor measurements of the pan and tilt angles. These advantages are made possible by an extended observer that estimates the non-measurable state signals and the unmodeled dynamics. As a contribution to the state of the art, an ADRC variant implemented with a reduced-order observer is proposed. Numerical simulations and comparisons with other control techniques illustrate the efficiency of the proposed control.
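A one-axis sketch of the ADRC structure described above (a linear extended state observer plus a disturbance-cancelling PD law); the bandwidth parameterization and the input-gain estimate b0 are illustrative choices, not the paper's tuning:

```python
import numpy as np

def adrc_step(z, y, u, dt, b0=1.0, wo=20.0, kp=25.0, kd=10.0, r=0.0):
    """One ADRC update for a single axis (e.g. the pan angle).
    z = [z1, z2, z3]: extended-state-observer estimates of the angle,
    the angular rate, and the lumped (unmodeled + coupling) disturbance.
    Observer poles placed at -wo (bandwidth parameterization)."""
    z1, z2, z3 = z
    e = y - z1                           # observer innovation
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3
    z1 += dt * (z2 + l1 * e)
    z2 += dt * (z3 + b0 * u + l2 * e)
    z3 += dt * (l3 * e)                  # total-disturbance estimate
    # PD on the estimated states, then cancel the estimated disturbance:
    u_new = (kp * (r - z1) - kd * z2 - z3) / b0
    return np.array([z1, z2, z3]), u_new
```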
... While the concept of visual servoing is a mature research topic in robotics [30]-[32], its translation to clinical practice requires adaptation for each specific procedure or imaging modality [33]. Based on this notion, a study by Zettinig et al. [34] showed that US visual servoing can be used for neurosurgical navigation and needle guidance. ...
Article
Full-text available
Objective: This study demonstrates intravascular micro-agent visualization by utilizing robotic ultrasound-based tracking and visual servoing in clinically relevant scenarios. Methods: The visual-servoing path is planned intraoperatively using a body-surface point cloud acquired with a 3D camera and the vessel reconstructed from ultrasound (US) images, where both the camera and the US probe are attached to the robot end-effector. Machine-vision algorithms are developed for detecting micro-agents down to a minimal size of 250 μm inside the vessel contour and for tracking with error recovery. Finally, the real-time positions of the micro-agents are used for servoing the robot with the attached US probe. Constant contact between the US probe and the surface of the body is accomplished by means of impedance control. Results: The developed algorithms are tested in clinically relevant scenarios including an anthropomorphic phantom, biological tissue, simulated physiological movement, and simulated fluid flow through the vessels. Breathing motion is compensated to keep constant contact between the US probe and the body surface, with a minimal measured force of 2.02 N. Anthropomorphic phantom vessels are segmented with an Intersection-over-Union (IoU) score of 0.93 ± 0.05, while micro-agent tracking is performed with up to a 99.8% success rate at 28-36 frames per second. Path planning, tracking, and visual servoing are realized over 80 mm and 120 mm long surface paths. Conclusion: Experiments performed using anthropomorphic surfaces, biological tissue, simulated physiological movement, and simulated fluid flow through the vessels indicate that robust visualization and tracking of micro-agents in human patients is an achievable goal.
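The probe-contact behaviour obtained here with impedance control can be illustrated by an admittance-style variant along the probe axis; the 2 N force reference echoes the minimal force reported above, while the damping value is a placeholder:

```python
def admittance_update(x_ref, f_meas, dt, f_ref=2.0, damping=50.0):
    """Admittance-style contact regulation along the probe axis (x grows
    toward the tissue): move the position reference so the measured
    contact force f_meas settles at f_ref (N). With pure damping the
    probe yields to breathing motion while keeping contact."""
    x_ref += dt * (f_ref - f_meas) / damping   # too much force -> retract
    return x_ref
```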
... Vision-based localization and contact prediction: Camera-on-hand or more generally camera-on-mobile-agent arrangements have been investigated by several researchers to pursue diverse goals such as visual servoing to a workspace goal (e.g. [12]), collision avoidance systems on miniature aerial vehicles (e.g. [13]), and how flying insects, birds, and rapidly moving animals perceive motion [14]. ...
Preprint
Full-text available
Coordinating proximity and tactile imaging by collocating cameras with tactile sensors can 1) provide useful information before contact such as object pose estimates and visually servo a robot to a target with reduced occlusion and higher resolution compared to head-mounted or external depth cameras, 2) simplify the contact point and pose estimation problems and help tactile sensing avoid erroneous matches when a surface does not have significant texture or has repetitive texture with many possible matches, and 3) use tactile imaging to further refine contact point and object pose estimation. We demonstrate our results with objects that have more surface texture than most objects in standard manipulation datasets. We learn that optic flow needs to be integrated over a substantial amount of camera travel to be useful in predicting movement direction. Most importantly, we also learn that state of the art vision algorithms do not do a good job localizing tactile images on object models, unless a reasonable prior can be provided from collocated cameras.
... The use of Field Programmable Gate Arrays (FPGAs) makes it possible to apply reprogrammable-hardware technology specifically to the implementation of visual controllers [1]. Unlike classical visual controllers, the direct visual control strategy [2] performs joint-level control directly from visual information, generating the torques to be applied to the joints. As described throughout the article, implementing a direct visual control system fully integrated on an FPGA improves such systems by reducing processing delays and achieving stable feedback delays. ...
... To evaluate the architecture, two different controllers have been implemented. The first is the well-known transpose-Jacobian-based controller [2]; the second, proposed in this article, develops an optimal framework. In previous works such as [3], this framework was defined for visually guiding robotic systems during the manipulation of rigid objects. ...
... A transpose-Jacobian-based visual control system can be obtained as follows for positioning tasks [2]: ...
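The control law elided at the end of this snippet is, in its textbook form, the transpose-Jacobian regulator (written here from the standard references, not copied from the article):

```latex
% Transpose-Jacobian visual regulation with joint damping and gravity
% compensation; e is the image-space (task-space) error, J the task
% Jacobian, K_p and K_v positive-definite gain matrices:
\begin{equation}
  \tau \;=\; J^{\top}(q)\, K_p\, e \;-\; K_v\, \dot{q} \;+\; g(q)
\end{equation}
```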
Conference Paper
This article describes the formulation, implementation, and experimental evaluation of a dynamic visual control system applied to a robot with 6 degrees of freedom. An FPGA-based hardware architecture is proposed for implementing the controllers. To bound the controller latency, not only the controller but also the capture and processing of the visual information have been implemented on the FPGA. The parallel-processing capabilities of the FPGA are used to optimize the different components of the proposed visual control system. The article concludes with the experimental results obtained in positioning tasks with a 6-degree-of-freedom Mitsubishi PA10 robot.
... Considering the complexity of inverse-kinematics computation, a first regulation control method designed directly in task space was developed by Takegaki and Arimoto [10] using a transposed Jacobian matrix. Later on, many other results were developed for task-space robot-manipulator control, such as Miyazaki and Masutani [11], Lewis et al. [12], Kelly [13], and Kelly et al. [14]. These task-space control methods share the common assumption that the kinematic information of the robot manipulator is accurately known, so that the exact Jacobian matrix mapping joint-space velocity to its task-space counterpart is available. In practice, however, the robot kinematics may not be perfectly known, mainly due to two sources of uncertainty: robot calibration errors, and changes in the end position of the grasped object after or during pick-up manipulation and interaction with the working environment. ...
... Robot kinematic uncertainties are not taken into account in these control designs, and therefore they suffer from the same problem as the aforementioned joint-space controllers [1]-[14], [18]-[26]. In this paper, we propose a new adaptive neural-network control method for task-space regulation of RLED robots with uncertainties in the kinematics. The main contribution of the proposed method is that, unlike most existing NN-based controllers for RLED robots, asymptotic stability of the overall system can be guaranteed in the presence of all uncertainties in the robot kinematics, dynamics, and actuator dynamics. ...
... The projection operator C is used to limit u_k to the feasible range [u_min, u_max] and hence is bounded [37]. Substituting the desired current (14) into (11), we have ...
Article
Full-text available
Extensive research efforts have been made in the literature to address the motion control of rigid-link electrically-driven (RLED) robots. However, most existing results were designed in joint space and need to be converted to task space, as more and more control tasks are defined in the operational space. In this work, the direct task-space regulation of RLED robots with uncertain kinematics is studied using the neural-network (NN) technique. Radial basis function (RBF) neural networks are used to estimate the complicated and calibration-heavy robot kinematics and dynamics. The NN weights are updated online through two adaptation laws, without the need for offline training. Compared with most existing NN-based robot control results, the novelty of the proposed method lies in achieving asymptotic stability of the overall system instead of just uniformly ultimately bounded (UUB) stability. Moreover, the proposed control method tolerates not only uncertainty in the actuator dynamics but also uncertainty in the robot kinematics, by adopting an adaptive Jacobian matrix. The asymptotic stability of the overall system is proven rigorously through Lyapunov analysis. Numerical studies have been carried out to verify the efficiency of the proposed method.
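A minimal sketch of the RBF approximation and online weight-adaptation idea described in this abstract; the dimensions, centers, and adaptation gain are illustrative, and the paper's two adaptation laws differ in detail:

```python
import numpy as np

class RBFApproximator:
    """Radial-basis-function network W^T * phi(x) with online weight
    adaptation, as used to estimate uncertain robot kinematics/dynamics."""
    def __init__(self, centers, width=1.0, out_dim=2, gamma=0.1):
        self.centers = np.asarray(centers)   # (n_basis, in_dim)
        self.width = width
        self.W = np.zeros((len(self.centers), out_dim))
        self.gamma = gamma                   # adaptation gain

    def phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.width ** 2))   # Gaussian basis

    def output(self, x):
        return self.W.T @ self.phi(x)        # current estimate

    def adapt(self, x, err, dt):
        """Gradient-like law W_dot = gamma * phi(x) * err^T, driven by the
        task-space regulation error; clamping the update (cf. the cited
        projection operator) would keep the weights bounded."""
        self.W += dt * self.gamma * np.outer(self.phi(x), err)
```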
... An alternative approach that does not require the use of second-order models consists of using the following control law, corresponding to proportional-derivative feedback with gravity compensation [Kel+00]: ...
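In its standard form (e.g. Takegaki and Arimoto; [Kel+00]), that law reads:

```latex
% PD regulation with gravity compensation:
\begin{equation}
  \tau \;=\; K_p\,\tilde{q} \;-\; K_d\,\dot{q} \;+\; g(q),
  \qquad \tilde{q} = q_d - q, \quad K_p, K_d \succ 0
\end{equation}
```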
Thesis
This thesis deals with increasing productivity in manufacturing robots performing sensor-based tasks. Such tasks may arise when the target is not absolutely positioned. Visual-servoing control schemes are well known for their robustness and precision, but generally require long execution times due to different factors. Control laws are generally formulated only at a kinematic level and are characterized by exponentially decreasing velocities. Moreover, the nonlinear map from the operational space to the sensor space can lead to sub-optimal and longer paths. To increase control performance and reduce the time required to complete a task, this thesis investigates the use of second-order interaction models. Their use in dynamic feedback control laws is investigated and compared to classical controllers. They are then employed in Model Predictive Control (MPC) schemes, allowing higher velocities and better sensor trajectories. However, a drawback of MPC techniques is their computational load. In order to obtain even better results, a new type of predictive control is investigated, in which a parameterization of the control-input sequences reduces the number of variables involved in the MPC optimization problems.
... Classical visual control systems allow point-to-point positioning of a robot using visual information [1]. As mentioned in [7], until the mid-1990s few of the proposed vision-based controllers took into account the nonlinear dynamics of robot manipulators. Over the last two decades the trend has remained the same, although in recent years there has been an increase in work on image-based direct visual control for unmanned autonomous vehicles. ...