Figure 1 - uploaded by Giulio Reina
Block diagram of the visual odometry algorithm, using two consecutive image pairs acquired at successive time instants t and t+1.

Source publication
Article
External perception based on vision plays a critical role in developing improved and robust localization algorithms, as well as gaining important information about the vehicle and the terrain it is traversing. This paper presents two novel methods for rough terrain-mobile robots, using visual input. The first method consists of a stereovision algor...
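As a rough illustration of the real-time motion estimation idea, the frame-to-frame pose can be recovered from matched 3D feature points triangulated from the stereo pairs at times t and t+1. The sketch below is not the authors' implementation; the function name and the use of a closed-form Kabsch/SVD alignment are assumptions.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t.

    P, Q: (N, 3) arrays of matched 3D feature points triangulated
    from the stereo pairs at times t and t+1 (Kabsch algorithm).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # enforce det(R) = +1
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known yaw rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
yaw = np.deg2rad(5.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.0, 0.02])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
```

Chaining such frame-to-frame estimates over the image sequence yields the vehicle trajectory, which is the essence of visual odometry.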

Citations

... For instance, Reina et al. (2010) derive the slip angle of a robotic system using visual observation of the traces produced by its wheels in soft and deformable terrain. Two methods for estimating states using visual inputs for rough terrain-mobile robots have been proposed by Milella et al. (2006). While the first method consists of a stereovision algorithm for real-time motion estimation, the second method aims at estimating the wheel sinkage of a mobile robot on sandy soil, similarly to Reina et al. (2010). ...
... To enable this localization task, the robot incorporates one monocular camera supplied by Logitech Co. Ltd [11], to be used as an image source with visual odometry algorithms such as those presented in [2], [12], [1], [15]. ...
Conference Paper
This article presents a study of the resources necessary to provide movement and localization in three-wheeled omnidirectional robots, through the detailed presentation of the mathematical procedures applicable in the construction of the inverse kinematic model, the presentation of the main hardware and software components used for the construction of a functional prototype, and the test procedure used to validate the assembly. The results demonstrate that the developed prototype is functional, as well as the developed kinematic equation, given the small error presented at the end of the validation procedure.
... 17 However, when applying scan matching to unstructured environments, the performance of the registration techniques degrades. 20 For example, a comparison of scan matching techniques in real-world data sets showed the limitations of several scan matching methods when applied to unstructured environments. 21 To improve registration in a specific environment, works such as Refs. ...
Article
Global navigation satellite system (GNSS) is the standard solution for solving the localization problem in outdoor environments, but its signal might be lost when driving in dense urban areas or in the presence of heavy vegetation or overhanging canopies. Hence, there is a need for alternative or complementary localization methods for autonomous driving. In recent years, exteroceptive sensors have gained much attention due to significant improvements in accuracy and cost-effectiveness, especially for 3D range sensors. By registering two successive 3D scans, known as scan matching, it is possible to estimate the pose of a vehicle. This work aims to provide in-depth analysis and comparison of the state-of-the-art 3D scan matching approaches as a solution to the localization problem of autonomous vehicles. Eight techniques (deterministic and probabilistic) are investigated: iterative closest point (with three different embodiments), normal distribution transform, coherent point drift, Gaussian mixture model, support vector-parametrized Gaussian mixture and the particle filter implementation. They are demonstrated in long path trials in both urban and agricultural environments and compared in terms of accuracy and consistency. On the one hand, most of the techniques can be successfully used in urban scenarios with the probabilistic approaches that show the best accuracy. On the other hand, agricultural settings have proved to be more challenging with significant errors even in short distance trials due to the presence of featureless natural objects. The results and discussion of this work will provide a guide for selecting the most suitable method and will encourage building of improvements on the identified limitations.
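The iterative closest point baseline evaluated in this comparison can be sketched as follows. This is a minimal point-to-point 2D variant under simplifying assumptions (brute-force matching, a good initial guess), not any of the paper's eight implementations:

```python
import numpy as np

def best_fit(src, dst):
    """Least-squares 2D rotation/translation mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T       # proper rotation only
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching followed by
    a closed-form alignment step, repeated for a fixed number of iterations."""
    cur = src.copy()
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        idx = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        Ri, ti = best_fit(cur, dst[idx])
        cur = cur @ Ri.T + ti                # apply the incremental step
        R, t = Ri @ R, Ri @ t + ti           # accumulate the pose
    return R, t

# Demo: a grid "scan" displaced by a small rigid motion is re-aligned.
g = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
a = np.deg2rad(1.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
t_true = np.array([0.05, 0.02])
R_est, t_est = icp(g, g @ R_true.T + t_true)
```

The recovered transform between two successive scans is exactly the incremental pose used for localization; the paper's point-to-plane and probabilistic variants differ mainly in the correspondence model and the error metric.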
... To deal with this issue, an incremental localization technique based on laser rangefinders, known as laser odometry (LO), has been developed. During the current research concerning rough and isolated terrain, the following efficient laser odometry (LO) techniques have been analyzed: the Iterative Corresponding Point approach based on point-to-line matching (PL-ICP) [1,17,6], one of the most robust laser odometry methods; the Polar Scan Matching approach (PSM) [7], which is faster but not as accurate as PL-ICP; the combined PL-ICP and PSM algorithm [23]; Range Flow-based 2D Odometry (R2FO) [12], a 2D laser odometry estimation that applies a static-world assumption with dynamic obstacle filtering by the Cauchy M-estimator and calculates the robot's motion from the motion of observed landmarks provided by the velocity function of the sensors; and the 3D SLAM pose estimation solution, Laser Odometry and Mapping (LOAM) [28]. Therefore, it has been assumed that the vision sensor system of the target robotic platform should be based on laser rangefinders, which allow the laser odometry incremental localization approach to be applied for local pose estimation in challenging environments with low light levels, and exclude any error sources related to wheel-to-surface contact. ...
Chapter
An original approach to wheeled mobile robot localization in rough terrain, which connects local hybrid particle-Kalman filtering and global SLAM-based pose tracking, is presented in this paper. Building on the good resistance of Adaptive Monte-Carlo Localization (AMCL) to unexpected errors, including slippage and kidnapping, the authors use this particle filter together with laser odometry and inertial navigation for local pose estimation, which is performed by an Extended Kalman Filter (EKF). In the proposed technique, the MCL algorithm is used as one of the data sources for the Kalman filter, instead of in its regular role of global localization. Global localization is based on a Rao-Blackwellized SLAM technique with motion information estimated from EKF observations of a Lidar sensor. The developed approach is based on the Robot Operating System (ROS) framework and verified by V-REP simulations in comparison to similar techniques. The achieved results confirm the robustness and stability of the developed approach in inspection tasks over rough terrain.
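The local fusion idea, with the MCL output treated as one measurement source for the Kalman filter, can be sketched minimally as follows. This is a linear, identity-model toy, not the authors' EKF; the state layout and all noise matrices are illustrative:

```python
import numpy as np

# Minimal linear Kalman fusion sketch: the pose state x = [x, y, theta] is
# predicted from an odometry increment u and corrected by an external pose
# source z (here standing in for the MCL estimate used as a data source).

def kf_predict(x, P, u, Q):
    return x + u, P + Q               # identity motion model, additive noise

def kf_update(x, P, z, R):
    S = P + R                         # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)          # Kalman gain
    return x + K @ (z - x), (np.eye(len(x)) - K) @ P

x, P = np.zeros(3), np.eye(3) * 0.01
x, P = kf_predict(x, P, np.array([1.0, 0.0, 0.0]), np.eye(3) * 0.1)   # odometry
x, P = kf_update(x, P, np.array([1.05, 0.0, 0.0]), np.eye(3) * 0.1)   # MCL pose
```

The corrected estimate lands between the odometry prediction and the external measurement, weighted by their covariances, and the posterior covariance shrinks; a real EKF would use a nonlinear motion model with Jacobians.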
... Microelectromechanical systems (MEMS) gyroscopes have the benefit of miniaturization and low cost mass-production, which makes them suitable to be used for various applications, including video cameras anti-shake systems, stabilization and driving of automotive applications, dead-reckoning [1], mobile robots [2,3], and aerospace applications [4]. MEMS Coriolis vibratory gyroscope (CVG) detects angular rotation by measuring the Coriolis force induced on the sense element which is vibrating inside the rotating drive frame [5]. ...
Article
Automotive MEMS gyroscopes are used for various purposes, such as rollover prevention and dynamic stability. Although the employment of gyroscopes for automotive applications is reported, what is not discussed is the gyro topology design and optimization for these applications. This article reports parametric topology size optimization of a MEMS gyroscope suitable for automotive applications. The structure optimization of a translational dual-mass Coriolis vibratory gyroscope (CVG) with an electrostatic sense/excitation mechanism is presented for automotive applications. Fabrication considerations conform to the X-FAB procedures, and application uncertainty is inherent in the vehicle lateral model. The proposed design approach takes into account the levels of application uncertainty that are inaccessible to optimization, which provides results that are more reliable compared with the ideal models. More than thirteen thousand virtual cases are studied, in which the gyroscope performance specifications, including scale factor, linearity, etc., are calculated. The double-lane-change (DLC) maneuver based on ISO 3888-1 is performed on the optimal candidates in order to apply the vehicle application constraints. The suggested optimal solution is the one with the least yaw rate error with respect to the target yaw rate instructed by the maneuver, which best suits the automotive application. The results show that the yaw rate error of the DLC maneuver improved from about 40% to about 1.4% using the optimization algorithm, which shows the effectiveness of the proposed method.
... The evolution of distributed microrobot systems is closely linked to the development of computing equipment and networks, including the decrease in manufacturing costs, the increase in performance, and mass-production systems [32,69]. These developments have contributed to advances in the control and communication architecture of distributed autonomous microrobots. ...
Thesis
MEMS microrobots are miniaturized elements that can sense and act on their environment. Their size is on the order of a millimeter, and they have a small memory capacity and a limited energy capacity. MEMS microrobots continue to increase their presence in our daily life. Indeed, they can carry out several missions and tasks in a wide range of applications, such as odor localization, firefighting, medical service, surveillance, rescue and security. To perform these tasks and missions, they must apply redeployment protocols in order to adapt to the working conditions. These algorithms must be efficient, scalable and robust, and they should preferably use local information. Redeployment for mobile MEMS microrobots currently requires a positioning system and a map (predefined positions) of the target shape. Traditional positioning solutions, such as the use of GPS, would consume too much energy. Moreover, the use of algorithmic positioning solutions with multilateration techniques still poses problems because of errors in the obtained coordinates. In the literature, if we want a self-reconfiguration of microrobots toward a target shape consisting of P positions, each microrobot must have a memory capacity of P positions in order to store them. Consequently, if P is on the order of thousands or millions, each node will need a memory capacity of thousands or millions of positions. These algorithms are therefore not extensible or scalable. In this thesis, we propose reconfiguration protocols in which the nodes are not aware of their positions in the plane and do not store any position of the target shape. In other words, the nodes do not initially store the coordinates that build the target shape.

Consequently, the memory usage of each node is reduced to a constant complexity. The objective of the proposed distributed algorithms is to optimize the logical topology of the microrobot network, in order to seek a better complexity for message exchange and inexpensive communication. These solutions are completely distributed. For the reconfiguration from a chain to a square, we show how to manage the dynamicity of the network in order to save energy, and we study how to use movement parallelism to optimize the execution time and the number of movements. We also propose another solution in which the initial physical topology can be any initial configuration. With these solutions, the nodes can execute the algorithm regardless of where they are deployed, because the algorithm is independent of the map of the target shape. Furthermore, these solutions seek to reach the target shape with a minimal amount of movement.
... useful features for classification of different objects present in the scene [4]. Due to the complementary characteristics of the two sensors, it is reasonable to combine them in order to get improved performance. ...
Article
Imaging sensors are being increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modeling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images, and learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. This method leads to the following main advantages: (a) self-supervised training of the visual classifier across the portion of the environment where radar overlaps with the camera field of view. This avoids time-consuming manual labeling and enables on-line implementation; (b) the ground model can be continuously updated during the operation of the vehicle, thus making feasible the use of the system in long range and long duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
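The radar-supervised training idea can be sketched as follows. The single-Gaussian color model with a Mahalanobis-distance test below is an assumption for illustration, not the paper's actual classifier; class names and thresholds are invented:

```python
import numpy as np

class OnlineGroundModel:
    """Illustrative visual ground model: radar-selected ground patches update
    a running mean/covariance of color features (Welford's algorithm), and
    pixels within a Mahalanobis threshold of the model are labeled ground."""

    def __init__(self, dim=3, thresh=3.0):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.eye(dim) * 1e-3          # scatter matrix (regularized)
        self.thresh = thresh

    def update(self, feats):
        """feats: (N, dim) radar-supervised ground samples (online update)."""
        for f in feats:
            self.n += 1
            d = f - self.mean
            self.mean += d / self.n
            self.M2 += np.outer(d, f - self.mean)

    def is_ground(self, feats):
        cov = self.M2 / max(self.n - 1, 1)
        inv = np.linalg.inv(cov)
        d = feats - self.mean
        m2 = np.einsum('ij,jk,ik->i', d, inv, d)   # squared Mahalanobis
        return m2 < self.thresh ** 2

# Demo: train on synthetic "ground" colors, then classify two test pixels.
rng = np.random.default_rng(1)
model = OnlineGroundModel()
model.update(rng.normal([0.2, 0.5, 0.3], 0.01, size=(200, 3)))
labels = model.is_ground(np.array([[0.2, 0.5, 0.3], [0.9, 0.1, 0.9]]))
```

Because the update is incremental, the model can keep adapting during operation, which mirrors the continuous retraining advantage described in the abstract.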
... Autonomous driving for ground vehicles in general, be they mobile robots or driverless cars, requires efficient characterization of the perceived scene to ensure safe navigation and perform basic tasks including path planning, obstacle avoidance and state estimation. A large body of research exists in the robotics community related to the development of robust algorithms for scene interpretation using LIDARs (Vandapel et al. 2004), stereo vision (Milella et al. 2006), and radars. The application examples have been diverse, including off-road traversability analysis for planetary exploration (Gennery 1999) and off-road terrain classification in challenging vegetated areas (Milella 2012, Bellone et al. 2013). ...
Article
This work presents an IR-based system for parking assistance and obstacle detection in the automotive field that employs the Microsoft Kinect camera for fast 3D point cloud reconstruction. In contrast to previous research that attempts to explicitly identify obstacles, the proposed system aims to detect "reachable regions" of the environment, i.e., those regions where the vehicle can drive to from its current position. A user-friendly 2D traversability grid of cells is generated and used as a visual aid for parking assistance. Given a raw 3D point cloud, first each point is mapped into individual cells, then, the elevation information is used within a graph-based algorithm to label a given cell as traversable or non-traversable. Following this rationale, positive and negative obstacles, as well as unknown regions can be implicitly detected. Additionally, no flat-world assumption is required. Experimental results, obtained from the system in typical parking scenarios, are presented showing its effectiveness for scene interpretation and detection of several types of obstacle.
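The cell-labeling step can be sketched as follows. This simplified version uses only the per-cell elevation span and omits the paper's graph-based analysis; the cell size and step threshold are illustrative values:

```python
import numpy as np

def traversability_grid(points, cell=0.2, max_step=0.15):
    """Map each 3D point into a 2D cell and mark the cell non-traversable
    when its elevation span exceeds max_step (no flat-world assumption:
    only the local height range within a cell matters).
    Returns (grid, origin); grid values: -1 unknown, 0 traversable, 1 not."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    origin = ij.min(axis=0)
    ij -= origin
    shape = tuple(ij.max(axis=0) + 1)
    zmin = np.full(shape, np.inf)
    zmax = np.full(shape, -np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    grid = np.full(shape, -1, dtype=int)        # cells with no points: unknown
    seen = np.isfinite(zmin)
    grid[seen] = (zmax[seen] - zmin[seen] > max_step).astype(int)
    return grid, origin

# Demo: one flat cell, one empty cell, one cell containing a 0.5 m step.
pts = np.array([[0.05, 0.05, 0.00], [0.10, 0.10, 0.02],   # flat patch
                [0.45, 0.05, 0.00], [0.50, 0.10, 0.50]])  # step hazard
grid, origin = traversability_grid(pts)
```

Cells never touched by any point stay unknown, which is how negative obstacles and occluded regions are implicitly handled in this style of grid.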
... Stereovision is a widely adopted input for outdoor navigation, as it provides an effective technique to extract range information and perform complex scene understanding tasks [2][3][4][5][6]. Nevertheless, the accuracy of stereo reconstruction is generally affected by various design parameters, such as the baseline, i.e., the distance between the optical centers of two cameras in a stereo head [7,8]. ...
Article
In natural outdoor settings, advanced perception systems and learning strategies are a major requirement for an autonomous vehicle to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and drivable paths. Stereo vision has been used extensively for this purpose. However, conventional single-baseline stereo does not scale well to different depths of perception. In this paper, a multi-baseline stereo frame is introduced to perform accurate 3D scene reconstruction from near range up to several meters away from the vehicle. A classifier that segments the scene into navigable and non-navigable areas based on 3D data is also described. It incorporates geometric features within an online self-learning framework to model and identify traversable ground, without any a priori assumption on the terrain characteristics. The ground model is automatically retrained during the robot motion, thus ensuring adaptation to environmental changes. The proposed strategy is of general applicability for robot perception and it can be implemented using any range sensor. Here, it is demonstrated for stereo-based data acquired by the multi-baseline device. Experimental tests, carried out in a rural environment with an off-road vehicle, are presented. It is shown that the use of a multi-baseline stereo frame allows for accurate reconstruction and scene segmentation at a wide range of visible distances, thus increasing the overall flexibility and reliability of the perception system.
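A short calculation shows why single-baseline stereo does not scale with depth, which is the motivation for the multi-baseline frame. The numbers below are illustrative, not taken from the paper:

```python
# For a rectified stereo pair, depth Z = f*B/d (f: focal length in pixels,
# B: baseline in meters, d: disparity in pixels). A fixed disparity error e
# therefore produces a depth error that grows as Z^2 / (f*B): short baselines
# are accurate only at near range, wide baselines at far range.

def depth(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disp_err_px=0.25):
    # first-order error propagation: dZ = Z^2 / (f*B) * e
    return z_m ** 2 / (f_px * baseline_m) * disp_err_px

f = 800.0
for B in (0.1, 0.5):                  # short vs wide baseline
    for Z in (2.0, 10.0):
        e = depth_error(f, B, Z)
        print(f"B={B} m, Z={Z} m -> depth error ~{e:.4f} m")
```

At 10 m the short baseline's error is five times larger than the wide one's, while both are adequate at 2 m; combining baselines in one frame covers the whole range.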
... Proposed solutions change according to available sensors and data, as presented by Papadakis (2013), wherein an overview of terrain traversability analysis methods for unmanned vehicles can be found. Most of the methods are based on visual information, as in Milella et al. (2006), in which two visual algorithms are presented. The first approach concerns 6DoF ego-motion estimation, whereas the latter estimates wheel sinkage in sandy soil. ...
Article
Purpose – This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment. Design/methodology/approach – The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of point as described only by its Cartesian coordinates is reinterpreted in terms of local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity. Findings – The UPD-based algorithm shows robust terrain perception capabilities in both indoor and outdoor environments. The algorithm is able to detect obstacles and terrain irregularities. The system performance is validated in field experiments in both indoor and outdoor environments. Research limitations/implications – The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implications of this method reside in its applicability for path planning purposes. Originality/value – This paper describes a visual algorithm for traversability assessment based on normal vector analysis. The algorithm is simple and efficient, providing fast real-time implementation, since the UPD does not require any data processing or previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor, which can be of general value for segmentation purposes of 3D point clouds and allows the underlying geometric pattern associated with each single 3D point to be fully captured and difficult scenarios to be correctly handled.
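The normal-analysis idea behind the UPD can be sketched as follows. This PCA plane fit, returning a slope and a roughness value per neighborhood, is an illustration in the spirit of the descriptor, not its exact formulation:

```python
import numpy as np

def unevenness(neighborhood):
    """Fit a plane to a 3D neighborhood by PCA and return (slope_deg,
    roughness): slope is the angle of the local surface normal from the
    vertical, roughness the residual std. deviation along the normal.
    High slope or roughness flags a non-traversable region."""
    c = neighborhood.mean(axis=0)
    cov = np.cov((neighborhood - c).T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    normal = v[:, 0]                    # smallest-variance direction
    slope = np.degrees(np.arccos(np.clip(abs(normal[2]), 0.0, 1.0)))
    roughness = np.sqrt(max(w[0], 0.0))
    return slope, roughness

# Demo: a flat horizontal patch vs. a 45-degree ramp (z = x).
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
ramp = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]], float)
s_flat, r_flat = unevenness(flat)
s_ramp, r_ramp = unevenness(ramp)
```

Because only a local eigendecomposition per point is needed, with no digital elevation map, this style of descriptor lends itself to the fast real-time implementation claimed in the abstract.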