Conference Paper

FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem

Authors: Michael Montemerlo, Sebastian Thrun, Daphne Koller, Ben Wegbreit

Abstract

The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data.
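The factored update described in the abstract (sample a robot pose per particle, then update one small EKF per landmark, weight by the measurement likelihood, and resample) can be sketched compactly. The following is an illustrative toy implementation in a 1-D world with a linear observation model, not the paper's code; all names and noise parameters are invented:

```python
import math
import random

random.seed(0)

def fastslam_step(particles, u, z_by_lm, Q=0.05, R=0.1):
    """One FastSLAM 1.0 update: sample poses from the motion model,
    EKF-update each observed landmark per particle, weight, resample."""
    weights = []
    for p in particles:
        # Bootstrap proposal: sample the new pose from the motion model.
        p["x"] += u + random.gauss(0.0, math.sqrt(Q))
        w = 1.0
        for lm_id, z in z_by_lm.items():
            if lm_id not in p["lm"]:
                # Initialize this landmark's EKF from the first observation.
                p["lm"][lm_id] = [p["x"] + z, R]
                continue
            mu, sig2 = p["lm"][lm_id]
            v = z - (mu - p["x"])   # innovation (observation: z = lm - x + n)
            S = sig2 + R            # innovation covariance
            K = sig2 / S            # Kalman gain on the landmark
            p["lm"][lm_id] = [mu + K * v, (1.0 - K) * sig2]
            w *= math.exp(-0.5 * v * v / S) / math.sqrt(2 * math.pi * S)
        weights.append(w)
    total = sum(weights)
    if total == 0.0:
        weights = [1.0 / len(particles)] * len(particles)
    else:
        weights = [w / total for w in weights]
    # Multinomial resampling for brevity; copy maps so particles stay independent.
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [{"x": p["x"], "lm": {k: v[:] for k, v in p["lm"].items()}} for p in chosen]

# Simulate a robot moving right while repeatedly observing one landmark at 5.0.
true_x, lm_true = 0.0, 5.0
particles = [{"x": 0.0, "lm": {}} for _ in range(200)]
for _ in range(30):
    true_x += 0.2
    z = lm_true - true_x + random.gauss(0.0, math.sqrt(0.1))
    particles = fastslam_step(particles, 0.2, {0: z})

est = sum(p["lm"][0][0] for p in particles) / len(particles)
print(round(est, 1))  # typically close to the true landmark position 5.0
```

Because each particle conditions its landmark filters on its own sampled path, the landmark estimates stay low-dimensional and mutually independent, which is exactly the factorization the paper exploits.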


... The RRR method [36] and Fast SLAM algorithm [44] are finally compared to evaluate the accuracy and efficiency of the proposed algorithm. Fast SLAM uses 100 particles for filtering estimation. ...
... The running times of the proposed SLAM, RRR, and Fast SLAM methods were 521.4 s, 665.6 s, and 752.0 s, respectively. ...
... The path estimated by the proposed method is the closest to the truth, and the average error is less than 10 m. Statistics of the running time of the methods were also collected on a computer with 16 GB of RAM and an Intel i7-11700 CPU. With the true bathymetric map of the research area shown in Figure 13a as a reference, the mapping errors for pure DR, RRR, Fast SLAM, and the proposed algorithm were compared. ...
Article
Full-text available
Accurate positioning is the necessary basis for autonomous underwater vehicles (AUV) to perform safe navigation in underwater tasks, such as port environment monitoring, target search, and seabed exploration. The position estimates of underwater navigation systems usually suffer from an error accumulation problem, which makes it difficult for AUVs to perform long-term and accurate underwater tasks. Underwater simultaneous localization and mapping (SLAM) approaches based on multibeam-bathymetric data have attracted much attention for being able to obtain error-bounded position estimates. Two problems limit the use of multibeam bathymetric SLAM in many scenarios. The first is that the loop closures only occur in the AUV path intersection areas. The second is that the data association is prone to failure in areas with gentle topographic changes. To overcome these problems, a joint graph-based underwater SLAM approach that fuses bathymetric and magnetic-beacon measurements is proposed in this paper. In the front-end, a robust dual-stage bathymetric data-association method is used to first detect loop closures on the multibeam bathymetric data. Then, a magnetic-beacon-detection method using Euler-deconvolution and optimization algorithms is designed to localize the magnetic beacons using a magnetic measurement sequence on the path. The loop closures obtained from both bathymetric and magnetic-beacon observations are fused to build a joint-factor graph. In the back-end, a diagnosis method is introduced to identify the potential false factors in the graph, thus improving the robustness of the joint SLAM system to outliers in the measurement data. Experiments based on field bathymetric datasets are performed to test the performance of the proposed approach. Compared with classic bathymetric SLAM algorithms, the proposed algorithm can improve the data-association accuracy by 50%, and the average positioning error after optimization converges to less than 10 m.
... Lastly, a separate map management routine is required to create landmarks that enter the sensor's FOV and delete landmarks that originate from false detections. Numerous works have demonstrated that such an approach works well in practice [9]- [11], but is sensitive to DA uncertainty [12]. To account for data ambiguities in a theoretically sound manner requires that DA uncertainty is treated as a part of the estimation process. ...
... In terms of implementation and representation, many SLAM algorithms take advantage of an important characteristic of the SLAM problem: by conditioning the map to the robot's trajectory, the landmarks are conditionally independent [19], making it natural to apply Rao-Blackwellized particle filter (RBPF) solutions. Such a factored solution was adopted in FastSLAM, which uses a particle filter (PF) to sample over robot paths and each particle possesses numerous low-dimensional extended Kalman filters (EKFs), one for each of the landmarks [11]. ...
... The filtering recursion of PHD-SLAM is similar to that in FastSLAM, but the use of RFS likelihoods affects how the particle weights are computed and, in addition, PHD filters are used for mapping [13]. The standard PHD-SLAM algorithm [13], similar to that of FastSLAM 1.0 [11], is a bootstrap filter in which the dynamic model is used as the importance density. If the robot motion is affected by large disturbances, a large number of particles are required to adequately explore the sample space, which can be computationally expensive. ...
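The degeneracy of bootstrap proposals noted in this excerpt is easy to quantify with the effective sample size (ESS): when the motion prior is much broader than the measurement likelihood, most proposed particles receive negligible weight. A small illustrative experiment (all distributions and parameters invented):

```python
import math
import random

random.seed(1)

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; N means no degeneracy."""
    return 1.0 / sum(w * w for w in weights)

def ess_for(prior_std, like_std, n=1000):
    # Bootstrap proposal: sample only from the (motion-model) prior.
    xs = [random.gauss(0.0, prior_std) for _ in range(n)]
    # Weight by a measurement likelihood centered at the true state 0.
    ws = [math.exp(-0.5 * (x / like_std) ** 2) for x in xs]
    total = sum(ws)
    return effective_sample_size([w / total for w in ws])

matched = ess_for(prior_std=0.1, like_std=0.1)     # proposal close to posterior
mismatched = ess_for(prior_std=2.0, like_std=0.1)  # large motion disturbance
print(int(matched), int(mismatched))
```

When the proposal roughly matches the posterior, most of the 1000 particles remain effective; under a large motion disturbance only a small fraction do, hence the need for either many particles or a better proposal.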
Article
Full-text available
One of the most fundamental problems in simultaneous localization and mapping (SLAM) is the ability to take into account data association (DA) uncertainties. In this paper, this problem is addressed by proposing a multi-hypotheses sampling distribution for particle filtering-based SLAM algorithms. By modeling the measurements and landmarks as random finite sets, an importance density approximation that incorporates DA uncertainties is derived. Then, a tractable Gaussian mixture model approximation of the multi-hypotheses importance density is proposed in which each mixture component represents a different DA. Finally, an iterative method for approximating the mixture components of the sampling distribution is utilized and a partitioned update strategy is developed. Using synthetic and experimental data, it is demonstrated that the proposed importance density improves the accuracy and robustness of landmark-based SLAM in cluttered scenarios over state-of-the-art methods. At the same time, the partitioned update strategy makes it possible to include multiple DA hypotheses in the importance density approximation, leading to a favorable linear complexity scaling, in terms of the number of landmarks in the field-of-view.
... It also restricts the use of SLAM algorithms to problems with vague landmarks, resulting in a data association problem [2,3]. In contrast, FastSLAM has a parallelized structure that enables it to achieve the performance needed for large map calculations in real-time applications [4,5]. ...
... There exist several variants of the FastSLAM algorithm. The most common ones are FastSLAM 1.0 [4], FastSLAM 2.0 [5], and Unscented FastSLAM (U-FastSLAM) [6]. The FastSLAM 1.0 algorithm estimates the vehicle pose using the generic particle filter (PF) [4], where each particle is coupled with a set of independent extended Kalman filters that are used to determine the position of each feature on the map. ...
... The most common ones are FastSLAM 1.0 [4], FastSLAM 2.0 [5], and Unscented FastSLAM (U-FastSLAM) [6]. The FastSLAM 1.0 algorithm estimates the vehicle pose using the generic particle filter (PF) [4], where each particle is coupled with a set of independent extended Kalman filters that are used to determine the position of each feature on the map. In FastSLAM 2.0, modifications have been made to FastSLAM 1.0 in the selection of proposal distributions and the computation of importance weights. ...
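The FastSLAM 2.0 modification mentioned above can be made concrete for a linear-Gaussian toy model: the pose proposal is the motion-model prior conditioned on the current landmark observation, which shifts and sharpens the Gaussian from which poses are sampled. A 1-D sketch with invented numbers, assuming the observation model z = landmark − pose + noise:

```python
def fastslam2_proposal(x_pred, P, mu, sig2, z, R):
    """Condition the motion-model prior N(x_pred, P) on a landmark
    observation z = mu - x + noise (landmark variance sig2, noise variance R).
    Returns the mean and variance of the improved Gaussian pose proposal."""
    z_pred = mu - x_pred      # predicted observation
    S = P + sig2 + R          # innovation covariance (H = -1)
    K = -P / S                # Kalman gain onto the pose
    mean = x_pred + K * (z - z_pred)
    var = (1.0 - K * (-1.0)) * P   # = (1 - P/S) * P
    return mean, var

# The motion model alone would propose N(1.0, 0.5); the observation pulls it.
mean, var = fastslam2_proposal(x_pred=1.0, P=0.5, mu=5.0, sig2=0.1, z=3.5, R=0.1)
print(round(mean, 3), round(var, 3))  # 1.357 0.143
```

The proposal variance (0.143 here) is much smaller than the motion model's 0.5, so samples land where the posterior actually has mass, which is what reduces the importance-weight variance relative to FastSLAM 1.0.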
... More specifically, when considering online recursive algorithms, the former approximation results in extended Kalman filter (EKF)-SLAM (Mourikis and Roumeliotis, 2007; Barrau and Bonnabel, 2015), while the latter results in particle filter-based SLAM. The most well-known particle filter SLAM algorithm is FastSLAM (Montemerlo et al., 2002). An extension of this algorithm is FastSLAM 2.0 (Montemerlo et al., 2003), which recognizes that the bootstrap particle filter implementation from Montemerlo et al. (2002) suffers from particle degeneracy, and hence it does not capture the full posterior distribution. ...
... The most well-known particle filter SLAM algorithm is FastSLAM (Montemerlo et al., 2002). An extension of this algorithm is FastSLAM 2.0 (Montemerlo et al., 2003), which recognizes that the bootstrap particle filter implementation from Montemerlo et al. (2002) suffers from particle degeneracy, and hence it does not capture the full posterior distribution. In this work, we will instead overcome this limitation using particle smoothing. ...
Article
Full-text available
Simultaneous localization and mapping (SLAM) is the task of building a map representation of an unknown environment while at the same time using it for positioning. A probabilistic interpretation of the SLAM task allows for incorporating prior knowledge and for operation under uncertainty. Contrary to the common practice of computing point estimates of the system states, we capture the full posterior density through approximate Bayesian inference. This dynamic learning task falls under state estimation, where the state-of-the-art is in sequential Monte Carlo methods that tackle the forward filtering problem. In this paper, we introduce a framework for probabilistic SLAM using particle smoothing that not only incorporates observed data into current state estimates, but also backtracks the updated knowledge to correct for past drift and ambiguities in both the map and the states. Our solution can efficiently handle both dense and sparse map representations by Rao-Blackwellization of conditionally linear and conditionally linearized models. We show through simulations and real-world experiments how the principles apply to radio (Bluetooth low-energy/Wi-Fi), magnetic field, and visual SLAM. The proposed solution is general, efficient, and works well under confounding noise.
... Particle filters, while being a powerful tool for localization, suffer from degenerate proposal distributions in high-dimensional state spaces [Thrun, 2002], making it infeasible to estimate both the robot and map state by sampling alone. To overcome this problem, [Montemerlo et al., 2002] proposed a factored approach to SLAM which allowed the usage of a Rao-Blackwellized particle filter (RBPF). This was improved upon by [Montemerlo et al., 2003], applying a correction to the proposal distribution by taking into account the observations as well as the control. ...
... The results show that FastSLAM 2.0 produces estimates of very high confidence, which can easily be attributed to the fact that it improves the proposal distribution by incorporating landmark observations into its PDF [Montemerlo et al., 2002, 2003]. Regular FastSLAM, on the other hand, suffers from large drops in effective weights, with resampling being only a small and ultimately insufficient measure to mitigate this issue. ...
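The "effective weights" discussed in this excerpt are commonly monitored via the effective sample size, and a low-variance resampler is applied when they collapse. A minimal systematic-resampling sketch (the function name and the example weight vector are invented):

```python
import random

random.seed(2)

def systematic_resample(weights):
    """Return indices drawn by systematic resampling: a single uniform draw,
    then evenly spaced pointers swept over the cumulative weights."""
    n = len(weights)
    positions = [(random.random() + i) / n for i in range(n)]
    indices, cumulative, j = [], weights[0], 0
    for pos in positions:
        # Advance to the particle whose cumulative weight covers this pointer.
        while j < n - 1 and pos > cumulative:
            j += 1
            cumulative += weights[j]
        indices.append(j)
    return indices

# A degenerate weight vector: one particle carries almost all the mass.
w = [0.955] + [0.005] * 9
idx = systematic_resample(w)
print(idx.count(0))  # the dominant particle is replicated ~9-10 times
```

A single uniform draw spread over evenly spaced pointers gives lower resampling variance than drawing each index independently, which matters precisely in the degenerate-weight regime described above.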
Conference Paper
Full-text available
The application of Bayesian filtering to localization problems in mobile robotics is a common approach. Particle filters are used to obtain a more accurate representation of the uncertainties in highly nonlinear systems. These filters rely on models of the kinematic and sensory behavior of the system, which require assumptions about the real-world dynamics. Such restrictions may lead to unbounded motion drift and hence render their applications unusable in real-world scenarios. Machine learning-based models were recently introduced to overcome these problems. However, these approaches incur a high computational cost, which is not permissible on mobile robot platforms. A parameter-free method of learning nonlinear motion behavior and its approximated counterpart are employed in a Bayesian filtering scheme to tackle the problem of Simultaneous Localization and Mapping. The proposed technique is evaluated against a simulated trajectory and compared to ground-truth data. The ability of Gaussian Processes to learn the vehicle kinematics is investigated in detail. Results show that while regression results can be sufficient for relative motion estimation, subsequent corrections are needed to mitigate long-term drift, and careful selection of environmental features is needed to avoid total divergence of the filter.
... Sensing facilitates locomotor planning and control in mobile robots and animals (Fig. 1A, B). For example, sensors that enable machine vision (e.g., cameras, radars, LiDAR (1)) provide a geometric map of the environment (2)(3)(4)(5)(6). Based on this map, the robot can construct a navigation function (7,8) (or an artificial potential field (9)), with high-potential regions representing obstacles and a low-potential minimum representing its goal, and plan and follow a gradient descent path to reach the goal while avoiding sparse obstacles (10)(11)(12)(13). ...
... (3) The landscape gradient along the α direction was initially negative when the robot's state was in the roll basin to the +α side of the barrier (Fig. 2B, red) at a large roll angle. (4,5) The landscape gradients along the β direction were initially negative when the robot's state was to the −β side of the local pitch or roll minimum. ...
... In 1999, Gutmann [13] formally proposed the graph optimization framework. In the single-sensor period, 2D laser SLAM methods emerged, such as Fast SLAM [14], Karto SLAM [15], Gmapping [16], and other classic models. In the 1990s, visual SLAM began to emerge, and feature methods such as SUSAN [17] and SIFT (scale-invariant feature transform) [18] were proposed. ...
... In 1997, Lu et al. [12] introduced graph optimization into 2D SLAM for the first time, where graph optimization opened a new research environment for laser SLAM methods, such as Karto SLAM [15] and Lago SLAM [127]. In 2002, Montemerlo et al. first applied PF to laser SLAM, resulting in the Fast SLAM model [14], which provides a very fast 2D laser SLAM. In particular, Fast SLAM provides a fast-matching method that can output a raster map in real-time; however, it will consume more memory in a large-scale environment. ...
Article
Full-text available
In the face of large-scale environmental mapping requirements, through the use of lightweight and inexpensive robot groups to perceive the environment, the multi-robot cooperative (V)SLAM scheme can resolve the individual cost, global error accumulation, computational load, and risk concentration problems faced by single-robot SLAM schemes. Such schemes are robust and stable, form a current research hotspot, and relevant algorithms are being updated rapidly. In order to enable the reader to understand the development of this field rapidly and fully, this paper provides a comprehensive review. First, the development history of multi-robot collaborative SLAM is reviewed. Second, the fusion algorithms and architectures are detailed. Third, from the perspective of machine learning classification, the existing algorithms in this field are discussed, including the latest updates. All of this will make it easier for readers to discover problems that need to be studied further. Finally, future research prospects are listed.
... More specifically, when considering online recursive algorithms, the former approximation results in EKF-SLAM (Barrau and Bonnabel, 2015; Mourikis and Roumeliotis, 2007), while the latter results in particle filter-based SLAM. The most well-known particle filter SLAM algorithm is FastSLAM (Montemerlo et al., 2002). An extension of this algorithm is FastSLAM 2.0 (Montemerlo et al., 2003), which recognizes that the bootstrap particle filter implementation from Montemerlo et al. (2002) suffers from particle degeneracy, and hence it does not capture the full posterior distribution. ...
... The most well-known particle filter SLAM algorithm is FastSLAM (Montemerlo et al., 2002). An extension of this algorithm is FastSLAM 2.0 (Montemerlo et al., 2003), which recognizes that the bootstrap particle filter implementation from Montemerlo et al. (2002) suffers from particle degeneracy, and hence it does not capture the full posterior distribution. In this work, we will instead overcome this limitation using particle smoothing. ...
Preprint
Simultaneous localization and mapping (SLAM) is the task of building a map representation of an unknown environment while at the same time using it for positioning. A probabilistic interpretation of the SLAM task allows for incorporating prior knowledge and for operation under uncertainty. Contrary to the common practice of computing point estimates of the system states, we capture the full posterior density through approximate Bayesian inference. This dynamic learning task falls under state estimation, where the state-of-the-art is in sequential Monte Carlo methods that tackle the forward filtering problem. In this paper, we introduce a framework for probabilistic SLAM using particle smoothing that not only incorporates observed data into current state estimates, but also back-tracks the updated knowledge to correct for past drift and ambiguities in both the map and the states. Our solution can efficiently handle both dense and sparse map representations by Rao-Blackwellization of conditionally linear and conditionally linearized models. We show through simulations and real-world experiments how the principles apply to radio (BLE/Wi-Fi), magnetic field, and visual SLAM. The proposed solution is general, efficient, and works well under confounding noise.
... The finding that fusion of RGB video and IMU data is advantageous to single-modality approaches is consistent with findings from other disciplines, despite the lack of exploration in biomechanics. State estimation and simultaneous localization and mapping (SLAM) in autonomous robot navigation is commonly achieved by fusing IMU and video data with extended Kalman filters [108] and modified particle filters [109] . Currently, this fusion method provides the most viable alternative to GPS and lidar-based odometry in aerial navigation [110] . ...
Thesis
Full-text available
Biomechanical analysis and musculoskeletal simulation techniques have been developed in laboratory settings to help us understand the body and the mechanisms of motion with enough precision to evaluate health and suggest rehabilitative treatments that aim to improve musculoskeletal function. To achieve this same result outside of a laboratory, portable sensors must be made available that can monitor the motion of the body (kinematics), the contact interactions between the body and the surrounding environment (kinetics), and the forces generated by the body that enable locomotion (muscle dynamics). While many portable sensing options have been developed in recent years, the accuracy of portable biomechanics monitoring techniques has yet to match the accuracy of traditional laboratory-based tools and remains insufficient for many rehabilitative applications. One cause for insufficient accuracy is that portable sensing techniques are often explored in isolation and unable to overcome their unique limitations. Another cause is that many possible alternative portable sensing approaches developed outside of the biomechanics field have yet to be fully investigated for their potential to serve as biomechanics monitoring tools in combination with existing portable techniques. Here, I show how developments in inertial sensing and computer vision techniques can be intelligently synthesized using either kinematics equations of motion or rigid body dynamics equations of motion to enable more accurate portable predictions of body kinematics than approaches which utilize only inertial sensors or computer vision (Chapter 2). I also show how nuance exists in the choice of fusion approach depending on the quality of inertial sensing data and computer vision estimates.
A prominent trade-off exists when adding rigid body dynamics into the synthesis paradigm, and adding dynamics is helpful so long as the dynamics equations provide more accurate estimates of angular velocities and accelerations than inertial sensing data, which is likely to occur during real-world applications due to soft-tissue motion artifacts. Next, I show how capacitive sensing, a sensing technique that has been understudied in biomechanics, can be adapted for use as a customizable, comfortable, lightweight, and sensitive biomechanics monitoring wearable sensor that enables muscle-activity measurements with the fidelity of gold-standard laboratory-based techniques (Chapter 3). Capacitive sensing muscle-activity measurements can then be synthesized with inertial sensors to enable full-body kinematics, kinetics, and muscle dynamics predictions with comparable accuracy to that of marker-based motion capture. Altogether, these findings show the importance of extensively validating and incorporating new sensing approaches into biomechanics monitoring tools that seamlessly integrate with other sensors to cover their weaknesses. I suggest that future biomechanics monitoring emphasize more nuanced applications, where multiple sensing modalities are fused intelligently and optimized for specific applications to maximize monitoring accuracy and intervention efficacy in each local domain, rather than sacrificing local accuracy to reach for a one-size-fits-all solution. In the future, I envision the development of a list of locally optimized biomechanics monitoring best practices, where specific sensor combinations with precise placements and parameterized computational algorithms are tuned to maximize monitoring accuracy for use on specific clinical populations and pathologies. 
I believe this more nuanced approach to biomechanics monitoring will enable the development of the next generation of rehabilitative strategies to improve and sustain more widespread musculoskeletal well-being.
... Over the years, various SLAM solutions have been developed, each employing distinct methodologies and technological advancements to address the challenges of mapping and localization. Early approaches [9][10][11] relied heavily on expensive and high-precision sensors, like lasers, to achieve detailed environmental mapping and accurate localization. These systems, while effective, were often limited by their high costs and substantial computational requirements, which deviated from the original intention of SLAM technology. ...
Article
Full-text available
Advancements in robotics and mapping technology have spotlighted the development of Simultaneous Localization and Mapping (SLAM) systems as a key research area. However, the high cost of advanced SLAM systems poses a significant barrier to research and development in the field, while many low-cost SLAM systems, operating under resource constraints, fail to achieve high-precision real-time mapping and localization, rendering them unsuitable for practical applications. This paper introduces a cost-effective SLAM system design that maintains high performance while significantly reducing costs. Our approach utilizes economical components and efficient algorithms, addressing the high-cost barrier in the field. First, we developed a robust robotic platform based on a traditional four-wheeled vehicle structure, enhancing flexibility and load capacity. Then, we adapted the SLAM algorithm using the LiDAR-inertial Odometry framework coupled with the Fast Iterative Closest Point (ICP) algorithm to balance accuracy and real-time performance. Finally, we integrated the 3D multi-goal Rapidly exploring Random Tree (RRT) algorithm with Nonlinear Model Predictive Control (NMPC) for autonomous exploration in complex environments. Comprehensive experimental results confirm the system’s capability for real-time, autonomous navigation and mapping in intricate indoor settings, rivaling more expensive SLAM systems in accuracy and efficiency at a lower cost. Our research results are published as open access, facilitating greater accessibility and collaboration.
... However, these approaches often become computationally impractical in high-dimensional settings, such as those encountered in SLAM. FastSLAM [20] introduced a method that combines the strengths of particle filters and parametric techniques to address high-dimensional settings effectively. Nevertheless, it has limitations. ...
Preprint
Full-text available
We introduce an innovative method for incremental nonparametric probabilistic inference in high-dimensional state spaces. Our approach leverages \slices from high-dimensional surfaces to efficiently approximate posterior distributions of any shape. Unlike many existing graph-based methods, our \slices perspective eliminates the need for additional intermediate reconstructions, maintaining a more accurate representation of posterior distributions. Additionally, we propose a novel heuristic to balance between accuracy and efficiency, enabling real-time operation in nonparametric scenarios. In empirical evaluations on synthetic and real-world datasets, our \slices approach consistently outperforms other state-of-the-art methods. It demonstrates superior accuracy and achieves a significant reduction in computational complexity, often by an order of magnitude.
... In contrast to the Gaussian methods, RBPF SLAM can naturally estimate both continuous and discrete map representations [34,35]. Additionally, the posterior density of the robot's trajectory is estimated with a set of particles, so it is possible to hold multiple hypotheses about the state, a trait that is especially valuable in dynamic environments such as construction sites. ...
Article
Full-text available
Today, automated techniques for the update of as-built Building Information Models (BIM) make use of offline algorithms restricting the update frequency to an extent where continuous monitoring becomes nearly impossible. To address this problem, we propose a new method for robotic monitoring that updates an as-built BIM in real-time by solving a Simultaneous Localization and Mapping (SLAM) problem where the map is represented as a collection of elements from the as-planned BIM. The suggested approach is based on the Rao-Blackwellized Particle Filter (RBPF) which enables explicit injection of prior knowledge from the building’s construction schedule, i.e., from a 4D BIM, or its elements’ spatial relations. In the methods section we describe the benefits of using an exact inverse sensor model that provides a measure for the existence probability of elements while considering the entire probabilistic existence belief map. We continue by outlining robustification techniques that include both geometrical and temporal dimensions and present how we account for common pose and shape mistakes in constructed elements. Additionally, we show that our method reduces to the standard Monte Carlo Localization (MCL) in known areas. We conclude by presenting simulation results of the proposed method and comparing it to adjacent alternatives.
... There is a plethora of algorithms, such as the extended Kalman filter (EKF), the unscented Kalman filter, and the particle filter, for solving non-linear filtering and smoothing problems; each algorithm has its pros and cons in terms of estimation performance and computational complexity. Several recursive and batch algorithms specially tailored to utilize the mathematical structure of the SLAM problem have also been developed, see, e.g., the FastSLAM algorithm in [33], the Expectation-Maximization algorithm in [34], and the graph-based SLAM algorithms in [35]. Here, the EKF SLAM algorithm will be used due to its simplicity, while more advanced SLAM algorithms may be better suited in cases where the state space model contains highly nonlinear components. ...
Article
A signal-of-opportunity-based method to automatically calibrate the orientations and shapes of a set of hydrophone arrays using the sound emitted from nearby ships is presented. The calibration problem is formulated as a simultaneous localization and mapping problem, where the locations, orientations, and shapes of the arrays are viewed as the unknown map states, and the position, velocity, etc., of the source as the unknown dynamic states. A sequential likelihood ratio test, together with a maximum a posteriori source location estimator, is used to automatically detect suitable sources and initialize the calibration procedure. The performance of the proposed method is evaluated using data from two 56-element hydrophone arrays. Results from two sea trials indicate that: 1) signal sources suitable for the calibration can be automatically detected; 2) the shapes and orientations of the arrays can be consistently estimated from the different data sets with shape variations of a few decimeters and orientation variations of less than 2°; and 3) the uncertainty bounds calculated by the calibration method are in agreement with the true calibration uncertainties. Furthermore, the bearing time record from a sea trial with an autonomous mobile underwater signal source also shows the efficacy of the proposed calibration method. In the studied scenario, the root-mean-square bearing tracking error was reduced from 4° to 1° when using the calibrated array shapes compared to assuming the arrays to be straight lines. Also, the beamforming gain increased by approximately 1 dB.
... Moreover, excellent schemes for 2D LiDAR SLAM have been proposed in recent years. The classic filter-based algorithms include Fast SLAM [46] and Gmapping [47], and graph-based algorithms include Karto SLAM [48] and Cartographer [49]. Cartographer, developed by Google engineers, has been proven to be a complete SLAM system that integrates localization, mapping, and loop-closure detection. ...
... With the rapid development of machine learning and computer science, autonomous navigation technology has been widely applied in technological and industrial fields, such as autonomous driving. Simultaneous localization and mapping (SLAM) [1] technology can realize autonomous navigation in unknown environments. Although existing visual SLAM methods are applied in realistic scenarios, previous studies often treated the external environment as static, thus ignoring the influence of dynamic objects in real environments on the accuracy of the SLAM algorithm, which eventually leads to inaccurate positioning information and serious deviations in the construction of an environmental map. ...
Article
Full-text available
Simultaneous localization and mapping (SLAM) is the technological basis of environmental sensing, and it has been widely applied to autonomous navigation. In combination with deep learning methods, dynamic SLAM algorithms have emerged to provide a certain stability and accuracy in dynamic scenes. However, the robustness and accuracy of existing dynamic SLAM algorithms are relatively low in dynamic scenes, and their performance is affected by potential dynamic objects and fast-moving dynamic objects. To solve the positioning interference caused by these dynamic objects, this study proposes a geometric constraint algorithm that utilizes a bidirectional scoring strategy for the estimation of a transformation matrix. First, a geometric constraint function is defined according to the Euclidean distance between corresponding feature points and the average distance of the corresponding edges. This function serves as the basis for determining abnormal scores for feature points. By utilizing these abnormal score values, the system can identify and eliminate highly dynamic feature points. Then, a transformation matrix estimation based on the filtered feature points is adopted to remove more outliers, and a function for evaluating the similarity of key points in two images is optimized during this process. Experiments were performed based on the TUM dynamic target dataset and Bonn RGB-D dynamic dataset, and the results showed that the added dynamic detection method effectively improved the performance compared with the state of the art in highly dynamic scenarios.
... Most state-of-the-art methods for constructing maps of unknown environments without localization fall under the SLAM literature: they require precise metric information (such as range or bearing measurements), rely on relatively precise odometry measurements, and, in order to build a complete map, require extensive post-processing to correct accumulated errors [1,7,10,13,18,37]. Such methods usually construct a grid-based coordinate representation, making the amount of information that needs to be shared between robots extremely large and a precise transformation between the different robots' grid maps difficult to compute [26,47]. ...
Article
Full-text available
In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a team of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard senors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: (1) During the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment. To that end we develop a filtration-based approach for designing strategic placement of landmarks in an environment. (2) We develop a distributed algorithm which a team of robots, with no a priori knowledge of the environment, can use to explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a Landmark Complex), instead of constructing a complete metric/pixel map. The representation is built by the robot as well as used by them for navigation through a balanced strategy involving exploration and exploitation. We use tools from homology theory for identifying “holes” in the coverage/exploration of the unknown environment and hence guide the robots towards achieving a complete exploration and mapping of the environment. 
Our simulation results demonstrate the effectiveness of the proposed metric-free topological (simplicial complex) representation in achieving exploration, localization and navigation within the environment.
... FBA (e.g. Bayesian filtering) infer the most likely state from available measurements and uncertainties [2][3][4]. SMM are registered directly by converting the original point cloud to maximize the map probability [5,6]. ...
Article
Full-text available
Simultaneous localization and mapping (SLAM) is a critical technology in the field of robotics. Over the past decades, numerous SLAM algorithms based on 2D LiDAR have been proposed. In general, these algorithms achieve good results in indoor environments. However, for geometrically degenerated environments such as long hallways, robust localization of robots remains a challenging problem. In this study, we focus on the challenges faced by LiDAR SLAM in such conditions. We propose 2D LiDAR SLAM algorithm Lmapping that employs an IMU-centric data-processing pipeline. In the front-end, a point cloud is directly registered to a probabilistic map; environment recognition is accomplished using a new method that relies on LiDAR measurements. And this method is more suitable for front-end matching based on grid maps. LiDAR odometry and IMU pre-integration are then integrated to build a local factor graph in the sliding window of the submap. When the environment is degraded, an Error-State Kalman Filter (ESKF) is added as a constraint to correct the IMU bias. In the back-end, through mutual matching within and between submaps, and loop detection, accumulated errors from the front-end are reduced. To improve flexibility for different sensor combinations, Lmapping supports multiple LiDAR inputs and facilitates initialization with a common six-axis IMU. Extensive experiments have shown that Lmapping greatly outperforms the current mainstream 2D-SLAM algorithm (Cartographer) in terms of the mapping effect and localization accuracy, with high efficiency in degraded environments.
... Over the years, several commonly used methods have been proposed, starting from the 1990s until the present day. These include EKF-SLAM [3], Fast-SLAM [4], GraphSLAM [5], and others, which have all contributed significantly to the field of SLAM. ...
Article
Full-text available
Autonomous robots heavily rely on simultaneous localization and mapping (SLAM) techniques and sensor data to create accurate maps of their surroundings. When multiple robots are employed to expedite exploration, the resulting maps often have varying coordinates and scales. To achieve a comprehensive global view, the utilization of map merging techniques becomes necessary. Previous studies have typically depended on extracting image features from maps to establish connections. However, it is important to note that maps of the same location can exhibit inconsistencies due to sensing errors. Additionally, robot-generated maps are commonly represented in an occupancy grid format, which limits the availability of features for extraction and matching. Therefore, feature extraction and matching play crucial roles in map merging, particularly when dealing with uncertain sensing data. In this study, we introduce a novel method that addresses image noise resulting from sensing errors and applies additional corrections before performing feature extraction. This approach allows for the collection of features from corresponding locations in different maps, facilitating the establishment of connections between different coordinate systems and enabling effective map merging. Evaluation results demonstrate the significant reduction of sensing errors during the image stitching process, thanks to the proposed image pre-processing technique.
... One of the methods of applying a particle filter to the SLAM problem is known as FastSLAM. FastSLAM breaks the SLAM problem down into a robot localization problem and a set of landmark estimation problems conditioned on the robot state estimate [7]. Advanced versions of FastSLAM have since been offered, e.g. by Lei et al. [8], but all of them rest on the same basic principle: as reported by Murphy [9], this representation is exact owing to the natural conditional independence structure of the SLAM problem. ...
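The decomposition described in this excerpt has a direct data-structure reading: each particle carries one robot pose hypothesis plus an independent small Gaussian per landmark, instead of one large joint covariance. A minimal sketch (class and field names are invented for illustration):

```python
import numpy as np

class Particle:
    """One FastSLAM particle: a pose hypothesis plus one small filter per landmark."""
    def __init__(self, pose):
        self.pose = np.asarray(pose, dtype=float)  # (x, y, heading)
        self.landmarks = {}                        # id -> (mean 2-vector, 2x2 cov)
        self.weight = 1.0

    def init_landmark(self, lm_id, mean, cov):
        self.landmarks[lm_id] = (np.asarray(mean, float), np.asarray(cov, float))

# Because landmarks are conditionally independent given the path, a particle
# stores K separate 2x2 covariances rather than the (2K+3)x(2K+3) joint
# covariance maintained by EKF-SLAM.
p = Particle(pose=[0.0, 0.0, 0.0])
p.init_landmark(0, mean=[3.0, 4.0], cov=np.eye(2) * 0.5)
```

This per-particle independence is what allows the logarithmic update time reported for FastSLAM when the landmark set is organized in a tree.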
... ProbZelus combines, in a single source program, deterministic controllers and probabilistic models that can interact with each other to perform inference-in-the-loop. A classic example is the Simultaneous Localization and Mapping problem (SLAM) [Montemerlo et al. 2002] where an autonomous agent tries to infer both its position and a map of its environment to adapt its trajectory. ...
Preprint
Full-text available
Synchronous languages are now a standard industry tool for critical embedded systems. Designers write high-level specifications by composing streams of values using block diagrams. These languages have been extended with Bayesian reasoning to program state-space models which compute a stream of distributions given a stream of observations. However, the semantics of probabilistic models is only defined for scheduled equations -- a significant limitation compared to dataflow synchronous languages and block diagrams which do not require any ordering. In this paper we propose two schedule agnostic semantics for a probabilistic synchronous language. The key idea is to interpret probabilistic expressions as a stream of un-normalized density functions which maps random variable values to a result and positive score. The co-iterative semantics interprets programs as state machines and equations are computed using a fixpoint operator. The relational semantics directly manipulates streams and is thus a better fit to reason about program equivalence. We use the relational semantics to prove the correctness of a program transformation required to run an optimized inference algorithm for state-space models with constant parameters.
... Reasoning about uncertainties and the use of probabilistic representations, as opposed to relying on a single, most-likely estimate, have been central to many domains of robotics research, even before the advent of deep learning (Thrun 2002). In robot perception, several uncertainty-aware methods have been proposed in the past, starting from localization methods (Fox 1998; Fox et al. 2000; Thrun et al. 2001; Montemerlo et al. 2002; Kaess et al. 2010). As a result, many probabilistic methods such as factor graphs (Dellaert et al. 2017; Loeliger 2004) are now the work-horse of advanced consumer products such as robotic vacuum cleaners and unmanned aerial vehicles. ...
Article
Full-text available
Over the last decade, neural networks have reached almost every field of science and become a crucial part of various real world applications. Due to the increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over- or under-confidence, i.e. are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network’s prediction. As a result, different types and sources of uncertainty have been identified and various approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. For that, a comprehensive introduction to the most crucial sources of uncertainty is given and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks (BNNs), ensemble of neural networks, and test-time data augmentation approaches is introduced and different branches of these fields as well as the latest developments are discussed. For a practical application, we discuss different measures of uncertainty, approaches for calibrating neural networks, and give an overview of existing baselines and available implementations. Different examples from the wide spectrum of challenges in the fields of medical image analysis, robotics, and earth observation give an idea of the needs and challenges regarding uncertainties in the practical applications of neural networks. 
Additionally, the practical limitations of uncertainty quantification methods in neural networks for mission- and safety-critical real world applications are discussed and an outlook on the next steps towards a broader usage of such methods is given.
... Montemerlo et al. proposed FastSLAM [7], which combines the characteristics of the Kalman filter and the particle filter: each particle carries both a robot trajectory and a corresponding environment map, so the robot's pose and the map are estimated at the same time. The map can be output in real time, but the approach suffers from particle degradation. ...
Article
Full-text available
With the rapid development of robotics, intelligent robots have gradually entered human daily life and take responsibility for specific tasks, which brings tremendous convenience to people’s lives. In indoor places such as restaurants, hotels, logistics warehouses, SLAM technology can guide mobile robots to complete delivery tasks. Aiming at solving the problems of particle degradation in the traditional RBPF-based two-dimensional SLAM algorithm, our algorithm firstly proposes an improved particle filter method based on cross-mutation, which changes the information of the small weight particles in the system by designing cross-mutation operators. Thus, a certain amount of small weight particles are retained in the system to increase the diversity of particles. Secondly, our algorithm proposes a scan matching method based on point and line features, which makes full use of the geometric information of the two-dimensional point cloud by designing a new error function. Compared with traditional matching methods that only rely on point features, it has better matching accuracy. We conduct Two-dimensional SLAM experiments in the Gazebo simulation environment and the real natural environment. The results show that the proposed algorithm has higher mapping accuracy than the commonly used two-dimensional SLAM algorithm Gmapping at this stage. Using improved A* algorithm to conduct path planning experiments on the two-dimensional grid map built with our algorithm, the results prove the effectiveness of our algorithm.
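The particle degradation discussed above is commonly monitored through the effective sample size, with resampling triggered when it drops too low. A hedged sketch of this standard remedy (systematic resampling, not the cross-mutation operator this paper proposes):

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; equals N when uniform."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def systematic_resample(weights, rng=np.random.default_rng(0)):
    """Return indices of resampled particles (low-variance systematic scheme)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n  # one stratified draw per slot
    return np.searchsorted(np.cumsum(w), positions)

weights = [0.7, 0.1, 0.1, 0.1]          # one dominant particle: ESS ~ 1.92
if effective_sample_size(weights) < 0.5 * len(weights):
    idx = systematic_resample(weights)  # heavy particles duplicated, light ones dropped
```

Resampling restores weight balance but collapses diversity, which is exactly the problem the cross-mutation strategy in the abstract aims to mitigate.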
... The development and implementation of SLAM algorithms for mobile robots has garnered significant attention in academic and engineering communities. Approaches generally involve either recursive Bayesian estimation, via various kinds of Kalman filters (Smith et al., 1990; Brossard et al., 2018), particle filters (Montemerlo et al., 2002; Sim et al., 2005), or occupancy grid methods (Stachniss et al., 2004), or graph optimization (Thrun and Montemerlo, 2006; Sünderhauf and Protzel, 2012). In recent years, researchers have focused on incorporating semantic information into SLAM systems, using deep artificial neural networks, particularly convolutional or recurrent neural networks, for object detection and semantic segmentation. ...
Article
Full-text available
To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues, sensory, and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM—a spiking neural network designed using tools for large scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells are observed by this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate feasibility for a full neuromorphic implementation for energy efficient SLAM.
... With the advance of scientific research, FastSLAM has gradually become the mainstream algorithm for solving the SLAM problem [22,23]. Its state estimation is based on Monte Carlo sampling and the Rao-Blackwellized particle filter (RBPF) [24]. ...
Article
Full-text available
Simultaneous localization and mapping (SLAM) is crucial and challenging for autonomous underwater vehicle (AUV) autonomous navigation in complex and uncertain ocean environments. However, inaccurate time-varying observation noise parameters may lead to filtering divergence and poor mapping accuracy. In addition, particles are easily trapped in local extrema during the resampling, which may lead to inaccurate state estimation. In this paper, we propose an innovative simulated annealing particle swarm optimization-adaptive unscented FastSLAM (SAPSO-AUFastSLAM) algorithm. To cope with the unknown observation noise, the maximum a posteriori probability estimation algorithm is introduced into SLAM to recursively correct the measurement noise. Firstly, the Sage–Husa (SH) based unscented particle filter (UPF) algorithm is proposed to estimate time-varying measurement noise adaptively in AUV path estimation for improving filtering accuracy. Secondly, the SH-based unscented Kalman filter (UKF) algorithm is proposed to enhance mapping accuracy in feature estimation. Thirdly, SAPSO-based resampling is proposed to optimize posterior particles. The random judgment mechanism is used to update feasible solutions iteratively, which makes particles disengage local extreme values and achieve optimal global effects. The effectiveness and accuracy of the proposed algorithm are evaluated through simulation and sea trial data. The average AUV navigation accuracy of the presented SAPSO-AUFastSLAM method is improved by 18.0% compared to FastSLAM, 6.5% compared to UFastSLAM, and 5.9% compared to PSO-UFastSLAM.
... Still, before directing the platform, the pose of the next station must be determined. In the last two decades, many SLAM approaches have used probabilistic methods to reduce the impact of inaccurate sensors on the map, using millimeter waves to build relative maps [120], integrating the particle filter and the extended Kalman filter [121], or introducing square root smoothing and mapping [122]. Another SLAM system is visual SLAM, which can produce a fast 3D reconstruction on the fly [123]. ...
Article
Full-text available
Unmanned aerial vehicles (UAVs) are widely used platforms to carry data capturing sensors for various applications. The reason for this success can be found in many aspects: the high maneuverability of the UAVs, the capability of performing autonomous data acquisition, flying at different heights, and the possibility to reach almost any vantage point. The selection of appropriate viewpoints and planning the optimum trajectories of UAVs is an emerging topic that aims at increasing the automation, efficiency and reliability of the data capturing process to achieve a dataset with desired quality. On the other hand, 3D reconstruction using the data captured by UAVs is also attracting attention in research and industry. This review paper investigates a wide range of model-free and model-based algorithms for viewpoints and path planning for 3D reconstruction of large-scale objects. It presents a bibliography of more than 200 references to cover different aspects of the topic. The analyzed approaches are limited to those that employ a single-UAV as a data capturing platform for outdoor 3D reconstruction purposes. In addition to discussing the evaluation strategies, this paper also highlights the innovations and limitations of the investigated approaches. It concludes with a critical analysis of the existing challenges and future research perspectives.
... Nevertheless, the above two visual SLAM systems are strongly affected by environmental conditions and are not suitable for field environments. Gmapping [18][19][20][21], Hector SLAM [22], and Cartographer [23] are well-known two-dimensional (2D) lidar SLAM algorithms, but they too are unsuitable for field environments because of the limited information provided by single-line lidar. LOAM [24] is a milestone among 3D lidar SLAM algorithms [25]. ...
Article
Full-text available
Quickly grasping the surrounding environment’s information and the location of the vehicle is the key to achieving automatic driving. However, accurate and robust localization and mapping are still challenging for field vehicles and robots due to the characteristics of emptiness, terrain changeability, and Global Navigation Satellite System (GNSS)-denied in complex field environments. In this study, an LVI-SAM-based lidar, inertial, and visual fusion using simultaneous localization and mapping (SLAM) algorithm was proposed to solve the problem of localization and mapping for vehicles in such open, bumpy, and Global Positioning System (GPS)-denied field environments. In this method, a joint lidar front end of pose estimation and correction was designed using the Super4PCS, Iterative Closest Point (ICP), and Normal Distributions Transform (NDT) algorithms and their variants. The algorithm can balance localization accuracy and real-time performance by carrying out lower-frequency pose correction based on higher-frequency pose estimation. Experimental results from the complex field environment show that, compared with LVI-SAM, the proposed method can reduce the translational error of localization by about 4.7% and create a three-dimensional point cloud map of the environment in real time, realizing the high-precision and high-robustness localization and mapping of the vehicle in complex field environments.
... Search and rescue robots can be broadly classified into two categories. One is a method in which the robot autonomously performs search and rescue [1][2][3][4][5][6], and the other is a method in which humans and robots cooperate to search for and rescue survivors [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23]. Of the two methods, the second is more effective at searching for and rescuing survivors. ...
Article
Full-text available
At a disaster site, unforeseen circumstances can severely limit the activities of rescue workers. The best solution is for a cooperative team of robots and rescue workers to complete the rescue work. Therefore, in this paper, we propose a simple and low-cost sensing and control architecture for a search and rescue assistant robot using a thermal infrared sensor array, an ultrasonic sensor, and a three-axis accelerometer. In the proposed architecture, we estimate the location of human survivors using a low-cost thermal IR sensor array and generate and control the trajectory of approaching the searched human survivors. Obstacle avoidance and control are also possible through 3D position estimation of obstacles using 1D ultrasonic sensor integration. In addition, a three-axis accelerometer is used to estimate the tilt angle of the robot according to terrain conditions, and horizontal control of the storage box angle is performed using this feature. A prototype robot was implemented to experimentally validate its performance and can be easily constructed from inexpensive, commonly available parts. The implementation of this system is simple and cost-effective, making it a viable solution for search and rescue operations. The experimental results have demonstrated the effectiveness of the proposed method, showing that it is capable of achieving a level storage box and identifying the location of survivors while moving on a sloped terrain.
... Online SLAM is a process which sequentially estimates the current position of the robot and updates the map of the environment visible from that position. Self-position estimation with Kalman filters [5], particle filters [9,15,18], and FastSLAM [21] has been proposed for online SLAM. These methods run in real time because the self-position estimate is updated sequentially. ...
... In SLAC, we now replace the unknown static landmarks' positions with the calibration parameters. Therefore, we can employ the idea of FastSLAM (Montemerlo et al., 2002). In FastSLAM, the joint posterior of the robot and landmark positions is decomposed into two parts and, then, estimated with a Rao-Blackwellized particle filter. ...
Article
Magnetic field localization is based on the fact that the Earth magnetic field is distorted in the vicinity of ferromagnetic objects. When ferromagnetic objects are at fixed positions, the distortions are also fixed and thus, contain location information. In our prior work, we proposed a simultaneous localization and calibration (SLAC) algorithm based on a Rao-Blackwellized particle filter that enables magnetic train localization using only uncalibrated magnetometer measurements. In this paper, a lower-complexity version of the SLAC algorithm is proposed that only estimates a subset of calibration parameters. An evaluation compares the full and reduced SLAC approach to a particle filter in which the magnetometer is pre-calibrated with a fixed set of parameters. The results show a clear advantage for both SLAC approaches and that the SLAC algorithm with a reduced set of calibration parameters achieves the same performance as the one with a full set of parameters.
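The FastSLAM factorization that the SLAC approach above reuses can be written out explicitly. Here $x_{1:t}$ is the robot path, $z_{1:t}$ and $u_{1:t}$ are the measurements and controls, and $m_k$ is the $k$-th landmark (replaced in SLAC by a calibration parameter):

```latex
p(x_{1:t}, m_{1:K} \mid z_{1:t}, u_{1:t})
  = p(x_{1:t} \mid z_{1:t}, u_{1:t})
    \prod_{k=1}^{K} p(m_k \mid x_{1:t}, z_{1:t})
```

The path factor is tracked with a particle filter, while each conditional factor is a low-dimensional distribution updated by its own filter per particle; this is what makes the Rao-Blackwellized decomposition tractable.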
... SLAM is formulated and solved in different ways, of which probabilistic SLAM [39] is generally considered the most widely accepted formulation. With various solutions available at the theoretical and conceptual level, it can by now be considered a solved problem, e.g., Rao-Blackwellized particle filter (RBPF) SLAM [40], extended Kalman filter (EKF) SLAM [41], and GraphSLAM [42]. However, significant challenges remain in the practical execution of large-scale problems. ...
Article
Full-text available
Learning how to navigate autonomously in an unknown indoor environment without colliding with static and dynamic obstacles is important for mobile robots. The conventional mobile robot navigation system does not have the ability to learn autonomously. Unlike conventional approaches, this paper proposes an end-to-end approach that uses deep reinforcement learning for autonomous mobile robot navigation in an unknown environment. Two types of deep Q-learning agents, such as deep Q-network and double deep Q-network agents are proposed to enable the mobile robot to autonomously learn about collision avoidance and navigation capabilities in an unknown environment. For autonomous mobile robot navigation in an unknown environment, the process of detecting the target object is first carried out using a deep neural network model, and then the process of navigation to the target object is followed using the deep Q-network or double deep Q-network algorithm. The simulation results show that the mobile robot can autonomously navigate, recognize, and reach the target object location in an unknown environment without colliding with static and dynamic obstacles. Similar results are obtained in real-world experiments, but only with static obstacles. The DDQN agent outperforms the DQN agent in reaching the target object location in the test simulation by 5.06%.
... The SLAM problem is split into filtering and smoothing techniques [48] and can be described by three main probabilistic formulations, based on the Kalman filter (EKF-SLAM [233]), the particle filter (FastSLAM [184], FastSLAM 2.0 [185,26]), and a graph (GraphSLAM [252,253]). Filter-based approaches are classified as online SLAM [252], since they estimate only the current camera pose and possibly parts of the map [83]. ...
Thesis
Full-text available
This work aims at providing a novel camera motion estimation pipeline from large collections of unordered omnidirectional images. In order to keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, allowing any central camera type to be incorporated. For each camera an unprojection lookup is generated from intrinsics, which is called P2S-map (Pixel-to-Sphere-map), mapping pixels to their corresponding positions on the unit sphere. Consequently the camera geometry becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with less distortion effects as they are known from cartography. Using P2S-maps from camera calibration and world map projection allows to convert omnidirectional camera images to an appropriate world map projection in order to apply standard feature extraction and matching algorithms for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion) - which handles unordered image collections - with the efficiency of PGO (Pose Graph Optimization), which is used as back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses from large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3d feature locations (structure), which becomes computationally expensive for large-scale scenarios. On the contrary PGO solves for camera poses (motion) from measured transformations between cameras, keeping optimization manageable. The proposed estimation algorithm combines both worlds. It obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from scaled two-view transformations and solved by PGO to obtain camera motion efficiently even for large image collections. 
Obtained results can be used as input data to provide initial pose estimates for further 3d reconstruction purposes e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA. The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors. The entire camera motion estimation pipeline does not need to generate a sparse 3d structure of the captured environment and thus is called SCME (Structureless Camera Motion Estimation).
... However, there are also some limitations in its application, such as quadratic complexity and the sensitivity of single-hypothesis data association. The FastSLAM algorithm based on the Rao-Blackwellized Particle Filter (RBPF) is another commonly used SLAM framework [15]. The RBPF decomposes the complete posterior distribution into robot localization and feature estimation, which reduces the complexity of the SLAM algorithm. ...
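The Rao-Blackwellized factorization described in this fragment can be sketched in a few lines: each particle carries one robot-path hypothesis plus an independent Gaussian (mean, variance) per landmark, conditioned on that path. The following is a minimal illustration only, simplified to a 1-D world with range-style measurements; the class and function names are hypothetical, not from the cited paper.

```python
import math

# Sketch of the Rao-Blackwellized factorization (hypothetical, 1-D world):
# each particle holds one robot-pose hypothesis plus an independent Gaussian
# (mean, variance) per landmark, conditioned on that pose hypothesis.

class Particle:
    def __init__(self, pose):
        self.pose = pose         # robot position hypothesis (1-D)
        self.landmarks = {}      # landmark id -> (mean, variance)
        self.weight = 1.0        # importance weight

def update_landmark(p, lm_id, measured_range, meas_var=0.25):
    """EKF-style update of one landmark, conditioned on this particle's pose."""
    z = p.pose + measured_range          # landmark position implied by reading
    if lm_id not in p.landmarks:
        p.landmarks[lm_id] = (z, meas_var)   # initialize on first sighting
        return
    mean, var = p.landmarks[lm_id]
    s = var + meas_var                   # innovation variance
    k = var / s                          # Kalman gain
    # importance weight: Gaussian likelihood of the innovation
    p.weight *= math.exp(-0.5 * (z - mean) ** 2 / s) / math.sqrt(2 * math.pi * s)
    p.landmarks[lm_id] = (mean + k * (z - mean), (1.0 - k) * var)
```

Because the landmark estimates are conditionally independent given the path, each particle's map is a collection of small filters rather than one large joint covariance, which is what removes the quadratic cost of EKF SLAM.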
Article
Full-text available
Simultaneous Localization and Mapping (SLAM) is a well-known solution for mapping and realizing autonomous navigation of an Autonomous Underwater Vehicle (AUV) in unknown underwater environments. However, inaccurate time-varying observation noise will cause filter divergence and reduce the accuracy of localization and feature estimation. In this paper, VB-AUFastSLAM, based on unscented-FastSLAM (UFastSLAM) and Variational Bayesian (VB) learning, is proposed. UFastSLAM combines the unscented particle filter (UPF) and unscented Kalman filter (UKF) to estimate the AUV poses and features. In addition, to resist unknown time-varying observation noise, Variational Bayesian learning is introduced into the SLAM framework. Firstly, the VB method is used to estimate the joint posterior probability of the AUV path and observation noise. The Inverse-Gamma distribution is used to model the observation noise, and real-time noise parameter estimation is performed to improve the AUV localization accuracy. Secondly, VB is reused to estimate the noise parameters in the feature update stage to enhance the performance of the feature estimation. The proposed algorithms are first validated in an open-source simulation environment. Then, an AUV SLAM system based on an Inertial Navigation System (INS), Doppler Velocity Log (DVL), and single-beam sonar is built to verify the effectiveness of the proposed algorithms in the marine environment. The accuracy of the proposed methods reaches 0.742% and 0.776% of the range, respectively, which is much better than the 1.825% and 1.397% achieved by the traditional methods.
... Predictive technology is a very active area of research and it's already implemented in many research programs, e.g., in robot navigation (Montemerlo, Thrun, Koller, & Wegbreit, 2002), in self-driving cars (Thrun et al., 2006) or in Bayesian machine learning and artificial intelligence (Ghahramani, 2015). However, these are still incomplete solutions to real-time applications involving planning and prediction of future states. ...
Article
Full-text available
Simultaneous Localization and Mapping (SLAM) is a technique in robotics, mainly used in mobile robots. The fundamental task of SLAM is to interact with the environment through various sensors on a robot or other platform, while simultaneously determining the position and generating a cartographic representation, i.e., a model of the observed space. Since its initial introduction in 1986, SLAM has become a major topic in the world of robotics, and it is fair to say that not a day goes by without something being reported about it. Beyond robotics, experts from different fields also use SLAM, mainly for two different purposes, depending on what they are interested in, whether it is generating a model of the observed space or determining the position of robots and other platforms. Over time, various forms of SLAM have emerged, including Extended Kalman Filter SLAM, Particle Filter based SLAM, and Graph-Based SLAM. Several studies have shown that Graph-Based SLAM consistently produces better results. Successful implementations of SLAM depend on several factors, with the choice of appropriate sensors and the type of the environment being of paramount importance. The goal of this paper is to provide a brief insight into the definition of SLAM, its types, advantages and limitations, practical implementations, and to highlight the close connection between SLAM and geodesy.
Article
Full-text available
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
Article
Full-text available
Indoor spraying is a crucial component of the construction industry. The future evolution of the spraying industry will inexorably involve the substitution of robot-assisted autonomous spraying for manual spraying. In this paper, a fully autonomous indoor spraying robot is designed to handle a variety of manual spraying issues. At the system level, the robot’s mechanical structure, hardware system, general control logic, and crucial technologies are meticulously constructed. The robot is equipped with a chassis that can travel in any direction, a detachable material carrier, a secondary upgrade mechanism, and a spraying system. The robot performs indoor mapping and navigation with the lidar sensor, identifies non-sprayable areas, such as windows, using the camera, and then evaluates the spraying result after spraying. After analysis of the robot’s working environment, the SLAM algorithm and the deep learning-based object detection algorithm are improved in conjunction with the actual scene to assure accuracy and meet the criteria for real-time operation on embedded devices. The robot’s ability to perform a succession of indoor spraying activities without human intervention and to automatically adapt and adjust to varied indoor conditions is an important reference for its practical application in the field of interior design.
Chapter
Device localization and radar-like mapping are at the heart of integrated sensing and communication, enabling not only new services and applications, but also improving communication quality with reduced overheads. These forms of sensing are however susceptible to data association problems, due to the unknown relation between measurements and detected objects or targets. In this chapter, we provide an overview of the fundamental tools used to solve mapping, tracking, and simultaneous localization and mapping (SLAM) problems. We distinguish the different types of sensing problems and then focus on mapping and SLAM as running examples. Starting from the applicable models and definitions, we describe the different algorithmic approaches, with a particular focus on how to deal with data association problems. In particular, methods based on random finite set theory and Bayesian graphical models are introduced in detail. A numerical study with synthetic and experimental data is then used to compare these approaches in a variety of scenarios.
Article
Full-text available
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and controlling to meet some key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Some quantitative quality analysis means to evaluate the characteristics and performance of SLAM systems and to monitor the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map, online Lidar scan matching, and a tightly fused inertial system.
Technical Report
Full-text available
This work presents the implementation and subsequent assessment of the FastSLAM method, an algorithm to solve the SLAM problem. Our group has defined a testbed of various environments to evaluate the performance and results of our implementation of FastSLAM. Experiments have been carried out in both simulated (micro-simulator) and natural environments using a TurtleBot3 robot. In both cases, we analyze the applicability of the method to build sufficiently accurate maps and localize the robot in real time.
Article
Full-text available
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue ‘New approaches to 3D vision’.
Article
Full-text available
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions.
Conference Paper
Full-text available
In this paper we propose a new data structure, called the Bkd-tree, for indexing large multi-dimensional point data sets. The Bkd-tree is an I/O-efficient dynamic data structure based on the kd-tree. We present the results of an extensive experimental study showing that, unlike previous attempts at making external versions of the kd-tree dynamic, the Bkd-tree maintains its high space utilization and excellent query and update performance regardless of the number of updates performed on it.
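The kd-tree underlying the Bkd-tree supports the logarithmic-time point queries that also motivate tree-based landmark storage in FastSLAM. A minimal in-memory sketch (illustrative only; the Bkd-tree adds I/O-efficiency and dynamic updates on top of this basic structure, which this sketch does not attempt):

```python
# Minimal in-memory kd-tree for 2-D points: median split on alternating axes,
# then branch-and-bound nearest-neighbor search.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
        "axis": axis,
    }

def nearest(node, target, best=None):
    if node is None:
        return best
    def d2(p):
        return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    if best is None or d2(node["point"]) < d2(best):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < d2(best):      # the far side could still hold a closer point
        best = nearest(far, target, best)
    return best
```

Each query descends one branch and only revisits the other when the splitting plane is closer than the current best, giving logarithmic behavior on balanced data.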
Conference Paper
Full-text available
The ability to simultaneously localise a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. This paper presents a real-world implementation of FastSLAM, an algorithm that recursively estimates the full posterior distribution of both robot pose and landmark locations. In particular, we present an extension to FastSLAM that addresses the data association problem using a nearest neighbor technique. Building on this, we also present a novel multiple hypotheses tracking implementation (MHT) to handle uncertainty in the data association. Finally an extension to the multi-robot case is introduced. Our algorithm has been run successfully using a number of data sets obtained in outdoor environments. Experimental results are presented that demonstrate the performance of the algorithms when compared with standard Kalman filter-based approaches.
Conference Paper
Full-text available
This paper describes Atlas, a hybrid metrical/topological approach to SLAM that achieves efficient mapping of large-scale environments. The representation is a graph of coordinate frames, with each vertex in the graph representing a local frame, and each edge representing the transformation between adjacent frames. In each frame, we build a map that captures the local environment and the current robot pose along with the uncertainties of each. Each map's uncertainties are modeled with respect to its own frame. Probabilities of entities with respect to arbitrary frames are generated by following a path formed by the edges between adjacent frames, computed via Dijkstra's shortest path algorithm. Loop closing is achieved via an efficient map matching algorithm. We demonstrate the technique running in real-time in a large indoor structured environment (2.2 km path length) with multiple nested loops using laser or ultrasonic ranging sensors.
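Atlas composes uncertainties along a path of edges found by Dijkstra's shortest path algorithm over the graph of coordinate frames. The path search itself is plain Dijkstra; a minimal sketch follows, where the edge weights standing in for "uncertainty costs" are hypothetical:

```python
import heapq

# Dijkstra's shortest path over a graph of coordinate frames, as used in Atlas
# to select the path along which frame-to-frame transformations are composed.
# graph: {frame: [(neighbor, cost), ...]}; returns {frame: total cost}.

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the Atlas setting, the returned predecessor chain (easily added) would name the frames through which an entity's probability is propagated into the query frame.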
Conference Paper
Full-text available
This paper addresses the problem of Simultaneous Localization and Mapping (SLAM) when working in very large environments. A hybrid architecture is presented that makes use of the Extended Kalman Filter to perform SLAM in a very efficient form and a Monte Carlo type filter to resolve the data association problem potentially present when returning to a known location after a large exploration task. The proposed algorithm adds significant integrity to the standard SLAM algorithms by providing the ability to handle multimodal distributions in real time. Experimental results in outdoor environments are presented to demonstrate the performance of the algorithm proposed.
Article
Full-text available
In this paper, we address the problem of robust data association for simultaneous vehicle localization and map building. We show that the classical gated nearest neighbor approach, which considers each matching between sensor observations and features independently, ignores the fact that measurement prediction errors are correlated. This leads to easily accepting incorrect matchings when clutter or vehicle errors increase. We propose a new measurement of the joint compatibility of a set of pairings that successfully rejects spurious matchings. We show experimentally that this restrictive criterion can be used to efficiently search for the best solution to data association. Unlike the nearest neighbor, this method provides a robust solution in complex situations, such as cluttered environments or when revisiting previously mapped regions.
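The gated nearest-neighbor baseline this paper improves on accepts a pairing only when the squared Mahalanobis distance of the innovation falls below a chi-square gate, then keeps the closest compatible candidate. A minimal sketch for a 2-D innovation (a simplified illustration, not the paper's joint compatibility test; the function names are hypothetical):

```python
# Classical gated nearest-neighbor data association for a 2-D innovation
# with innovation covariance S (the baseline the paper's joint compatibility
# criterion improves on).

CHI2_GATE_2DOF_95 = 5.991   # 95% chi-square gate for 2 degrees of freedom

def mahalanobis2(innov, S):
    """Squared Mahalanobis distance for a 2-vector innov and 2x2 covariance S."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x, y = innov
    return x * (inv[0][0] * x + inv[0][1] * y) + y * (inv[1][0] * x + inv[1][1] * y)

def gated_nn(innovations, S):
    """Index of the gate-compatible candidate with smallest distance, or None."""
    best, best_d = None, CHI2_GATE_2DOF_95
    for i, innov in enumerate(innovations):
        d = mahalanobis2(innov, S)
        if d < best_d:
            best, best_d = i, d
    return best
```

The paper's point is that applying this test to each pairing independently ignores correlated prediction errors, which is exactly what its joint compatibility measure fixes.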
Article
Full-text available
This paper presents a technique for mapping partially observable features from multiple uncertain vantage points. The problem of concurrent mapping and localization (CML) is stated as follows: starting from an initial known position, a mobile robot travels through a sequence of positions, obtaining a set of sensor measurements at each position. The goal is to process the sensor data to produce an estimate of the trajectory of the robot while concurrently building a map of the environment. In this paper, we describe a generalized framework for CML that incorporates temporal as well as spatial correlations. The representation is expanded to incorporate past vehicle positions in the state vector. Estimates of the correlations between current and previous vehicle states are explicitly maintained. This enables the consistent initialization of map features using data from multiple time steps. Updates to the map and the vehicle trajectory can also be performed in batches of data acquired from multiple vantage points. The method is illustrated with sonar data from a testing tank and via experiments with a B21 land mobile robot, demonstrating the ability to perform CML with sparse and ambiguous data.
Article
Full-text available
This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
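The core operation of this method, compounding two uncertain planar relationships, can be sketched to first order: the composed covariance is J1·C1·J1ᵀ + J2·C2·J2ᵀ, with J1, J2 the Jacobians of the composition. The following is a minimal pure-Python illustration in the spirit of the paper (3-DOF planar case only; helper names are hypothetical):

```python
import math

# First-order compounding of two uncertain planar relationships (x, y, theta):
# composed mean by rigid-body composition, composed covariance by
# J1 C1 J1^T + J2 C2 J2^T, assuming the two relationships are independent.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def compound(p1, C1, p2, C2):
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    c, s = math.cos(t1), math.sin(t1)
    mean = (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, t1 + t2)
    J1 = [[1, 0, -s * x2 - c * y2],   # sensitivity to the first relationship
          [0, 1,  c * x2 - s * y2],
          [0, 0, 1]]
    J2 = [[c, -s, 0],                 # sensitivity to the second relationship
          [s,  c, 0],
          [0,  0, 1]]
    A = matmul(matmul(J1, C1), transpose(J1))
    B = matmul(matmul(J2, C2), transpose(J2))
    return mean, [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]
```

Note how heading uncertainty in the first relationship inflates the lateral uncertainty of the composed one, which is the effect the paper's Monte Carlo comparison checks.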
Article
Full-text available
Chemical analyses returned by Mars Pathfinder indicate that some rocks may be high in silica, implying differentiated parent materials. Rounded pebbles and cobbles and a possible conglomerate suggest fluvial processes that imply liquid water in equilibrium with the atmosphere and thus a warmer and wetter past. The moment of inertia indicates a central metallic core of 1300 to 2000 kilometers in radius. Composite airborne dust particles appear magnetized by freeze-dried maghemite stain or cement that may have been leached from crustal materials by an active hydrologic cycle. Remote-sensing data at a scale of generally greater than ∼1 kilometer and an Earth analog correctly predicted a rocky plain safe for landing and roving with a variety of rocks deposited by catastrophic floods that are relatively dust-free.
Article
Full-text available
Inferring scene geometry and camera motion from a stream of images is possible in principle, but it is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion without computing depth as an intermediate step. An image stream can be represented by the 2F x P measurement matrix of the image coordinates of P points tracked through F frames. Under orthographic projection this matrix is of rank 3. Using this observation, the factorization method uses the singular value decomposition technique to factor the measurement matrix into two matrices, which represent object shape and camera motion, respectively. The method can also handle and obtain a full solution from a partially filled-in measurement matrix, which occurs when features appear and disappear in the image sequence due to occlusions or tracking failures. The method gives accurate results and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.
Conference Paper
Full-text available
The problem of generating maps with mobile robots has received considerable attention over the past years. However, most of the approaches assume that the environment is static during the data-acquisition phase. In this paper we consider the problem of creating maps with mobile robots in populated environments. Our approach uses a probabilistic method to track multiple people and to incorporate the results of the tracking technique into the mapping process. The resulting maps are more accurate since corrupted readings are treated accordingly during the matching phase and since the number of spurious objects in the resulting maps is reduced. Our approach has been implemented and tested on real robot systems in indoor and outdoor scenarios. We present several experiments illustrating the capabilities of our approach to generate accurate 2D and 3D maps.
Conference Paper
Full-text available
This paper describes a real-time implementation of feature-based concurrent mapping and localization (CML) running on a mobile robot in a dynamic indoor environment. Novel characteristics of this work include: (1) a hierarchical representation of uncertain geometric relationships that extends the SPMap framework, (2) use of robust statistics to perform extraction of line segments from laser data in real-time, and (3) the integration of CML with a "roadmap" path planning method for autonomous trajectory execution. These innovations are combined to demonstrate the ability of a mobile robot to autonomously return to its starting position within a few centimeters of precision, despite the presence of numerous people walking through the environment.
Conference Paper
Full-text available
To navigate reliably in indoor environments, a mobile robot must know where it is. Thus, reliable position estimation is a key problem in mobile robotics. We believe that probabilistic approaches are among the most promising candidates for providing a comprehensive and real-time solution to the robot localization problem. However, current methods still face considerable hurdles. In particular, the problems encountered are closely related to the type of representation used to represent probability densities over the robot's state space. Earlier work on Bayesian filtering with particle-based density representations opened up a new approach for mobile robot localization based on these principles. We introduce the Monte Carlo localization method, in which we represent the probability density involved by maintaining a set of samples that are randomly drawn from it. By using a sampling-based representation we obtain a localization method that can represent arbitrary distributions. We show experimentally that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location. It is faster, more accurate, and less memory-intensive than earlier grid-based methods.
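The sample-based belief update described in this abstract follows the standard three-step particle filter cycle: noisy motion update, likelihood reweighting, resampling. A minimal 1-D sketch (a hypothetical corridor with "door" landmarks; the sensor model and all names are illustrative, not from the paper):

```python
import math
import random

# Minimal 1-D Monte Carlo localization sketch: the belief is a set of samples,
# shifted by a noisy motion model, reweighted by an observation likelihood,
# then resampled in proportion to the weights.

DOORS = [2.0, 5.0, 9.0]   # hypothetical landmark positions along the corridor

def likelihood(x, observed_door_dist, sigma=0.5):
    """Gaussian likelihood of observing the nearest door at that distance."""
    nearest = min(abs(x - d) for d in DOORS)
    return math.exp(-0.5 * ((nearest - observed_door_dist) / sigma) ** 2)

def mcl_step(particles, control, observation):
    # motion update: shift every sample by the control plus noise
    moved = [p + control + random.gauss(0.0, 0.1) for p in particles]
    # measurement update: weight samples by the observation likelihood
    weights = [likelihood(p, observation) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resampling: draw a new sample set proportional to the weights
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting from a uniform sample set (the "no knowledge of its starting location" case), repeated steps concentrate the samples onto the poses consistent with the observations, possibly multi-modally when several doors fit.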
Article
Full-text available
The problem of generating maps with mobile robots has received considerable attention over the past years. However, most of the approaches assume that the environment is static during the data-acquisition phase. In this paper we consider the problem of creating maps with mobile robots in populated environments. Our approach uses a probabilistic method to track multiple people and to incorporate the results of the tracking technique into the mapping process. The resulting maps are more accurate since corrupted readings are treated accordingly during the matching phase and since the number of spurious objects in the resulting maps is reduced. Our approach has been implemented and tested on real robot systems in indoor and outdoor scenarios. We present several experiments illustrating the capabilities of our approach to generate accurate 2D and 3D maps.
Article
Full-text available
Monte Carlo localization (MCL) is a Bayesian algorithm for mobile robot localization based on particle filters, which has enjoyed great practical success. This paper points out a counter-intuitive limitation of MCL, namely that better sensors can yield worse results. An analysis of this problem leads to the formulation of a new proposal distribution for the Monte Carlo sampling step. Extensive experimental results with physical robots suggest that the new algorithm is significantly more robust and accurate than plain MCL. These results extend beyond mobile robot localization and apply to a range of particle filter applications.
Article
This paper presents a strategy for achieving practical mapping navigation using a wheeled mobile robot equipped with an advanced sonar sensor. The original mapping navigation experiment, carried out with the same robot configuration, built a feature map consisting of commonplace indoor landmarks crucial for localization; namely, planes, corners, and edges. The map exhaustively maintained covariance matrices among all features, and thus presented a time and memory impediment to practical navigation in large environments. The new local mapping strategy proposed here breaks down a large environment into a topology of local regions, only maintaining the covariance among features in the same local region, and the covariance among local maps. This notion of two-hierarchy representation drastically improves the memory and processing-time requirements of the original global approach, while preserving the statistical details, in the authors' opinions, necessary for an accurate map and prolonged navigation. The new local mapping scheme also extends the endeavor toward reducing error accumulation made in the global mapping strategy by eliminating errors accumulated between visits to the same part of the environment. This is achieved with a map-matching strategy developed exclusively for the advanced sonar sensor that is employed. The local mapping strategy has been tested in two large, real-life indoor environments, and successful results are reported here.
Article
Many future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. A critical capability is that each robot must be able to localize itself. However, GPS cannot be used in many environments (such as within city streets, under water, indoors, beneath foliage or extra-terrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. In this paper we consider the problem of building and maintaining an extremely large map (of one million beacons). We describe a fully distributed, highly scaleable SLAM algorithm which is based on distributed data fusion systems. A central map is maintained in global coordinates using the Split Covariance Intersection (SCI) algorithm. Relative and local maps are run independently of the central map and their estimates are periodically fused with the central map.
Article
The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses factored sampling, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
Article
The classical filtering and prediction problem is re-examined using the Bode-Shannon representation of random processes and the "state-transition" method of analysis of dynamic systems. New results are: (1) The formulation and methods of solution of the problem apply without modification to stationary and nonstationary statistics and to growing-memory and infinite-memory filters. (2) A nonlinear difference (or differential) equation is derived for the covariance matrix of the optimal estimation error. From the solution of this equation the coefficients of the difference (or differential) equation of the optimal linear filter are obtained without further calculations. (3) The filtering problem is shown to be the dual of the noise-free regulator problem. The new method developed here is applied to two well-known problems, confirming and extending earlier results. The discussion is largely self-contained and proceeds from first principles; basic concepts of the theory of random processes are reviewed in the Appendix.
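The recursion this paper derives reduces, in the scalar case, to a two-step predict/update cycle on the state mean and error variance. A minimal sketch for a scalar system x' = a·x + process noise, z = x + measurement noise (parameter names are illustrative):

```python
# One step of a scalar (1-D) Kalman filter: predict through the state
# transition, then correct with the measurement using the optimal gain.
#   q: process-noise variance, r: measurement-noise variance.

def kalman_step(mean, var, z, a=1.0, q=0.01, r=1.0):
    # predict: propagate mean and variance through the transition
    mean_pred = a * mean
    var_pred = a * a * var + q
    # update: k is the optimal (Kalman) gain for this step
    k = var_pred / (var_pred + r)
    mean_new = mean_pred + k * (z - mean_pred)
    var_new = (1.0 - k) * var_pred
    return mean_new, var_new
```

The error variance obeys exactly the nonlinear difference equation the abstract mentions: iterating the step drives it toward a steady-state value independent of the (large) initial uncertainty.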
Conference Paper
An algorithm for tracking multiple targets in a cluttered environment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis, using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions.
Article
This paper discusses how local measurements of positions and surface normals may be used to identify and locate overlapping objects. The objects are modeled as polyhedra (or polygons) having up to six degrees of positional freedom relative to the sensors. The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. The method described here is an extension of a method for recognition and localization of nonoverlapping parts previously described in [18] and [15].
Article
Summary: A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations; applications to grouped, censored or truncated data; finite mixture models; variance component estimation; hyperparameter estimation; iteratively reweighted least squares; and factor analysis.
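The finite-mixture example from this summary is the easiest to sketch: for a two-component Gaussian mixture with known, equal variances, the E-step computes responsibilities and the M-step re-estimates the means, and each iteration cannot decrease the likelihood. A minimal illustration (fixed variances and mixing weight are simplifying assumptions; the function name is hypothetical):

```python
import math

# EM for the means of a two-component Gaussian mixture with known, equal
# variances and a fixed mixing weight (a deliberately simplified case).

def em_two_means(data, mu, iters=50, sigma=1.0, weight=0.5):
    m0, m1 = mu
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in data:
            p0 = weight * math.exp(-0.5 * ((x - m0) / sigma) ** 2)
            p1 = (1 - weight) * math.exp(-0.5 * ((x - m1) / sigma) ** 2)
            resp.append(p1 / (p0 + p1))
        # M-step: responsibility-weighted mean updates
        n1 = sum(resp)
        n0 = len(data) - n1
        m0 = sum((1 - r) * x for r, x in zip(resp, data)) / n0
        m1 = sum(r * x for r, x in zip(resp, data)) / n1
    return m0, m1
```

With well-separated clusters the responsibilities saturate after the first E-step and the means converge almost immediately; overlapping clusters take more iterations, monotonically improving the likelihood as the theory guarantees.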
Conference Paper
The possibility of calibrating a camera from image data alone, based on matched points identified in a series of images by a moving camera, was suggested by Maybank and Faugeras. This result implies the possibility of Euclidean reconstruction from a series of images with a moving camera, or equivalently, Euclidean structure-from-motion from an uncalibrated camera. No tractable algorithm for implementing their methods for more than three images has been previously reported. This paper gives a practical algorithm for Euclidean reconstruction from several views with the same camera. The algorithm is demonstrated on synthetic and real data and is shown to behave very robustly in the presence of noise, giving excellent calibration and reconstruction results. A method is given based partly on the well-known Levenberg-Marquardt (LM) parameter estimation algorithm, partly on new non-iterative algorithms, and partly on techniques of projective geometry for solving this self-calibration problem. This algorithm has the advantage of being applicable to large numbers of views, and in fact performs best when many views are given. As a consequence, the algorithm can be applied to the structure-from-motion problem to determine the structure of a scene from a sequence of views with the same uncalibrated camera. Indeed, since the calibration of the camera may be determined from the correspondence data, it is possible to compute a Euclidean reconstruction of the scene. That is, the scene is reconstructed, relative to the placement of one of the cameras used as reference, up to an unknown scaling. The algorithm is demonstrated on real and synthetic data and is shown to perform robustly in the presence of noise.
Article
A numerical representation of uncertain and incomplete sensor knowledge we call Certainty Grids has been used successfully in several of our past mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. We had good early success with ad-hoc formulas for updating grid cells with new information. A new Bayesian statistical foundation for the operations promises further improvement. We propose to build a software framework running on processors onboard our new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot’s surroundings as it moves. The “certainty grid” representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot’s motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects. Even the simplest versions of the idea allow us fairly straightforwardly to program the robot for tasks that have hitherto been out of reach. We look forward to a program that can explore a region and return to its starting place, using map “snapshots” from its outbound journey to find its way back, even in the presence of disturbances of its motion and occasional changes in the terrain.
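The Bayesian update the authors propose for grid cells is commonly implemented in log-odds form, which makes combining multiple measurements a simple addition per cell. The sketch below is a minimal illustration under an assumed inverse sensor model; the probabilities 0.8 and 0.3 are invented values, not from the paper.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(log_odds, p_hit):
    """Bayesian log-odds update of one grid cell given a sensor reading.

    p_hit is the sensor model's P(occupied | reading); values above 0.5
    push the cell toward 'occupied', values below 0.5 toward 'free'.
    """
    return log_odds + logit(p_hit)

def probability(log_odds):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Sketch: three sonar returns suggesting occupancy, one suggesting free space.
cell = 0.0                      # log-odds 0 means P(occupied) = 0.5 (unknown)
for p in (0.8, 0.8, 0.8, 0.3):
    cell = update_cell(cell, p)
```

The additive form is what lets the grid combine many noisy readings "to produce sharper map features": agreeing measurements accumulate, while a contradictory one only partially cancels them.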
Article
Of course, in general these tasks may be done better by nonlinear filters. At present, however, little or nothing is known about how to obtain (both theoretically and practically) these nonlinear filters. Contributed by the Instruments and Regulators Division and presented at the Instruments and Regulators Conference, March 29-April 2, 1959, of The American Society of Mechanical Engineers. Manuscript received at ASME Headquarters, February 24, 1959. Paper No. 59-IRD-11.
Article
A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. The authors describe the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions. Implementation details and computational examples are also presented
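The RANSAC loop described above (draw a minimal sample, hypothesise a model, count inliers, refit on the best consensus set) can be sketched for line fitting. The tolerance, iteration count, and two-point minimal sample are illustrative assumptions for this toy model, not the paper's LDP formulation.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=1):
    """RANSAC sketch: fit y = a*x + b to data containing gross outliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample: 2 points
        if x1 == x2:
            continue                                  # vertical line; skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit on the largest consensus set (standard final step).
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n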
Article
In an earlier paper, [1] an approach to the problem of systematic sampling was formulated, and the associated variance obtained. Several forms of the population were assumed. The efficiency of the systematic design as compared with the random and stratified random design was evaluated for these forms. It was remarked that as the size of sample increased the variance of a systematic design might also increase, contrary to the behavior of variances in the random sampling design. This possibility was verified in [2]. One approach to the study of systematic designs, given by Cochran [3] removed this difficulty to some extent by changing the problem to one of the expected variance, and supposing the elements of the population to be random variables. He showed that if the correlogram of these random variables is concave upwards, then the expected variance of the systematic design would be less, and often considerably less, than the variance of a stratified design. In the present paper the results of the earlier papers are extended to the systematic sampling of clusters of equal and unequal sizes. Some comments on systematic sampling in two dimensions are included. In section 2 we derive two theorems that have considerable applications in many parts of sampling. Although it has been common for people working in sampling theory to tell each other that these theorems ought to be true, yet no reference seems to exist. In section 3 we develop the implications of a remark [1, p. 13] that in designing sample surveys we should try to induce negative correlation between strata. In Theorem 3 we obtain sufficient conditions for the correlation to be negative. The lemma and Theorem 4 given in Section 4 enable us to extend the uses of Theorem 3 in practice. 
As an application of these results, we show that if a population has a concave upwards correlogram, and if strata are defined in an optimum fashion for the selection of one element at random from each stratum, then we can define a systematic type design that will be more efficient than independent random selection from each stratum. In sections 5 and 6 we obtain various results in the systematic sampling of clusters largely as applications of the more general theorems of the earlier sections. In general the results are of a nature similar to those of [1] and [3] in that the formulae show the conditions under which systematic sampling may be expected to be more efficient than random or stratified random sampling. We have not, however, applied these formulae to specified types of populations. From [1, 2 and 3] it is already apparent that this work will be useful and such studies should be more valuable when made in connection with important types of surveys or data than when made as illustrations in a general paper.
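The basic systematic design discussed above, a random start followed by every k-th element, can be sketched as follows (assuming for simplicity that the population size is a multiple of the sample size):

```python
import random

def systematic_sample(population, n, rng=random):
    """Systematic design sketch: one random start, then every k-th element."""
    k = len(population) // n          # sampling interval
    start = rng.randrange(k)          # random start within the first interval
    return [population[start + i * k] for i in range(n)]
```

The constant spacing is exactly what ties the design's variance to the population's correlogram: adjacent sampled elements are always k apart, so serial correlation at lag k governs the efficiency comparisons made in the paper.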
Article
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving on-line, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
Article
If a problem in functional data analysis is low dimensional then the methodology for its solution can often be reduced to relatively conventional techniques in multivariate analysis. Hence, there is intrinsic interest in assessing the finite dimensionality of functional data. We show that this problem has several unique features. From some viewpoints the problem is trivial, in the sense that continuously distributed functional data which are exactly finite dimensional are immediately recognizable as such, if the sample size is sufficiently large. However, in practice, functional data are almost always observed with noise, for example, resulting from rounding or experimental error. Then the problem is almost insolubly difficult. In such cases a part of the average noise variance is confounded with the true signal and is not identifiable. However, it is possible to define the unconfounded part of the noise variance. This represents the best possible lower bound to all potential values of average noise variance and is estimable in low noise settings. Moreover, bootstrap methods can be used to describe the reliability of estimates of unconfounded noise variance, under the assumption that the signal is finite dimensional. Motivated by these ideas, we suggest techniques for assessing the finiteness of dimensionality. In particular, we show how to construct a critical point such that, if the distribution of our functional data has fewer than q - 1 degrees of freedom, then we should be willing to assume that the average variance of the added noise is at least . If this level seems too high then we must conclude that the dimension is at least q - 1. We show that simpler, more conventional techniques, based on hypothesis testing, are gene...
Conference Paper
The extended Kalman filter (EKF) has been the de facto approach to the Simultaneous Localization and Mapping (SLAM) problem for nearly fifteen years. However, the EKF has two serious deficiencies that prevent it from being applied to large, real-world environments: quadratic complexity and sensitivity to failures in data association. FastSLAM, an alternative approach based on the Rao-Blackwellized Particle Filter, has been shown to scale logarithmically with the number of landmarks in the map. This efficiency enables FastSLAM to be applied to environments far larger than could be handled by the EKF. In this paper, we show that FastSLAM also substantially outperforms the EKF in environments with ambiguous data association. The performance of the two algorithms is compared on a real-world data set with various levels of odometric noise. In addition, we show how negative information can be incorporated into FastSLAM in order to improve the accuracy of the estimated map.
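The factored representation the abstract builds on (a particle filter over robot paths, with one small independent Gaussian per landmark inside each particle) can be sketched as a data structure. The scalar, identity-observation measurement model below is an illustrative simplification, not the paper's actual EKF equations.

```python
class Landmark:
    """Independent per-landmark Gaussian, updated by a scalar Kalman step."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var

    def update(self, z, meas_var):
        # Kalman update for a direct (identity) observation z of the landmark.
        k = self.var / (self.var + meas_var)
        self.mean += k * (z - self.mean)
        self.var *= (1.0 - k)

class Particle:
    """One robot-path hypothesis, conditioning the landmark estimates.

    Because landmarks are independent given the path, an observation
    touches only one small filter instead of a joint covariance matrix.
    """
    def __init__(self, pose):
        self.pose = pose
        self.landmarks = {}      # landmark id -> Landmark

    def observe(self, lm_id, z, meas_var=0.25):
        if lm_id not in self.landmarks:
            self.landmarks[lm_id] = Landmark(z, meas_var)
        else:
            self.landmarks[lm_id].update(z, meas_var)
```

This per-particle independence is also what makes ambiguous data association tractable: each particle can commit to its own association hypothesis, and resampling prunes particles whose hypotheses fit the data poorly.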
Conference Paper
The simultaneous localization and mapping (SLAM) with detection and tracking of moving objects (DATMO) problem is not only to solve the SLAM problem in dynamic environments but also to detect and track these dynamic objects. In this paper, we derive the Bayesian formula of the SLAM with DATMO problem, which provides a solid basis for understanding and solving this problem. In addition, we provide a practical algorithm for performing DATMO from a moving platform equipped with range sensors. The probabilistic approach to solve the whole problem has been implemented with the Navlab11 vehicle. More than 100 miles of experiments in crowded urban areas indicated that SLAM with DATMO is indeed feasible.
Conference Paper
Presents a probabilistic algorithm for simultaneously estimating the pose of a mobile robot and the positions of nearby people in a previously mapped environment. This approach, called the conditional particle filter, tracks a large distribution of person locations conditioned upon a smaller distribution of robot poses over time. This method is robust to sensor noise, occlusion, and uncertainty in robot localization. In fact, conditional particle filters can accurately track people in situations with global uncertainty over robot pose. The number of samples required by this filter scales linearly with the number of people being tracked, making the algorithm feasible to implement in real-time in environments with large numbers of people. Experimental results illustrate the accuracy of tracking and model selection, as well as the performance of an active following behavior based on this algorithm.
Article
The simultaneous localization and map building (SLAM) problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Starting from estimation-theoretic foundations of this problem, the paper proves that a solution to the SLAM problem is indeed possible. The underlying structure of the SLAM problem is first elucidated. A proof that the estimated map converges monotonically to a relative map with zero uncertainty is then developed. It is then shown that the absolute accuracy of the map and the vehicle location reach a lower bound defined only by the initial vehicle uncertainty. Together, these results show that it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and, using relative observations only, incrementally build a perfect map of the world and to compute simultaneously a bounded estimate of vehicle location. The paper also describes a substantial implementation of the SLAM algorithm on a vehicle operating in an outdoor environment using millimeter-wave radar to provide relative map observations. This implementation is used to demonstrate how some key issues such as map management and data association can be handled in a practical environment. The results obtained are cross-compared with absolute locations of the map landmarks obtained by surveying. In conclusion, the paper discusses a number of key issues raised by the solution to the SLAM problem including suboptimal map-building algorithms and map management
Article
This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems.
Article
This paper presents a strategy for achieving practical mapping navigation using a wheeled mobile robot equipped with an advanced sonar sensor. The original mapping navigation experiment, carried out with the same robot configuration, builds a feature map consisting of commonplace indoor landmarks crucial for localisation, namely planes, corners and edges. That map exhaustively maintains covariance matrices among all features, and thus presents a time and memory impediment to practical navigation in large environments. The new local mapping strategy proposed here breaks a large environment down into a topology of local regions, maintaining only the covariance among features within the same local region and the covariance among local maps. This two-level hierarchical representation drastically improves the memory and processing-time requirements of the original global approach, while preserving the statistical detail, in the authors' opinion, necessary for an accurate map and prolon...
Article
Decoupled stochastic mapping (DSM) is a computationally efficient approach to large-scale concurrent mapping and localization. DSM reduces the computational burden of conventional stochastic mapping by dividing the environment into multiple overlapping submap regions, each with its own stochastic map. Two new approximation techniques are utilized for transferring vehicle state information from one submap to another, yielding a constant-time algorithm whose memory requirements scale linearly with the size of the operating area. The performance of two different variations of the algorithm is demonstrated through simulations of environments with 110 and 1200 features. Experimental results are presented for an environment with 93 features using sonar data obtained in a 3 by 9 by 1 meter testing tank. 1. Introduction The objective of concurrent mapping and localization (CML) is to enable a mobile robot to build a map of an unknown environment while concurrently using that map to navigate....