Figure 3 - uploaded by Ebrahim Abdulla Mattar
A flowchart of the implemented ICP algorithm. 

Source publication
Conference Paper
Full-text available
Abstract— This manuscript presents the autonomous navigation of a mobile robot using SLAM, relying on active stereo vision. We show a framework of the low-level software coding that is necessary when vision is used for multiple purposes such as obstacle discovery. The built system incorporates a number of SLAM-based routines while relying...

Context in source publication

Context 1
... As part of the development of the MCL layer, we customized the standard stereovision-based Iterative Closest Point (ICP) method to suit our environment and sensor configuration. The resultant algorithm is able to localize the mobile platform with an upper bound of … with respect to translational localization; the upper bound for rotational motion stood at …. The execution cost of this algorithm turns out to be very high, and the robot's translational speed has to be restricted to 50 mm/sec; its angular speed is restricted to a much slower rate. The reason for such upper bounds is the high computational load on the on-board PC. The flowchart for the employed ICP algorithm was already presented and detailed in Fig. 3. ...
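The core loop that such an ICP flowchart describes — match each point to its nearest counterpart, estimate a rigid transform in closed form, apply it, and repeat — is the standard ICP recipe. Below is a minimal 2D sketch of that recipe, not the authors' customized stereovision implementation; all function names are illustrative:

```python
import math

def best_rigid_2d(P, Q):
    # Closed-form 2D rigid transform (theta, tx, ty) that maps the point
    # list P onto Q, assuming one-to-one correspondences.
    n = len(P)
    mpx = sum(p[0] for p in P) / n
    mpy = sum(p[1] for p in P) / n
    mqx = sum(q[0] for q in Q) / n
    mqy = sum(q[1] for q in Q) / n
    s_dot = s_cross = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - mpx, py - mpy
        bx, by = qx - mqx, qy - mqy
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = mqx - (c * mpx - s * mpy)
    ty = mqy - (s * mpx + c * mpy)
    return theta, tx, ty

def icp_2d(src, dst, iters=20):
    # ICP: alternate between brute-force nearest-neighbour matching and
    # closed-form transform estimation until the alignment stabilises.
    cur = list(src)
    for _ in range(iters):
        matched = []
        for p in cur:
            matched.append(min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2))
        th, tx, ty = best_rigid_2d(cur, matched)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    # total transform from the original source points to the aligned points
    return best_rigid_2d(src, cur)
```

The brute-force matching step is what makes real ICP expensive, which is consistent with the speed restrictions reported above; practical systems use k-d trees to cut the matching cost.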

Similar publications

Conference Paper
Full-text available
Missions to small celestial bodies are gaining interest in recent years and their mission profiles are evolving from scientific exploration and sample return missions to more advanced in-situ resource extraction and utilisation. This suggests the spacecraft operate more in the proximity of an asteroid than before, but the accuracies from ground-bas...
Article
Full-text available
Simultaneous localization and mapping (SLAM) represents a crucial algorithm in the autonomous navigation of ground vehicles. Several studies were conducted to improve the SLAM algorithm using various sensors and robot platforms. However, only a few works have focused on applications inside low-illuminated featureless tunnel environments. In this wo...
Article
Full-text available
Localization is a fundamental capability for an autonomous mobile robot, especially in the navigation process. The commonly used laser-based simultaneous localization and mapping (SLAM) method can build a grid map of the indoor environment and realize localization task. However, when a robot comes to a long corridor where there exists many geometri...
Article
Full-text available
Although the research on autonomous mobile robot SLAM has received extensive research, the current mobile robot still exists in practical applications: it may move under the condition of disordered and irregular obstacle placement; the shape of the obstacle and the position of the obstacle change; and indoor and outdoor scene switching occurs at di...
Article
Full-text available
Automatic valet parking (AVP) is the autonomous driving function that may take the lead in mass production. AVP is usually needed in an underground parking lot, where the light is dim, the parking space is narrow, and the GPS signal is denied. The traditional visual-based simultaneous location and mapping (SLAM) algorithm suffers from localization...

Citations

... Firstly, the image is segmented to achieve a more uniform distribution of feature points throughout the image. For an image of original size W × H, given the segmentation coefficients s_w and s_h for width and height, the width and height of the image are divided equally as follows [20]: ...
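The per-cell split described above can be sketched as follows; the function name `grid_cells` and the use of integer floor boundaries are our own illustrative choices, not taken from the cited work:

```python
def grid_cells(W, H, s_w, s_h):
    # Split a W x H image into an s_w-by-s_h grid of cells. Detecting a
    # capped number of features per cell spreads keypoints more uniformly
    # than running the detector once over the whole image.
    xs = [i * W // s_w for i in range(s_w + 1)]  # column boundaries
    ys = [j * H // s_h for j in range(s_h + 1)]  # row boundaries
    # each cell is (x0, y0, x1, y1), half-open on the right/bottom edge
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for j in range(s_h) for i in range(s_w)]
```

For example, a 640 × 480 image with s_w = 4 and s_h = 3 yields twelve 160 × 160 cells that tile the image exactly.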
Article
Full-text available
To address the problem of system instability during low-speed vehicle driving, we propose improving the visual odometer using ORB (Oriented FAST and Rotated BRIEF) features. The homogeneity of ORB features leads to poor corner-point properties for some feature points, and when the environmental texture lacks richness this leads to poor matching performance and low matching accuracy. We address the corner-point properties of the feature points using weight calculation for regions with different textures. When the vehicle speed is too low, consecutive frames captured by the camera overlap significantly, causing large fluctuations in the system error; we use motion-model estimation to solve this problem. Experimental validation using the KITTI dataset achieves good results.
... The nature of augmentation can be divided into two parts which are execution and evaluation. For the execution part, the user can perform a lot of quality tasks, while the evaluation part is based on the user perception in which much realistic information will be provided to the user [17]. ...
... Active binocular vision is a binocular vision system that can actively change its own view direction. It is beneficial for multiple applications such as manipulation [1], three-dimensional (3D) reconstruction [2], navigation [3], 3D mapping [4], and so on. 3D coordinate estimation based on active binocular vision has attracted extensive research interest. ...
Article
Full-text available
Three-dimensional (3D) triangulation based on active binocular vision has an increasing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation. However, the accuracy of the stereo extrinsic parameters and of the disparity has a significant impact on 3D triangulation precision. We propose a novel eye-gaze-based 3D triangulation method that does not use the stereo extrinsic parameters directly, in order to reduce this impact. Instead, we drive both cameras to gaze at a 3D spatial point P at the optical center through visual servoing. Subsequently, we obtain the 3D coordinates of P through the intersection of the optical axes of the two cameras. We have performed experiments comparing with previous disparity-based work, named the integrated two-pose calibration (ITPC) method, using our robotic bionic eyes. The experiments show that our method achieves results comparable with ITPC.
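The geometric core of such a gaze-based method — intersecting the two optical axes — amounts to finding the closest point between two 3D rays, since noisy axes rarely intersect exactly. A self-contained sketch of that computation (our own illustration, not the authors' or the ITPC code; each axis is given as a camera center plus a direction vector):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def gaze_intersection(c1, d1, c2, d2):
    # Midpoint of the common perpendicular between the rays c1 + t1*d1
    # and c2 + t2*d2 (the two optical axes). For perfectly intersecting
    # axes this midpoint is the fixated 3D point itself.
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # ~0 only when the axes are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = (c1[0] + t1 * d1[0], c1[1] + t1 * d1[1], c1[2] + t1 * d1[2])
    p2 = (c2[0] + t2 * d2[0], c2[1] + t2 * d2[1], c2[2] + t2 * d2[2])
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, (p1[2] + p2[2]) / 2)
```

Note that this uses only the camera centers and gaze directions; no stereo extrinsic matrix or disparity appears, which is the point of the gaze-based formulation.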
... After that, the system calculates the approximate distance to objects detected by a calibrated camera [1,2]. A stereo-camera-based navigation system was implemented in [3,4]. The method was improved with the detection process and navigation-path recognition. ...
... Furthermore, a method of navigation, especially for the process of avoiding obstacles, is Simultaneous Localization and Mapping (SLAM) [17,18]. This method has been continuously developed from the outset [4]. That research developed the SLAM method using stereo cameras; the SLAM method has also been implemented in robots using ultrasonic sensors. ...
Article
Full-text available
This paper presents a new rule-based real-time decision system for an autonomous wheeled robot with a holonomic drive system. The robot uses the decisions to avoid collisions with obstacles. The decision rules are based on a Grid-Edge-Depth map, which represents each obstacle's position and distance in the environment. The generation process of the Grid-Edge-Depth map was presented in previous research. The decisions of the first scenario, with no destination point, are forward, stop, 90° right turn, and 90° left turn. The decisions of the second and third scenarios, with a destination point, are forward, stop, 90° right turn, 90° left turn, 45° forward to the right, 45° forward to the left, slide to the right, and slide to the left. The proposal was tested in a 5x3 metre living environment. The experiments resulted in a 93.3% navigation success rate across all the scenarios.
... Two of them, [81] and [117], do not guarantee complete exploration of the environment, as they choose a single direction to follow when there are multiple available paths, leaving the other directions unexplored. The method from [4] completes all of the navigation tasks, but it received a low score in the quality assessment due to the omission of validation and evaluation methodologies, the lack of comparison to other methods, and insufficient information about the theory, implementation, and testing of the proposed method. In [112], the authors present a method based on image profiles, but it shows significant errors in localization and mapping due to the use of odometry. ...
... In [51], the authors utilize a classifier to detect regions similar to the ground, while in [81,103,115,121,122,126,135,140] they detect the frontiers (borders between the floor and obstacles) in order to determine the direction in which the robot should explore next. As for the path planning algorithms, [139] computes a potential field between the current robot state and the selected goal, [1] uses Wavefront algorithm, [125] uses Dijkstra's algorithm, and [4,78,137] use the A* algorithm. ...
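As an illustration of the grid-based A* planning that several of the surveyed works rely on, here is a minimal 4-connected A* sketch with a Manhattan-distance heuristic (a generic textbook version, not any cited implementation):

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    # Returns the optimal path as a list of (row, col) cells, or None.
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for unit-cost 4-connectivity
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    g = {start: 0}          # best known cost-so-far per cell
    parent = {start: None}  # back-pointers for path reconstruction
    pq = [(h(start), start)]
    closed = set()
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get(nb, float('inf')):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(pq, (ng + h(nb), nb))
    return None
```

Dijkstra's algorithm, also mentioned above, is the special case where the heuristic is identically zero.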
Article
Autonomous mobile robots are required to move throughout the environment, map it, locate themselves, and plan paths between positions. Vision stands out among the other senses for its richness and practicality. Even though there are well-established autonomous navigation solutions, as far as we can tell, no complete autonomous navigation system that is solely based on vision and suitable for dynamic indoor environments has fully succeeded. This article presents a systematic literature review of methods and techniques used to solve the complete autonomous navigation problem or its parts: localization, mapping, path planning, and locomotion. The focus of this review lies on vision-based methods for indoor environments and ground robots. A total of 121 studies were considered, comprising methods, conceptual models, and other literature reviews published between 2000 and 2017. To the best of our knowledge, this is the first systematic review of vision-based autonomous navigation suitable for dynamic indoor environments. It addresses navigation methods, autonomous navigation requirements, vision benefits, methods testing, and implementation validation. The results of this review show a deficiency in the testing and validation of the presented methods, poor requirements specification, and a lack of complete navigation systems in the literature. These results should encourage new work on computer vision techniques, requirements specification, development, integration, and systematic testing and validation of general navigation systems. In addition to these findings, we also present the complete methodology used for the systematic review, which provides documentation of the process (allowing quality assessment and repeatability), the criteria for selecting and evaluating the studies, and a framework that can be used for future reviews in this research area.
... In most types of SLAM framework, research considers the environment as motionless, ignoring the influence of moving obstacles [2,3]. In fact, in real surroundings, like a moving car or a person coming from the opposite direction, these obstacles cannot be prevented. ...
... We formulated MPT in a generic way, where U denotes the motion-sensor-measurement observation continuous points; Z represents visual-sensor-measurement observation continuous points; X represents pose-hidden continuous points; Y conveys position of moving object hidden discontinuity point; and S is motion mode [2]. ...
... VSLAM and MPT formulation: U represents the motion-sensor-measurement observation continuous points; Z denotes the visual-sensor-measurement observation continuous points; X is the pose-hidden continuous points; Y conveys the position of the moving-object hidden discontinuity point; S is the motion mode; P denotes the moving disabled person; and hidden discontinuity points are represented by M [2]. ...
Article
Full-text available
People with disabilities (PWD) face a number of challenges such as obstacle avoidance or taking a minimum path to reach a destination while travelling or taking public transport, especially in airports or bus stations. In some cases, PWD, and specifically visually impaired people, have to wait longer to overcome these situations. In order to solve these problems, the computer-vision community has applied a number of techniques that are nonetheless insufficient to handle these situations. In this paper, we propose a visual simultaneous localization and mapping for moving-person tracking (VSLAMMPT) method that can assist PWD in smooth movement by knowing a position in an unknown environment. We applied expected error reduction with active-semisupervised-learning (EER-ASSL)-based person detection to eliminate noisy samples in dynamic environments. After that, we applied VSLAMMPT for effective smoothing, obstacle avoidance, and uniform navigation in an indoor environment. We analyze the joint approach symmetrically and applied the proposed method to benchmark datasets and obtained impressive performance.
... In the last few years, visual navigation has been more concerned by using RGB-D camera such as Kinect sensor [3][4][5][6][7][8][9][10][11][12]. The works [3,5,7,10,12] present the development of a perception system for indoor environments to allow autonomous navigation for surveillance mobile robots. ...
... In particular, when a large region is missing depth values completely, there is a lack of repetitive patches for that surface structure, and the method may fail. Therefore, an efficient approach is applied to improve the quality of the Kinect's depth image [9], using a region-growing method and a joint bilateral filter to fill the black holes. ...
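A much-simplified version of such hole filling — growing valid depths into the missing region one ring of pixels per pass — can be sketched as follows. This is an illustrative stand-in only; the cited work [9] additionally applies a joint bilateral filter guided by the color image, which is omitted here:

```python
def fill_depth_holes(depth, passes=10):
    # Grow valid depths into zero-valued holes: each pass fills a hole
    # pixel with the mean of its valid 4-neighbours, so the fill front
    # advances one pixel per pass, region-growing style.
    rows, cols = len(depth), len(depth[0])
    for _ in range(passes):
        out = [row[:] for row in depth]   # write to a copy, read the old grid
        changed = False
        for r in range(rows):
            for c in range(cols):
                if depth[r][c] == 0:      # 0 marks a missing depth value
                    vals = [depth[nr][nc]
                            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                            if 0 <= nr < rows and 0 <= nc < cols and depth[nr][nc] > 0]
                    if vals:
                        out[r][c] = sum(vals) / len(vals)
                        changed = True
        depth = out
        if not changed:                   # no holes left (or none reachable)
            break
    return depth
```

Averaging neighbours blurs depth discontinuities at object borders, which is exactly the artifact the joint bilateral filter in [9] is designed to avoid.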
Article
Full-text available
This paper presents a ground-based navigation method for an indoor robot to reach a predetermined target. By mining depth maps effectively, a mobile robot can find the right path and self-localize with a proposed algorithm named Always Move Straight to the Destination (AMDS). The proposed navigation system extracts the ground plane from the depth map provided by an RGB-D camera. The navigation system then establishes an optimal obstacle-avoidance strategy with a success rate of 98.7%, which is better than some recent comparison methods based on Artificial Neural Network (ANN) classifiers or on a combination of the Dynamic Window Approach (DWA) and Anytime Repairing A* (ARA*) algorithms. The robot's navigational capability is more flexible than the comparison methods because the direction-adjustment angle is 1 degree. The proposed ground-based navigation method can be integrated into low-cost robots.
... The complexity of the KSU-IMR Neuro-Fuzzy mobile robot decision-making layout has already been shown in Fig. 1. In reference to Al-Mutib et al. [33], NF has been used for making the most valid navigation decisions. These are determined from the learned occupancy maps and optimal navigation paths, which result from using the SLAM routine. ...
Article
Full-text available
For efficient, knowledge-based navigation, it is essential to blend mobile robot navigation details with information about the navigation paths and localities. In this respect, the presented scheme focused on building intelligence for mobile robot navigation. Intelligence was achieved by considering the navigation capabilities while the mobile robot was in motion. The adopted learning paradigm was a five-layer Neuro-Fuzzy learning architecture, with the ability to create an FIS inference system for enhanced navigation. To capture the enormous amount of visual and non-visual sensory data, the mobile robot platform has fully computer-interfaced stereo vision and a reliable 3D perception system onboard. A Neuro-Fuzzy intelligence paradigm was used to learn navigation maps (SLAM), main visual features, distances, and the nature of localities as the robot travels within spaces. Blending intelligence with visual maps and non-visual sensory data has indeed resulted in improved navigation capabilities.
... A typical KSU-IMR Neuro-Fuzzy mobile robot decision-making layout is shown in Fig. 3. In reference to [16], NF has been used for making the most valid navigation decisions. These are determined from the learned occupancy maps and optimal navigation paths, which result from using the SLAM routine. ...
Conference Paper
Full-text available
The presented approach aims to build intelligence for mobile robot path planning. This is achieved by creating navigation-intelligence capabilities while the robot is in motion, based on intelligent path-planning techniques, i.e., soft-computing techniques. To meet these visual-gathering requirements, the proposed mobile robot platform has fully computer-controllable video cameras (a binocular camera) and a camera-mount head, designed and implemented for sophisticated active-vision algorithms and reliable 3D perception. The main soft-computing and intelligence-based algorithm adopted within this research is based on a Neuro-Fuzzy system architecture.
... PCA is used to reduce the dimensionality of the navigated maps and capture only their important details, and hence to learn the inherent details and characteristics of the environment. Navigation maps were created using stereo-vision measurement techniques (VSLAM), Al-Mutib et al. [1]. Map sizes are fixed; however, their contents are not static, as the environment is a moving, dynamic space. ...
... Understanding the inherent characterizations of navigation localities helps the mobile robot undertake the most appropriate behaviour and task. The upper level of mobile intelligence has been based on mixing a number of AI-related functionalities, as further given in Al-Mutib et al. [1]. The three levels of mobile intelligence were based on (first) using a Fuzzy System for motion control and VSLAM, (second) ANN-PCA for learning navigation maps, and (third) using a Neuro-Fuzzy architecture as the decision-based system. ...
... Within this paper, we describe the additional work achieved for mobile robot metric-topological learning and understanding. For mobile VSLAM and the related hardware implementation details, see Al-Mutib et al. [1]. The adopted methodology is based on understanding each individual navigated map the mobile robot has traveled over. ...