Mobile Robot Simulation and Navigation in ROS
and Gazebo
Denis Chikurtev
Department of Cyber-Physical Systems
Institute of Information and Communication Technologies
Sofia, Bulgaria
dchikurtev@gmail.com, ORCID: 0000-0003-4903-7544
Abstract— Mobile robots are entering our daily lives as well as industry. Their tasks are usually associated with transportation, which creates the need for autonomous movement of mobile robots. At the same time, modern practice is to plan most processes through simulation, so that various future production problems can be anticipated and remedied or mitigated. This article describes the creation of a mobile robot model in the Gazebo simulation environment. Specific settings and features for running a mobile robot in autonomous navigation mode under the Robot Operating System are presented, along with the steps for map creation, localization, and navigation. Experiments have been conducted to optimize and tune the parameters of both the robot model itself and the simulation control parameters.
Keywords— mobile robot, simulation model, autonomous navigation, ROS, Gazebo
I. INTRODUCTION
Service robotics is a very popular area that has undergone significant development in recent years. Service robots are widely used and are becoming increasingly commonplace in our daily lives as well as in fields such as industry, healthcare, medicine, education, construction, and entertainment [1, 2, 3]. Most service robots are mobile, since their tasks involve assisting humans and/or machines [4, 6]. Their mobility enables them to perform the tasks for which they are intended without difficulty [2].
Nowadays there is a very wide variety of mobile robots. According to their locomotion, the main types of mobile robots are wheeled, tracked, walking, and floating [14]. The subject of research in this article is wheeled mobile robots. They are among the most common and, unlike walking robots, are relatively easy to operate. Wheeled mobile robots are equipped with different types of wheels: standard, omni wheels, mecanum wheels, and more. There are also different configurations of mobile platforms according to the number and placement of the wheels: differentially driven wheels (as in tank-type platforms) with one or two caster wheels, triangular omni-wheel platforms [15], standard four-wheel platforms with mecanum wheels [16], and other specific and unique types.
All these varieties and configurations of mobile platforms are characterized by specific kinematic and dynamic properties. These specifics are at the heart of controlling a mobile robot: they determine the behavior of the platform and its controllers. In a simulation model, these characteristics must be set correctly in order to get as close as possible to the real robot.
Simulating a mobile robot can be of great benefit. On the one hand, a robot model in a simulation environment can be used by many people without everyone needing to own a physical robot. On the other hand, a number of experiments and studies can be conducted this way without compromising the safety of humans, the robot, or the objects that the robot handles and interacts with. Last but not least, the time required to run experiments can be greatly reduced, since many activities, such as preparing for experiments or restoring the robot's initial parameters and the characteristics of its environment, are eliminated.
Another major problem with mobile robots is their autonomous movement. Autonomous navigation involves localization, path planning, and motion control of the mobile platform [8]. All these problems have a number of solutions and can be investigated in a simulation environment. This article describes the creation of a simulation model of a differential drive mobile platform equipped with a laser sensor and a simulation controller in the Gazebo environment. It then describes the use of the ROS navigation package and the tools and programs necessary for its implementation.
II. ROS AND GAZEBO
The Robot Operating System (ROS) is widespread and used throughout robotics. The system provides numerous ready-made packages, tools, models, and libraries for robots [6]. And because it is open source, each user can adopt and modify whichever package, tool, or library suits their needs.
The Gazebo simulator can connect directly to ROS through special packages. These packages provide the necessary interfaces to simulate a robot in Gazebo using ROS messages, services, and dynamic reconfigure.
In order to work correctly with a robot, ROS needs a description of the robot's kinematics. With it, trajectories, navigation, and more can be planned and executed. ROS describes a robot by specifying its properties in URDF (Unified Robot Description Format) files. URDF supports plain XML and xacro (XML macro) syntax; xacro code is easier to implement and maintain and has better readability. To use a URDF file in Gazebo, some additional simulation-specific tags must be added.
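To illustrate, a minimal sketch of how the base of the platform from Table I might be described in xacro is given below. The link name chassis matches the robotBaseFrame used later by the differential drive plugin; the inertia values are computed from the solid-cylinder formulas, and the robot and material names are arbitrary illustrative choices.

  <?xml version="1.0"?>
  <robot name="mybot" xmlns:xacro="http://www.ros.org/wiki/xacro">
    <!-- Base of the platform: cylinder, radius 0.2 m, length 0.05 m, mass 10 kg (Table I) -->
    <link name="chassis">
      <visual>
        <geometry><cylinder radius="0.2" length="0.05"/></geometry>
      </visual>
      <collision>
        <geometry><cylinder radius="0.2" length="0.05"/></geometry>
      </collision>
      <inertial>
        <mass value="10"/>
        <!-- Solid cylinder: ixx = iyy = m(3r^2 + h^2)/12, izz = m r^2 / 2 -->
        <inertia ixx="0.102" ixy="0" ixz="0" iyy="0.102" iyz="0" izz="0.2"/>
      </inertial>
    </link>
    <!-- Simulation-specific tag, read only by Gazebo -->
    <gazebo reference="chassis">
      <material>Gazebo/Grey</material>
    </gazebo>
  </robot>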
The relationship between ROS and Gazebo is the same as that between ROS and the hardware of a real robot. The controller in ROS receives data on one topic from either model and publishes data on another topic to either model. Simulation and real robot control can even be performed at the same time, so that their behavior can easily be compared. In fact, it makes no difference to ROS whether it controls a real robot or a simulation model of one. The important thing is to implement the correct nodes and to have the necessary topics for communication.
Figure 1 shows the structure of the Gazebo simulation model. It consists of a model with a description of the robot, its plugins, and the Gazebo libraries. The model receives data from the Joint Command Interface and, as feedback, sends data through the Joint State Interface. The same applies to the hardware model (Figure 2), which is connected to the ROS controller via the same interfaces. The difference between the two models is that the hardware model includes components such as an embedded controller, actuators, and sensors, which in Gazebo are replaced by simulated counterparts provided by plugins and libraries. The ROS controller is shown in Figure 3. It processes the data received from either model and sends control commands accordingly [13].
Fig. 1. Principle of Robot Simulation in Gazebo.
Fig. 2. Principle of Robot Hardware.
Fig. 3. ROS Controller Manager.
III. DESIGN OF A MOBILE ROBOT IN GAZEBO
Robot simulation is at the heart of creating and testing
robot models [5]. A well-designed simulation model enables
testing algorithms, controllers, robot design, artificial
intelligence training, and more. The Gazebo platform offers the ability to accurately and effectively simulate robots in complex indoor and outdoor environments, providing high-fidelity physics simulation, a suite of sensors, and interfaces for both users and programs.
A few key features of Gazebo include:
- multiple physics engines;
- a rich library of robot models and environments;
- a wide variety of sensors;
- convenient programmatic and graphical interfaces.
A robot simulation model can be created either through Gazebo's graphical user interface or by directly programming the robot's parameters and properties in files. In both cases, the features and rules for creating a robot and describing the parameters of its units are the same. A number of steps can be taken to create a mobile platform; this article describes the minimum number of steps required to create a working model of a mobile robot:
- creation of the base of the platform: the following properties are defined: shape, dimensions, mass, moment of inertia (according to the shape: cylinder, cube, parallelepiped), type of unit: parent unit;
- creation of the driving wheels: according to the type of platform, the required number of wheels are created with the following properties: shape, dimensions, mass, moment of inertia, type of unit: child unit;
- creation of the caster wheels: they are set in the shape of a ball with a defined radius and position (applies to differential drive platforms);
- setting joints between the base and the driving wheels: set the type of joint, set the type of units (the base is the parent, the rest are children), set the axis of rotation of the wheel, and position the child units relative to the base;
- fixing joints between the base and the caster wheels: because they rotate in all directions, we choose a ball joint, then position them against the base;
- adding sensors: sensors are described like other units, and the most important thing is to define the joint connecting them to the base; then a description of the sensor properties is added, such as update rate, resolution, range, and scan angle;
- adding plugins: in our case we add Gazebo plugins to control the differential platform and to scan the area with a lidar. The Gazebo controllers take a number of parameters, such as update rate, joint names, wheel diameter, wheel separation, torque, range, scanning angle, and input and output topics.
The dimensions and parameters of all robot components are presented in Table I. The robot model information is programmed and stored in two files. The first is a xacro file containing the size and location of all links and joints of the robot. Each individual element of the robot is represented as a link with a specific name, described through the following properties: collision, visual, and inertial. After all the links have been described, the joints are described in order to establish the connections between the links.
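As a sketch, the left driving wheel and its joint might be described as follows; the dimensions come from Table I, the position along the y axis is half of the wheel separation set in the plugin below, and the names left_wheel and left_wheel_hinge match the joint names passed to the differential drive controller:

  <!-- Left driving wheel: cylinder, radius 0.08 m, length 0.03 m, mass 2 kg (Table I) -->
  <link name="left_wheel">
    <visual>
      <geometry><cylinder radius="0.08" length="0.03"/></geometry>
    </visual>
    <collision>
      <geometry><cylinder radius="0.08" length="0.03"/></geometry>
    </collision>
    <inertial>
      <mass value="2"/>
      <inertia ixx="0.0034" ixy="0" ixz="0" iyy="0.0034" iyz="0" izz="0.0064"/>
    </inertial>
  </link>
  <!-- Continuous joint: the base is the parent, the wheel is the child;
       the cylinder is rotated so that its z axis (the axis of rotation)
       points along the vehicle's y axis -->
  <joint type="continuous" name="left_wheel_hinge">
    <parent link="chassis"/>
    <child link="left_wheel"/>
    <origin xyz="0 0.2 0" rpy="-1.5707 0 0"/>
    <axis xyz="0 0 1"/>
  </joint>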
TABLE I. ROBOT PARTS DIMENSIONS.

Element        | Shape, Dimensions                          | Parameters
base           | cylinder, radius = 0.2 m, length = 0.05 m  | mass = 10 kg
driving wheels | cylinder, radius = 0.08 m, length = 0.03 m | mass = 2 kg
caster wheels  | sphere, radius = 0.04 m                    | mass = 0.5 kg
The second file is a Gazebo extension; it contains information about how each element looks in the simulation, the plugins used and their settings, and the properties and settings of all the sensors. The differential_drive_controller plugin is used to drive the mobile robot, with the following parameters:

  <legacyMode>false</legacyMode>
  <alwaysOn>true</alwaysOn>
  <updateRate>30</updateRate>
  <leftJoint>left_wheel_hinge</leftJoint>
  <rightJoint>right_wheel_hinge</rightJoint>
  <wheelSeparation>0.4</wheelSeparation>
  <wheelDiameter>0.16</wheelDiameter>
  <torque>15</torque>
  <commandTopic>cmd_vel</commandTopic>
  <odometryTopic>odom</odometryTopic>
  <odometryFrame>odom</odometryFrame>
  <robotBaseFrame>chassis</robotBaseFrame>
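Note that leftJoint and rightJoint must match the joint names defined in the xacro file, and wheelSeparation and wheelDiameter must agree with the described geometry; otherwise the odometry computed by the plugin will not correspond to the simulated motion.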
Wheel and platform color references have been added. The following describes the laser scanner and its features:

  <pose>0 0 0.06 0 0 0</pose>
  <visualize>false</visualize>
  <update_rate>40</update_rate>
  <scan>
    <horizontal>
      <samples>720</samples>
      <resolution>1</resolution>
      <min_angle>-3.14159265</min_angle>
      <max_angle>3.14159265</max_angle>
    </horizontal>
  </scan>
  <range>
    <min>0.2</min>
    <max>25.0</max>
    <resolution>0.01</resolution>
  </range>
  <noise>
    <type>gaussian</type>
    <mean>0.0</mean>
    <stddev>0.01</stddev>
  </noise>
Finally, we use the gazebo_ros_head_rplidar_controller plugin to publish the scanned data to a topic:

  <topicName>/mybot/laser/scan</topicName>
  <frameName>laser</frameName>
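The frameName determines the tf frame in which the laser scans are stamped; it must correspond to the laser link in the robot description so that the navigation stack can transform the scans correctly.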
Another way to describe an element is to import a mesh file with the .dae extension. Such a file can be exported from a drawing in Blender and describes the characteristics of the element in great detail. In our case, a mesh description of the RPLidar A2 laser sensor is inserted [11].
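A sketch of how such a mesh might be referenced from the visual element of the sensor link is shown below; the package name and file path are illustrative only:

  <visual>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <!-- Illustrative path to the mesh exported from Blender -->
      <mesh filename="package://mybot_description/meshes/rplidar_a2.dae"/>
    </geometry>
  </visual>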
Fig. 4. 3D Simulated Model of Differential Drive Mobile Robot.
After completing all these steps, the mobile platform is ready for use. Figure 4 shows the platform designed in Gazebo. After launching the robot model in ROS and Gazebo, we can check which nodes and topics are active. So far, we have two output topics (odom for odometry and LaserScan for the lidar) and one input topic (cmd_vel) for wheel control.
The final stage of modeling in Gazebo is to create a model of a simulated world, so that the robot is placed in a particular environment with its own properties. To make this easier for developers, Gazebo offers a significant range of ready-to-use items and even buildings. We can therefore easily create our world and save it; the world information is saved in a .world file. Finally, in order to launch the robot in the world, we launch the robot model and the world file together from a .launch file (Figure 5).
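A minimal sketch of such a launch file is shown below. The empty_world.launch include and the spawn_model node are standard parts of the gazebo_ros package, while the package and file names (mybot_gazebo, mybot_description) are illustrative:

  <launch>
    <!-- Start Gazebo with the saved world -->
    <include file="$(find gazebo_ros)/launch/empty_world.launch">
      <arg name="world_name" value="$(find mybot_gazebo)/worlds/mybot.world"/>
    </include>
    <!-- Load the robot description generated from the xacro file -->
    <param name="robot_description"
           command="$(find xacro)/xacro $(find mybot_description)/urdf/mybot.xacro"/>
    <!-- Spawn the robot model into the running simulation -->
    <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model"
          args="-urdf -param robot_description -model mybot"/>
  </launch>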
Fig. 5. Robot Model in Simulated World.
IV. ROS NAVIGATION STACK
Once we have a simulation model of the robot, we can configure it to operate in autonomous navigation mode. Several new steps, determined by the requirements of the navigation package, should be taken to achieve this.

The principle of operation of the navigation package is presented in Figure 6. Autonomous navigation is performed by the move_base node [12]. This node receives the desired destination from the move_base_simple/goal topic. Then move_base reads the data from the odom, LaserScan, tf (the transform library [7]), and GetMap topics, processes the data, and plans the path. Finally, the necessary commands are published on the cmd_vel topic for controlling the mobile platform.
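As a sketch, move_base might be brought up from a launch file as follows; the configuration file names and the mybot_navigation package are illustrative, and the parameters they would hold are discussed at the end of Section V:

  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <!-- Common costmap settings are loaded into both costmaps -->
    <rosparam file="$(find mybot_navigation)/config/costmap_common.yaml"
              command="load" ns="global_costmap"/>
    <rosparam file="$(find mybot_navigation)/config/costmap_common.yaml"
              command="load" ns="local_costmap"/>
    <!-- Dedicated global/local costmap and local planner parameters -->
    <rosparam file="$(find mybot_navigation)/config/global_costmap.yaml" command="load"/>
    <rosparam file="$(find mybot_navigation)/config/local_costmap.yaml" command="load"/>
    <rosparam file="$(find mybot_navigation)/config/base_local_planner.yaml" command="load"/>
  </node>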
Fig. 6. ROS Navigation Stack Configuration.
In order for navigation to work correctly, both the mapping and localization packages must be configured [8]. These two packages are part of ROS and can be used ready-made. The mapping package is used to create and retrieve map information [9]. The AMCL package provides self-localization algorithms [10]. In addition, global and local planner parameters, as well as global_costmap and local_costmap parameters, can be set. When the entire navigation system is configured, we can move on to experiments.
Initially, a map of the working environment should be created. The map creation process is carried out manually, by driving the robot through the environment. The following programs and nodes are started:
- the model of the robot in the simulated world;
- the slam_gmapping node, which reads the odometry and laser scanner data and returns the data needed to build the map (see the sketch after this list);
- the tf node [7];
- a node for manual control of the robot, via keyboard or joystick;
- the rviz interface for rendering the map as it is built.
After the map is ready, we save it. Figure 7 shows the resulting map.
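A sketch of how slam_gmapping might be started from a launch file is given below; the frame names match the robot model above, and the scan remapping points to the topic published by the lidar plugin:

  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <!-- Frames published by the robot model and its odometry -->
    <param name="base_frame" value="chassis"/>
    <param name="odom_frame" value="odom"/>
    <!-- Read the laser data published by the Gazebo lidar plugin -->
    <remap from="scan" to="/mybot/laser/scan"/>
  </node>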
Fig. 7. Created Map of the Gazebo World.
When the map is ready, we can start the autonomous navigation mode. To do this, the following programs are started:
- the model of the robot in the simulated world;
- the nodes in Figure 6: map_server, tf, amcl, move_base (see the sketch after this list);
- the rviz interface to visualize the robot and set desired destinations.
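A sketch of the map_server and amcl bring-up, complementing the move_base sketch in Section IV (the map file path is illustrative):

  <!-- Serve the saved map on the /map topic -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find mybot_navigation)/maps/mybot_map.yaml"/>
  <!-- Adaptive Monte Carlo localization against the served map -->
  <node pkg="amcl" type="amcl" name="amcl">
    <param name="odom_frame_id" value="odom"/>
    <param name="base_frame_id" value="chassis"/>
    <remap from="scan" to="/mybot/laser/scan"/>
  </node>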
We now have a working simulation model of the robot under the control of ROS in autonomous navigation mode. To make sure all the necessary nodes are present, we can open rqt_graph, an interface showing all running nodes and topics (Fig. 8). This check is very useful because we can easily find out whether the connections between the nodes are correct and whether any links are missing.
Fig. 8. Active ROS Nodes in Navigation Mode.
V. EXPERIMENTS AND RESULTS
Since only the minimum requirements for operating a robot in autonomous navigation mode are met, the robot is not expected to move perfectly or always reach the set coordinates. Nevertheless, experiments can be performed to verify that the robot can perform its tasks.
The experiments performed are of two types. The first type checks whether the robot can move independently in a room without obstacles. In the second type, additional objects were added to the room, and we then check whether the robot can detect the obstacles and avoid them.
Both types of experiments were successful. As expected, when navigating in an empty room with no added obstacles, all runs succeed: the system successfully localizes the robot, generates a path, and drives the robot, which reaches the set position each time. The navigation system has no difficulty, since it has plenty of free space.
However, this changes when there are additional obstacles. Figure 5 shows the added cubes and cylinders in the middle of the room. It should be noted that they were added after the map was already created and do not exist on the map itself (compare Figure 7). In these experiments, the navigation system recognizes the obstacles by itself and, if necessary, changes the planned path even while the robot is moving.
Figures 9, 10, 11, and 12 show the steps of completing one of the tasks. The starting position of the robot is the beginning of the blue line, and the given destination is the position the robot has reached in Figure 12. The robot is shown in orange; the path planned by global_planner is in blue; the local path planned by local_planner is in green; the contours of the map and the obstacles are in red and purple.
Fig. 9. Navigation Experiments - starting.
The collision zone is indicated in black as a shadow, i.e. the closest distance at which the robot may approach a contour or obstacle. This distance can be adjusted depending on the settings and desired behavior of the robot.
Just before the experiment started, we added a new obstacle: a cylinder in the middle, among the others. It can be seen that the laser scanner recognizes the contour of the cylinder, but the navigation system has not yet added a black shadow, because the object is not yet within the scope of local_planner. When the robot approaches and the object comes within range, it is recognized as a new object and the system corrects the original path.
Fig. 10. Navigation Experiments - the robot aligns with the planned path.
Fig. 11. Navigation Experiments - recognizing and avoiding the new obstacle.
In some of the experiments, the robot failed to reach its target. In these cases, the robot enters the collision zone of an object after deviating too much from its path or after failing to recognize an obstacle in time. This happens for two reasons: the lack of a controller for robust speed control of the robot, and the slow refresh rate of the local_costmap.
Fig. 12. Navigation Experiments - reaching the final goal.
While the first reason requires setting up a ROS controller for speed control of the robot, the second is easily corrected by increasing the refresh rate parameter of the local_costmap.

In addition, it is advisable to adjust the base_local_planner parameters, because their default values may not be appropriate for every platform. The base_local_planner parameters include the maximum and minimum velocities along the x and y axes, the maximum and minimum rotation speed, and the acceleration limits on the three axes x, y, and z.
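As an illustration, these parameters might be tuned directly from the navigation launch file as below; the values are placeholders to be adjusted experimentally, not the ones used in our experiments:

  <!-- Refresh the local costmap more often (Hz) -->
  <param name="move_base/local_costmap/update_frequency" value="10.0"/>
  <!-- Velocity and acceleration limits of the default base_local_planner -->
  <rosparam ns="move_base/TrajectoryPlannerROS">
    max_vel_x: 0.5
    min_vel_x: 0.1
    max_vel_theta: 1.0
    min_in_place_vel_theta: 0.4
    acc_lim_x: 1.0
    acc_lim_y: 0.0
    acc_lim_theta: 2.0
  </rosparam>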
VI. CONCLUSION
The Robot Operating System and the Gazebo platform offer very good capabilities for creating and controlling robots. This article has described the main steps for creating a simulation model of a mobile robot in ROS and Gazebo. The model developed is a differential drive robot with two caster wheels, equipped with a laser scanner. The ROS navigation system was then configured to control the developed robot model. The experiments show that, without special modifications to the settings of the mapping, localization, and navigation packages, the robot can move independently in the simulated environment.

However, to achieve accurate path tracking, smooth movement, and precise positioning, it is necessary to add systems such as a speed controller and an inertial sensor. This is planned as future work on this study.
ACKNOWLEDGMENT
The research presented in this paper was supported by the Bulgarian National Science Fund under contract No. KP-06-М27/1 04.12.2018. The work was partially supported by the Bulgarian Ministry of Education and Science under the National Research Programme "Young scientists and postdoctoral students", approved by DCM # 577 / 17.08.2018.
REFERENCES
[1] Zheng, Chuanqi, and Kiju Lee. "WheeLeR: Wheel-Leg Reconfigurable
Mechanism with Passive Gears for Mobile Robot Applications." In
2019 International Conference on Robotics and Automation (ICRA),
pp. 9292-9298. IEEE, 2019.
[2] Kim, Pileun, Jingdao Chen, Jitae Kim, and Yong K. Cho. "SLAM-
driven intelligent autonomous mobile robot navigation for construction
applications." In Workshop of the European Group for Intelligent
Computing in Engineering, pp. 254-269. Springer, Cham, 2018.
[3] Arvin, Farshad, Jose Luis Espinosa Mendoza, Benjamin Bird, Andrew
West, Simon Watson, and Barry Lennox. "Mona: an affordable mobile
robot for swarm robotic applications." In UK-RAS Conference on
‘Robotics and Autonomous Systems, pp. 49-52. 2017.
[4] Ambrus, Rares, Nils Bore, John Folkesson, and Patric Jensfelt.
"Autonomous meshing, texturing and recognition of object models
with a mobile robot." In 2017 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), pp. 5071-5078. IEEE, 2017.
[5] Takaya, Kenta, Toshinori Asai, Valeri Kroumov, and Florentin
Smarandache. "Simulation environment for mobile robots testing using
ROS and Gazebo." In 2016 20th International Conference on System
Theory, Control and Computing (ICSTCC), pp. 96-101. IEEE, 2016.
[6] Pietrzik, S., and B. Chandrasekaran. "Setting up and Using ROS-
Kinetic and Gazebo for Educational Robotic Projects and Learning."
In Journal of Physics: Conference Series, vol. 1207, no. 1, p. 012019.
IOP Publishing, 2019.
[7] Foote, Tully. "tf: The transform library." In 2013 IEEE Conference on
Technologies for Practical Robot Applications (TePRA), pp. 1-6.
IEEE, 2013.
[8] Guimarães, Rodrigo Longhi, André Schneider de Oliveira, João
Alberto Fabro, Thiago Becker, and Vinícius Amilgar Brenner. "ROS
navigation: Concepts and tutorial." In Robot Operating System (ROS),
pp. 121-160. Springer, Cham, 2016.
[9] da Silva, Bruno MF, Rodrigo S. Xavier, and Luiz MG Gonçalves.
"Mapping and Navigation for Indoor Robots under ROS: An
Experimental Analysis." (2019).
[10] Talwar, Dhruv, and Seul Jung. "Particle Filter-based Localization of a Mobile Robot by Using a Single Lidar Sensor under SLAM in ROS Environment." In 2019 International Conference on Control, Automation and Systems (ICCAS), pp. 1112-1115. 2019.
[11] Guirguis, Silvana, Mark Gergis, Catherine M. Elias, Omar M. Shehata,
and Slim Abdennadher. "ROS-based Model Predictive Trajectory
Tracking Control Architecture using LiDAR-Based Mapping and
Hybrid A* Planning." In 2019 IEEE Intelligent Transportation Systems
Conference (ITSC), pp. 2750-2756. IEEE, 2019.
[12] ROS Navigation: http://wiki.ros.org/navigation/Tutorials/RobotSetup?action=AttachFile&do=get&target=overview_tf.png
[13] Gazebo ROS Control: http://gazebosim.org/tutorials?tut=ros_control&cat=connect_ros
[14] Nie, Chenghui, Xavier Pacheco Corcho, and Matthew Spenko.
"Robots on the move: Versatility and complexity in mobile robot
locomotion." IEEE Robotics & Automation Magazine 20, no. 4
(2013): 72-82.
[15] Doroftei, Ioan, Victor Grosu, and Veaceslav Spinu. Omnidirectional
mobile robot-design and implementation. INTECH Open Access
Publisher, 2007.
[16] Tlale, Nkgatho, and Mark de Villiers. "Kinematics and dynamics
modelling of a mecanum wheeled mobile platform." In 2008 15th
International Conference on Mechatronics and Machine Vision in
Practice, pp. 657-662. IEEE, 2008.