A Comparative Survey of LiDAR-SLAM and
LiDAR based Sensor Technologies
Misha Urooj Khan
Department of Electronics Engineering
University of Engineering and
Technology Taxila
Pakistan
mishauroojkhan@gmail.com
Syeda Ume Rubab Bukhari
Department of Electrical Engineering
University of Engineering and
Technology Taxila
Pakistan
syedaumerubab16@gmail.com
Syed Azhar Ali Zaidi
Department of Electronics Engineering
University of Engineering and
Technology Taxila
Pakistan
azhar.ali@uettaxila.edu.pk
Sana Samer
Department of Electronics Engineering
University of Engineering and
Technology Taxila
Pakistan
samersana01@gmail.com
Arslan Ishtiaq
Department of Electrical Engineering
University of Engineering and
Technology Taxila
Pakistan
arslanishtiaq21@gmail.com
Ayesha Farman
Department of Electronics Engineering
University of Engineering and
Technology Taxila
Pakistan
ayesha.farman@students.uettaxila.edu.pk
Abstract: Simultaneous Localization and Mapping (SLAM)
accomplishes the goal of concurrent localization and map
creation based on self-recognition. LiDAR-based SLAM
technology has advanced exponentially as a result of the
widespread use of LiDAR sensors in a variety of technological
sectors. This paper begins with a brief comparison of the
different sensor technologies such as radar, ultrawideband
positioning, and Wi-Fi with LiDAR, and their functional
importance in automation, robotics, and other fields. The
classification of LiDAR sensors is also briefly discussed in
tabular form. Then, a LiDAR-based SLAM is introduced by
discussing its general graphical and mathematical modeling.
After that, three main features of LiDAR SLAM, i.e., mapping,
localization, and navigation, are discussed. Finally, the
comparison of LiDAR SLAM is discussed with other SLAM
technologies, and the challenges faced during its implementation
are also discussed.
Keywords: SLAM, LiDAR, Robotics, Robot Operating System, Deep Learning, Human-Computer Interaction.
I. INTRODUCTION
Mapping and localization are the two main features of Simultaneous Localization and Mapping (SLAM), a major open problem in mobile robotics. An autonomous robot faces a chicken-and-egg problem: it must move precisely and draw an accurate map of the surrounding environment, yet its sensors need to know exactly where they are before they can construct that map [1]. As a result, simultaneous map creation and localization are intertwined challenges. Extended Kalman Filter-based SLAM (EKF-SLAM) was proposed in 1990 for calculating posterior distributions over robot poses as well as landmark positions [2]. The robot identifies its location and orientation through repeated recognition of spatial features as it moves, and then produces a progressive map of the surrounding region based on its position, achieving the objective of simultaneous positioning and map formation [3-7]. In recent years, localization has remained a highly complex and contentious problem [8].
Localization technologies differ based on the environment and on the requirements for efficiency, accuracy, velocity, and reliability, and can be built on Global Positioning System (GPS) wireless signals, Inertial Measurement Units (IMUs), and other sources [9-11]. GPS can fail to operate for a variety of reasons, including power loss, inaccurate transmissions, intense environmental hazards such as heatwaves, and non-penetration through concrete walls or buildings [12]. A major drawback of IMU-based navigation is cumulative error, or drift [13]: because the control system continuously integrates acceleration over time to quantify speed and distance, any calculation errors, however minor, accumulate over time [14]. One may think of the Global Navigation Satellite System (GNSS) as a resolution to this localization issue, but it was soon found that GNSS alone was not enough. While some limitations of classical GNSS solutions have been removed, the dependence on accurately located ground stations continues to be a problem [15]. Satellite transmissions are affected by unpredictable environmental phenomena, which can disturb direct signal reception and trigger multipath or non-line-of-sight errors that may be disastrous for the covered area [16]. This form of transmission degradation is difficult to detect and usually results in an integrity deficiency that is difficult to resolve [17]. These problems are most common in densely populated urban areas, where massive structures occlude the satellites [18]. With the rapid development of SLAM systems equipped with cameras, IMUs, LiDAR, and other sensors, these problems are gradually and consistently being overcome [19].
II. LIDAR SENSORS
LiDAR is an acronym for Light Detection and Ranging. It is a remote sensing device used to examine the earth's exterior surface, and it falls under the category of Time of Flight (ToF) sensors [20]. LiDAR measures an object's distance by emitting a laser pulse toward the object and capturing its travel time [21]. A typical LiDAR sensor and its main elements are shown in Fig. 1.
Fig. 1. Typical LIDAR Sensor and its main elements [22].
The distance travelled by a returned light pulse to and from an object is given by

$d = \frac{c \, t_f}{2}$  (1)

where $d$ is the distance, $c$ is the speed of light, and $t_f$ is the flight time. This helps in calculating precise distances to land points and altitudes, as well as base-ground buildings, roads, and trees [29]. The LiDAR equation (2), used to quantify individual measurements of the atmosphere as well as to assess the geometry and efficiency of structures [30], is

$P(R) = K \, G(R) \, T(R) \, \beta(R)$  (2)

where $P(R)$ is the power received as a function of range $R$, $K$ is a constant system-dependent factor (covering, e.g., transmitted energy and optics performance), $G(R)$ is the range-dependent measurement geometry, $T(R)$ is the transmission of the propagation channel, and $\beta(R)$ describes the backscattering attributes of the target. These elements can be extended and modified to accommodate the unique characteristics of each device and operation.
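As a minimal numerical illustration of (1) and (2), the following Python sketch computes a range from a measured flight time and evaluates the LiDAR power equation; every numeric parameter value below is an illustrative assumption, not a calibrated sensor constant.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_range(flight_time_s: float) -> float:
    """Eq. (1): d = c * t_f / 2 (the pulse travels to the target and back)."""
    return C * flight_time_s / 2.0

def received_power(r, k, g, t, beta):
    """Eq. (2): P(R) = K * G(R) * T(R) * beta(R)."""
    return k * g(r) * t(r) * beta(r)

# A ~667 ns round trip corresponds to a target about 100 m away.
print(f"range = {tof_range(667e-9):.2f} m")

# Toy range-dependent factors: 1/R^2 geometry, exponential two-way
# atmospheric transmission, and a constant target backscatter.
p = received_power(
    r=100.0,
    k=1.0,                            # system constant (assumed)
    g=lambda r: 1.0 / r**2,           # measurement geometry
    t=lambda r: math.exp(-2e-4 * r),  # propagation-channel transmission
    beta=lambda r: 0.1,               # backscatter attribute of the target
)
print(f"relative received power = {p:.3e}")
```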
A. 2D vs. 3D LiDAR Sensor Technologies
2D LiDAR sensors record X and Y parameters using a single axis of beams [17]. 3D LiDAR sensors work in the same way as their 2D counterparts, but additional measurements around the Z-axis are needed to collect true 3D data. Data for the third axis is usually collected using several lasers at various angles or through longitudinal projections. 3D LiDAR sensors have higher precision and resolution than 2D versions, but they are significantly more expensive. 3D LiDAR is ideal for visualization and for thorough analysis of engineered structures, such as measuring bend radii. Table I compares representative parameters of the two technologies, and a coordinate-conversion sketch contrasting them follows the table.
TABLE I. 2D VS. 3D LIDAR SENSOR TECHNOLOGY

| LiDAR Sensor Technology | Wavelength (nm) | Scanning Range (°) | Weight (kg) | Precision (mm) |
|---|---|---|---|---|
| 2D | 905 | 0-180 / 40-140 | 4.5 | 15 |
| 3D | 750 | 360 | 5 | 0.95 |
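To make the 2D/3D distinction above concrete, the sketch below converts a single-axis 2D scan of (angle, range) pairs into planar points, and a 3D scan of (azimuth, elevation, range) triples into 3D points; the sample values are arbitrary and not tied to any specific sensor.

```python
import math

def scan_2d_to_xy(scan):
    """Convert a single-axis 2D LiDAR scan of (angle_rad, range_m)
    pairs into Cartesian (x, y) points."""
    return [(r * math.cos(a), r * math.sin(a)) for a, r in scan]

def scan_3d_to_xyz(scan):
    """Convert a 3D scan of (azimuth, elevation, range) triples,
    e.g. from several lasers mounted at different vertical angles,
    into Cartesian (x, y, z) points."""
    return [
        (r * math.cos(el) * math.cos(az),
         r * math.cos(el) * math.sin(az),
         r * math.sin(el))
        for az, el, r in scan
    ]

# Example: two beams of a 2D scan and one 3D return.
print(scan_2d_to_xy([(0.0, 1.0), (math.pi / 2, 2.0)]))
print(scan_3d_to_xyz([(0.0, math.radians(10), 5.0)]))
```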
B. Mechanical vs. Solid-State LiDAR
Due to its large volume and mass, mechanical LiDAR is difficult to incorporate into a small device. For example, carrying a mechanical LiDAR for building inspections [16] dramatically reduces the endurance of surveillance drones, and its sheer size means a mechanical LiDAR cannot be built into a handheld device. The recent emergence of solid-state LiDAR provides a low-cost and compact option for LiDAR SLAM systems. A solid-state LiDAR is a structure that is entirely based on a microprocessor and has no mechanical components [19]. As a result, when compared to mechanical LiDAR, both the size and mass can be greatly decreased. Furthermore, by eliminating the rotating mechanical base, solid-state LiDAR is vibration resistant. A solid-state LiDAR costs just 10% of an existing mechanical LiDAR, is as compact as a cellphone, and has enormous potential to become the dominant small-form-factor sensing system for applications such as augmented reality [17], aerial exploration, and target detection. In terms of reliability, solid-state LiDAR is reported to have a precision of 1-2 cm and a visible range of hundreds of meters. Because mechanical and solid-state LiDAR outputs are similar, existing LiDAR SLAM approaches carry over without challenge. Current techniques have mainly been developed for mechanical LiDAR sensors, which gather data from nearby objects by rotating a high-frequency laser device. Even though large-scale mapping studies have shown promising results [13], [14], they are seldom used owing to their high expense.
C. LiDAR vs. Other Localization and Positioning Systems
LiDAR is among the most popular perception mechanisms in robotics due to its extreme precision, wide coverage, and long service life. It measures the distance to a target from the travel time of emitted laser pulses. Many localization techniques rely instead on external infrastructure, resulting in a lack of robustness in a range of circumstances: ultrawideband positioning [13] needs multiple anchors to be preinstalled and optimized, while Wi-Fi positioning [12] generally requires several routers to be extremely precise. Most active radars have a wavelength of 2-30 cm, while LiDARs have an operating wavelength of 300-2000 nm [29]. This is a difference of roughly five orders of magnitude (e.g., 2 cm versus 2000 nm is a factor of 10^4, and 30 cm versus 300 nm a factor of 10^6), which can have serious implications for surveillance applications. Due to the closer spacing of laser beams, LiDAR has smaller footprints and better angular and temporal resolution than radar. Long-wavelength radar is susceptible to water droplets and falling snow, while LiDAR exploits these aspects to detect and measure atmospheric molecular quantities such as oxygen and liquid water content [30].
D. Practical Importance of LiDAR Sensing Technologies
In terms of data acquisition and application, LiDAR has advanced beyond a variety of other technologies and sensors that were not sensitive enough. LiDAR has proven to be a powerful tool for a variety of problems, including scanning between trees, by providing a smooth, precise, and direct 3D-mapping system that produces accurate and easy-to-understand results [3]. LiDAR technology is also rapidly gaining prominence in robotics, where good validity and consistency are desired. These characteristics set LiDAR apart from alternatives such as photographic techniques, which have difficulty detecting ground elevations. Modern LiDARs can also operate 24 hours a day, giving them a significant edge over sensors such as cameras, which are almost useless in darkness or mist [30]. The robust versatility of LiDAR, as well as its large range of up to 200 m and wide coverage, allows it to easily identify targets. Furthermore, the cost of such higher-performance 3D sensors has been greatly reduced with the advent and adoption of solid-state technology, making them better suited for a multitude of applications.
E. LiDAR Pros and Cons
LiDAR sensors gather data from inaccessible areas easily and accurately [2-4]. They can be combined with other sensors such as IMUs, cameras, GPS, sonar, and ToF sensors [7-9]. Thanks to its active illumination, LiDAR can be used in both daytime and darkness [11-15]. Once fully configured, LiDAR is a self-contained piece of hardware that can operate on its own [22-30]. On the other hand, depending on the application's specifications, LiDAR can be expensive [3]. The technology becomes inefficient in rain and low-hanging cloud [18]. Analyzing vast amounts of point-cloud data can be time-consuming and resource-intensive [20]. The strong laser beams used in certain LiDAR systems are harmful to the naked eye, and LiDAR has a tough time penetrating dense materials [32].
TABLE II. COMPARISON OF DIFFERENT TYPES OF LIDAR SENSORS

| LiDAR Type | Attributes | Drawbacks | Applications |
|---|---|---|---|
| Airborne [23] | Gives correct distance estimates and core land-property information. | Very powerful transmitters are needed to travel long distances, making it a complex and expensive technology. | Forest and topographic mapping; ocean-bed scanning. |
| Topographic [22] | Used in the development of topographic maps. | When gathering data, pulses may not be able to pass through dense foliage. | Silviculture, climatology, geochemistry, and environmental development. |
| Terrestrial [21] | Retrieves data sets with a maximum degree of accuracy and intensity. | Unable to penetrate buildings, foliage, and frost. | Earth tracking, shift tracking, reporting, simulation, and other landscape insights. |
| Static [19] | A series of LiDAR laser scans collected from a fixed spot. | The interval between the captured light and the reference causes the mixed beam to shift or break frequency constantly against a static target. | Logging, inspecting, archaeology, and engineering. |
| Solid-state [19] | A cutting-edge innovation that is better performing, faster, space-saving, more robust, and less expensive. | Cannot spin 360 degrees; can only track objects in front of it. | Autonomous driving vehicles, the automotive sector, and autonomous robots. |
| Mechanical [32] | Uses strong collinear beams and highly targeted optics to concentrate the reflected signals on the detectors. | Expensive, less technical, has performance problems, and is large in size. | Airborne laser swath mapping and geomatics. |
III. 2-D LIDAR SLAM
A 2D-LiDAR-SLAM architecture is equipped with a LiDAR sensor for creating a 2D map of its surroundings [5]. The LiDAR sensor illuminates the target with an active laser "pulse" and calculates the distance to the object. Across various circumstances and landscapes [17], LiDAR-based SLAM is a quick and precise solution for map generation.
Fig. 2. General 2-D LiDAR based SLAM Graphical model.
A. Mathematical Modeling
The bare minimum requirements for solving the LiDAR SLAM problem are a mobile robot and a sensor that retrieves knowledge about the robot's surroundings. The general scheme of single-robot LiDAR SLAM can be interpreted from Fig. 2. The LiDAR SLAM problem is described as follows: a robot wanders in an unfamiliar area, starting from a known location. Its motion is unpredictable, and determining its next coordinates becomes increasingly complex. The robot perceives its surroundings as it travels, but it is difficult to create a map at the same time as deciding the robot's location. 2D LiDAR SLAM is therefore formulated in probabilistic terms. As seen in Fig. 3, the poses of both the robot and the landmarks are estimated at the same time. The precise locations of the robot and the landmarks are unknown and cannot be measured directly; they must be inferred from the robot's motion and its observations of prominent locations.
For a robot, the pose $x_t$ is made up of its position in the plane (a 2-D vector) and its in-plane orientation, giving the 3-D state $x_t = (x, y, \theta)$. The robot path starting from time $t = 0$ is represented as

$X_{0:T} = \{x_0, x_1, \dots, x_T\}$  (3)

The relative motion of the robot between times $t-1$ and $t$ is

$U_{1:T} = \{u_1, u_2, \dots, u_T\}$  (4)

It is not sufficient to depend solely on robot odometry to determine the robot's position in the plane, because odometry lacks the precision needed for accurate localization in real-world applications; errors arise from the surface structure of the environment and from robot misbehavior such as wheel slippage. The sequence of sensor observations over time is described as

$Z_{1:T} = \{z_1, z_2, \dots, z_T\}$  (5)
Following the collection and specification of all relevant data, the next step is to estimate the landmark positions and the map. The LiDAR SLAM approach uses a probability density function to jointly estimate the robot path and the generated map $m$. The probability distribution, $P$, is defined as

$P(x_{0:T}, m \mid z_{1:T}, u_{1:T})$  (6)

The relationship between the robot's position and its change in position is given by the motion model

$x_t = f(x_{t-1}, u_t)$  (7)

Data collected from observed landmarks, together with the robot position estimate from time step $t-1$, is used to refine the estimates of the robot's location and the landmarks at time step $t$. The process is repeated indefinitely, adjusting the robot's estimates until the robot has finished exploring the field; a schematic code sketch of this loop is given after Fig. 3.
Fig. 3. Mathematical Modeling of 2-D LiDAR SLAM.
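The recursion just described can be sketched as a bare-bones, landmark-based predict-and-update loop with a simple planar odometry model standing in for (7). This is a schematic only: it omits the uncertainty bookkeeping of the full posterior (6), and all function and variable names are illustrative.

```python
import math

def motion_model(pose, u):
    """Eq. (7): x_t = f(x_{t-1}, u_t) for a planar robot.
    pose = (x, y, theta); u = (forward distance, heading change)."""
    x, y, th = pose
    d, dth = u
    return (x + d * math.cos(th), y + d * math.sin(th), th + dth)

def landmark_from_obs(pose, z):
    """Project a (range, bearing) observation z into a landmark
    position in the world frame, given the current pose estimate."""
    x, y, th = pose
    r, b = z
    return (x + r * math.cos(th + b), y + r * math.sin(th + b))

def slam_loop(controls, observations):
    """Schematic SLAM loop: predict the pose from odometry (4), then
    update the map from the sensor observations (5). A real system
    would also correct the pose against re-observed landmarks."""
    pose = (0.0, 0.0, 0.0)  # known starting location
    path, landmarks = [pose], {}
    for u, z_list in zip(controls, observations):
        pose = motion_model(pose, u)       # prediction step
        for lm_id, z in z_list:            # map-update step
            landmarks[lm_id] = landmark_from_obs(pose, z)
        path.append(pose)
    return path, landmarks

path, lmap = slam_loop(
    controls=[(1.0, 0.0), (1.0, math.pi / 2)],
    observations=[[(0, (2.0, 0.0))], [(0, (1.5, -0.3))]],
)
print(path)
print(lmap)
```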
B. Features of 2D-LiDAR SLAM
LiDAR SLAM provides three main capabilities: mapping of the surrounding area, localization, and path planning (navigation).
(a) Mapping: The robot builds a map of the surrounding area it wishes to visit before beginning to navigate across new areas. Environment mapping allows autonomous robots to build a map using hardware sensors that receive environmental data [5]. For map representation, the data generates a diagram that is a hybrid or topological map [3]. A minimal occupancy-grid sketch is given below.
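A common 2D map representation is the occupancy grid; this illustrative sketch (with hypothetical cell size and log-odds increment) marks the cells hit by beam endpoints as more likely occupied.

```python
import math

def update_occupancy_grid(grid, pose, scan, cell=0.1):
    """Mark the grid cells hit by 2D LiDAR beam endpoints as more
    likely occupied. grid maps (i, j) cell indices to a log-odds
    value; cell is the grid resolution in metres (assumed 0.1 m).
    A full implementation would also trace free cells along each beam."""
    x, y, th = pose
    for angle, rng in scan:
        ex = x + rng * math.cos(th + angle)  # beam endpoint in world frame
        ey = y + rng * math.sin(th + angle)
        key = (math.floor(ex / cell), math.floor(ey / cell))
        grid[key] = grid.get(key, 0.0) + 0.9  # log-odds increment (assumed)
    return grid

grid = update_occupancy_grid({}, (0.0, 0.0, 0.0), [(0.0, 1.0), (0.1, 1.2)])
print(grid)
```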
(b) Localization: This SLAM capability lets the robotic system measure and predict the landmark locations and its own direction based on the map [3]. Localization allows mobile entities to "think": to find points of reference and identify nearby barriers using map information [2] by calculating and evaluating the route. Positioning allows the robot to recognize its position and surroundings and to stop when an object is in sight. A toy pose-scoring sketch follows.
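Localization against a known map can be sketched as scoring candidate poses by how well the ranges predicted from mapped landmarks match the measured ones. This toy grid search is an illustration only, not a production scan matcher.

```python
import math

def pose_score(pose, landmarks, measurements):
    """Score a candidate pose by the squared mismatch between measured
    ranges and ranges predicted from the mapped landmark positions."""
    x, y, _ = pose
    err = 0.0
    for (lx, ly), r_meas in zip(landmarks, measurements):
        r_pred = math.hypot(lx - x, ly - y)
        err += (r_pred - r_meas) ** 2
    return err

def localize(landmarks, measurements, candidates):
    """Return the candidate pose with the smallest range mismatch."""
    return min(candidates, key=lambda p: pose_score(p, landmarks, measurements))

landmarks = [(2.0, 0.0), (0.0, 3.0)]
measurements = [1.41, 2.24]  # ranges measured from the true pose (1, 1)
candidates = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 2.0, 0.0)]
print(localize(landmarks, measurements, candidates))  # -> (1.0, 1.0, 0.0)
```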
(c) Navigation: This capability combines the mapping and positioning features so that the robot has a reasonable route plan based on the data collected during detection and localization. The mapping and positioning procedures are performed recursively to adjust the robot's knowledge of its surroundings as it navigates [2]. Navigation chooses a suitable path depending on the details obtained, responds to the local area, and can return to the starting or stopping point after exploring. A minimal grid planner is sketched below.
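As a minimal stand-in for a real planner, the sketch below runs breadth-first search over a small occupancy grid (assumed 4-connected, with obstacle cells marked 1); practical systems would use A* or similar over the SLAM-built map.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free,
    1 = occupied). Returns the list of cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                 # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))  # routes around the wall of 1s
```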
Fig. 4. Overview of 2D-LIDAR SLAM.
C. LiDAR SLAM vs. Other SLAM Technologies
Table III compares LiDAR SLAM with other SLAM technologies in terms of operation, advantages, and disadvantages.
TABLE III. COMPARISON OF DIFFERENT TYPES OF SLAM TECHNOLOGIES

| Sr No | SLAM Type | Operation | Advantages | Disadvantages |
|---|---|---|---|---|
| 1 | 2D-LiDAR SLAM [5] | Using a laser sensor, creates a 2D view of its surroundings; calculates the distance to an object by illuminating it with an active laser "pulse". | Best at spotting non-functional structures such as hollow ceilings; high performance, good precision, significant distance range. | 2D-LiDAR-based methods fail in bad weather such as thunderstorms. |
| 2 | EKF (Extended Kalman Filter) SLAM [15] | Derives the device state estimate by a three-phase iteration: prediction, observation, and update [15]; can be regarded as a Bayesian filter variant. | Recursive, more accurate estimates of the state of a complex system; tackles uncertain, combinatorial estimation challenges within a strong filtering framework. | Sophisticated probabilistic computation; forecasting is not grounded in observed data. |
| 3 | FastSLAM [14] | Resolves the data-association dilemma with maximum-likelihood estimation [14]; maintains M · K small EKFs (K of them in each particle). | Full probabilistic computation. | Estimates may drift away from reality. |
| 4 | LSD (Large-Scale Direct) SLAM [13] | A direct vision-based methodology: pixel-wise stereo correlation provides effective photometric alignment via the camera, and geometry is recovered through semi-dense depth maps [13]; works directly on image intensities rather than on an extracted-feature pipeline. | Greater precision and more robustness in sparsely textured settings; tracks Sim(3) constraints (e.g., rigid-body motion and scale) directly on the current frame [13]. | Costly; less precise. |
| 5 | Graph-SLAM [12] | Utilizes stochastic gradient descent, analogous to an adaptive procedure for non-linear optimization; each constraint on the posterior robot path contributes a cycle to the pose graph. | Rapid, without approximate density factoring [3]. | Constraints in the pose graph are processed one at a time; knowledge ambiguity and noisy data. |
| 6 | PF (Particle Filter) SLAM [11] | A particle filter is a sequential Monte Carlo filtering method; PF-SLAM is obtained by applying particle filters to the SLAM problem [11]. | Filtering is carried out through state identification, weight modification, and resampling [11]. | Many particles in a high-dimensional space are likely to be sparse and far apart. |
| 7 | Visual-SLAM [10] | Uses the whole picture, extracting features to identify and map the most prominent and essential positions. | Low SWaP-C (size, weight, power, and cost); semi-dense mapping with no dedicated identification function necessary; dense output, wide detection range, effectively unlimited sample rate. | Sensitive to flaws because of its vulnerability to illumination changes and low-texture environments; costly sensors detect only environmental changes. |
| 8 | ORB (Oriented FAST and Rotated BRIEF) SLAM 2 [9] | Feature-based; uses ORB features because of the speed with which they can be extracted from images and their rotational invariance [9]. | High-speed feature extraction. | High computational cost due to high-resolution cameras and mathematical complexity. |
| 9 | OpenRatSLAM [8] | Models the steering mechanism of the hippocampus (a part of the mammalian brain) [8]. | With only a few tuned parameters, the navigation system evolved a variety of cognitive models for navigational elements and topological maps across dramatically different datasets [8]. | Needs high-resolution cameras; high computational cost. |
LiDAR-based SLAM offers infrastructure-free navigation for robots both indoors and outdoors [35]. Furthermore, unlike SLAM systems such as ORB-SLAM [9] and Graph-SLAM [12], 2D-LiDAR SLAM [5], [17] is unaffected by disruptions such as temperature and light changes. It has been widely used in a range of autonomous applications, including autonomous driving [10], building inspection [11], and smart fabrication [12].
IV. CHALLENGES
Several main problems arise in SLAM: uncertainty, correspondence, data association, and time complexity. Here we address each issue to see its effects on LiDAR SLAM.
A. Uncertainty
Due to uncertainty [16], two main problems arise: position instability and hardware instability. Both have enormous effects on LiDAR SLAM's results. The position problem is one of SLAM's challenges in determining how the mobile robot should navigate the many possible paths in the environment. The robot can travel easily from one point to another in a single straight line, as it can be traced back to the original point [15]. In real environments, however, the mobile robot can crawl and maneuver from one point to the next in several ways, which is highly disruptive: the robot must choose the right direction while being highly unsure about its current or absolute position. In the case of hardware instability, chipset noise generated in different parts of an autonomous robot makes the derived information inaccurate [32]. This incorrect information must therefore be measured, analyzed, and subtracted to identify the real or absolute location, orientation, and other related information of the robot.
B. Correspondence
Correspondence is known as the toughest challenge in LiDAR-SLAM, and it has a major effect on the SLAM method's identification of landmarks. This is because SLAM must decide whether one specific landmark is distinctive and differs from other known landmarks [17]. As a basic example, take two similar obstacles, rock A and rock B: both rocks have identical shapes, and the only difference is that rock A is somewhat larger than rock B. The distinction is recognizable by humans but not easily by robots. Unlike humans, a robotic system's capability to distinguish landmark identities is limited, since it depends on hardware to observe and quantify the environment [15]. Because the knowledge retrieved from autonomous-vehicle hardware such as a laser sensor is limited, the robot may fail to decide whether a newly observed landmark differs from, or is identical to, previously identified landmarks.
C. Data Association
Problems with data association affect LiDAR SLAM's capacity to let the robot revert to its original or previously mapped area [16]. This problem appears when the autonomous robot tries to link its current location with a previously identified landmark in order to return to a previous origin or mapped region. To approximate this key relationship, data analysis is used to bring the robot back to its origin based on a prior map and known points of reference [17].
D. Time Complexity
Problems of time complexity concern how quickly LiDAR SLAM can perform its calculations and process the gathered data points to produce the results the autonomous robot will use in the future [31]. During operation, LiDAR SLAM performs mapping and localization simultaneously and iteratively. These multiple concurrent processes need to be coordinated and configured properly within a limited time budget. The efficiency and timescales of the SLAM methodology then become the main factors in providing the robotic system with accurate results, so it can explore new areas effectively with a reduced error rate [32].
E. Physical Devices
LiDAR SLAM is also constrained by the physical device. The spatial accuracy in the longitudinal plane is low in comparison to the diagonal tilt [23-27]. The scanning geometry also affects it: as one moves farther from the LiDAR center, the point cloud becomes sparser, and once the range limit is reached, the LiDAR stops receiving data points [28]. These three effects spread the LiDAR point cloud non-uniformly across the scanner's range [29]. Thus, LiDAR-based SLAM technology must solve issues such as massive computation, scattered coordinate systems, and motion distortion. The density fall-off with range is quantified in the sketch below.
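The thinning of the point cloud with range can be quantified: for an angular step Δθ between adjacent beams, the spacing between neighbouring returns at range r is approximately r·Δθ (small-angle approximation). The sketch below tabulates this for an assumed 0.25° angular resolution; the resolution value is illustrative.

```python
import math

def point_spacing(range_m: float, angular_res_deg: float) -> float:
    """Approximate spacing between adjacent LiDAR returns at a given
    range: s ~= r * delta_theta (small-angle approximation)."""
    return range_m * math.radians(angular_res_deg)

# Assumed 0.25 deg angular resolution: spacing grows linearly with range.
for r in (1, 10, 50, 200):
    print(f"{r:>4} m -> {point_spacing(r, 0.25) * 100:.1f} cm between points")
```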
CONCLUSION AND FUTURE WORK
Several engineering studies have been conducted to determine the best method for producing a functional autonomous system. The majority of earlier research relied on simulation or on localization alone as the differentiating process. Over time, however, a new concept known as LiDAR SLAM emerged. LiDAR SLAM demonstrates the ability of autonomous mobile robots to execute navigation and positioning operations at the same time. It increases robot efficiency while managing a dynamic environment without human intervention. LiDAR SLAM has been a tremendous advance toward resolving the mobile automated-robotics quandary, and it is also regarded as an aspiration in the world of autonomous robots. Its effectiveness in resolving mobile-robot mapping and positioning problems contributes significantly to the self-exploration of robots. In a nutshell, LiDAR SLAM is a convincing solution, but it remains to be seen how far this evolving algorithm can accomplish the ultimate goal of SLAM innovation in the automation of robotic systems. Many questions remain about the core aspects of 2D LiDAR-SLAM and its implementation. As a result, a thorough understanding of LiDAR-SLAM and its use in mobile robots for artificial intelligence is needed. With recent advancements in LiDAR-SLAM, it can be readily extended into a variety of fields, with positive outcomes expected in the future. As future work, we propose integrating soft computing into LiDAR SLAM. Soft computing is a tool used to solve computational challenges that are complicated and arithmetically persistent [18]. It provides a flexible and effective mechanism for SLAM problem-solving. The principal goal of this concept is to boost and maximize the robot's accuracy and measurement quality and to reduce failure frequency, thereby increasing the efficiency of LiDAR SLAM. The idea is not novel, and related work already exists; this concept has been proposed in [19], [20]. The suggested hybrid approach incorporates the FastSLAM algorithm with genetic [19] and swarm-optimization [20] soft-computing methodologies; a bare-bones swarm-optimization sketch is given below.
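As a hedged illustration of this future direction (a sketch of the general idea only, not the specific algorithms of [19] or [20]), the snippet below implements a minimal particle swarm optimizer that could tune hypothetical SLAM parameters against a user-supplied error function; the parameter names and the stand-in error function are assumptions.

```python
import random

def pso(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-bounded parameters.
    cost: maps a parameter vector to an error value to minimise.
    bounds: list of (low, high) limits, one pair per parameter."""
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # each particle's best position
    gbest = min(pbest, key=cost)        # swarm's best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(max(x[d] + vs[i][d], bounds[d][0]), bounds[d][1])
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=cost)
    return gbest

# Hypothetical: tune (scan-match weight, resampling threshold) against a
# stand-in localisation-error function with a known optimum at (0.8, 0.3).
err = lambda p: (p[0] - 0.8) ** 2 + (p[1] - 0.3) ** 2
print(pso(err, bounds=[(0.0, 1.0), (0.0, 1.0)]))
```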
REFERENCES
[1] M. Deans and M. Hebert, “Experimental comparison of techniques for
localization and mapping using a bearing-only sensor,” in
Experimental Robotics VII, Springer Berlin Heidelberg, 2007, pp.
395404.
[2] R. Lemus, S. Díaz, C. Gutiérrez, D. Rodríguez, and F. Escobar,
“SLAM-R algorithm of simultaneous localization and mapping using
RFID for obstacle location and recognition,” J. Appl. Res. Technol.,
vol. 12, no. 3, pp. 551559, Jun. 2014, doi: 10.1016/S1665-
6423(14)71634-7.
[3] T. J. Chong, X. J. Tang, C. H. Leng, M. Yogeswaran, O. E. Ng, and Y.
Z. Chong, “Sensor Technologies and Simultaneous Localization and
Mapping (SLAM),” in Procedia Computer Science, Jan. 2015, vol. 76,
pp. 174179, doi: 10.1016/j.procs.2015.12.336.
[4] Z. Meng, C. Wang, Z. Han, and Z. Ma, “Research on SLAM navigation
of wheeled mobile robot based on ROS,” in Proceedings - 5th
International Conference on Automation, Control and Robotics
Engineering, CACRE 2020, Sep. 2020, pp. 110116, doi:
10.1109/CACRE50138.2020.9230186.
[5] S. H. Chan, P. T. Wu, and L. C. Fu, “Robust 2D Indoor Localization
Through Laser SLAM and Visual SLAM Fusion,” in Proceedings -
2018 IEEE International Conference on Systems, Man, and
Cybernetics, SMC 2018, Jan. 2019, pp. 12631268, doi:
10.1109/SMC.2018.00221.
[6] J. Danping, D. Guangxue, W. Nan, Z. Zhigang, Z. Zhenyu, and L.
Huan, “Simultaneous Localization and Mapping based on LiDAR,” in
Proceedings of the 31st Chinese Control and Decision Conference,
CCDC 2019, Jun. 2019, pp. 55285532, doi:
10.1109/CCDC.2019.8833308.
[7] Y. Jia, X. Yan, and Y. Xu, “A Survey of simultaneous localization and
mapping for robot,” in Proceedings of 2019 IEEE 4th Advanced
Information Technology, Electronic and Automation Control
Conference, IAEAC 2019, Dec. 2019, pp. 857861, doi:
10.1109/IAEAC47372.2019.8997820.
[8] D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford,
“OpenRatSLAM: An open source brain-based SLAM system,” Auton.
Robots, vol. 34, no. 3, pp. 149176, Apr. 2013, doi: 10.1007/s10514-
012-9317-9.
[9] R. Mur-Artal and J. D. Tardos, “ORB-SLAM2: An Open-Source
SLAM System for Monocular, Stereo, and RGB-D Cameras,” IEEE
Trans. Robot., vol. 33, no. 5, pp. 12551262, Oct. 2017, doi:
10.1109/TRO.2017.2705103.
[10] H. Lategahn, A. Geiger, and B. Kitt, “Visual SLAM for autonomous
ground vehicles,” in Proceedings - IEEE International Conference on
Robotics and Automation, 2011, pp. 17321737, doi:
10.1109/ICRA.2011.5979711.
[11] Q. M. Chen, C. Y. Dong, Y. Z. Mu, B. C. Li, Z. Q. Fan, and Q. L.
Wang, “An Improved Particle Filter SLAM Algorithm for AGVs,” in
2020 IEEE 6th International Conference on Control Science and
Systems Engineering, ICCSSE 2020, Jul. 2020, pp. 2731, doi:
10.1109/ICCSSE50399.2020.9171985.
[12] M. Holder, S. Hellwig, and H. Winner, “Real-time pose graph SLAM
based on radar,” in IEEE Intelligent Vehicles Symposium,
Proceedings, Jun. 2019, vol. 2019-June, pp. 11451151, doi:
10.1109/IVS.2019.8813841.
[13] J. Engel, J. Stückler, and D. Cremers, “Large-scale direct SLAM with
stereo cameras,” in IEEE International Conference on Intelligent
Robots and Systems, Dec. 2015, vol. 2015-December, pp. 19351942,
doi: 10.1109/IROS.2015.7353631.
[14] J. Zhang, Y. Jiang, and K. Wang, “A modified FastSLAM for an
autonomous mobile robot,” in 2016 IEEE International Conference on
Mechatronics and Automation, IEEE ICMA 2016, Sep. 2016, pp.
17551759, doi: 10.1109/ICMA.2016.7558829.
[15] L. M. Paz, P. Jensfelt, J. D. Tardós, and J. Neira, “EKF SLAM updates
in O(n) with Divide and Conquer SLAM,” in Proceedings - IEEE
International Conference on Robotics and Automation, 2007, pp.
16571663, doi: 10.1109/ROBOT.2007.363561.
[16] R. Yagfarov, M. Ivanou, and I. Afanasyev, “Map Comparison of
LiDAR-based 2D SLAM Algorithms Using Precise Ground Truth,” in
2018 15th International Conference on Control, Automation, Robotics
and Vision, ICARCV 2018, Dec. 2018, pp. 19791983, doi:
10.1109/ICARCV.2018.8581131.
[17] W. Hess, D. Kohler, H. Rapp, and D. Andor, “Real-time loop closure
in 2D LIDAR SLAM,” in Proceedings - IEEE International Conference
on Robotics and Automation, Jun. 2016, vol. 2016-June, pp. 1271
1278, doi: 10.1109/ICRA.2016.7487258.
[18] F. Farzadpour, P. Church, and X. Chen, “Modeling and optimizing the
coverage performance of the LIDAR sensor network,” in IEEE/ASME
International Conference on Advanced Intelligent Mechatronics, AIM,
Aug. 2018, vol. 2018-July, pp. 504509, doi:
10.1109/AIM.2018.8452260.
[19] D. Van Nam and K. Gon-Woo, “Solid-state LiDAR based-SLAM: A
concise review and application,” in Proceedings - 2021 IEEE
International Conference on Big Data and Smart Computing, BigComp
2021, Jan. 2021, pp. 302305, doi:
10.1109/BigComp51126.2021.00064.
[20] M. Kolakowski, V. Djaja-Josko, and J. Kolakowski, “Static LiDAR
Assisted UWB Anchor Nodes Localization,” IEEE Sens. J., 2020, doi:
10.1109/JSEN.2020.3046306.
[21] H. Wang, X. Liu, X. Yuan, and D. Liang, “Multi-perspective terrestrial
LiDAR point cloud registration using planar primitives,” in
International Geoscience and Remote Sensing Symposium (IGARSS),
Nov. 2016, vol. 2016-November, pp. 67226725, doi:
10.1109/IGARSS.2016.7730755.
[22] S. R. Lach and J. P. Kerekes, “Robust extraction of exterior building
boundaries from topographic LiDAR data,” in International
Geoscience and Remote Sensing Symposium (IGARSS), 2008, vol. 2,
no. 1, doi: 10.1109/IGARSS.2008.4778933.
[23] F. Stetina et al., “Progress on the Airborne LiDAR topographic
mapping system (ALTMS) sensor,” in International Geoscience and
Remote Sensing Symposium (IGARSS), 1993, vol. 2, pp. 656658,
doi: 10.1109/igarss.1993.322612.
[24] A. Singandhupe and H. La, “A Review of SLAM Techniques and
Security in Autonomous Driving,” in Proceedings - 3rd IEEE
International Conference on Robotic Computing, IRC 2019, Mar.
2019, pp. 602607, doi: 10.1109/IRC.2019.00122.
[25] J. K. Makhubela, T. Zuva, and O. Y. Agunbiade, “A review on vision
simultaneous localization and mapping (VSLAM),” Jan. 2019, doi:
10.1109/ICONIC.2018.8601227.
[26] Z. Kong and Q. Lu, “A brief review of simultaneous localization and
mapping,” in Proceedings IECON 2017 - 43rd Annual Conference of
the IEEE Industrial Electronics Society, Dec. 2017, vol. 2017-January,
pp. 55175522, doi: 10.1109/IECON.2017.8216955.
[27] A. R. Khairuddin, M. S. Talib, and H. Haron, “Review on simultaneous
localization and mapping (SLAM),” in Proceedings - 5th IEEE
International Conference on Control System, Computing and
Engineering, ICCSCE 2015, May 2016, pp. 8590, doi:
10.1109/ICCSCE.2015.7482163.
[28] F. Hidalgo and T. Braunl, “Review of underwater SLAM techniques,”
in ICARA 2015 - Proceedings of the 2015 6th International Conference
on Automation, Robotics and Applications, Apr. 2015, pp. 306311,
doi: 10.1109/ICARA.2015.7081165.
[29] G. Dissanayake, S. Huang, Z. Wang, and R. Ranasinghe, “A review of
recent developments in Simultaneous Localization and Mapping,” in
2011 6th International Conference on Industrial and Information
Systems, ICIIS 2011 - Conference Proceedings, 2011, pp. 477482,
doi: 10.1109/ICIINFS.2011.6038117.
[30] M. Zaffar, S. Ehsan, R. Stolkin, and K. M. D. Maier, “Sensors, SLAM
and Long-term Autonomy: A Review,” in 2018 NASA/ESA Conference
on Adaptive Hardware and Systems, AHS 2018, Nov. 2018, pp. 285
290, doi: 10.1109/AHS.2018.8541483.
[31] C. Cadena et al., “Past, present, and future of simultaneous
localization and mapping: Toward the robust-perception age,” IEEE
Trans. Robot., vol. 32, no. 6, pp. 13091332, Dec. 2016, doi:
10.1109/TRO.2016.2624754.
[32] Y. Chen, Y. Zhou, Q. Lv, and K. K. Deveerasetty, “A review of V-
SLAM,” in 2018 IEEE International Conference on Information and
Automation, ICIA 2018, Aug. 2018, pp. 603608, doi:
10.1109/ICInfA.2018.8812387.