Fig 2 - uploaded by Valarmathi Jayaraman
Visual Odometry block diagram.


Context in source publication

Context 1
... Odometry (VO) is a procedure for incrementally estimating the position and orientation of a navigating vehicle by analysing the changes that motion induces on the sequence of images captured by one or more onboard cameras. The block diagram of VO is shown in Figure 2. VO computes the current position of the vehicle by tracking point features captured in the image plane. ...
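The incremental estimation described above can be sketched as a chain of relative motions: each frame pair yields a relative rotation and translation (in practice estimated from the tracked point features, e.g. via the essential matrix), and the absolute pose is the running product of the corresponding 4x4 transforms. A minimal numpy sketch with hypothetical, noise-free relative motions:

```python
import numpy as np

def compose(pose, R_rel, t_rel):
    """Chain one relative motion (R_rel, t_rel) onto the current 4x4 pose."""
    T_rel = np.eye(4)
    T_rel[:3, :3] = R_rel
    T_rel[:3, 3] = t_rel
    return pose @ T_rel

def integrate_trajectory(rel_motions):
    """Integrate a list of (R, t) relative motions into absolute positions."""
    pose = np.eye(4)
    trajectory = [pose[:3, 3].copy()]
    for R_rel, t_rel in rel_motions:
        pose = compose(pose, R_rel, t_rel)
        trajectory.append(pose[:3, 3].copy())
    return np.array(trajectory)

# Hypothetical input: four unit forward steps, each with a 90-degree yaw,
# which should trace a closed square back to the origin.
yaw = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
steps = [(yaw, np.array([1., 0., 0.]))] * 4
traj = integrate_trajectory(steps)
```

This is only the pose-integration stage; a real VO pipeline would obtain each (R, t) from feature matching and decomposition of the essential matrix, and drift would accumulate with noisy estimates.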

Similar publications

Article
Full-text available
Effective measurement and mapping of land means retrieving accurate positional data, in accordance with mapping provisions, efficiently in both cost and time. One way data can be retrieved effectively and efficiently is by using a drone, commonly called an unmanned aerial vehicle (UAV). A drone is a vehicle equipped with a wave...
Article
Full-text available
In flights with GPS, the position of the UAV over the ground is obtained directly from the receiver, which facilitates autonomous navigation. Without an adequate signal from this sensor, mainly indoors, it is necessary to estimate velocity, distance, and orientation with other types of devices, such as vision systems, which usually requi...
Article
Full-text available
The purpose of this paper is to develop a fixed-wing aircraft that has the abilities of both vertical take-off and landing (VTOL) and fixed-wing flight. To achieve this goal, a prototype of a fixed-wing gyroplane with two propellers is developed, whose rotor allows it to maneuver like a drone and gives it vertical take-off and landing ability similar to a hel...

Citations

... Other researchers have coupled data links [30] with inertial navigation systems using Kalman filtering algorithms [31] to achieve high output rate and high-precision positioning. However, inertial navigation systems require a long initial alignment time and need to be associated with absolute geographical coordinates for position calculations, limiting their application scenarios [32]. Methods such as leader-follower [33,34] positioning and Link-16 [35] relative positioning are centralized positioning approaches. ...
Article
Full-text available
In a satellite-denied environment, a swarm of drones is capable of achieving relative positioning and navigation by leveraging the high-precision ranging capabilities of the inter-drone data link. However, because of factors such as high drone mobility, complex and time-varying channel environments, electromagnetic interference, and poor communication link quality, distance errors and even missing distance values between some nodes are inevitable. To address these issues, this paper proposes a low-rank optimization algorithm based on the eigenvalue scaling of the distance matrix. By gradually limiting the eigenvalues of the observed distance matrix, the algorithm reduces the rank of the matrix, bringing the observed distance matrix closer to the true value without errors or missing data. This process filters out distance errors, estimates and completes missing distance elements, and ensures high-precision calculations for subsequent topology perception and relative positioning. Simulation experiments demonstrate that the algorithm exhibits significant error filtering and missing element completion capabilities. Using the F-norm metric to measure the relative deviation from the true value, the algorithm can optimize the relative deviation of the observed distance matrix from 11.18% to 0.25%. Simultaneously, it reduces the relative positioning error from 518.05 m to 35.24 m, achieving robust topology perception and relative positioning for the drone swarm formation.
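The paper's eigenvalue-scaling algorithm is not reproduced here, but the underlying idea it exploits, that a true Euclidean distance matrix among n nodes has low rank, so truncating the eigenvalue spectrum filters out errors and can fill gaps, can be illustrated with classical multidimensional scaling. A hedged numpy sketch on hypothetical noisy ranges (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def denoise_and_embed(D, dim=3):
    """Project a noisy distance matrix onto a rank-`dim` Gram matrix and
    recover relative coordinates (classical MDS / eigenvalue truncation)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering operator
    G = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]              # keep the dim largest eigenvalues
    w_top = np.clip(w[idx], 0.0, None)           # discard negative (noise) modes
    return V[:, idx] * np.sqrt(w_top)            # relative coordinates, rank dim

# Hypothetical swarm: 5 nodes with noisy pairwise ranges
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, (5, 3))
D_true = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D_obs = D_true + rng.normal(0, 0.5, D_true.shape)
D_obs = (D_obs + D_obs.T) / 2                    # re-symmetrize observations
np.fill_diagonal(D_obs, 0.0)
X_rel = denoise_and_embed(D_obs)                 # relative positions, up to rotation
```

The recovered coordinates are defined only up to a rigid transform, which is exactly the relative (not absolute) positioning setting the paper describes.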
... The UAVs equipped with cameras can capture images of surroundings, which provide motion cues for trajectory estimation and 3D mapping. This visual perception approach primarily relies on the utilization of simultaneous localization and mapping (SLAM) or visual odometry (VO) technologies [1]. While there are various visual SLAM frameworks available [2][3][4][5][6], their direct application to UAV navigation applications often overlooks the presence of dynamic objects within the environment. ...
Article
Full-text available
The capability of unmanned aerial vehicles (UAVs) to capture and utilize dynamic object information assumes critical significance for decision making and scene understanding. This paper presents a method for UAV relative positioning and target tracking based on a visual simultaneous localization and mapping (SLAM) framework. By integrating an object detection neural network into the SLAM framework, this method can detect moving objects and effectively reconstruct the 3D map of the environment from image sequences. For multiple object tracking tasks, we combine the region matching of semantic detection boxes and the point matching of the optical flow method to perform dynamic object association. This joint association strategy can prevent tracking loss due to the small proportion of the object in the whole image sequence. To address the problem of lacking scale information in the visual SLAM system, we recover the altitude data based on a RANSAC-based plane estimation approach. The proposed method is tested on both the self-created UAV dataset and the KITTI dataset to evaluate its performance. The results demonstrate the robustness and effectiveness of the solution in facilitating UAV flights.
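The RANSAC-based plane estimation used above to recover altitude can be sketched generically: repeatedly fit a plane to three random points, count inliers within a distance threshold, keep the best model, then take the camera's distance to that plane as altitude. A minimal sketch on synthetic data (not the authors' implementation; thresholds and point counts are illustrative):

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.05, seed=0):
    """Fit a plane n.x = d to 3D points with RANSAC; return (n, d)."""
    rng = np.random.default_rng(seed)
    best_inliers, best = -1, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / norm
        d = n @ sample[0]
        inliers = np.abs(points @ n - d) < thresh # point-to-plane distances
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), (n, d)
    return best

# Hypothetical scene: flat ground at z = 0 plus non-ground outliers
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, (200, 2)), rng.normal(0, 0.01, 200)])
outliers = rng.uniform(-5, 5, (40, 3))
n, d = ransac_ground_plane(np.vstack([ground, outliers]))
camera_pos = np.array([0.0, 0.0, 1.5])
altitude = abs(n @ camera_pos - d)                # camera-to-plane distance
```

The recovered altitude fixes the otherwise unobservable scale of a monocular SLAM trajectory, which is the role the abstract assigns to this step.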
... The authors of [151] proposed a vision and inertial navigation system, enabling autonomous navigation for drones in situations where GPS signals are unavailable. Existing research on vision-based navigation is comprehensively reviewed in [152], and it is concluded that a Visual Odometry-based approach is more efficient in terms of memory and computational power. The authors also proposed a Modular Multi-Sensor Data Fusion technique designed for UAV navigation in GPS-denied environments. ...
Article
Full-text available
Disasters, whether natural or man-made, necessitate swift and comprehensive responses. Unmanned Aerial Vehicles (UAVs), commonly known as drones, have become indispensable in disaster scenarios, serving as vital communication relays in areas with compromised infrastructure. These UAVs establish temporary networks, facilitating coordination among emergency responders and ensuring timely assistance to survivors. Despite recent advancements in sensing technology, challenges persist in deploying these networks for disaster monitoring applications. This article addresses these challenges by exploring various aspects in the context of multi-UAV networks for disaster monitoring. To provide a clearer motivation for this exploration, we emphasize the critical role that a well-designed network infrastructure plays in enhancing disaster response efforts. We delve into the complexities of formation and control strategies, aiming to optimize the agility and effectiveness of UAV networks. Additionally, we examine communication protocols, data routing mechanisms, and security considerations to highlight the intricacies involved in deploying UAVs for disaster monitoring. This article also underscores the significance of its contributions to the existing literature by providing a survey of state-of-the-art studies, and illuminates the potential of leveraging emerging technologies, such as edge computing and artificial intelligence, to bolster performance and security. The article concludes by providing a detailed overview of the key challenges and open issues, outlining various research prospects in the evolving field of multi-UAV networks for disaster response.
... There is a plethora of localization approaches [6], including vision-based [7][8][9], cellular network-based [10], and ranging-based using RF [11] or acoustic [12] signals, to name a few. In the ranging-based techniques, localization is conducted by processing received signals; thus, for all ranging-based techniques there exist a number of transmitters and receivers installed onboard the drone and at known locations in the surrounding area. ...
... For autonomous navigation of drones in the absence of GPS signals, there are several well-known techniques that tackle the problem. For example, vision-based models use a number of different visual techniques, such as visual odometry (VO), simultaneous localization and mapping (SLAM), and optical flow [7][8][9]. A few research papers have used deep neural networks in combination with visual techniques [26] or used LiDAR [27] for autonomous flying. ...
... There are different ways to perform localization in the absence of GPS signals [7][8][9]. Several popular approaches use vision-based localization with optical sensors. ...
Article
Full-text available
For many applications, drones are required to operate entirely or partially autonomously. In order to fly completely or partially on their own, drones need to access location services for navigation commands. While using the Global Positioning System (GPS) is an obvious choice, GPS is not always available, can be spoofed or jammed, and is highly error-prone for indoor and underground environments. The ranging method using beacons is one of the most popular methods for localization, especially for indoor environments. In general, the localization error in this class is due to two factors: the ranging error, and the error induced by the relative geometry between the beacons and the target object to be localized. This paper proposes OPTILOD (Optimal Beacon Placement for High-Accuracy Indoor Localization of Drones), an optimization algorithm for the optimal placement of beacons deployed in three-dimensional indoor environments. OPTILOD leverages advances in evolutionary algorithms to compute the minimum number of beacons and their optimal placement, thereby minimizing the localization error. These problems belong to the Mixed Integer Programming (MIP) class and are both considered NP-hard. Despite this, OPTILOD can provide multiple optimal beacon configurations that minimize the localization error and the number of deployed beacons concurrently and efficiently.
... Aircraft today can navigate and "sense" the pose of other aircraft around them using global navigation satellite systems such as GPS, inertial navigation systems composed of IMUs (magnetometers, gyroscopes, and accelerometers), and through the use of various vision navigation techniques. Unfortunately, GPS can be denied [2], IMUs are inherently noisy and drift over time [35], and current state-of-the-art AAR vision algorithms do not simultaneously meet the accuracy, reliability, and execution speed requirements to solve the AAR problem. This paper presents a novel computer vision solution, called relative vectoring using dual object detection, that consistently converts image data into relative position estimates accurate to less than 3 cm of error (Euclidean distance) at contact and runs in real time (greater than 45 Hz) on a laptop with a Nvidia RTX A5000 GPU. ...
Article
Full-text available
Once realized, autonomous aerial refueling will revolutionize unmanned aviation by removing current range and endurance limitations. Previous attempts at establishing vision-based solutions have come close but rely heavily on near-perfect extrinsic camera calibrations that often change midflight. In this paper, we propose dual object detection, a technique that overcomes such a requirement by transforming aerial refueling imagery directly into receiver aircraft reference frame probe-to-drogue vectors regardless of camera position and orientation. These vectors are precisely what autonomous agents need to successfully maneuver the tanker and receiver aircraft in synchronous flight during refueling operations. Our method follows a common 4-stage process of capturing an image, finding 2D points in the image, matching those points to 3D object features, and analytically solving for the object pose. However, we extend this pipeline by simultaneously performing these operations across two objects instead of one using machine learning and add a fifth stage that transforms the two pose estimates into a relative vector. Furthermore, we propose a novel supervised learning method using bounding box corrections such that our trained artificial neural networks can accurately predict 2D image points corresponding to known 3D object points. Simulation results show that this method is reliable, accurate (within 3 cm at contact), and fast (45.5 fps).
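The fifth stage described above, turning two pose estimates into a relative vector, is geometrically simple: because both poses are expressed in the same camera frame, the probe-to-drogue vector re-expressed in the receiver frame is independent of where the camera sits. A small illustrative sketch (the poses and frame names are hypothetical, not from the paper):

```python
import numpy as np

def relative_vector(R_recv, t_recv, t_drogue):
    """Express the probe-to-drogue vector in the receiver's own frame.

    R_recv, t_recv : receiver (probe) pose estimated in the camera frame
    t_drogue       : drogue position estimated in the same camera frame
    """
    return R_recv.T @ (t_drogue - t_recv)

# Hypothetical estimates: receiver yawed 90 degrees, drogue 2 m away
# along the camera x-axis, both 5 m in front of the camera.
R_recv = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_recv = np.array([0., 0., 5.])
t_drogue = np.array([2., 0., 5.])
v = relative_vector(R_recv, t_recv, t_drogue)
```

Shifting or rotating the camera transforms both pose estimates identically, so `v` is unchanged, which is why the method can tolerate extrinsic calibrations that drift midflight.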
... Wu and Johnson (2010) proposed a vision and inertial navigation system, enabling autonomous navigation for drones in situations where GPS signals are unavailable. Existing research on vision-based navigation is comprehensively reviewed in Balamurugan et al. (2017), and it is concluded that a visual odometry-based approach is more efficient in terms of memory and computational power. The authors also proposed a modular multi-sensor data fusion technique designed for UAV navigation in GPS-denied environments. ...
Article
Full-text available
Disasters, whether natural or man-made, demand rapid and comprehensive responses. Unmanned Aerial Vehicles (UAVs), or drones, have become essential in disaster scenarios, serving as crucial communication relays in areas with compromised infrastructure. They establish temporary networks, aiding coordination among emergency responders and facilitating timely assistance to survivors. Recent advancements in sensing technology have transformed emergency response by combining the collaborative power of these networks with real-time data processing. However, challenges remain in adapting these networks for disaster monitoring applications, particularly in deployment strategies, data processing, routing, and security. Extensive research is crucial to refine ad-hoc networking solutions, enhancing the agility and effectiveness of these systems. This article explores various aspects, including network architecture, formation strategies, communication protocols, and security concerns in multi-UAV networks for disaster monitoring. It also examines the potential of enabling technologies like edge computing and artificial intelligence to bolster network performance and security. Further, the article provides a detailed overview of the key challenges and open issues, outlining various research prospects in the evolving field of multi-UAV networks for disaster response.
... One of the most important challenges is to navigate under the different sets of conditions and restrictions imposed by the task execution scenario. In this topic, several issues are raised regarding UAV localization and state estimation capabilities, perception, route planning, robustness and control, and infrastructure and hardware [9,10]. The high quality and efficiency of these systems, as well as their safety, reliability, and autonomy, enable and improve real-world applications. ...
... Estimating the environment state is one of the main tasks to be performed by a UAV in autonomous navigation, especially in uncertain and unpredictable scenarios. Visual-based navigation has been receiving increasing attention in recent years, especially in GPS-denied locations [9,86]. For fast and precise navigation, DL has been used in the perception and guidance of autonomous drones in indoor applications. ...
Article
Full-text available
Unmanned aerial vehicles (UAVs) are a valuable source of data for a wide range of real-time applications, due to their functionality, availability, adaptability, and maneuverability. Working as mobile sensors, they can provide a cost-effective solution for extremely complex tasks, such as inspection, air-to-ground communications, search and rescue, and surveillance, among others. Nevertheless, these robots need to navigate quite distinct environments with differing levels of dynamism, usually facing unpredicted situations, very often with limited sensing and computing capabilities. A large number of solutions to this problem have been proposed by the scientific community in recent years, some of them based on machine-learning (ML) methods. Due to its great capability to deal with big data and complexity, as well as its fast, high-accuracy processing, the ML framework has been used to improve existing technologies and control techniques. In this context, its adoption in several UAV navigation strategies is expected to provide solutions for various problems where UAVs are used in real-time applications. Thus, in order to contextualize the most recent advances, this work provides a detailed survey of relevant research in which ML techniques have been used in UAV navigation to improve functional aspects such as energy efficiency, communication, execution time, resource management, obstacle avoidance, and path planning.
... Usually, UAVs are equipped with MEMS (microelectromechanical system) IMU sensors to derive the position and attitude using an INS (inertial navigation system) mechanization process. Unfortunately, the INS mechanization leads to positioning drift over time as specified in [7,8], making the IMU unreliable when used in a standalone mode for long flight operations. ...
Article
Full-text available
Future UAV (unmanned aerial vehicle) operations in urban environments demand a PNT (position, navigation, and timing) solution that is both robust and resilient. While a GNSS (global navigation satellite system) can provide an accurate position under open-sky assumptions, the complexity of urban operations leads to NLOS (non-line-of-sight) and multipath effects, which in turn impact the accuracy of the PNT data. A key research question within the research community pertains to determining the appropriate hybrid fusion architecture that can ensure the resilience and continuity of UAV operations in urban environments, minimizing significant degradations of PNT data. In this context, we present a novel federated fusion architecture that integrates data from the GNSS, the IMU (inertial measurement unit), a monocular camera, and a barometer to cope with the GNSS multipath and positioning performance degradation. Within the federated fusion architecture, local filters are implemented using EKFs (extended Kalman filters), while a master filter is used in the form of a GRU (gated recurrent unit) block. Data collection is performed by setting up a virtual environment in AirSim for the visual odometry aid and barometer data, while Spirent GSS7000 hardware is used to collect the GNSS and IMU data. The hybrid fusion architecture is compared to a classic federated architecture (formed only by EKFs) and tested under different light and weather conditions to assess its resilience, including multipath and GNSS outages. The proposed solution demonstrates improved resilience and robustness in a range of degraded conditions while maintaining a good level of positioning performance with a 95th percentile error of 0.54 m for the square scenario and 1.72 m for the survey scenario.
... The indoor localization of Autonomous Mobile Robots (AMRs) in dynamic Industry 4.0 (I4.0) environments presents significant challenges, especially in the absence of GPS signals [1] and various obstructions. Addressing this challenge is crucial for efficiently operating AMRs in such settings. ...
Article
Full-text available
This paper introduces the Kabsch Marker Estimation Algorithm (KMEA), a new, robust multi-marker localization method designed for Autonomous Mobile Robots (AMRs) within Industry 4.0 (I4.0) settings. By integrating the Kabsch Algorithm, our approach significantly enhances localization robustness by aligning detected fiducial markers with their known positions. Unlike conventional methods that rely on a limited subset of visible markers, the KMEA uses all available markers, without requiring the camera’s extrinsic parameters, thereby improving robustness. The algorithm was validated in an I4.0 automated warehouse mockup, with a four-stage methodology compared to a previously established marker estimation algorithm for reference. On the one hand, the results demonstrated the KMEA’s similar performance in standard controlled scenarios, with millimetric precision across a set of error metrics and a mean relative error (MRE) of less than 1%. On the other hand, when faced with challenging test scenarios with outliers, the KMEA showed significantly superior performance compared to the baseline algorithm, maintaining millimetric-to-centimetric error metrics where the baseline suffered extreme degradation. This was underscored by average reductions in the error metrics of 86.9% and 92% in Parts III and IV of the test methodology, respectively. These results were achieved using low-cost hardware, indicating the possibility of even greater accuracy with advanced equipment. The paper details the algorithm’s development and theoretical framework, compares its advantages over existing methods, discusses the test results, and concludes with comments on its potential for industrial and commercial applications, enabled by its scalability and reliability.
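The Kabsch Algorithm at the core of the KMEA computes the rigid transform that optimally aligns one point set with another; the standard SVD formulation can be sketched as follows (the marker data here is synthetic, not from the paper):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing ||R P + t - Q||
    between paired point sets P, Q (n x 3), via SVD."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    S = np.diag([1.0, 1.0, sign])
    R = Vt.T @ S @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical markers: known map positions vs. where they were detected
rng = np.random.default_rng(2)
markers_map = rng.uniform(0, 10, (6, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([1.0, -2.0, 0.5])
detected = markers_map @ R_true.T + t_true       # noise-free detections
R_est, t_est = kabsch(markers_map, detected)
```

Because the alignment uses all marker correspondences at once, a single noisy detection is averaged out rather than dominating the estimate, which is consistent with the robustness behaviour the abstract reports.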
... Previous studies have extensively highlighted this critical concern of positioning and navigation in UAVs when GPS signals are unreliable [8] [9] [10]. Common approaches, including vision-based navigation techniques such as Visual Odometry (VO) [11] and visual SLAM [12] [13], as well as machine learning models [14] [15], have demonstrated their effectiveness in GPS-denied environments, leading to improved localization accuracy and precision in navigation and mapping [9]. ...
Article
Full-text available
Unmanned aerial vehicle (UAV) technology has shown outstanding performance and has become one of the essential innovations in present-day applications. In outdoor surveillance operations, such as geolocation and navigation in restricted areas, an unmanned aerial vehicle such as a drone is able to accomplish those missions efficiently with the assistance of a Global Positioning System (GPS). By contrast, a drone without such a system is largely limited to indoor surveillance or short-range applications due to the absence of precise positioning capabilities. This research paper addresses the crucial challenge of operating a non-GPS drone in outdoor surveillance scenarios and proposes a packet loss-based positioning approach, wherein a drone estimates its current position from the packet loss observed in transmissions from reference points, using a multilateration technique. Moreover, we introduce three heuristic lightweight navigation algorithms, leveraging packet loss-based positioning and enabling a drone to navigate toward its intended destinations. The simulation results show that, by relying solely on packet loss computation, a drone without a precise positioning system can identify its location and return to its nearest destination position without the need for complex sensors.
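The multilateration step can be sketched independently of how the ranges are obtained (the paper derives them from packet loss; here they are simply assumed given): subtracting one range equation from the rest cancels the quadratic term and linearizes the problem into ordinary least squares. A minimal numpy sketch with hypothetical reference points:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 3D position from anchor positions and range estimates.

    Subtracting the first anchor's range equation from the others turns
    |a_i - x|^2 = r_i^2 into a linear system 2(a_i - a_1).x = b_i.
    """
    A = 2 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - ranges[1:] ** 2 + ranges[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical reference points and a drone at (3, 4, 2)
anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
true_pos = np.array([3., 4., 2.])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
est = multilaterate(anchors, ranges)
```

With noisy packet-loss-derived ranges the same least-squares solve still applies; the estimate simply degrades with the ranging error, which is what makes the reference-point geometry important.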