A standard remote controller, from DJI's Mavic Pro User Manual.

Source publication
Article
Full-text available
This paper presents a framework for performing autonomous precise landing of unmanned aerial vehicles (UAVs) on dynamic targets. The main goal of this work is to design the methodology and the control algorithms that allow multi-rotor drones to perform a robust and efficient landing under dynamic conditions of changing wind, dynamic obstacles...

Context in source publication

Context 1
... common remote controller (see Figure 3) allows manual flight, gimbal and camera control, and provides a robust wireless control link for the drone. ...

Similar publications

Conference Paper
Full-text available
Boats and ships have been used throughout history as one of the main modes of transportation. In recent years, owing to the rapid evolution of deep learning techniques and the availability of online datasets, convolutional neural networks (CNNs) have been widely used for ship and boat detection applications, such as surveillance of marine resources, helpi...

Citations

... Extensive research has been dedicated to exploring various methods for the state estimation of the UAV and UGV. Prior work utilized motion capture systems [11], GPS [12], or computer vision [13] to obtain pose estimates of the UAV and UGV. Recent advancements in computer vision have enabled onboard localization and pose estimation, reducing the dependency on external infrastructure for maneuver execution. ...
... Recent advancements in computer vision have enabled onboard localization and pose estimation, reducing the dependency on external infrastructure for maneuver execution. Early research by [13][14][15] relied on visual fiducial systems to facilitate relative pose estimation between the drone and the target, reducing the external hardware requirement to a camera and a marker. Present-day research has pivoted toward machine learning techniques for platform detection and relative pose estimation, leading to a proliferation of visual servoing methods [9]. ...
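The fiducial-marker approach in the excerpt above reduces relative pose estimation to inverting the detected marker-in-camera transform. A minimal 2-D sketch of that inversion, with a hypothetical detection and an invented helper name (`invert_pose_2d`):

```python
import math

def invert_pose_2d(x, y, yaw):
    """Invert a 2-D rigid transform: given the marker's pose in the
    camera frame, return the camera's pose in the marker frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Inverse rotation is -yaw; inverse translation is -R(-yaw) @ t
    xi = -(c * x + s * y)
    yi = -(-s * x + c * y)
    return xi, yi, -yaw

# Hypothetical detection: marker 2 m ahead, 0.5 m left, rotated 30 degrees
mx, my, myaw = 2.0, 0.5, math.radians(30.0)
dx, dy, dyaw = invert_pose_2d(mx, my, myaw)
```

A real system would use the full 3-D (6-DoF) version of the same inversion, typically obtained from a fiducial library's detection output.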
... A diverse array of control strategies has been employed to govern UAVs during landing maneuvers. The strategies range from classical methods, such as proportional-integral-derivative (PID) [13] and linear quadratic regulator (LQR) [16], to contemporary approaches, like model predictive control (MPC) [17], and even learning-based techniques, such as reinforcement learning [18]. The methodologies are instrumental in executing robust trajectory tracking amidst uncertainties and disturbances. ...
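As a toy illustration of the classical end of that spectrum (not the tuning or plant model of any cited paper), a discrete PID loop can drive a kinematic 1-D lateral offset to the pad toward zero; all gains and numbers below are invented:

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Hypothetical 1-D landing task: the PID output is a lateral velocity
# command that should drive the drone's offset from the pad to zero.
pid = PID(kp=1.0, ki=0.2, kd=0.05, dt=0.05)
pos = 3.0                      # start 3 m off the pad centerline
for _ in range(400):           # 20 s of simulated flight
    vel_cmd = pid.update(0.0 - pos)
    pos += vel_cmd * 0.05
```

LQR and MPC replace this fixed-gain loop with, respectively, an optimal gain computed from a linear model and an online constrained optimization over a prediction horizon.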
Article
Full-text available
Landing a multi-rotor uncrewed aerial vehicle (UAV) on a moving target in the presence of partial observability, due to factors such as sensor failure or noise, represents an outstanding challenge that requires integrative techniques in robotics and machine learning. In this paper, we propose embedding a long short-term memory (LSTM) network into a variation of proximal policy optimization (PPO) architecture, termed robust policy optimization (RPO), to address this issue. The proposed algorithm is a deep reinforcement learning approach that utilizes recurrent neural networks (RNNs) as a memory component. Leveraging the end-to-end learning capability of deep reinforcement learning, the RPO-LSTM algorithm learns the optimal control policy without the need for feature engineering. Through a series of simulation-based studies, we demonstrate the superior effectiveness and practicality of our approach compared to the state-of-the-art proximal policy optimization (PPO) and the classical control method Lee-EKF, particularly in scenarios with partial observability. The empirical results reveal that RPO-LSTM significantly outperforms competing reinforcement learning algorithms, achieving up to 74% more successful landings than Lee-EKF and 50% more than PPO in flicker scenarios, maintaining robust performance in noisy environments and in the most challenging conditions that combine flicker and noise. These findings underscore the potential of RPO-LSTM in solving the problem of UAV landing on moving targets amid various degrees of sensor impairment and environmental interference.
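For contrast with the learned RPO-LSTM policy, the "flicker" setting can be mimicked with a classical estimator in the spirit of the filtering component of a Lee-EKF-style baseline: a 1-D constant-velocity Kalman filter that runs its prediction every step and simply skips the update when the measurement drops out. The filter below is an illustrative sketch with invented parameters, not the paper's implementation:

```python
import random

class KF1D:
    """1-D constant-velocity Kalman filter that skips its update step
    when the measurement drops out (the 'flicker' case)."""
    def __init__(self, dt, q=0.01, r=0.25):
        self.dt, self.q, self.r = dt, q, r
        self.x = [0.0, 0.0]                  # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance

    def step(self, z):
        dt, q, p = self.dt, self.q, self.P
        # Predict with F = [[1, dt], [0, 1]] and Q = q*I
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        self.P = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1], p[1][1] + q]]
        if z is None:                        # flicker: predict only
            return self.x[0]
        # Update with a scalar position measurement (H = [1, 0])
        p = self.P
        s = p[0][0] + self.r
        k0, k1 = p[0][0] / s, p[1][0] / s
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        self.P = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        return self.x[0]

# Track a target moving at a constant 1 m/s while ~30% of the
# (noise-free, for a deterministic demo) measurements drop out.
random.seed(0)
kf = KF1D(dt=0.1)
true_pos = 0.0
for _ in range(200):
    true_pos += 1.0 * 0.1
    z = None if random.random() < 0.3 else true_pos
    est = kf.step(z)
```

The appeal of the learned approach in the article is precisely that it does not require this hand-specified motion model to cope with dropouts.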
... DL models are trained to estimate aircraft angles during landing. The vision-based technique combined with artificial neural networks ensures a safe landing by controlling data aggregation and utilizing control algorithms [70], [71], [72]. ...
Article
Full-text available
Nowadays, aerial vehicles (drones) are becoming more popular. Over the past few years, Unmanned Aerial Vehicles (UAVs) have been used in various remote sensing applications, and every aerial vehicle is now either partially or completely automated. The UAV is the smallest class of aerial vehicle. The widespread use of aerial drones requires robust techniques for detecting safe landing sites. This paper reviews the literature on techniques for the automatic safe landing of aerial drones by detecting suitable landing sites, considering factors such as ground surfaces and using image processing methods. A drone must determine whether a landing zone is safe for automatic landing. Onboard visual sensors provide information about outdoor and indoor ground surfaces through signals or images, and the optimal landing locations are then determined from the input data using various image processing and safe landing area detection (SLAD) methods. UAVs are quick, efficient, and adaptable acquisition systems. We discuss existing safe landing detection approaches and their achievements, and we highlight possible areas for improvement, strengths, and future directions for safe landing site detection. The review addresses the increasing need for safe landing site detection techniques in the widespread use of aerial drones, allowing for automated and secure landing operations.
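The SLAD idea surveyed above can be caricatured in a few lines: treat a small elevation grid as the sensor input and pick the patch with the lowest height variance as the "safest" site. The grid values, patch size, and function name are invented for illustration:

```python
def flattest_cell(elev, cell):
    """Split an elevation grid into cell x cell patches and return the
    (row, col) of the patch with the lowest height variance, a crude
    stand-in for a safe-landing-area detector."""
    best, best_var = None, float("inf")
    rows, cols = len(elev), len(elev[0])
    for r in range(0, rows - cell + 1, cell):
        for c in range(0, cols - cell + 1, cell):
            vals = [elev[i][j] for i in range(r, r + cell)
                                for j in range(c, c + cell)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var < best_var:
                best, best_var = (r, c), var
    return best

# Hypothetical 6x6 elevation map (metres): rough terrain everywhere
# except a flat 3x3 patch in the bottom-right corner.
elev = [
    [2.1, 1.8, 2.5, 0.9, 1.7, 2.2],
    [1.2, 2.9, 1.1, 2.4, 0.8, 1.9],
    [2.3, 1.4, 2.8, 1.9, 0.8, 2.5],
    [0.7, 2.6, 1.5, 1.0, 1.0, 1.0],
    [1.9, 0.6, 2.2, 1.0, 1.0, 1.0],
    [2.7, 1.3, 0.5, 1.0, 1.0, 1.0],
]
best = flattest_cell(elev, 3)
```

Real SLAD pipelines reviewed in the paper operate on camera images or depth maps and add many more cues (obstacles, slope, texture), but the select-the-flattest-patch structure is the same.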
Preprint
The integration of precise landing capabilities into UAVs is crucial for enabling autonomous operations, particularly in challenging environments such as offshore scenarios. This work proposes a heterogeneous perception system that incorporates a multimodal fiducial marker, designed to improve the accuracy and robustness of autonomous UAV landing in both daytime and nighttime operations. It presents ViTAL-TAPE, a visual-transformer-based model that enhances the detection reliability of the landing target and overcomes changes in illumination conditions and viewpoint positions where traditional methods fail. ViTAL-TAPE is an end-to-end model that combines multimodal perceptual information, including photometric and radiometric data, to detect landing targets defined by a fiducial marker with 6 degrees of freedom. Extensive experiments have demonstrated the ability of ViTAL-TAPE to detect fiducial markers with an error of 0.01 m. Moreover, experiments with the RAVEN UAV, designed to endure the challenging weather conditions of offshore scenarios, showed that the autonomous landing technology proposed in this work achieves an accuracy of up to 0.1 m. This research also presents the first successful autonomous operation of a UAV in a commercial offshore wind farm with floating foundations installed in the Atlantic Ocean. These experiments showcased the system's accuracy, resilience, and robustness, resulting in a precise landing technology that extends the mission capabilities of UAVs, enabling autonomous and Beyond Visual Line of Sight offshore operations.
Article
Full-text available
This paper proposes the creative idea that an unmanned fixed-wing aircraft should automatically adjust its 3D landing trajectory online to land on a given touchdown point, instead of following a pre-designed fixed glide slope angle or a landing path composed of two waypoints. A fixed-wing aircraft is a typical under-actuated and nonholonomic constrained system, and its landing procedure, which involves complex kinematic and dynamic constraints, is challenging, especially in scenarios such as landing on an aircraft carrier, which has a very short and narrow runway. The conventional solution of setting a very conservative landing path in advance and controlling the aircraft to follow it without dynamic adjustment of the reference path has not performed satisfactorily, due to the variation in initial states and widespread environmental uncertainties. The motion planner shown in this study can adjust an aircraft's landing trajectory online and guide the aircraft to land at a given fixed or moving point while conforming to the strict constraints. The planner is composed of two parts: one generates a series of motion primitives which conform to the dynamic constraints, and the other evaluates those primitives and chooses the best one for the aircraft to execute. In this paper, numerical simulations demonstrate that when given a landing configuration composed of position, altitude, and direction, the planner can provide a feasible guidance path for the aircraft to land accurately.
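The two-part planner described above (generate dynamically feasible primitives, then score and select one) can be sketched with invented numbers: each primitive is a constant glide-slope, constant-heading descent, and the score is simply the miss distance at the end of the primitive:

```python
import math

def make_primitive(glide_deg, heading_deg, speed, duration, dt=0.1):
    """Forward-simulate a constant glide-slope, constant-heading descent
    from a hypothetical 50 m start altitude."""
    x = y = 0.0
    z = 50.0
    vz = -speed * math.sin(math.radians(glide_deg))
    vxy = speed * math.cos(math.radians(glide_deg))
    hx, hy = math.cos(math.radians(heading_deg)), math.sin(math.radians(heading_deg))
    path, t = [], 0.0
    while t < duration and z > 0.0:
        x += vxy * hx * dt
        y += vxy * hy * dt
        z += vz * dt
        t += dt
        path.append((x, y, max(z, 0.0)))
    return path

def score(path, touchdown):
    """Lower is better: miss distance at the primitive's endpoint,
    with any remaining altitude penalized as well."""
    ex, ey, ez = path[-1]
    tx, ty = touchdown
    return math.hypot(ex - tx, ey - ty) + ez

# Evaluate a small library of primitives and pick the best one.
touchdown = (480.0, 50.0)
candidates = [(g, h) for g in (3.0, 4.5, 6.0) for h in (0.0, 5.0, 10.0, 15.0)]
best = min(candidates,
           key=lambda gh: score(make_primitive(gh[0], gh[1], 30.0, 60.0), touchdown))
```

The paper's planner additionally enforces under-actuation and nonholonomic constraints when generating primitives and replans online; this sketch only conveys the generate-and-score structure.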
Article
Full-text available
Unmanned Aerial Vehicles (UAVs) are part of our daily lives, with a number of applications in diverse fields. On many occasions, developing these applications can be an arduous or even impossible task for users with limited knowledge of aerial robotics. This work seeks to provide a middleware programming infrastructure that facilitates this type of development. The presented infrastructure, named DroneWrapper, offers users the possibility of developing applications through a simple programming interface that abstracts away the complexities associated with the aircraft. DroneWrapper is built upon the de facto standard in robot programming, Robot Operating System (ROS), and it has been implemented in Python, following a modular design that facilitates the coupling of various drivers and allows the extension of its functionalities. Along with the infrastructure, several drivers have been developed for different aerial platforms, both real and simulated. Two applications have been developed to exemplify the use of the infrastructure: follow-color and follow-person. Both applications use computer vision techniques, classic (image filtering) or modern (deep learning), to follow a specific-colored object or a person. These two applications have been tested on different aerial platforms, both real and simulated, to validate the scope of the offered solution.