Figure 9
(a) The waypoint path is constructed so that it is perpendicular to the map obstacle. The radius R ensures collision-free passage around the map obstacle. (b) The maximum heading change between waypoint paths occurs when the MAV must fly at full bank to maneuver around the obstacle. (c) An approximation of the minimum distance required to avoid a straight wall if the laser is only sampled when the MAV is on the waypoint path. (d) The geometry used to calculate the distance between two consecutive laser updates.

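As a rough companion to the caption, the sketch below computes the two quantities that the geometry in panels (a), (b) and (d) turns on: the coordinated-turn radius available at full bank and the ground distance covered between consecutive laser updates. The airspeed, bank limit and laser sample period are assumed values for illustration, not the ones used in the article.

```python
import math

g = 9.81                        # gravitational acceleration, m/s^2
V = 13.0                        # airspeed, m/s (assumed)
phi_max = math.radians(30.0)    # maximum bank angle (assumed)
T_laser = 0.25                  # laser sample period, s (assumed)

# coordinated-turn radius at full bank: the smallest clearance radius R
# the MAV can guarantee around a mapped obstacle
R_turn = V**2 / (g * math.tan(phi_max))

# distance flown between two consecutive laser updates, which bounds how
# much of a wall can go unseen if the laser is only sampled on the path
d_update = V * T_laser

print(f"turn radius at full bank: {R_turn:.1f} m")
print(f"spacing between laser updates: {d_update:.2f} m")
```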

Source publication
Article
Despite the tremendous potential demonstrated by miniature aerial vehicles (MAVs) in numerous applications, they are currently limited to operations in open air space, far away from obstacles and terrain. To broaden the range of applications for MAVs, methods to enable operation in environments of increased complexity must be developed. In this arti...

Citations

... It is assumed that there is a noncollision guidance loop to guide the vehicle from any state (including configuration point and speed) to any desired configuration point. In [7], Griffiths et al. proposed an RRT algorithm for constructing traversable paths by modeling prior environmental data. The branches of the tree are examined as it grows to ensure that the vehicle's turning-radius and climb-rate constraints are met. ...
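To make the cited idea concrete, here is a minimal sketch of an RRT that rejects branches violating turn-radius and climb-rate limits as the tree grows. It is not the planner of [7]: the airspeed, limits, step length, sampling box and cylindrical obstacles are all assumptions chosen for illustration.

```python
import math
import random

V = 13.0             # airspeed, m/s (assumed)
R_MIN = 25.0         # minimum turn radius, m (assumed)
GAMMA_MAX = 0.30     # maximum climb/descent flight-path angle, rad (assumed)
STEP = 30.0          # tree expansion step, m (assumed)

def feasible(grandparent, parent, child):
    """Reject edges whose climb angle or implied heading change is too large."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = grandparent, parent, child
    horiz = math.hypot(x2 - x1, y2 - y1)
    if abs(math.atan2(z2 - z1, horiz)) > GAMMA_MAX:          # climb-rate limit
        return False
    h_in = math.atan2(y1 - y0, x1 - x0)                      # incoming heading
    h_out = math.atan2(y2 - y1, x2 - x1)                     # outgoing heading
    dpsi = abs((h_out - h_in + math.pi) % (2 * math.pi) - math.pi)
    return dpsi <= STEP / R_MIN          # arc of radius R_MIN fits in one step

def rrt(start, goal, obstacles, iters=2000):
    """start/goal: (x, y, z); obstacles: list of ((x, y), radius) cylinders."""
    parent = {start: start}   # root's parent is itself: assumed heading along +x
    for _ in range(iters):
        q = goal if random.random() < 0.1 else (
            random.uniform(-500, 500), random.uniform(-500, 500),
            random.uniform(50, 150))
        near = min(parent, key=lambda n: math.dist(n, q))
        d = math.dist(near, q)
        if d == 0.0:
            continue
        new = tuple(a + (b - a) * STEP / d for a, b in zip(near, q))
        if any(math.hypot(new[0] - ox, new[1] - oy) < r
               for (ox, oy), r in obstacles):
            continue                     # collision with a mapped obstacle
        if not feasible(parent[near], near, new):
            continue                     # violates turn or climb constraints
        parent[new] = near
        if math.dist(new, goal) < STEP:
            return parent, new           # follow parents back for the waypoints
    return parent, None
```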
Article
In this paper, a UUV (Unmanned Underwater Vehicle) recovery path planning method with known starting and ending vectors is studied. A local structure diagram is designed based on the distance and orientation information about the obstacles. According to the local structure diagram, a Rapidly exploring Random Tree (RRT) method with feedback is used to generate a 3D Dubins path that approaches the target area gradually, and the environmental conditions under which the UUV reaches a specific target area are discussed. The simulation results demonstrate that this method can effectively reduce the calculation time and the amount of data storage required for planning. Meanwhile, the smooth spatial path generated can be used to further improve the feasibility of practical UUV applications.
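For a flavour of how fixed start and end heading vectors constrain such a path, the sketch below evaluates a single planar Dubins word (LSL) using the standard formulas; the cited work builds 3D Dubins paths inside an RRT, which this does not attempt. The turn radius rho is a free parameter here, not a value from the paper.

```python
import math

def mod2pi(a):
    return a % (2.0 * math.pi)

def dubins_lsl_length(q0, q1, rho):
    """q0, q1: poses (x, y, heading in rad); rho: minimum turn radius.
    Returns the LSL path length, or None if this word is infeasible."""
    dx, dy = q1[0] - q0[0], q1[1] - q0[1]
    d = math.hypot(dx, dy) / rho                 # normalised separation
    theta = math.atan2(dy, dx)
    alpha, beta = mod2pi(q0[2] - theta), mod2pi(q1[2] - theta)
    sa, sb = math.sin(alpha), math.sin(beta)
    ca, cb = math.cos(alpha), math.cos(beta)
    p_sq = 2.0 + d * d - 2.0 * math.cos(alpha - beta) + 2.0 * d * (sa - sb)
    if p_sq < 0.0:
        return None                              # LSL does not exist here
    tmp = math.atan2(cb - ca, d + sa - sb)
    t = mod2pi(-alpha + tmp)                     # first left arc (rad)
    q = mod2pi(beta - tmp)                       # final left arc (rad)
    return (t + math.sqrt(p_sq) + q) * rho
```

A full planner would evaluate all six words (LSL, RSR, LSR, RSL, RLR, LRL) and keep the shortest feasible one.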
... This method has been widely used for path planning in various applications. Griffiths et al. [17] applied a modified RRT algorithm as a global path planner, which searches over output states instead of inputs and generates a list of waypoints. Kothari et al. [18] proposed a dynamically constrained RRT algorithm, which includes a chance-constraints framework to account for uncertainty, for probabilistically robust path planning in uncertain environments. ...
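As a hedged illustration of the chance-constraint idea (my own simplification, not Kothari et al.'s formulation), the check below approximates the probability that a Gaussian-distributed vehicle position penetrates a circular obstacle and compares it with an allowed risk level delta; all names and numbers are assumptions.

```python
import numpy as np
from scipy.stats import norm

def chance_constraint_ok(mean, cov, obs_center, obs_radius, delta=0.05):
    """Approximate P(position inside obstacle) <= delta for a Gaussian position."""
    mean = np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    diff = mean - np.asarray(obs_center, dtype=float)
    dist = np.linalg.norm(diff)
    u = diff / dist                          # direction obstacle centre -> mean
    sigma = np.sqrt(u @ cov @ u)             # position std. dev. along that axis
    clearance = dist - obs_radius            # signed distance to the boundary
    p_hit = norm.cdf(-clearance / sigma)     # one-dimensional tail approximation
    return p_hit <= delta

# e.g. chance_constraint_ok([10, 0], np.diag([1.0, 1.0]), [0, 0], 5.0) -> True
```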
Article
Unmanned aerial vehicles (UAVs) have developed rapidly over the last decade. However, the increasing number of severe flight collision events caused by the explosive growth of UAVs has drawn widespread concern. It is therefore important to investigate safety management approaches for UAVs to ensure safe and efficient operation. In this paper, we present a comprehensive overview of safety management approaches at the large, middle and small scales. In large-scale safety management, path planning is a crucial issue for ensuring the safe and ordered operation of UAVs globally. In middle-scale safety management, conflict detection and resolution methods are the central concern. In small-scale safety management, real-time collision avoidance is the last line of defense for ensuring safety. Moreover, a UAV can be regarded as a terminal device connected through a communication and information network. Therefore, the enabling technologies, such as sensing, command-and-control communication, and collaborative decision-making and control, are studied last.
... Optical flow sensors are used for collision avoidance, altitude measurement, and position stabilization during the landing stage [16]. In [17], an optical flow mouse sensor was used for height approximation and terrain navigation. In [18], optical flow was used as feedback information to regulate automatic vertical landing on a moving platform. ...
Article
The high estimated position error (EPE) in current commercial-off-the-shelf GPS/INS impedes precise autonomous takeoff and landing flight operations. To overcome this problem, in this paper we propose an integrated GPS/INS/Optical Flow (OF) solution in which the OF provides an accurate augmentation to the GPS/INS. To ensure accurate and robust OF augmentation, we have used a robust modeling method to estimate OF based on a set of real-time experiments conducted under various simulated helicopter-landing scenarios. Because the accuracy of the OF measurements depends on the accuracy of the height measurements, we have developed a real-time testing environment to model and validate the obtained dynamic OF model at various heights. The performance of the obtained OF model matches the real OF sensor with 87.70% fitting accuracy. A mean error of 0.006 m/s between the real OF sensor velocity and the velocity of the OF model is also achieved. The velocity measurements of the obtained OF model and the position of the GPS/INS are used in a dynamic model-based sensor fusion algorithm. In the proposed solution, the OF sensor is engaged when the vehicle approaches a landing spot that is equipped with a predefined landing pattern. The proposed solution succeeded in performing helicopter auto takeoff and landing with a maximum position error of 27 cm.
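The following is a minimal one-axis sketch of the kind of fusion described: a constant-velocity Kalman filter that takes GPS/INS position fixes and switches in optical-flow-derived velocity updates near the landing pattern. The model, noise values and rates are assumptions, not the paper's dynamic model-based fusion.

```python
import numpy as np

class PosVelFilter:
    """1-D constant-velocity Kalman filter fusing position and velocity fixes."""
    def __init__(self, dt=0.02):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.Q = np.diag([1e-4, 1e-3])               # process noise (assumed)
        self.x = np.zeros((2, 1))                    # [position; velocity]
        self.P = np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, H, R):
        y = np.array([[z]]) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

H_GPS = np.array([[1.0, 0.0]])     # GPS/INS measures position
H_OF = np.array([[0.0, 1.0]])      # optical flow (scaled by height) gives velocity
R_GPS = np.array([[4.0]])          # m^2 (assumed)
R_OF = np.array([[0.01]])          # (m/s)^2 (assumed)

kf = PosVelFilter()
kf.predict()
kf.update(1.8, H_GPS, R_GPS)       # a GPS/INS position fix
kf.update(0.4, H_OF, R_OF)         # an OF velocity fix while over the pattern
```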
... Stephen Griffiths, in his research, discussed two strategies for obstacle and terrain avoidance. Flight tests have confirmed the feasibility of these approaches and demonstrated further improvements [18]. Larry Lipera developed the iSTAR MAV, a unique 1.814 kg, 22.8 cm diameter ducted MAV [19]. ...
... Miniature Air Vehicles (MAVs) fall under the category of mini UAVs, with wingspans ranging from 0.25 m to 2 m. Military applications for MAVs include reconnaissance, surveillance, battle damage assessment, and communications relays [Griffiths et al. (2006)]. Civilian applications for MAVs include disaster management, remote sensing, and traffic monitoring. ...
... The Obstacle Avoidance (OA) controller is successfully implemented in a Hardware-In-Loop Simulation (HILS) testbed, and its performance is evaluated for various scenarios in which the MAV is simulated to fly in an obstacle-rich urban environment [Griffiths et al. (2006)]. ...
Article
Most applications need Miniature Aerial Vehicles (MAVs) to be capable of navigating in urban environments, where many obstacles of different types and sizes, such as buildings and trees, exist. The main aim of an autonomous MAV is therefore the ability to detect obstacles in its path and avoid them in order to achieve robust autonomous flight. Since accurate modelling of some critical hardware, such as actuators, is difficult in numerical simulation, Hardware-In-Loop Simulation (HILS) is used, which replaces the simulation models of flight-critical hardware such as the OBC, servos, and communication devices with the actual hardware in the loop. The most important feature of the HILS setup is its capability to perform Real-Time (RT) simulation. In the present work, a PC-based HILS test bed is developed, consisting of a host PC, a target PC, a ground station PC, an On-Board Computer (OBC) and RC servos. A reactive Obstacle Avoidance controller based on a pair of miniature Laser Range Finders (LRFs) is developed for achieving autonomous flight of MAVs in urban environments with street canyons and obstacles. The LRFs are forward looking, attached to the MAV body at an angle of 30° on either side of the X-axis, with their rays aligned parallel to the body XY-plane. The ranges to obstacles measured by the left and right LRFs are fed to two PID controllers, which compute the required heading angle command for the MAV to turn. The Obstacle Avoidance (OA) controller is successfully implemented in the HILS testbed, and its performance is evaluated for various scenarios in which the MAV is simulated to fly in an obstacle-rich urban environment.
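A minimal sketch of the control idea in the abstract: each laser range finder feeds its own PID loop whose error is the shortfall between a desired clearance and the measured range, and the two outputs bias the heading command away from the nearer obstacle. The gains, clearance distance and rates are assumed here, not the authors' tuned values.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

D_SAFE = 40.0                         # desired clearance, m (assumed)
left_pid = PID(kp=0.02, ki=0.0, kd=0.005, dt=0.05)
right_pid = PID(kp=0.02, ki=0.0, kd=0.005, dt=0.05)

def heading_command(psi_nominal, range_left, range_right):
    """Heading command (rad); a positive increment is taken as a right turn."""
    err_left = max(0.0, D_SAFE - range_left)    # only act inside the safety zone
    err_right = max(0.0, D_SAFE - range_right)
    # an obstacle on the left pushes the command to the right, and vice versa
    return psi_nominal + left_pid.step(err_left) - right_pid.step(err_right)
```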
... We assume that nodes can move in space, in relative terms as well as with reference to a common positioning system such as GPS. Also, nodes come with basic autopilot capability and can deal with low-level motor/control and obstacle avoidance issues in an autonomous way, in the spirit of [23]. Thus the application does not have to micro-manage nodes, and can control their movement in a high-level fashion, by specifying target locations or supplying intermediate waypoints, as usual in many robotic systems [3]. ...
Conference Paper
Many pervasive computing applications can benefit from the advanced operational capability and diversity of sensing and actuation resources of modern robotic platforms. Even more powerful functionality can be achieved by combining robots with different mobility capabilities, some of which may already be deployed in the area of interest. However, the current programming frameworks make such flexible resource utilization a daunting task, as the application developer is responsible for the laborious and awkward management of the heterogeneity and transient availability of resources and services, which is done in a manual way. TeCoLa is a programming framework addressing the above issues. It supports structured services as first-class entities, which can appear/disappear dynamically, and are accessed in a transparent way. In addition, to ease the task of multi-robot programming, TeCoLa supports the creation and management of teams based on the dynamic service capability and availability of individual robots. Teams are maintained behind the scenes, without any effort from the application programmer. Furthermore, they can be controlled through team-level service interfaces which are instantiated in an automatic way, based on the services of the robots that are members of the team. We present a first implementation of TeCoLa for a Python environment, and discuss how we test the functionality of our prototype using a software-in-the-loop approach.
... Motivated by bird/insect flight patterns, many research groups have looked into using optical flow measurements for localization or obstacle avoidance on different robotic platforms. For example, the MAGICC Lab installed multiple optical mouse sensors at different locations on micro UAVs to achieve obstacle avoidance and navigation within a valley [8]. Similar optical flow approaches coupled with gyroscopes were designed to support indoor navigation of a 30 g micro UAV [9] and hovering flight and vertical landing control of vertical takeoff and landing (VTOL) UAVs [10]. ...
... Nevertheless, no calibration results for a wide-field optical flow field obtained with a fixed-wing UAV exist in the literature, and only a few researchers have focused on outdoor open environments [3,8,19]. ...
... The motion models for rotations 1 [Eq. (8)] and 2 [Eq. (9)] are used later for the ground validations of optical flows using a rotation table. ...
Article
Wide-field optical flow information can benefit the navigation of small or micro unmanned aerial vehicles in GPS-degraded/denied environments, inspired by the study of insect/bird flights. This paper focuses on a flight-test evaluation of navigation information in wide-field optical flow, using flight data collected by a wide-angle camera installed on a fixed-wing unmanned aerial vehicle at altitudes ranging from 30 to 183 m above the ground. A comprehensive evaluation methodology is proposed for the comparison between optical flows computed from videos and the reference motion fields determined from conventional navigation sensors including GPS, a laser range finder, and inertial sensors. Seven sets of unmanned aerial vehicle flight data including a total of approximately 72,000 image frames are used for both full-flight temporal evaluation and representative interframe spatial evaluation of wide-field optical flow information for unmanned aerial vehicle navigation purposes. The evaluation results showed a good match between vision-derived optical flows and reference motion fields from conventional navigation sensors. Finally, one benchmark data set is shared at the authors' website for open access from the research community.
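The comparison described above relies on predicting the flow a moving camera should see. Below is a hedged sketch of that prediction using the standard pinhole motion-field equations (Longuet-Higgins/Prazdny form); the axes and sign conventions are assumed here and may differ from those used in the paper.

```python
def reference_flow(x, y, T, omega, Z):
    """Predicted optic flow at a normalised image point (focal length = 1).
    x, y: normalised image coordinates.
    T = (Tx, Ty, Tz): camera translation in camera axes, m/s.
    omega = (wx, wy, wz): camera angular rate, rad/s.
    Z: depth of the viewed point along the optical axis, m."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / Z + x * y * wx - (1 + x**2) * wy + y * wz
    v = (-Ty + y * Tz) / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return u, v
```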
... The reality of experimental robotic trajectories shows that an optic flow balance strategy always leads to an oscillatory trajectory around the corridor midline, because the robot is constantly avoiding the two walls (the right one or the left one). The optic flow balance strategy has been implemented on board many terrestrial wheeled robots ([29]; [38]; [39]; [40]; [41]; [42]; [43]; [44]; [45]; [46]; [47]; [48]; [49]) and many aerial vehicles (blimp: [50]; simulated hovercraft: [51]; simulated helicopter: [52]; helicopter: [47]; unmanned air vehicle: [53]). In each experiment the robot was able to follow the corridor or canyon midline, with more or less oscillation, in both indoor and outdoor conditions, because the equilibrium position along the corridor midline is in fact obtained by jointly avoiding the two lateral walls. ...
... The resulting closed loop has been shown to minimize asymmetry (imbalance and shift) in the global optic flow stimulus [19,20], providing an obstacle-symmetric reference trajectory that achieves safe navigation between obstacles. This approach provides an attractive alternative paradigm for microsystems with limited sensory and processing capabilities, and has been demonstrated in several instances [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36]. ...
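A minimal sketch of the balance strategy these excerpts describe: yaw toward the side with the weaker translational flow so the two sides equalise, and slow down when the summed flow grows (a narrowing corridor). The gains and set-point are assumptions, not values from any of the cited controllers.

```python
K_YAW = 0.8          # yaw-rate gain (assumed)
K_SPEED = 2.0        # forward-speed gain (assumed)
OF_SUM_SET = 3.0     # desired left+right translational flow, rad/s (assumed)

def balance_commands(of_left, of_right):
    """of_left, of_right: translational optic-flow magnitudes on each side.
    Positive yaw command = turn right, i.e. toward the side with weaker flow."""
    yaw_rate_cmd = K_YAW * (of_left - of_right)
    speed_cmd = max(0.0, K_SPEED * (OF_SUM_SET - (of_left + of_right)))
    return yaw_rate_cmd, speed_cmd
```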
Article
Vision-based bio-inspired control strategies offer great promise in demonstrating safe autonomous navigation of aerial microsystems in completely unstructured environments. This paper presents an innovative navigation technique that embeds bio-inspired wide-field processing of instantaneous optic flow patterns within the H-infinity loop shaping synthesis framework, resulting in a dynamic controller that enables robust stabilization and command tracking behavior in obstacle-laden environments. The local environment is parameterized as a series of simpler corridor-like environments in the optic flow model, and the loop shaping controller is synthesized to provide robust stability across the range of modeled environments. Experimental validation is provided using a quadrotor aerial vehicle across environments with large variation in local structure, with the loop shaping controller demonstrating better tracking performance than other comparable controllers in straight-line corridors of different widths. The current approach is computationally efficient, as it does not involve explicit extraction of an environment depth map, and makes for an attractive paradigm for aerial microsystem navigation in urban environments.
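A hedged sketch of the wide-field processing step the abstract refers to: instead of extracting a depth map, the measured azimuthal flow is projected onto a few low-order harmonics, and those scalar outputs drive the controller. The static gains below stand in for the paper's H-infinity loop-shaping design, and which projection maps to which state depends on the flow model assumed; everything here is an illustrative assumption.

```python
import numpy as np

def wide_field_outputs(azimuths, flow):
    """azimuths: uniformly spaced viewing directions (rad);
    flow: tangential optic flow measured at each azimuth."""
    azimuths = np.asarray(azimuths, dtype=float)
    flow = np.asarray(flow, dtype=float)
    dg = azimuths[1] - azimuths[0]                 # uniform spacing assumed
    a1 = np.sum(flow * np.cos(azimuths)) * dg / np.pi
    b1 = np.sum(flow * np.sin(azimuths)) * dg / np.pi
    return a1, b1          # low-order cues related to lateral offset and heading

def lateral_command(azimuths, flow, k_a=1.0, k_b=0.5):
    """Static-gain stand-in for the loop-shaping controller (assumed gains)."""
    a1, b1 = wide_field_outputs(azimuths, flow)
    return -(k_a * a1 + k_b * b1)
```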
... Several authors inspired by flying insects have recently started to use the OF (i) as a means of landing automatically in the case of rotorcraft (Chahl et al 2004, Ruffier and Franceschini 2005, Zufferey et Ruffier 2012, de Croon et al 2013) and that of fixed-wing unmanned aerial vehicles (Green et al 2004, Zufferey et al 2010), (ii) avoiding obstacles (Iida 2001, Green et al 2004, Ruffier and Franceschini 2005, Griffiths et al 2006, Zufferey and Floreano 2006, Beyeler et al 2009b, Kendoul et al 2009a, Moore et al 2010), (iii) hovering (Herisse et al 2008, Kendoul et al 2009b, Briod et al 2013), and (iv) following terrain (Ruffier and Franceschini 2005, Garratt and Chahl 2008, Moore et al 2010). ...
... • using the OF directly in a control loop without any need for information about the velocity, acceleration, altitude or even about the characteristics of the terrain. Some OF-based aerial robots are able to perform tasks such as taking off, terrain-following and landing safely and efficiently by mimicking insects' behavior (Ruffier and Franceschini 2004, 2005, Hérissé et al 2010, Moore et al 2010), avoiding frontal obstacles (Griffiths et al 2006), as well as hovering and landing on a moving platform (Herisse et al 2012, Ruffier and Franceschini 2013). ...
Article
Two bio-inspired guidance principles involving no reference frame are presented here and were implemented in a rotorcraft, which was equipped with panoramic optic flow (OF) sensors but (as in flying insects) no accelerometer. To test these two guidance principles, we built a tethered tandem rotorcraft called BeeRotor (80 grams), which was tested flying along a high-roofed tunnel. The aerial robot adjusts its pitch and hence its speed, hugs the ground and lands safely without any need for an inertial reference frame. The rotorcraft's altitude and forward speed are adjusted via two OF regulators piloting the lift and the pitch angle on the basis of the common-mode and differential rotor speeds, respectively. The robot, equipped with two wide-field OF sensors, was tested in order to assess the performance of the following two guidance systems involving no inertial reference frame: (i) a system with a fixed eye orientation based on the curved artificial compound eye (CurvACE) sensor, and (ii) an active system of reorientation based on a quasi-panoramic eye which constantly realigns its gaze, keeping it parallel to the nearest surface followed. Safe automatic terrain following and landing were obtained with CurvACE under dim-light to daylight conditions, and with the active eye-reorientation system over rugged, changing terrain, without any need for an inertial reference frame.
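To illustrate the optic-flow-regulator idea behind this line of work (a sketch in the spirit of the cited controllers, not the BeeRotor control laws; the gain and set-point are assumed): the ventral flow is roughly forward speed over height, so holding it at a set-point by adjusting the climb rate yields terrain following, and reducing the forward speed then brings the vehicle down smoothly.

```python
OMEGA_SET = 2.5      # ventral optic-flow set-point, rad/s (assumed)
K_OMEGA = 1.2        # regulator gain (assumed)

def climb_rate_command(omega_ventral):
    """Positive = climb. Ventral flow ~ V_forward / height, so flow above the
    set-point means the ground is too close for the current speed: climb.
    Flow below the set-point: descend."""
    return K_OMEGA * (omega_ventral - OMEGA_SET)
```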