A typical head-on run projected onto a satellite terrain map  

Source publication
Conference Paper
Full-text available
A prototype, wide-field, optical sense-and-avoid instrument was constructed from low-cost commercial off-the-shelf components, and configured as a network of smart camera nodes. To detect small, general-aviation aircraft in a timely manner, such a sensor must detect targets at a range of 5-10 km at an update rate of a few Hz. This paper evaluates t...

Similar publications

Article
Full-text available
Airborne dust has broad adverse effects on human activity, including aviation, human health, and agriculture. Remote sensing observations are used to detect dust and aerosols in the atmosphere using long-established techniques. False color Red‐Green‐Blue (RGB) imagery using band differences sensitive to dust absorption (Dust RGB) is currently used...

Citations

... UAVs come with their own unique challenges of size, weight, and power compared to larger manned aircraft. Some related experiments have been performed with manned aircraft for use in UAVs ([21], [32]–[34]), and open-loop systems intended for UAV see and avoid ([26], [35]). ...
Conference Paper
The use of unmanned aerial vehicles (UAVs), or drones, has become ubiquitous in recent years. Collision avoidance is a critical component of path planning, allowing multi-agent networks of cooperative UAVs to work together towards common objectives while avoiding each other. We implemented, integrated, and evaluated the effectiveness of a low-cost, wide-angle monocular camera with real-time computer vision algorithms to detect and track other UAVs in local airspace and perform collision avoidance in the event of a communications degradation or the presence of non-cooperative adversaries, through experimental flight tests in which the UAVs were set on collision courses.
... ADS-B equipment is currently mandatory in portions of Australian airspace; the United States requires some aircraft to be equipped by 2020 [2] and the equipment will be mandatory for some aircraft in Europe by 2017 [4]. ... Being lightweight, low-energy passive sensors, RGB cameras have been widely considered for See-and-Avoid [5,8,11,19,25]. The main disadvantage of optical cameras is that the camera resolution must be high (125 μrad for the detection of general aviation [10]) for timely detection. But even with high-resolution camera images, the need for a wide field of view limits image magnification, so aircraft must be detected while they still appear very small (subpixel to a few pixels) and lack spatial features that would allow discrimination of the object. ...
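The resolution argument in the excerpt above can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the 125 μrad instantaneous field of view (IFOV) comes from the excerpt, but the aircraft dimensions are assumed values, not figures from the source.

```python
# How many pixels does an intruder span at typical detection ranges?
# IFOV is the per-pixel angular resolution cited in the excerpt; the
# aircraft extents below are illustrative assumptions.
IFOV_RAD = 125e-6       # per-pixel angular resolution (radians)
WINGSPAN_M = 11.0       # assumed light-aircraft wingspan
FRONTAL_M = 1.5         # assumed head-on fuselage extent

def pixels_on_target(extent_m: float, range_m: float,
                     ifov_rad: float = IFOV_RAD) -> float:
    """Small-angle approximation: pixels spanned = (extent / range) / IFOV."""
    return (extent_m / range_m) / ifov_rad

for r_km in (3, 5, 10):
    r_m = r_km * 1000
    print(f"{r_km:2d} km: wingspan ~{pixels_on_target(WINGSPAN_M, r_m):.1f} px, "
          f"head-on fuselage ~{pixels_on_target(FRONTAL_M, r_m):.1f} px")
```

At 10 km even an 11 m wingspan subtends only a handful of pixels, and the head-on fuselage roughly one, consistent with the excerpt's "subpixel to a few pixels" claim.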
... Detection ranges for automatic TIR imagery-based detections are shown in Table 3. Other reported vision-based SAA systems using high-resolution RGB cameras provided maximum detection ranges of 3.7 km [25], 4.83 km [19] and 8.04 km [5]. The proposed system provides a maximum detection range of 5.09 km using automatic thermal camera settings and up to 10.17 km using manual thermal camera settings. ...
Article
Full-text available
Thermal Infrared (TIR) imaging is a promising technology which can provide enhanced capabilities to current vision-based Sense-and-Avoid (SAA) systems. It allows operation under extreme illumination conditions, such as direct sun exposure and during nighttime. This paper presents a lightweight obstacle detection system for small UAVs that integrates a TIR camera and an Automatic Dependent Surveillance Broadcast (ADS-B) receiver. Algorithms for the detection of flying obstacles in TIR images were developed and TIR images were experimentally compared with synchronized RGB images for validation. Matching between aircraft detected in TIR images and those reported by an ADS-B receiver was performed to obtain distance information to the visually detected aircraft. We experimentally proved that our system is able to enhance individual ADS-B and TIR detection capabilities by detecting aircraft under challenging illumination conditions at real-time frame rates while providing distance estimations to visual detections.
... A technique for extracting the R0 value directly from the captured imagery is outlined in prior work [25]. For evaluating feature detectors, it is sufficient to compute Rmin alone. ...
... The apparent target area represents the pixel-based 'blob' that a potential target spans in image coordinates. Early efforts focused on a Gaussian fit function [25] to estimate the target size. The Gaussian assumption gave good initial values, but proved too sensitive to noise and was not appropriate for targets with a discontinuous radiance profile. ...
Article
Full-text available
Detecting collision-course targets in aerial scenes from purely passive optical images is challenging for a vision-based sense-and-avoid (SAA) system. Proposed herein is a processing pipeline for detecting and evaluating collision course targets from airborne imagery using machine vision techniques. The evaluation of eight feature detectors and three spatio-temporal visual cues is presented. Performance metrics for comparing feature detectors include the percentage of detected targets (PDT), percentage of false positives (POT) and the range at earliest detection (Rdet). Contrast and motion-based visual cues are evaluated against standard models and expected spatio-temporal behavior. The analysis is conducted on a multi-year database of captured imagery from actual airborne collision course flights flown at the National Research Council of Canada. Datasets from two different intruder aircraft, a Bell 206 rotor-craft and a Harvard Mark IV trainer fixed-wing aircraft, were compared for accuracy and robustness. Results indicate that the features from accelerated segment test (FAST) feature detector shows the most promise as it maximizes the range at earliest detection and minimizes false positives. Temporal trends from visual cues analyzed on the same datasets are indicative of collision-course behavior. Robustness of the cues was established across collision geometry, intruder aircraft types, illumination conditions, seasonal environmental variations and scene clutter.
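The per-frame rate metrics named in the abstract above can be sketched in a few lines. This is a hedged illustration, not the paper's actual evaluation code: the pixel-distance matching rule, its tolerance, and the assumption of one ground-truth intruder per frame are all choices made here for the example.

```python
# Sketch of percentage-of-detected-targets (PDT) and a percentage-of-
# false-positives rate, computed per frame against ground truth.
# Assumes one true intruder per frame; the matching tolerance (pixels)
# is an illustrative choice.
def match(det, truth, tol=3.0):
    """A detection matches truth if within tol pixels on both axes."""
    return abs(det[0] - truth[0]) <= tol and abs(det[1] - truth[1]) <= tol

def evaluate(detections_per_frame, truth_per_frame, tol=3.0):
    hits = false_pos = total_truth = total_dets = 0
    for dets, truth in zip(detections_per_frame, truth_per_frame):
        total_truth += 1
        total_dets += len(dets)
        if any(match(d, truth, tol) for d in dets):
            hits += 1
        false_pos += sum(1 for d in dets if not match(d, truth, tol))
    pdt = 100.0 * hits / total_truth
    pfp = 100.0 * false_pos / total_dets if total_dets else 0.0
    return pdt, pfp
```

For example, with truth positions [(10, 10), (12, 11)] over two frames and detections [[(10, 11)], [(40, 40), (12, 12)]], both targets are hit (PDT 100%) and one of three detections is spurious.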
... Each node is configured with a 2 megapixel (MP) Logitech C600 webcam, a 25 mm C-mount lens, and an embedded single-board ARM computer (SBC) (Minwalla et al. 2012). The camera nodes were linked by an Ethernet backplane. ...
Article
Full-text available
The range performance evaluation of a multi-camera electro-optical aircraft detection instrument, “DragonflEYE,” was conducted. A “range at first detection” (R0) quantity, evaluated from the temporal signal-to-noise ratio of potential targets on a collision course, is proposed as a generic metric for evaluating electro-optical systems. The methodology and evaluation process are discussed. Efficacy of the approach was tested by flying multiple collision trajectories, with the instrument mounted onto a Bell 205 helicopter acting as a surrogate unmanned aircraft system, while an instrumented Bell 206 Jet-Ranger acted as the intruder. The R0 values were extracted and subsequently compared to visual estimates by the flight crew. A mean detection range of R0 = 6.3 ± 1.7 km was observed to be within the margin of error for flight-crew detection range of 4.8 ± 2.0 km. Sensitivity analysis was conducted on the choice of threshold and the sensor’s angular resolution, with increased resolution yielding diminishing r...
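The abstract above describes R0 as the range at which a candidate target's temporal signal-to-noise ratio first crosses a threshold. A minimal sketch of that idea, assuming a simple history-window SNR, a threshold of 6, and synthetic data (none of these values come from the source):

```python
# Hedged sketch of a "range at first detection" style evaluation: track
# the temporal SNR of a candidate target's intensity across frames and
# report the range at which it first exceeds a threshold. The threshold
# and window length are illustrative assumptions.
from statistics import mean, stdev

def temporal_snr(samples):
    """SNR of the newest sample against the history mean and spread."""
    mu, sigma = mean(samples[:-1]), stdev(samples[:-1])
    return (samples[-1] - mu) / sigma if sigma > 0 else 0.0

def range_at_first_detection(intensities, ranges_km, threshold=6.0, history=5):
    """Largest range (earliest frame) at which temporal SNR crosses threshold."""
    for i in range(history, len(intensities)):
        window = intensities[i - history : i + 1]
        if temporal_snr(window) > threshold:
            return ranges_km[i]
    return None
```

With a noisy baseline followed by a sudden intensity jump as the intruder closes, the function returns the range logged at the first frame whose SNR exceeds the threshold.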
... The relationship between the FOV and detection range, based on the probability of an intruder, is also revealed. A prototype optical instrument is tested for S&A functionality in [28], where head-on collisions at typical general-aviation altitude and velocity are emphasized. Evaluation results also indicate that the airborne detection range can reach 6.7 km. ...
Conference Paper
A combination of see-through head-worn or helmet-mounted displays (HMDs) and imaging sensors is frequently used to overcome the limitations of the human visual system in degraded visual environments (DVE). A visual-conformal symbology displayed on the HMD allows the pilots to see objects, such as the landing site or obstacles, that would otherwise be invisible. These HMDs are worn by pilots sitting in a conventional cockpit, which provides a direct view of the external scene through the cockpit windows and a user interface with head-down displays and buttons. In a previous publication, we presented the advantages of replacing the conventional head-down display hardware with virtual instruments. These virtual aircraft-fixed cockpit instruments were displayed on the Elbit JEDEYE system, a binocular, see-through HMD. The idea of our current work is not only to virtualize the display hardware of the flight deck, but also to replace the direct view of the out-the-window scene with a virtual view of the surroundings. This imagery is derived from various sensors and rendered on an HMD, though without see-through capability. This approach promises many advantages over conventional cockpit designs. Besides potential weight savings, this future flight deck can provide a less restricted outside view, as the pilots are able to virtually see through the airframe. The paper presents a concept for the realization of such a virtual flight deck and states the expected benefits as well as the challenges to be met.