A UAV platform based on a
hyperspectral sensor for image
capturing and on-board processing
PABLO HORSTRAND1, RAUL GUERRA1, AYTHAMI RODRÍGUEZ1, MARÍA DÍAZ1,
SEBASTIÁN LÓPEZ1, (Member, IEEE), JOSÉ FCO. LÓPEZ1,
1Institute for Applied Microelectronics (IUMA), ULPGC, Spain
Corresponding author: Pablo Horstrand (e-mail: phorstrand@iuma.ulpgc.es).
This work has been supported by the European Commission through the ECSEL Joint Undertaking (ENABLE-S3 project, No. 692455) and the Spanish Government through the projects ENABLE-S3 (No. PCIN-2015-225) and PLATINO (No. TEC2017-86722-C4-1-R).
ABSTRACT Application-oriented solutions based on the combination of different technologies such
as unmanned aerial vehicles (UAVs), advanced sensors, precise GPS and embedded devices have led
to important improvements in the field of cyber-physical systems. Agriculture, due to its economic and social impact on the global population, which is expected to reach more than 9 billion by the year 2050, arises as a potential domain which could enormously benefit from this paradigm in terms of savings in time, resources and human labor, not to mention aspects related to sustainability and respect for the environment. This has led to a new revolution named precision agriculture (or precision farming), based on
observing and measuring inter and intra-field variability in crops. A key technology in this scenario is
the use of hyperspectral imaging, first used on satellites and later on manned aircraft, which comprises hundreds of spectral bands that allow hidden data to be converted into useful information. In this
paper, a hyperspectral flying platform is presented and the construction of the whole system is detailed.
The proposed solution is based on a commercial DJI Matrice 600 drone and a Specim FX10 hyperspectral
camera. The challenge in this work has been to adapt this latter device, mainly conceived for industrial
applications, into a flying platform in which weight, power budget and connectivity are paramount.
Additionally, an embedded board with advanced processing capabilities has been mounted on the drone
in order to control its trajectory, manage the data acquisition and allow on-board processing, such as the
evaluation of different vegetation indices (the normalized difference vegetation index, NDVI, the modified
chlorophyll absorption ratio index, MCARI, and the modified soil-adjusted vegetation index, MSAVI),
which are numerical and/or graphical indicators of the vegetation properties, and compression, which is
of crucial relevance due to the huge amounts of data captured. The whole system was successfully tested in a real scenario located on the island of Gran Canaria, Spain, where a vineyard area was inspected between May and August 2018.
INDEX TERMS unmanned aerial vehicle; hyperspectral; pushbroom sensor; vegetation index; on-board
processing.
I. INTRODUCTION
NOWADAYS, there is an increasing interest in the use
of unmanned aerial vehicles (UAVs) to collect data
for inspection, surveillance and monitoring in the areas of
defense, security, environmental protection and civil do-
mains, among others. The potential of these aerial platforms
in relation to others such as satellites or manned airborne
platforms is that they represent a lower-cost approach with
a more flexible revisit time and a better spatial and spectral
resolution, which permits a deeper and more accurate data
analysis [1].
In the scientific literature, we can find several research
publications in the aforementioned fields which confirm the current high demand for these aerial vehicles. Just to
name some, in [2] UAVs are used to detect power lines to
achieve automatic power line surveillance and inspection.
[3] presents several missions in different safety, security
and rescue field tests using UAVs and [4] illustrates the
advantages offered by the introduction of small-scale UAVs
in the near future in civilian applications, concretely, in
police departments, fire brigades and other homeland se-
curity organizations. However, it is in the agriculture field
where remote sensing relying on UAVs has been strongly
positioned as an emerging field of application [5]. In this
context, the use of UAVs makes it possible to periodically monitor the plants from growth until harvest and to track all external conditions that may affect their health. They allow collecting periodic information about the crops
using different kinds of sensors. In this scenario, several
multispectral sensors are being widely used on-board UAVs
for collecting spectral information that allows the generation of maps indicating aspects of the plant state [6]. One
of the most typical examples is the normalized difference
vegetation index (NDVI), which indicates the vigor
of the plants using the information of two different spectral
channels placed in the red and near infrared parts of the
electromagnetic spectrum [7].
Nevertheless, there are many more indices, in addition to the NDVI, that provide useful information for smart
farming applications [8]. These indices use the spectral
information corresponding to different parts of the electro-
magnetic spectrum. Due to this reason, the main goal of this
research work is to develop an autonomous system able to
carry a hyperspectral sensor for collecting information in
a wide range of spectral channels. While the multispectral
sensors typically carried by these UAVs collect just some
spectral bands, the hyperspectral sensors are able to sense
hundreds of very narrow spectral channels, providing in-
formation that may be extremely useful not only for smart
farming applications, but also for other applications such
as target detection, anomaly detection, and classification,
among others. However, the use of hyperspectral sensors in UAVs, instead of multispectral sensors, is not without
drawbacks. For instance, while there are multispectral sen-
sors that have been specifically designed for being carried
by a drone and for being managed using embedded devices,
finding a hyperspectral sensor that can be directly set up in
UAVs is not an easy task.
In this work, a Specim FX10 VNIR (visible and near
infrared) hyperspectral pushbroom sensor [9] has been
successfully set up in a DJI Matrice 600 drone [10] for
collecting high quality hyperspectral images. This sensor
is able to provide hyperspectral data with 224 spectral
bands with spectral wavelengths between 400 and 1000 nm,
and a spatial resolution of 1024 hyperspectral pixels per
image line. Additionally, an industrial IDS RGB camera
[11] has also been mounted in the UAV with the goal
of providing extra information in order to improve the
spatial quality of the acquired hyperspectral data. These
two sensors have been placed in a DJI Ronin MX gimbal
for reducing the drone vibrations and increasing the quality
of the acquired images. The system includes a Jetson TK1
NVIDIA embedded device [12] used as on-board computer
for autonomously controlling the drone flight and managing
the data acquisition. This board also enables the possibility
of carrying out the on-board processing of the acquired data, which greatly increases the applicability of this acquisition
system. In addition to the devices carried by the drone, a
ground station composed of a DJI radio controller (RC)
transmitter and an iPad tablet with an ad-hoc application
developed for this specific task has been set up. This tablet
communicates with the on-board computer through the RC
transmitter in such a way that the desired acquisition mission
can be easily configured by a non-expert user using a custom
developed iOS application.
The acquisition system developed in this work has been
tested in different scenarios where a set of images have
been collected over two different vineyards in the region
of Tejeda, Gran Canaria, Spain. These images have been
used for generating a set of different maps based on some
selected vegetation indices (VIs) in order to analyse specific
terrain properties. In addition, the possibility of on-board
compressing the acquired hyperspectral data in real-time, in
such a way that it can be efficiently transferred to a ground
control station for its further processing, has also been studied. To do so, the HyperLCA compressor [13] has
been implemented using NVIDIA CUDA (compute unified
device architecture) for taking advantage of the parallelism
of the low power graphics processing unit (GPU) included in
the Jetson TK1 NVIDIA device. The obtained results verify
the achievement of real-time compression performance, also
demonstrating the benefits of carrying this kind of on-board
computing devices in such aerial platforms.
This manuscript is organized as follows. Section II
displays the main characteristics of the different devices
included in the proposed UAV acquisition system as well as
in the ground station. This section also uncovers the benefits
provided by each of the included devices to the whole sys-
tem. Section III provides a detailed explanation on how each
of the devices described in Section II has been integrated in
the system. Section IV introduces the software application
developed for the individual elements in the system and
their interconnection, starting with the mission configuration
using the developed iOS application, followed by the camera
control and the flight control application running on the on-
board device. Section V gives information about the real
hyperspectral data acquired by the system. This data is an-
alyzed and processed in different ways in Section VI. First,
the data is calibrated in Section VI-A and then analyzed
using the NDVI, the modified chlorophyll absorption ratio
index (MCARI) and the modified soil-adjusted vegetation
index (MSAVI) in Section VI-B, showing an example of the
possible benefits of carrying a hyperspectral sensor
on a UAV acquisition system. Other processing capabilities
are outlined in Section VI-C, presenting the results of a sim-
ple classifier applied over a portion of one of the captured
images. Section VI-D displays the real-time compression
results obtained for the acquired data, demonstrating the
benefits of carrying an on-board computing device with
a relatively high computational capability. Finally, Section
VII discloses the obtained conclusions and outlines further
research lines.
II. PLATFORM DEVICES DESCRIPTION
This section details the characteristics of the hardware
elements being used in the platform, the main advantages
they offer and how they are fitted into the whole system.
Figure 1 shows a general overview of the platform elements
and how they are connected.
A. HYPERSPECTRAL SENSOR
Once the decision of using a hyperspectral sensor over a
multispectral one has been justified, the next point is to
select the proper technology among the available hyper-
spectral cameras in the market [14]. Generally speaking,
all the currently available hyperspectral cameras are based
on a panchromatic sensor, with the differences between them lying in the location of the filter. One of the state-of-
the-art hyperspectral solutions is based on a 2D imager
snapshot sensor where the hypercube is obtained in one
shot as the filter is built on-chip. The Firefleye UHD 185
implements that technology and, due to its low weight, is a good candidate to be mounted on a UAV [15]. Nevertheless,
this type of camera has an inverse relationship between
the spatial resolution and the spectral one, since the sensor
dimensions do not vary. In the case of the UHD 185, this translates into a spatial resolution of the hyperspectral cube of 50×50 pixels, which is not sufficient for many applications despite
the up-sampling process the camera carries out with the aid
of an additional sensor built in the camera.
Pushbroom hyperspectral cameras currently present a
better trade-off between spatial and spectral resolution, with a wider range of prices available in the market due to their popularity; hence, we have opted for this technology in this work. However, the integration of these cameras into a
UAV is complex since the captured frames share no overlap, so much more care and attention has to be put into the acquisition and post-processing. The scientific community has put considerable effort into the post-processing phase,
correcting remotely sensed images both geometrically and
radiometrically either by having a very precise positioning
system to have the camera orientation and position for every
frame [16] or by having incorporated an RGB snapshot
sensor [17] that through image processing is able to make
a 3D reconstruction and obtain the hyperspectral camera
position in each captured frame. As will be further detailed, one of the novelties of this work lies in the acquisition system: on the one hand, making the whole process completely automatic and user friendly and, on the other hand, finely controlling the speed and positioning of the platform,
improving the quality of the captured data and reducing
the post-processing efforts, or even eliminating them for
some specific applications where the user demands a prompt
result.
The Specim FX series hyperspectral pushbroom cameras
have been specifically designed for industrial applications
to enhance quality control processes. The target application
will determine which devices should be mounted in the UAV.
For the particular case of precision agriculture, our option
has been to acquire the FX10 as most of the biochemical
and biophysical attributes of the crops are obtained in the
VNIR range. The FX10 camera (shown in Figure 1) is
a hyperspectral imaging instrument, mainly designed for
industrial and laboratory use. It works as a push-broom
hyperspectral scanner, collecting hyperspectral data in the
VNIR (400 to 1000 nm) region through single fore optics.
However, this is the first time that an industrial hyperspec-
tral camera like this is installed in a DJI Matrice 600, and
the challenge has been to do it in a way in which communi-
cations, synchronization and control are optimized in order
to extract the best of it. Under well controlled conditions,
such as in a laboratory, the integration is straightforward.
However, if the camera is to be included in a UAV, aspects
such as vibrations and wind could affect the quality of the
images. This is the reason for using a high quality gimbal
as well as an industrial RGB camera, which is used for
extracting the exact position and rotation of each of the
images and permits future corrections in the hyperspectral
lines obtained at the same time. Table 1 shows the main
characteristics of the hyperspectral sensor.
Spectral Range: 400 - 1000 nm
Spectral Bands: 224
Spatial Sampling: 1024 px
Spectral FWHM: 5.5 nm
Frame Rate: 330 FPS (full frame), 9900 FPS (1 band selected)
FOV (α): 38°
Camera SNR (peak): 600:1
Camera Interface: GigE Vision
Dimensions: 150 × 85 × 71 mm
Weight: 1.26 kg
TABLE 1: Specim FX10 main characteristics.
B. THE DJI MATRICE 600 DRONE
The DJI Matrice 600 is an industrial drone, equipped with
the latest generation of flight controllers from DJI, the A3
controller. This module can be connected to an external
board through the UART port. The programming of the
board is supported by the Onboard SDK. This enables
developers to implement a wide variety of custom mission
programs for different applications. Table 2 presents a
summary of the main characteristics of the Matrice 600.
FIGURE 1: General overview of the UAV platform (air modules: Matrice 600, Ronin-MX gimbal, Jetson TK1, Specim FX10 and IDS RGB camera, interconnected through CAN, UART, GigE and USB3; the ground station is linked via Lightbridge).
The drone fulfills all the requirements for our application in terms of maximum flight speed, clearly above the 2 to 5 m/s range flown in these missions, and in terms of maximum payload (6 kg), which is above the total payload of the system. The drone hovering time with the selected battery type goes from 35 minutes (no payload) to 16 minutes (6 kg payload). In our particular case, with an approximate payload of 4.5 kg and an absolute take-off weight of about 13.5 kg, the hovering time is around 20 minutes.
1) Precision GPS GNSS System: DJI RTK
Flying platform positioning and heading are very important for accurate flight control, and useful as well for post-processing the captured data. That is why these data are not only used during the flight for control but also stored in a file.
In this work, a precise device such as the DJI RTK [18] has
been acquired in order to have an accurate positioning of the
system. The device can deliver 1 centimeter of horizontal
accuracy and 2 centimeters of vertical accuracy.
The main drawback of the device is its sampling frequency, which cannot be set higher than 50 Hz. This means that for some flights there will be frames without an associated position and, hence, a subsequent interpolation is required. This limitation comes from the communication between the A3 flight controller and the Jetson TK1.
General
Max take-off weight: 15.1 kg
Max speed (without wind): 18 m/s
Max angular velocity: Pitch 300°/s, Yaw 150°/s
Number of batteries: 6
Weight (with batteries): 9.1 kg
Battery
Model: TB47S
Type: LiPo 6S
Capacity: 4500 mAh
Weight: 595 g
Motor
Model: DJI 6010
kV: 130 rpm/V
Weight: 230 g
Propeller
Model: DJI 2170
Diameter × thread pitch: 533 × 178 mm
Weight: 58 g
TABLE 2: Matrice 600 main characteristics.
2) Camera Stabilization System: DJI Ronin-Mx
In order to damp as much as possible the flight vibrations, mainly produced by the wind, the camera has been mounted on a gimbal, more concretely the DJI Ronin-MX [19], which also eases its integration in the flying platform. This gimbal is mainly intended for filming cameras, which is why
a great deal of effort has been spent to mechanically adapt
the FX10 to the Ronin-MX, as explained in the next section.
The device incorporates a gyroscope which, in this case, has also been used to obtain the camera orientation (roll, pitch and yaw values). However, the drawback is again the maximum sampling frequency of 50 Hz when reading the values through the Onboard SDK, as is done here, with an accuracy of 0.1 degrees in all axes. Since having an accurate orientation for every captured hyperspectral frame is quite critical for geometric correction, a second system based on an industrial RGB camera has been installed, as explained hereafter.
C. ON-BOARD EMBEDDED PLATFORM: JETSON TK1
The NVIDIA Jetson TK1 development kit is a compact computing platform with a GPU based on the Kepler architecture; it includes all the basic functions to develop embedded applications and provides an NVIDIA CUDA platform with all the necessary tools to accelerate developments with high computational loads. The characteristics of this
embedded system are shown in Table 3. Flight control,
camera capture, communication and processing algorithms
have been implemented in the Jetson TK1 within this work.
There are more powerful devices available in the market in terms of computation, but the Jetson TK1 presents a good trade-off between size, weight and the number of available computational resources. Moreover, it has the interfaces required to build up this platform: a Gigabit Ethernet (RJ45) port, used to connect the hyperspectral camera; a USB 3.0 port to connect the RGB camera, which also requires a high transfer rate; a USB 2.0 port for the serial connection with the A3 controller through a USB-TTL device; and, finally, a SATA interface that allows the integration of an external SSD for very fast data transfer to storage, which is a key point in this type of application handling an enormous amount of information.
GPU: NVIDIA Kepler GK20 with 192 SM3.2 CUDA cores (up to 326 GFLOPS)
CPU: NVIDIA 2.32 GHz quad-core ARM Cortex-A15 with battery-saving shadow core
DRAM: 2 GB DDR3L 933 MHz EMC x16, 64-bit data width
Storage: 16 GB fast eMMC 4.51 (routed to SDMMC4)
Interfaces: Mini-PCIe, USB 3.0, USB 2.0, HDMI, RS232, Ethernet, SATA, JTAG, expansion I/O, ...
TABLE 3: Jetson TK1 features.
D. INDUSTRIAL RGB CAMERA
An industrial RGB camera with a very high frame rate
has also been incorporated into the system to assist hyper-
spectral frame registration and later correction. The sensor
comes with a USB 3.0 interface and an SDK that allows its integration in real-time system platforms, being able to
do parameter setup and image capture autonomously. These
characteristics make this sensor optimal for this application.
Table 4 shows the main characteristics of the sensor and
its optics. The sensor has a maximum resolution of 1280 × 1024 pixels but, since we are using the images for
registration and the exact same area is going to be captured
several times, the user is given the possibility to define an
image spatial binning in both the horizontal and vertical
directions, separately, thus potentially reducing the amount
of information that the system has to handle during the flight
mission.
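As a simple illustration of what spatial binning does to the data volume, the following C++ sketch averages b × b blocks of a single image channel; it is a generic, illustrative example and not the IDS uEye SDK call that actually configures binning on the sensor.

```cpp
#include <cstdint>
#include <vector>

// Average b x b blocks of one 8-bit channel (width and height are assumed to
// be divisible by b). This mimics the data reduction achieved with binning;
// on the real platform the reduction is configured in the camera itself.
std::vector<std::uint8_t> binChannel(const std::vector<std::uint8_t>& img,
                                     int width, int height, int b) {
    const int ow = width / b, oh = height / b;
    std::vector<std::uint8_t> out(static_cast<std::size_t>(ow) * oh);
    for (int y = 0; y < oh; ++y) {
        for (int x = 0; x < ow; ++x) {
            int sum = 0;
            for (int dy = 0; dy < b; ++dy)
                for (int dx = 0; dx < b; ++dx)
                    sum += img[(y * b + dy) * width + (x * b + dx)];
            out[y * ow + x] = static_cast<std::uint8_t>(sum / (b * b));
        }
    }
    return out;
}
```

For instance, binning a 1280 × 1024 frame by 2 in both directions leaves 640 × 512 pixels, a fourfold reduction in the data the system has to handle during the mission.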
Sensor Model: IDS UI-3140CP
Sensor Type: CMOS, color
Resolution: 1280 × 1024 pixels
Optical Sensor Class: 1/2"
Max. Frame Rate: 224 FPS
Exposure Time (min - max): 0.035 ms - 434 ms
Optics: 3.5 mm HR
Sensor Interface: USB 3.0
TABLE 4: Industrial IDS RGB camera features.
III. PLATFORM DEVICES INTEGRATION
As it was mentioned before, this is the first work carried
out to integrate a Specim FX10 into a Matrice 600, which
is mainly intended and adapted for filming cameras. For this
reason, considerable effort had to be put into the design of mechanical parts and electrical connections in order to create a properly functioning system.
Figure 2 shows a 3D layout of the mechanical elements designed and created with an Ultimaker 3 Extended 3D printer to fit the gimbal and replace some of the existing
structural components. The following design considerations
have been taken into account to replace these components:
• Camera degrees of freedom for position adjustment. It is critical that the system is properly balanced while powered off, so that when the gimbal is turned on the motors are not continuously correcting deviations and, hence, consuming extra energy, with the added risk of introducing vibrations. The designed components therefore allow adjusting the camera position in the x direction (flying direction), y direction (pointing to the right of the flight direction) and z direction (vertical axis).
• The camera points downwards by default. Since the system is designed to scan fields beneath the drone, it makes sense to have the sensor already targeting the scanned surface.
FIGURE 2: Gimbal construction elements breakout. (a) Front view. (b) Rear view.
• Weight optimization. It is important to keep the system as light as possible, so the component design provides a good trade-off between weight and the strength needed to hold everything tight.
As can be seen in Figure 2, mainly the original bottom and top plates of the Ronin-MX have been replaced by new custom ones, keeping the original vertical rods that close up the whole structure. Since the original bottom plate contains the electrical ports and the gyroscope, the new custom-made one has been designed to house these elements as well, which are critical for the functioning of the system.
The custom-designed plate that holds the camera from the top also has holes in place to fit the on-board system, i.e., the Jetson TK1 board. On top of this board, another element has been designed and fixed into the same holes as the Jetson TK1 to bear the SSD board where images are stored after being captured.
The second aspect of the design integration is the electrical connections and communication links. The FX10 camera has to be supplied with an input voltage within the range 12 V ± 10%, and the Jetson TK1 board accepts a slightly broader range of 12 V ± 15%. The Ronin-MX provides two output connections from the 4S LiPo 1580 mAh battery mounted on it through a voltage regulator that supplies 13 V. This value has been measured for reassurance throughout the battery discharge cycle and it remains constant. Since 13 V fulfills both the camera and board requirements, it is just a matter of making the physical connection through a cable. The output ports in the Ronin-MX are D-Tap female, the input port in the Jetson TK1 is a standard 2.1 mm DC barrel plug and the input port in the Specim FX10 is a Fisher-type S1031-Z012-130+.
The communication cable between the FX10 and the
Jetson TK1 is a standard Cat 6a Ethernet cable with a male
RJ45 connector on the board end and a male M12 connector
on the camera end. Finally, the serial connection between the Jetson TK1 and the A3 controller of the Matrice 600 has been implemented with a USB-TTL converter connected to the board and jumper wires connected to the A3. The reason why a USB-TTL solution was used instead of the serial port available on the GPIOs of the Jetson TK1 is mainly the voltage level difference between the two: the Jetson TK1 serial output level on the GPIOs is 1.8 V, whereas the A3 works at a standard value of 3.3 V.
Figure 3 shows the system fully assembled and flying during one of the flight campaigns performed on the 24th of July 2018 in Tejeda, Gran Canaria, Spain.
FIGURE 3: UAV flying platform.
IV. APPLICATION FOR CONTROLLING THE SYSTEM
In this section a detailed description of the developed
application for controlling the system is presented. The
implemented software makes use of different SDKs for con-
trolling the individual components. The main applications
involved in the whole process are enumerated below:
• End-user application running on an iOS device, programmed in Objective-C and based on the DJI Mobile
SDK [20].
• Flight control application running on the Jetson TK1, programmed in C++ and based on the DJI Onboard SDK [21].
• FX10 camera control application running on the Jetson TK1, programmed in C++ and based on the eBUS SDK from Pleora [22].
• IDS camera control application running on the Jetson TK1, programmed in C++ and based on the IDS uEye SDK [23].
A. END-USER APPLICATION
The end-user application serves three purposes. First, it receives the basic inputs from the user, through a graphical interface, to perform the initial flight mission calculation. Then, once these calculations are completed, the mission waypoints are presented on the map, defining the flight swathes. Finally, during the flight, the end-user application communicates with the on-board application to provide the user with valuable information regarding its progress. Figure 4
shows a few snapshots taken from the end-user application.
In Figure 4a, the user is defining the scouting area by
selecting two corners in the diagonal of the rectangle. Figure
4b lays out the mission waypoints after having applied all
user defined parameters. Figure 4c shows the user mission
parameters window and Figure 4d indicates the status of
some of the system variables, in this particular case, the
charging state of the batteries.
1) User inputs
The first step is to define the area to be inspected by the
drone. The application allows the user to provide this area
in two different ways: by selecting the two corners in the
diagonal of the rectangle (see Figure 4a); or by directly
selecting all the corners of an irregular area, in which
case the implemented code will calculate the rectangle with
minimum size including that area.
Next, the user defines some required information to be
able to perform the calculations and start the mission. For
instance, the sensors to be enabled and capturing during the
mission, the camera sensor binnings and two out of the three
main mission parameters:
• Relative height, h, from the ground
• Speed
• Frames per second, FPS
These parameters are linearly dependent, so that the third
one not defined by the user is obtained based on the
following relationships:
resolution = (2 · h · tan(α/2)) / spatial_sampling    (1)

speed = resolution × FPS    (2)
Equation 1 defines the resolution based on the camera and
flight parameters, such as the camera spatial_sampling
and field of view α, and the flight height h. The
spatial_sampling depends on the hyperspectral camera
spatial binning. The default value for the binning is 1, which
means that 1024 pixels will be captured per frame.
Figure 5 shows the flight parameter representation.
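To make the relationship between these parameters concrete, the following C++ sketch evaluates Equations (1) and (2) with the FX10 parameters of Table 1 (38° field of view, 1024 spatial pixels) and values close to those later used over Terrain 1; the function and variable names are illustrative assumptions and not part of the actual application.

```cpp
#include <cmath>
#include <cstdio>

// Ground sampling distance (m/pixel) from Equation (1):
// resolution = 2 * h * tan(alpha / 2) / spatial_sampling
double groundResolution(double height_m, double fov_deg, int spatialSampling) {
    const double pi = 3.14159265358979323846;
    const double halfFovRad = 0.5 * fov_deg * pi / 180.0;
    return (2.0 * height_m * std::tan(halfFovRad)) / spatialSampling;
}

int main() {
    const double h = 45.0;      // flight height above the ground (m)
    const double fov = 38.0;    // FX10 field of view (degrees)
    const int samples = 1024;   // spatial pixels per line (binning = 1)
    const double fps = 150.0;   // hyperspectral frame rate set by the user

    const double res = groundResolution(h, fov, samples);   // ~0.03 m/pixel
    const double speed = res * fps;                          // Equation (2), ~4.5 m/s
    const double swath = res * samples;                       // swath width, ~31 m

    std::printf("GSD: %.3f m, speed: %.2f m/s, swath width: %.1f m\n",
                res, speed, swath);
    return 0;
}
```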
2) Waypoint layout
After the user has defined all inputs and the remaining
parameters have been obtained by the application, it is time
to lay down the mission waypoints (Figure 4b), for which
again the resolution obtained in Equation (1) is essential
together with the overlap between different swathes. The
calculations performed by the application are explained in
the following four steps:
1) First, the rectangle sides are obtained in meters based
on the GPS coordinates of the corners, and the shortest
distance is selected to lay down the waypoints in
between those corners. In this way, each individual flight swath has a greater length and, therefore, the flying time is reduced.
2) Second, the image width in meters is obtained by
multiplying the resolution by the spatial sampling of
the camera.
3) Third, the image width is corrected by the user-defined overlap: width′ = width − overlap.
4) Finally, the number of waypoints to be added in
between the corner waypoints is calculated.
N_waypoints = (side_distance / width′) − 1    (3)
where side_distance is the length in meters of the shortest rectangle side. This calculation does not always provide an integer number, so it is rounded up to the next integer value, resulting in a larger overlap between images than the one initially defined.
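As an illustration of the four steps above and of Equation (3), the C++ sketch below computes the number of intermediate waypoints for an assumed 120 m short side, a 3 cm resolution and a 30% overlap; the values and variable names are hypothetical and only the rounding behaviour described in the text is reproduced.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Step 1 (assumed example): the shortest rectangle side, in meters.
    const double sideDistance = 120.0;

    // Step 2: image (swath) width = resolution x spatial sampling.
    const double resolution = 0.03;        // m/pixel, from Equation (1)
    const int spatialSampling = 1024;
    const double width = resolution * spatialSampling;    // ~30.7 m

    // Step 3: correct the width by the user-defined overlap (30% assumed).
    const double widthPrime = width - 0.30 * width;

    // Step 4: intermediate waypoints from Equation (3), rounded up to the next
    // integer, which yields a slightly larger overlap than requested.
    const int nWaypoints =
        static_cast<int>(std::ceil(sideDistance / widthPrime - 1.0));

    std::printf("Swath width: %.1f m, corrected width: %.1f m, "
                "intermediate waypoints: %d\n", width, widthPrime, nWaypoints);
    return 0;
}
```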
Based on all the mentioned inputs, the application defines
a rectangle and the intermediate waypoints to cover the
area fulfilling the user requirements. It also takes into account the remaining battery percentage and, in case the calculated remaining flight time is less than the estimated time to complete the mission, it warns the user and suggests increasing the altitude or the speed.
Once the calculation is consolidated, it is sent together
with the defined user input parameters to the application
running on the Jetson TK1, making use of the built-in
functions in both the Mobile SDK and Onboard SDK from
DJI and a specific protocol running on top of those SDKs
defined for this purpose.
B. FLIGHT CONTROL APPLICATION
As it has already been mentioned, the application running
on the Jetson TK1 uses the Onboard SDK from DJI for
the purpose of implementing the drone flight control and
the communication with the device on the ground. The
application follows the steps described next.
FIGURE 4: iOS application screenshots. (a) Input initial coordinates. (b) Waypoint layout. (c) Mission parameter setup. (d) System parameter overview.
1) The first step of the on-board application, after the
board has successfully initialized, is to start the lis-
tening task, implemented with a callback function that constantly listens for messages from the mobile application and implements the aforementioned protocol in order to receive the mission setup parameters.
2) Once all parameters and required information have
been successfully transferred and after the user has hit
the start button, the listening task is closed and the
main task together with the camera capture tasks are
started and run in parallel. The main task, responsible
for the drone flight control, is also in charge of
collecting telemetry data.
a) The take-off command is issued, followed by the
drone elevating itself up to 2 meters above the
ground, and hovering in that position.
b) Then the function Z-movement is called with the
relative height above the ground given by the
user as an input.
c) At this point, when the UAV has reached the user-defined height, the RGB sensor is calibrated in order to obtain good quality images. For this purpose, images are automatically captured while adjusting the sensitivity and exposure time until a defined average pixel value range has been reached.
d) Afterwards, the program enters the waypoint loop, which consists of first orienting towards the
FIGURE 5: UAV flight schematics.
next waypoint, calling the yaw control function.
e) Once the drone is oriented, it starts moving
towards the next waypoint. It must be pointed out
that a relative coordinate system is being used,
with distances defined in meters and with the
system reference fixed at the drone starting posi-
tion. Since the waypoint coordinates are defined as global GPS coordinates, they have to be converted to the new system. Another point to be taken into account is the fact that, for the hyperspectral capturing process, it is crucial that images are captured at a constant speed.
f) The waypoint control runs inside a loop, revisited every 5 ms, that constantly checks whether the next waypoint coordinates have already been reached. Measures against overshoot have also been implemented in order to make the drone stop as close to the waypoint as possible.
g) If it is the last point, the drone moves back to
its initial position. Otherwise, steps d to f are repeated until the last waypoint is reached.
h) Once the drone is located back in its initial co-
ordinates (where the system reference axes were
located), Z-movement function is called again to
descend and finally the drone lands, completing
the whole mission in a fully automatic fashion.
3) After the flight is over, the board gathers all data and copies them to the SSD, closes the flight control application and restarts the listening function, so the user can perform another mission or simply shut down the system.
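The sequence above can be condensed into a simple control-flow sketch. The C++ code below uses hypothetical stubs in place of the DJI Onboard SDK wrappers and camera tasks, so it only illustrates the order of operations, not the actual flight control implementation; in particular, the assumption that capture segments alternate with lateral transfer segments is an interpretation of the swath handling described in the text.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stubs standing in for DJI Onboard SDK wrappers and camera tasks.
struct Waypoint { double x, y; };                    // meters, relative frame
void takeOff()                    { std::puts("take-off, hover at 2 m"); }
void moveZ(double h)              { std::printf("move to %.0f m height\n", h); }
void calibrateRgbExposure()       { std::puts("adjust RGB gain/exposure"); }
void yawTowards(const Waypoint&)  { std::puts("rotate towards next waypoint"); }
void flyTo(const Waypoint&)       { std::puts("fly at constant speed (5 ms loop)"); }
void startCapture()               { std::puts("start camera tasks"); }
void stopCapture()                { std::puts("stop camera tasks"); }
void land()                       { std::puts("land"); }

void runMission(const std::vector<Waypoint>& wps, double height) {
    takeOff();                                   // step (a)
    moveZ(height);                               // step (b)
    calibrateRgbExposure();                      // step (c)
    for (std::size_t i = 0; i < wps.size(); ++i) {   // steps (d)-(g)
        yawTowards(wps[i]);
        const bool swath = (i % 2 == 1);         // assumed: capture along swath segments only
        if (swath) startCapture();
        flyTo(wps[i]);
        if (swath) stopCapture();
    }
    flyTo({0.0, 0.0});                           // step (g): return to start
    moveZ(2.0);                                  // step (h): descend...
    land();                                      // ...and land
}

int main() {
    runMission({{0, 0}, {125, 0}, {125, 22}, {0, 22}}, 45.0);
    return 0;
}
```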
It is important to highlight that after initialization, each
task in the board runs in parallel in a different thread and
all running threads are executed on the four cores available in the board CPU in an optimal manner to allow smooth overall system functioning. The drone flight control is in
charge of implementing the synchronization between itself
and the camera capture tasks. After the drone has reached
the first waypoint of an even swath, the main task triggers the start of capturing on all camera sensors, which internally are in charge of performing the capture and saving it in memory.
When the drone reaches the next waypoint (end of the
swath) the main task issues a stop command to the camera
tasks. Additionally, the main task will store telemetry data
in memory during those swathes, synchronized with the
camera tasks. In this way, data is captured only in the desired sectors, and both the system performance and the memory usage are optimized. Finally, once the drone has reached the end of the mission, the main task issues the command to stop the capturing process and close the camera tasks.
The entire application has been implemented in a modular
way, in order to allow future inclusion of additional sensors.
This means that the camera tasks have been developed shar-
ing a common interface, so the main task interaction with
any camera already mounted and/or to be mounted on the
drone is exactly the same. This interface includes parameters
such as frame rate, exposure time, horizontal/spatial and
vertical/spectral binning.
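To illustrate the kind of common interface described in this paragraph, the sketch below shows one possible C++ abstraction; the class and method names are assumptions made for illustration, and concrete implementations would wrap the eBUS and uEye SDK calls of the respective cameras.

```cpp
#include <cstdint>
#include <string>

// Hypothetical common interface shared by all on-board camera tasks
// (Specim FX10, IDS RGB, and any future sensor).
class CameraTask {
public:
    virtual ~CameraTask() = default;

    // Common acquisition parameters exposed to the main flight task.
    virtual void setFrameRate(double fps) = 0;
    virtual void setExposureTime(double ms) = 0;
    virtual void setBinning(int spatial, int spectral) = 0;   // horizontal / vertical

    // Start/stop are issued by the flight control task at swath boundaries.
    virtual void startCapture(const std::string& outputDir) = 0;
    virtual void stopCapture() = 0;

    // Number of frames written so far, e.g. for telemetry synchronization.
    virtual std::uint64_t framesCaptured() const = 0;
};
```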
An important remark about the described process is the use of more than one thread inside the camera tasks: in this way, if the next capture cycle arrives while the process being executed in the previous thread has not yet
finished, there is no system delay and images are properly
stored in memory. A memory buffer with a capacity of up to
32 frames has been defined so that each thread can handle
a different captured frame. All images are saved with an index that identifies when they were captured, so there is no misalignment once the whole mosaic is reconstructed.
For inter-task communication and synchronization pur-
poses, a shared memory has been created allowing the ex-
change of parameters between the processes being executed.
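A minimal sketch of this multi-threaded buffering scheme is given below, assuming a fixed 32-slot frame buffer and standard C++ threading primitives; the actual SDK calls, file formats and shared-memory mechanism are omitted.

```cpp
#include <array>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>

// Frame with the capture index used to keep the mosaic ordered.
struct Frame {
    std::uint64_t index;
    std::vector<std::uint16_t> data;   // 1024 pixels x 224 bands
};

// Fixed-capacity buffer: the acquisition thread pushes frames, several
// writer threads pop them and store them on the SSD.
class FrameBuffer {
public:
    void push(Frame f) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [this] { return count_ < kSlots; });
        slots_[(head_ + count_) % kSlots] = std::move(f);
        ++count_;
        notEmpty_.notify_one();
    }
    Frame pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [this] { return count_ > 0; });
        Frame f = std::move(slots_[head_]);
        head_ = (head_ + 1) % kSlots;
        --count_;
        notFull_.notify_one();
        return f;
    }
private:
    static constexpr std::size_t kSlots = 32;   // capacity mentioned in the text
    std::array<Frame, kSlots> slots_;
    std::size_t head_ = 0, count_ = 0;
    std::mutex m_;
    std::condition_variable notEmpty_, notFull_;
};
```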
V. DATA ACQUIRED BY THE DEVELOPED UAV SYSTEM
The purpose of the construction of the proposed platform is
to use it in the precision agriculture domain. Being more
specific, the goal is to perform several flight campaigns over a vineyard on the island of Gran Canaria. Figure 6
shows the exact location of the vineyard in a village of the
island called Tejeda, and the two Google Maps pictures of
the terrains under analysis. The exact coordinates of Terrain
1 are 27°59’35.6”N 15°36’25.6”W and the coordinates of
Terrain 2 are 27°59’15.2”N 15°35’51.9”W.
Results of one of the flight campaigns performed over
Terrain 1 are shown in Figure 7. This figure displays a
false RGB representation extracted from the hyperspectral
data acquired for each swath. The flight was performed at a height of 45 m over the ground, at a speed of 4.5 m/s and with the hyperspectral camera capturing frames at 150 FPS. The flight mission consisted of 12 waypoints, which provides 6 swathes, as represented in Figure 7. The number of frames per swath is between 4100 and 4200, which results in approximately 1.9 GBytes per swath. At this height, the ground sampling distance along and across the flight line is 3 cm, which gives a total coverage of 125 m length and 31 m width per swath. The mission took approximately 12 minutes to complete from take-off until landing.
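The reported data volume per swath can be checked with a quick back-of-the-envelope estimate, assuming that each 12-bit sample is stored as a 2-byte value:

4150 frames × 1024 pixels × 224 bands × 2 bytes ≈ 1.9 × 10^9 bytes ≈ 1.9 GBytes per swath,

which matches the figure reported above.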
Results of one of the flight campaigns performed over
Terrain 2 are shown in Figure 8. This figure displays a
false RGB representation extracted from the hyperspectral
data acquired for each swath. The flight was performed at a height of 45 m over the ground, at a speed of 6 m/s and with the hyperspectral camera capturing at 200 FPS. The flight mission consisted of 10 waypoints, which provides 5 swathes, as represented in Figure 8. The number of frames per swath is between 4900 and 5000, resulting in approximately 2.3 GBytes per swath. At this height, the ground sampling distance along and across the flight line is 3 cm, which gives a total coverage of 150 m length and 31 m width per swath. The mission took approximately 9 minutes to complete from take-off until landing.
A. FLIGHT CONTROL ACCURACY
One of the main goals of the proposed platform is to finely control the flight in order to capture high-quality hyperspectral images and reduce the post-processing efforts that have to be carried out to perform geometric and radiometric corrections.
FIGURE 6: Flight location. (a) Canary Islands overview. (b) Gran Canaria overview highlighting the area of Tejeda where the scouted terrains are located. (c) Terrain 1. (d) Terrain 2.
FIGURE 7: Captured swathes of Terrain 1.
FIGURE 8: Captured swathes of Terrain 2.
The designed software controls position and speed in a control loop that runs every 5 ms to obtain a stable flight
at an almost constant speed, which is very important for
the captured samples. The trajectory is constantly adjusted
as well, keeping the drone on track throughout the whole
mission. The system minimizes overshoot in position, when reaching a waypoint, and in angle, when rotating. This has
been achieved by defining extra control parameters that have
been carefully selected through several tests, both in the
provided DJI simulation environment and in real flights.
Figure 9 shows telemetry data captured in the flight
campaign performed over Terrain 2. Figure 9a shows the
ideal trajectory, in red, defined by the iOS application based
on the area selected by the user, and the real one, in green.
GPS longitude and latitude coordinate values have been
converted to meters. In the representation, the initial point
of the first sector has been set as the origin and the X
axis corresponds to north in the real world. The largest deviation, as can be visually appreciated, occurs in sector 2, producing a maximum tangential distance between the
ideal and real trajectories of approximately 2 meters. In
Figure 9b the altitude variation throughout the first sector
of the flight mission is plotted. Altitude is measured above sea level, so, in order to obtain the height above the ground, the initial position altitude of the drone, 1227.5 m, is subtracted.
Figure 9c shows the gimbal roll, pitch and yaw angles,
again over sector 1 of the same mission. As can be seen, the yaw angle starts with a considerable deviation due to the abrupt rotation the drone performs when it reaches a waypoint and orients itself towards the next one. This deviation is slowly corrected by the software until it reaches an offset, left on purpose to compensate for the slight misalignment of the camera relative to the drone horizontal line. The absolute speed of the drone over sector 1 of the flight mission is plotted in Figure 9d. It takes the platform a couple of seconds to reach the speed set by the user, which is then kept constant until the next waypoint is reached.
The results presented in Figures 9c and 9d suggest that the first couple of captured seconds in each sector should be cropped out, since they are going to be more distorted than desired. However, the rest of the flight looks very stable and, therefore, the quality of the images is high. Those images were captured in good weather conditions with relatively low wind speed. This is of course not always the case, which is the reason why the platform incorporates an industrial RGB camera for image registration and to improve accuracy. The system also captures telemetry data such as the one shown in Figure 9, the main drawback being its low sampling rate compared with the acquisition rate of the camera in terms of frames per second (FPS). In this particular case, while the camera captures images at 200 FPS, the telemetry system is only able to obtain data at 50 Hz.
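Since telemetry arrives at 50 Hz while hyperspectral frames are acquired at up to 200 FPS, each frame must later be assigned a position by interpolating between the two telemetry samples that bracket its timestamp. The C++ sketch below shows a minimal linear interpolation of this kind; the structure fields and function name are assumptions for illustration, not the platform's actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct TelemetrySample { double t; double lat, lon, alt; };   // 50 Hz samples

// Linearly interpolate the platform position at the capture time of a frame.
TelemetrySample interpolateAt(const std::vector<TelemetrySample>& tel, double t) {
    assert(tel.size() >= 2 && t >= tel.front().t && t <= tel.back().t);
    // First telemetry sample whose timestamp is not smaller than t.
    auto hi = std::lower_bound(tel.begin(), tel.end(), t,
                               [](const TelemetrySample& s, double v) { return s.t < v; });
    if (hi == tel.begin()) return tel.front();
    auto lo = hi - 1;
    const double w = (t - lo->t) / (hi->t - lo->t);
    return { t,
             lo->lat + w * (hi->lat - lo->lat),
             lo->lon + w * (hi->lon - lo->lon),
             lo->alt + w * (hi->alt - lo->alt) };
}
```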
VI. HSI DATA PROCESSING
This section highlights the main benefits of having a hyper-
spectral sensor on-board a flying platform. Similar platforms
based on such sensors have already detailed some possible
processing applications that can be carried out with the
captured data [24], [25], [26]. The novelty of this work is the capability of the system to perform on-board processing on the Jetson TK1 and provide the user with some results while the flight is still ongoing or has recently ended.
A. IMAGE CALIBRATION
The raw frames captured by the sensor are a measurement of the sensed light per sensor pixel, with each pixel value ranging from 0 to 4096 according to the camera pixel depth. However, the sensor response is not uniform across the
covered spectral range. The consequence is that, even if
the same amount of radiance is hitting all the pixels of
the sensor, the digital value measured in each pixel will be
different, especially for different wavelengths. Additionally,
the illumination conditions may not be uniform across the
covered spectral range. These facts make it impossible to directly use the raw images, which are affected by the sensor response, for the subsequent hyperspectral imaging applications. For instance, Figure 10 displays a hyperspectral frame captured over a certified Zenith Polymer white panel that reflects more than 99% of the incident radiation in the whole VNIR range [27]. As can be observed in this image, the sensor response varies with the spectral wavelength. Additionally, the response of different pixels that measure the same spectral wavelength also varies, which typically results in striping noise [28].
In order to solve these issues, the captured images are
converted to reflectance values, in such a way that each
image value is scaled between 0 and 1, representing the
percentage of incident radiation that the scanned object
reflects at each specific wavelength. The procedure for doing
so is explained next. Prior to the mission flight, an image of
a Zenith Polymer white calibration panel which is certified
to reflect more than 99% of the incident radiation in the
VNIR spectral range is acquired from the ground, as the one
shown in Figure 10, just using the gimbal with the camera
pointing downwards at a distance of around 1 meter. It is
important to make sure that the camera line of pixels is
entirely sensing the white panel and no pixel is left out. Up to 50 samples are taken at the exact same frame rate as the one used during the flight and then averaged to obtain the final reference used for calibration. A dark sample is
also required for calibration. This dark reference collects the
minimum values that the sensor measures when no radiance
is hitting it. In order to obtain the dark reference, the camera
lens is completely closed. Again, 50 samples are taken and
then averaged.
reflectance = (sensed_bitarray − dark_reference) / (white_reference − dark_reference)    (4)
FIGURE 9: Flight data for mission 2. (a) Drone trajectory. (b) Altitude above ground variation over sector 1. (c) Gimbal orientation angle variations over sector 1. (d) Absolute speed variation over sector 1.
FIGURE 10: Spectral response of the Specim FX10 hyperspectral camera (spatial dimension: pixels 1 to 1024; spectral dimension: 400-1000 nm).
Equation 4 shows how the raw data is calibrated for
obtaining its corresponding reflectance values. Figure 11
shows an example of how a set of real hyperspectral
signatures collected by the described acquisition system
are calibrated using the described procedure. In particular,
Figure 11a shows the average value of the white and dark
references across all the sensor pixels. Figure 11b shows
a portion of one real hyperspectral image collected by the
system developed in this work, from which some pixels have
been selected, corresponding to vegetation, soil and shad-
ows, marked in the image in green, blue and red colours,
respectively. The raw and calibrated spectral signatures of
these pixels are displayed in Figures 11c and 11d. As can be observed in these graphs, the raw spectral signatures are strongly affected by the sensor spectral response.
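A minimal C++ implementation of Equation (4) is sketched below; it assumes that the white and dark references have already been averaged over the 50 captured samples and share the spatial/spectral layout of the raw frame, and it clamps the result to [0, 1] as a safeguard (function and variable names are illustrative).

```cpp
#include <algorithm>
#include <vector>

// Convert one raw hyperspectral frame to reflectance, Equation (4).
// All vectors hold 1024 x 224 values in the same pixel/band order.
std::vector<float> toReflectance(const std::vector<float>& raw,
                                 const std::vector<float>& white,
                                 const std::vector<float>& dark) {
    std::vector<float> refl(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        const float denom = white[i] - dark[i];
        const float r = denom > 0.0f ? (raw[i] - dark[i]) / denom : 0.0f;
        refl[i] = std::min(1.0f, std::max(0.0f, r));   // clamp to [0, 1]
    }
    return refl;
}
```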
FIGURE 11: Pixel calibration for the real acquired hyperspectral data. a) White and dark signatures; b) selected pixels; c) raw signatures; d) calibrated signatures.
B. VEGETATION INDICES CALCULATION
One of the direct results that can be obtained with the
captured images is the generation of a set of VIs able
to provide information about the status of the crop [29].
This is performed by combining or transforming two or
more spectral bands designed to enhance the contribution of
vegetation properties, allowing reliable spatial and temporal
inter-comparisons of terrestrial photosynthetic activity and
canopy structural variations. There are multispectral sensors
specifically developed for smart farming applications that
efficiently collect images in some of the most widely used
spectral channels for calculating VIs, such as the Rededge
multispectral camera from Micasense [30]. Nevertheless,
the number of spectral channels collected by this kind of camera is strongly limited, and so is the number of VIs that can be correctly calculated. In this sense, carrying a hyperspectral scanner provides the advantage of being able to calculate any VI whose bands fall within the VNIR spectral range. Additionally, any other kind
of index [31], not necessarily oriented to smart farming
applications, can also be calculated, thus increasing the
overall applicability of the developed acquisition system.
As an example, some very well known indices, whose for-
mulas are detailed in Table 5, have been calculated for a real
hyperspectral image of 1024×1024 pixels that corresponds
to a portion of the first swath in the hyperspectral data
acquired over Terrain 1. These indices are graphically dis-
played in Figure 12. Concretely, the well-known NDVI [32],
which quantifies vegetation by measuring the difference be-
tween near-infrared (which vegetation strongly reflects) and
red light (which vegetation absorbs), has been measured and
displayed in Figure 12b. Additionally, the Modified Soil-Adjusted Vegetation Index (MSAVI) [33] shown in Figure 12c and the Modified Chlorophyll Absorption Ratio Index (MCARI) [33] shown in Figure 12d have been calculated. The first one focuses on automatically adjusting the NDVI
when applied to areas with a high degree of exposed soil
surface. The second one measures the relative abundance of
chlorophyll.
The VIs shown in Figure 12 already involve four different wavelengths in their calculation. Many other indices can be obtained, as indicated in [33], involving several other spectral wavelengths; hence the need for a sensor that captures more than just a few bands.
VI      Equation
NDVI    (R800 - R670) / (R800 + R670)
MSAVI   (1/2) * [2*R800 + 1 - sqrt((2*R800 + 1)^2 - 8*(R800 - R670))]
MCARI   [(R700 - R670) - 0.2*(R700 - R550)] * (R700 / R670)
TABLE 5: Vegetation indices used in this manuscript.
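As a concrete illustration of these formulas, the following NumPy sketch evaluates NDVI, MSAVI and MCARI on a calibrated reflectance cube by picking the bands closest to the wavelengths of Table 5. The helper names, the placeholder cube and the assumption of 224 evenly spaced bands between 400 and 1000 nm are illustrative choices, not part of the original processing chain.

import numpy as np

def nearest_band(wavelengths, target_nm):
    # Index of the band whose centre wavelength is closest to target_nm.
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def vegetation_indices(refl, wavelengths):
    # refl: (rows, cols, bands) reflectance cube scaled to [0, 1].
    # wavelengths: band-centre wavelengths in nm (here 224 values, 400-1000 nm).
    R550 = refl[..., nearest_band(wavelengths, 550)]
    R670 = refl[..., nearest_band(wavelengths, 670)]
    R700 = refl[..., nearest_band(wavelengths, 700)]
    R800 = refl[..., nearest_band(wavelengths, 800)]
    ndvi = (R800 - R670) / (R800 + R670)
    msavi = 0.5 * (2 * R800 + 1 - np.sqrt((2 * R800 + 1) ** 2 - 8 * (R800 - R670)))
    mcari = ((R700 - R670) - 0.2 * (R700 - R550)) * (R700 / R670)
    return ndvi, msavi, mcari

# Placeholder cube and wavelength grid matching the FX10 configuration.
wavelengths = np.linspace(400, 1000, 224)
cube = np.random.uniform(0.05, 0.95, size=(256, 256, 224))
ndvi, msavi, mcari = vegetation_indices(cube, wavelengths)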
C. OTHER RESULTS
In addition to the calculation of different indices, the hyper-
spectral data collected by the system developed in this work
is potentially useful for many other applications that may
benefit from being able to capture hyperspectral data from
a UAV. Some of them are focused on mineralogy applications [34], [35], security and/or surveillance applications [36], or target detection and/or tracking applications [37]. Tasks such as classification, spectral unmixing, anomaly detection or clustering are usually employed in this kind of hyperspectral imaging applications [38], [39], [40]. Just
to provide a brief example of the results that can be obtained when applying these processes to the hyperspectral data acquired by the proposed system, the spectral signatures displayed in Figure 11d, which correspond to vegetation, soil and shadows, have been used to train a Support Vector Machine (SVM) classifier. In particular, a one-versus-one (1vs1) linear kernel has been employed. The trained SVM
model has then been used for classifying the 1024×1024
image portion previously described, obtaining the classifica-
tion results shown in Figure 13b. Blue represents the pixels
classified as soil, green is used for the vegetation and red
for the shadows.
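A minimal sketch of this classification step, using the scikit-learn library rather than the authors' own implementation, is given below. The training spectra, class sizes and image portion are placeholders; only the overall scheme (a linear-kernel SVM trained on a few labelled signatures and applied pixel by pixel) follows the description above.

import numpy as np
from sklearn.svm import SVC

bands = 224
# Hypothetical training spectra standing in for the calibrated vegetation,
# soil and shadow signatures of Figure 11d (10 placeholder samples per class).
train_spectra = np.random.rand(30, bands)
train_labels = np.repeat(["vegetation", "soil", "shadow"], 10)

# Linear-kernel SVM; scikit-learn handles multi-class problems with a
# one-vs-one scheme, matching the 1vs1 strategy mentioned in the text.
clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(train_spectra, train_labels)

# Classify every pixel of a (placeholder) calibrated image portion; the paper
# uses a 1024x1024 portion, reduced here only to keep the example fast.
rows, cols = 128, 128
cube = np.random.rand(rows, cols, bands)
classification_map = clf.predict(cube.reshape(-1, bands)).reshape(rows, cols)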
D. ON-BOARD REAL-TIME DATA PROCESSING
The possibility of using the GPU included in the on-board
PC for accelerating the execution of different processes
represents an important advantage with respect to other
UAV systems. This GPU is the GK20A, based on the Kepler architecture, with 192 CUDA cores and a maximum of 2048 resident threads per multiprocessor. As can be seen, these characteristics fit our problem very well, since we have 1024 hyperspectral pixels per captured line, which is within the maximum number of concurrent threads that can be launched, and is also a multiple of 32, the warp size, which is the granularity at which threads are scheduled for execution.
In particular, we have exploited this advantage in two different ways: on the one hand, to obtain in real time the different VIs that can be directly sent to the user for visual inspection, and, on the other hand, to compress the hyperspectral data in real time so that it can be easily transferred to the ground station for further processing.
1) Vegetation index maps calculation in real-time
The acceleration of the calculation of different VIs by using
the aforementioned GPU is a relatively straightforward
process. This process consists in independently applying the
corresponding vegetation index equation to each of the 1024
hyperspectral pixels per captured line. Additionally, since
the RAM memory of the Jetson TK1 is actually common
to the board CPU (host) and the GPU (device), it can be
used very efficiently as unified memory in CUDA. By doing
so, the data transfers between host and device are avoided,
reducing the required computational time.
Additionally, in order to obtain correct vegetation index
values, the spectral bands of the pixel involved in the
calculation of the vegetation index need to be calibrated
using the white and dark references, as described in the
previous section. This process is also independently applied
to each of the 1024 hyperspectral pixels per captured line
in the same manner as the vegetation index calculation.
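The following host-side NumPy sketch mirrors this per-frame organization: each 1024-pixel line is calibrated and its index values are computed independently, which is the structure that maps to one CUDA thread per pixel on the GPU. The band indices and synthetic frames are placeholders; the real implementation runs as CUDA kernels over unified memory.

import numpy as np

def process_frame(raw_frame, white_ref, dark_ref, idx_red, idx_nir):
    # Calibrate one 1024-pixel frame (Equation 4) and evaluate NDVI for every
    # pixel independently, mirroring the one-thread-per-pixel CUDA mapping.
    refl = (raw_frame - dark_ref) / (white_ref - dark_ref)
    red, nir = refl[:, idx_red], refl[:, idx_nir]
    return (nir - red) / (nir + red)

# Placeholder references and a short synthetic stream of pushbroom frames.
pixels, bands = 1024, 224
white_ref = np.full((pixels, bands), 4000.0)
dark_ref = np.full((pixels, bands), 100.0)
frame_stream = (np.random.uniform(100.0, 4000.0, (pixels, bands)) for _ in range(5))

# idx_red and idx_nir are hypothetical band indices for roughly 670 and 800 nm.
ndvi_lines = [process_frame(f, white_ref, dark_ref, idx_red=100, idx_nir=149)
              for f in frame_stream]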
2) Real-time hyperspectral data compression
In addition to the vegetation index calculation, more pro-
cesses can be applied to the acquired hyperspectral data that
may be of interest for different hyperspectral imaging appli-
cations, such as classification, target detection or anomaly
detection. In general, the complexity of these processes,
as well as the complexity of the involved mathematical
algorithm, is high. In order to provide a possible solution that allows these kinds of processes to be executed with real-time or near-real-time results, we have decided
to compress the acquired hyperspectral frames on-board in
real-time, in such a way that they can be rapidly transferred
to a ground station in a streaming fashion for their further
processing. For such purpose, the HyperLCA compressor
[13] has also been implemented in the Jetson TK1, taking
advantage of its GPU and the CUDA programming model
for achieving real-time compression results.
As described in Section III, the acquisition data rate of
the Specim FX10 hyperspectral camera is up to 100 Mbytes
per second, which results in almost 6 Gbytes per minute. Accordingly, a 10-minute flight can produce more than 50 Gbytes. For this reason, the size of the acquired data has to be drastically reduced to enable rapid transfer, especially if real-time transmission is desired. In
particular, the hyperspectral images described in Section V
were collected at 150 and 200 FPS using 2 bytes per pixel
and band, producing 65.6 and 87.5 Mbytes per second for
the data acquired over Terrains 1 and 2, respectively.
Their size is about 1.9 Gbytes per swath for the images
of Terrain 1, and 2.3 Gbytes per swath for the images of
Terrain 2.
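These figures can be reproduced with a short back-of-the-envelope calculation (a sketch, assuming 1 Mbyte = 2^20 bytes, consistent with the numbers above):

# Quick check of the data-rate figures quoted above (1024 pixels per frame,
# 224 bands, 2 bytes per sample; 1 Mbyte taken as 2**20 bytes).
pixels, bands, bytes_per_sample = 1024, 224, 2
frame_bytes = pixels * bands * bytes_per_sample          # 458 752 bytes per frame

for fps in (150, 200):
    mbytes_per_second = frame_bytes * fps / 2**20
    print(f"{fps} FPS -> {mbytes_per_second:.1f} Mbytes/s")   # 65.6 and 87.5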
FIGURE 12: Calculated indices for a portion of the real hyperspectral image collected by the described system over Terrain 1, first swath, of size 1024×1024 pixels. Lowest values are represented in blue and highest values in red. a) RGB representation; b) NDVI; c) MSAVI; d) MCARI.
FIGURE 13: Hyperspectral imaging classification example using a 1vs1 linear SVM and the real hyperspectral data collected by the described UAV system. a) Pixels used for training the SVM classifier; b) classification map. The classification map shown in (b) represents the three elements using the following colors: blue for soil, green for vegetation and red for shadows.
The selected HyperLCA compressor is a lossy com-
pressor specifically designed for independently compressing
each of the hyperspectral frames collected by a pushbroom
scanner in a fast manner. Additionally, it guarantees high
compression ratios at a reasonably low computational complexity while keeping the image information that is potentially most useful for subsequent hyperspectral imaging applications. In particular, its computational complexity decreases for higher compression ratios (smaller compressed images), allowing the acquired data to be compressed faster when higher compression ratios are desired. As can be noticed, these characteristics fit very well with the needs of the described system. This compressor has been implemented in the Jetson TK1 making use of custom-developed kernels and the CUDA programming model to take advantage of the inherent parallelism present in the operations of the
algorithm. Additionally, extra parallelization strategies were
employed in order to pipeline the execution of the different
stages of the compressor, namely spectral transform, pre-
processing, mapping, and coding.
The developed implementation of the HyperLCA com-
pressor in the Jetson TK1 has proven to achieve real-time
compression for acquisition rates higher than 300 FPS for
all the tested configurations of the compressor. In particular, this compressor has three main input parameters: the minimum desired compression ratio, CR; the block size, BS, which corresponds to the number of hyperspectral pixels per frame; and the number of bits used for representing the output projection vectors obtained after the transformation stage of the compressor, Nbits. In this work, BS has been fixed to 1024 since this is the size of the hyperspectral frames captured by the FX10 camera. The CR parameter has been set to 12, 16 and 20, which guarantees that the compressed hyperspectral frames are at least 12, 16 and 20 times smaller than the original ones. Nbits has been set to 12 and 8.
The results obtained for each configuration are displayed in
Table 6.
BS      Nbits   CR      Maximum compression frame rate (FPS)
1024    12      12      495
1024    12      16      603
1024    12      20      697
1024    8       12      386
1024    8       16      473
1024    8       20      565
TABLE 6: Compression performance of the HyperLCA implementation developed for the Jetson TK1, where BS is the block size, Nbits the number of bits used to represent the output projection vectors and CR the compression ratio applied.
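As an illustrative reading of Table 6 (a sketch based only on the numbers reported above, taking the Nbits = 12 rows), real-time operation simply requires the maximum compression frame rate to exceed the camera frame rate, and the compressed stream shrinks roughly by the chosen CR:

# Illustrative check: the compressor is real-time whenever its maximum frame
# rate exceeds the camera frame rate, and the compressed data stream is
# roughly the raw data rate divided by CR (raw rates from the missions above).
camera_rates = {150: 65.6, 200: 87.5}                  # camera FPS -> raw Mbytes/s
hyperlca_max_fps = {12: 495, 16: 603, 20: 697}         # CR -> max FPS (Nbits = 12)

for camera_fps, raw_rate in camera_rates.items():
    for cr, max_fps in hyperlca_max_fps.items():
        print(f"camera {camera_fps} FPS, CR {cr}: "
              f"real-time = {max_fps >= camera_fps}, "
              f"compressed stream <= {raw_rate / cr:.1f} Mbytes/s")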
VII. CONCLUSION
Nowadays there is no doubt about the importance of including advanced sensors in many of the most common devices developed by industry. When such devices are combined with aerial vehicles that transport them, enormous interest is drawn from potential users who see a huge number of applications where these platforms can contribute. However, adapting industrial sensors to flying platforms is not an easy task, mainly due to the challenges posed by critical aspects such as weight, size, power budget and connectivity. In this work, problems have been identified and solutions highlighted throughout the process of validating a hyperspectral flying platform from its conception. The drone
includes a hyperspectral Specim FX10 camera, a precision
GPS, a controller and an embedded board to interact with
a modular flight control application. 3D pieces have been
constructed in order to adapt these devices to the drone
in an optimal manner. As a result, a solution is proposed
for the particular use case of precision agriculture, based
on capturing 224 spectral bands at up to 300 frames-per-
second in the VNIR range and processing different vege-
tation indices, such as the well-known NDVI, MSAVI and
MCARI, to reveal features of vineyard areas. Furthermore,
a low-complexity lossy compressor was included on-board, together with parallelization strategies, to transfer data to a ground station, allowing more complex tasks to be performed there. The experience gained in this research facilitates
the inclusion of other advanced hyperspectral sensors in
the SWIR range, uncovering innovative opportunities in
other promising applications such as surface mining, remote
sensing, environmental contamination, and in general, any
field which involves aerial inspection.
REFERENCES
[1] J. A. Berni, P. J. Zarco-Tejada, M. D. Suárez Barranco, and E. Fer-
eres Castiel, “Thermal and narrow-band multispectral remote sensing for
vegetation monitoring from an unmanned aerial vehicle,” Institute of
Electrical and Electronics Engineers, 2009.
[2] Z. Li, Y. Liu, R. Walker, R. Hayward, and J. Zhang, “Towards automatic
power line detection for a uav surveillance system using pulse coupled
neural filter and an improved hough transform,” Machine Vision and
Applications, vol. 21, no. 5, pp. 677–686, 2010.
[3] A. Birk, B. Wiggerich, H. Bülow, M. Pfingsthorn, and S. Schwertfeger,
“Safety, security, and rescue missions with an unmanned aerial vehicle
(uav),” Journal of Intelligent & Robotic Systems, vol. 64, no. 1, pp. 57–76,
2011.
[4] K. Daniel and C. Wietfeld, “Using public network infrastructures for
uav remote sensing in civilian security operations,” DORTMUND UNIV
(GERMANY FR), Tech. Rep., 2011.
[5] T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, and J. J. Sousa,
“Hyperspectral imaging: A review on uav-based sensors, data processing
and applications for agriculture and forestry,” Remote Sensing, vol. 9,
no. 11, p. 1110, 2017.
[6] E. R. H. Jr. and C. S. T. Daughtry, “What good are
unmanned aircraft systems for agricultural remote sensing and
precision agriculture?” International Journal of Remote Sensing,
vol. 39, no. 15-16, pp. 5345–5376, 2018. [Online]. Available:
https://doi.org/10.1080/01431161.2017.1410300
[7] J. Huang, H. Wang, Q. Dai, and D. Han, “Analysis of ndvi data for crop
identification and yield estimation,” IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing, vol. 7, no. 11, pp. 4374–
4384, Nov 2014.
[8] C. Rey-Caramés, M. P. Diago, M. P. Martín, A. Lobo, and
J. Tardaguila, “Using rpas multi-spectral imagery to characterise vigour,
leaf development, yield components and berry composition variability
within a vineyard,” Remote Sensing, vol. 7, no. 11, pp. 14 458–14 481,
2015. [Online]. Available: http://www.mdpi.com/2072-4292/7/11/14458
[9] Specim, “Specim FX1 Series hyperspectral cameras [Online],”
http://www.specim.fi/fx/.
[10] DJI, “MATRICE 600 PRO [Online],” https://www.dji.com/matrice600.
[11] IDS, “IDS sensor UI-3140CP [Online],” https://en.ids-
imaging.com/store/ui-3140cp-rev-2.html.
[12] NVIDIA, “Jetson TK-1 developer kit [Online],”
https://www.nvidia.com/object/jetson-tk1-embedded-dev-kit.html.
[13] R. Guerra, Y. Barrios, M. Díaz, L. Santos, S. López, and R. Sarmiento,
“A new algorithm for the on-board compression of hyperspectral images,”
Remote Sensing, vol. 10, no. 3, p. 428, 2018.
[14] H. Aasen, E. Honkavaara, A. Lucieer, and P. Zarco-Tejada, “Quantitative
Remote Sensing at Ultra-High Resolution with UAV Spectroscopy:
A Review of Sensor Technology, Measurement Procedures, and Data
Correction Workflows,” Remote Sensing, vol. 10, no. 1091, 2018.
[Online]. Available: https://www.mdpi.com/2072-4292/10/7/1091
[15] J. Yue, G. Yang, C. Li, Z. Li, Y. Wang, H. Feng, and B. Xu, “Estimation
of Winter Wheat Above-Ground Biomass Using Unmanned Aerial
Vehicle-Based Snapshot Hyperspectral Sensor and Crop Height Improved
Models,” Remote Sensing, vol. 9, no. 708, 2017. [Online]. Available:
https://www.mdpi.com/2072-4292/9/7/708
[16] R. Hruska, J. Mitchell, M. Anderson, and N. F. Glenn, “Radiometric and
geometric analysis of hyperspectral imagery acquired from an unmanned
aerial vehicle,” Remote Sensing, vol. 4, no. 9, pp. 2736–2752, 2012.
[Online]. Available: http://www.mdpi.com/2072-4292/4/9/2736
[17] A. Habib, Y. Han, W. Xiong, F. He, Z. Zhang, and M. Crawford,
“Automated ortho-rectification of uav-based hyperspectral data over
an agricultural field using frame rgb imagery,” Remote Sensing,
vol. 8, no. 796, 2016. [Online]. Available: http://www.mdpi.com/2072-
4292/4/9/2736
[18] DJI, “RTK GNSS System [Online],” https://www.dji.com/d-rtk/info.
[19] DJI, “Gimbal Ronin-Mx [Online],” https://www.dji.com/ronin-mx/info.
[20] DJI, “DJI Mobile SDK [Online],” https://developer.dji.com/mobile-sdk/.
[21] DJI, “DJI Onboard SDK [Online],” https://developer.dji.com/onboard-
sdk/.
[22] Pleora, “Pleora eBUS SDK [Online],”
https://www.pleora.com/products/ebus-sdk/.
[23] IDS, “IDS uEye SDK [Online],” https://en.ids-imaging.com/ueye-
software-archive.html.
[24] J. Suomalainen, N. Anders, S. Iqbal, G. Roerink, J. Franke, P. Wenting,
D. Hünniger, H. Bartholomeus, R. Becker, and L. Kooistra,
“A lightweight hyperspectral mapping system and photogrammetric
processing chain for unmanned aerial vehicles,” Remote Sensing,
vol. 6, no. 11, pp. 11 013–11 030, 2014. [Online]. Available:
http://www.mdpi.com/2072-4292/6/11/11013
[25] M. Kanning, I. Kühling, D. Trautz, and T. Jarmer, “High-resolution
uav-based hyperspectral imagery for lai and chlorophyll estimations from
wheat for yield prediction,” Remote Sensing, vol. 10, no. 12, 2018.
[Online]. Available: http://www.mdpi.com/2072-4292/10/12/2000
[26] D. Turner, A. Lucieer, M. McCabe, S. Parkes, and I. Clarke, “Pushbroom
Hyperspectral Imaging from an Unmanned Aircraft System (uas) - Ge-
ometric Processingworkflow and Accuracy Assessment,” ISPRS - Inter-
national Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences, pp. 379–384, Aug. 2017.
[27] SphereOptics, “Zenith Polymer Diffusers [Online],”
https://sphereoptics.de/en/product/zenith-polymer-diffusers/?c=79.
[28] C. Rogass, C. Mielke, D. Scheffler, N. K. Boesche, A. Lausch, C. Lubitz,
M. Brell, D. Spengler, A. Eisele, K. Segl, and L. Guanter, “Reduction of
uncorrelated striping noise - applications for hyperspectral pushbroom acquisitions,” Remote Sensing, vol. 6, no. 11, pp. 11 082–11 106, 2014.
[Online]. Available: http://www.mdpi.com/2072-4292/6/11/11082
[29] P. S. Thenkabail, R. B. Smith, and E. D. Pauw,
“Hyperspectral vegetation indices and their relationships with
agricultural crop characteristics,” Remote Sensing of Environment,
vol. 71, no. 2, pp. 158 – 182, 2000. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S003442579900067X
[30] Micasense, “Micasense Rededge-MX camera [Online],”
https://www.micasense.com/rededge-mx.
[31] IDB, “Vegetation Index Database [Online],”
https://www.indexdatabase.de/.
[32] J. W. Rouse, R. H. Haas, J. A. Schell, and D. W. Deering, “Monitor-
ing the vernal advancements and retrogradation of natural vegetation,”
NASA/GSFC, Final Report, pp. 1 – 137, 1974.
[33] D. Haboudane, J. R. Miller, E. Pattey, P. J. Zarco-Tejada, and I. B.
Strachan, “Hyperspectral vegetation indices and novel algorithms for
predicting green lai of crop canopies: Modeling and validation in the
context of precision agriculture,” Remote Sensing of Environment,
vol. 90, no. 3, pp. 337 – 352, 2004. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0034425704000264
[34] B. Martini, E. Silver, W. Pickles, and P. Cocks, “Hyperspectral mineral
mapping in support of geothermal exploration: Examples from Long Valley Caldera, CA and Dixie Valley, NV, USA.”
[35] N. Yokoya, J. C.-W. Chan, and K. Segl, “Potential of resolution-enhanced
hyperspectral data for mineral mapping using simulated enmap and
sentinel-2 images,” Remote Sensing, vol. 8, no. 3, 2016. [Online].
Available: http://www.mdpi.com/2072-4292/8/3/172
[36] H. Ren and C.-I. Chang, “Automatic spectral target recognition in hy-
perspectral imagery,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 39, no. 4, pp. 1232–1249, Oct 2003.
[37] D. Manolakis, E. Truslow, M. Pieper, T. Cooley, and M. Brueggeman,
“Detection algorithms in hyperspectral imaging systems: An overview of
practical algorithms,” IEEE Signal Processing Magazine, vol. 31, no. 1,
pp. 24–33, Jan 2014.
[38] L. He, J. Li, C. Liu, and S. Li, “Recent advances on spectral-spatial hyperspectral image classification: An overview and new guidelines,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 3, pp.
1579–1597, March 2018.
[39] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader,
and J. Chanussot, “Hyperspectral unmixing overview: Geometrical, statis-
tical, and sparse regression-based approaches,” IEEE Journal of Selected
Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2,
pp. 354–379, April 2012.
[40] S. Matteoli, M. Diani, and G. Corsini, “A tutorial overview of anomaly de-
tection in hyperspectral images,” IEEE Aerospace and Electronic Systems
Magazine, vol. 25, no. 7, pp. 5–28, July 2010.
PABLO HORSTRAND was born in Las Palmas
de Gran Canaria, Spain, in 1986. He received the
double degree in industrial engineering and elec-
tronics and control engineering in 2010 and the
Master degree in telecommunication technologies
in 2011, both from the University of Las Palmas
de Gran Canaria (ULPGC), Spain. He worked
between 2011 and 2017 for ABB Switzerland,
first in the Minerals and Printing, Drives De-
partment, and last in the Traction Department,
Aargau, Switzerland. He is currently pursuing a PhD in telecommunication
technologies at the University of Las Palmas de Gran Canaria. During 2018
he was with the Royal Military Academy, Belgium, as part of his PhD,
doing research in the Signal & Image Centre Department.
RAÚL GUERRA was born in Las Palmas de
Gran Canaria, Spain, in 1988. He received the
industrial engineer degree by the University of
Las Palmas de Gran Canaria in 2012. In 2013
he received the master degree in telecommuni-
cations technologies offered by the Institute of
Applied Microelectronics, IUMA. He was funded
by this institute to do his PhD research in the
Integrated System Design Division, receiving his
PhD in Telecommunications Technologies by the
University of Las Palmas de Gran Canaria in 2017. During 2016 he
worked as a researcher in the Configurable Computing Lab in Virginia
Tech University. His current research interests include the parallelization
of algorithms for multispectral and hyperspectral images processing and
hardware implementation.
AYTHAMI RODRÍGUEZ was born in Las Pal-
mas de Gran Canaria, Spain, in 1994. He received
the degree in automatic and electronic engineer-
ing in 2016 and the Master degree in telecommu-
nication technologies in 2017, both from the Uni-
versity of Las Palmas de Gran Canaria (ULPGC),
Spain.
MARÍA DÍAZ was born in Spain in 1990. She
received the industrial engineer degree from the
University of Las Palmas de Gran Canaria, Spain,
in 2014. In 2017 she received the master degree in
systems and control engineering offered jointly
by the Universidad Complutense de Madrid and
the Universidad Nacional de Educación a Distan-
cia (UNED). She is currently working toward the
Ph.D. degree at the University of Las Palmas de
Gran Canaria, developing her research activities
at the Integrated Systems Design Division of the Institute for Applied
Microelectronics (IUMA). During 2017, she conducted a research stay
in the GIPSA-lab, University of Grenoble Alpes, France. Her research
interests include image and video processing, development of highly
parallelized algorithms for hyperspectral images processing and hardware
implementation.
SEBASTIÁN LÓPEZ (M’08 - SM’15) was born
in Las Palmas de Gran Canaria, Spain, in 1978.
He received the Electronic Engineer degree by the
University of La Laguna in 2001, obtaining re-
gional and national awards for his CV during his
degree. He got his PhD degree by the University
of Las Palmas de Gran Canaria in 2006, where
he is currently an Associate Professor, developing
his research activities at the Integrated Systems
Design Division of the Institute for Applied Mi-
croelectronics (IUMA). He is currently an Associate Editor of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING (JSTARS) and an AdCom member of the Spanish Chapter of the IEEE GEOSCIENCE AND REMOTE SENSING SOCIETY. He also was an associate editor of the IEEE TRANSACTIONS ON CONSUMER ELECTRONICS from 2008 to 2013. Additionally, he currently serves as an active reviewer of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING (JSTARS), IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, the JOURNAL OF REAL-TIME IMAGE PROCESSING, MICROPROCESSORS AND MICROSYSTEMS: EMBEDDED HARDWARE DESIGN (MICPRO), and IET Electronics Letters, among others. He is also a program committee member of different international conferences, including the SPIE Conference on Satellite Data Compression, Communication and Processing, the IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), and the SPIE Conference on High Performance Computing in Remote Sensing. Furthermore, he acted as one of the program chairs of the last two aforementioned conferences for their 2014 editions and as the program co-chair of the SPIE Conference on High Performance Computing in Remote Sensing in 2015; he has also been designated as its program co-chair for the 2016 edition. Moreover, he was the guest editor of the special issue entitled "Design and Verification of Complex Digital Systems", published in 2011 in the Elsevier Microprocessors and Microsystems: Embedded Hardware Design (MICPRO) journal, and he was one of the guest editors of the special issue entitled "Hyperspectral Remote Sensing", published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing in 2015. He has published more
than 80 papers in international journals and conferences. His current
research interests include real-time hyperspectral imaging, reconfigurable
architectures, high-performance computing systems, and image and video
processing.
JOSÉ F. LÓPEZ received the M.S. degree in
physics (specializing in electronics) from the Uni-
versity of Seville and the Ph.D. degree from
the University of Las Palmas de Gran Canaria
(ULPGC), Spain, being awarded by this Uni-
versity for his research in the field of High
Speed Integrated Systems. He has conducted his
investigations at the Institute for Applied Micro-
electronics (IUMA), where he acts as Deputy
Director since 2009. He currently lectures at the
School of Telecommunication & Electronics Engineering and at the MSc.
Program of IUMA, in the ULPGC. He was with Thomson Composants
Microondes, Orsay, France, in 1992. In 1995 he was with the Center for
Broadband Telecommunications at the Technical University of Denmark
(DTU), Lyngby, Denmark, and in 1996, 1997, 1999, and 2000, he was
visiting researcher at Edith Cowan University (ECU), Perth, Western
Australia. Presently his main areas of interest are in the field of image
processing, UAVs, hyperspectral technology and their applications. Dr.
López has been actively enrolled in more than 40 research projects funded
by the European Community, Spanish Government and international private
industries located in Europe, USA and Australia. He has written around
140 papers in national and international journals and conferences.
... RGB cameras are inexpensive, but they have a limited number of bands and face challenges in capturing the complete spectrum of the crop canopy [27]. While hyperspectral sensors excel at precisely characterizing spectral responses, they are costly and require complex data processing [28]. Multi-spectral sensors have recently gained attention in agricultural remote sensing due to their affordability and inclusion of important bands like red edge and near-infrared, which are crucial for detecting various agricultural parameters like vegetation health, chlorophyll content, crop stress levels, crop biomass, and crop yield estimation [29]. ...
Article
Full-text available
This study explored how to use UAV-based multi-spectral imaging, a plot detection model, and machine learning (ML) algorithms to predict wheat grain yield at the field scale. Multispectral data was collected over several weeks using the MicaSense RedEdge-P camera. Ground truth data on vegetation indices was collected utilizing portable phenotyping instruments, and agronomic data was collected manually. The YOLOv8 detection model was utilized for field scale wheat plot detection. Four ML algorithms–decision tree (DT), random forest (RF), gradient boosting (GB), and extreme gradient boosting (XGBoost) were used to evaluate wheat grain yield prediction using normalized difference vegetation index (NDVI), normalized difference red edge index (NDRE), and green NDVI (G-NDVI) data. The results demonstrated the RF algorithm's predicting ability across all growth stages, with a root mean square error (RMSE) of 43 grams per plot (g/p) and a coefficient of determination ( $R^{2}$ ) value of 0.90 for NDVI data. For NDRE data, DT outperformed other models, with an RMSE of 43 g/p and an $R^{2}$ of 0.88. GB exhibited the highest predictive accuracy for G-NDVI data, with an RMSE of 42 g/p and an $R^{2}$ value of 0.89. The study integrated isogenic bread wheat sister lines and checked cultivars differing in grain yield, grain protein, and other agronomic traits to facilitate the identification of high-yield performers. The results show the potential use of UAV-based multispectral imaging combined with a detection model and machine learning in various precision agriculture applications, including wheat breeding, agronomy research, and broader agricultural practices.
... Significant research efforts have focused on the development of techniques and algorithms to retrieve water quality parameters, e.g. from UAV-captured hyperspectral images (HSI). On-board compute installed alongside hyperspectral imagers can enable the rapid evaluation of spectral indices from HSI band ratios [6]. These band ratios and polynomial combinations of bands have been used to successfully invert optically active water quality parameters such as turbidity directly from Disclaimer/Publisher's Note: The statements, opinions, and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). ...
Preprint
Full-text available
Unmanned Aerial Vehicles (UAVs) equipped with hyperspectral imagers have emerged as an essential technology for the characterization of inland water bodies. The high spectral and spatial resolutions of these systems enable the retrieval of a plethora of optically-active water quality parameters via band ratio algorithms and machine learning methods. However, fitting and validating these models requires access to sufficient quantities of in situ reference data which are time-consuming and expensive to obtain. In this study, we demonstrate how the Generative Topographic Mapping (GTM), a Bayesian realization of the Self-organizing Map, can be used to visualize high-dimensional hyperspectral imagery and extract spectral signatures corresponding to unique endmembers present in the water. Using data collected across a North Texas pond, we first apply the GTM to visualize the distribution of captured reflectance spectra revealing small-scale spatial variability of water composition. Next, we demonstrate how the nodes of the fitted GTM can be interpreted as unique spectral endmembers. Using extracted endmembers together with the normalized spectral similarity score, we are able to efficiently map the abundance of near shore algae as well as the evolution of a rhodamine tracer dye used to simulate water contamination by a localized source.
... These sensors can acquire image data with different bands and resolutions, and provide rich information sources for subsequent processing. Image preprocessing is to improve image quality, reduce noise, enhance image features and accurately correct image geometry [8]. Common image preprocessing operations include image filtering, enhancement, geometric correction, color space conversion, etc. ...
Article
Full-text available
The purpose of this paper is to analyze the application of image processing in UAV and autopilot. UAV and autonomous driving technology are important development directions in the field of intelligent transportation and agricultural production, and image processing, as one of the core technologies supporting its perception and decision-making, plays a vital role. For UAV, in agricultural plant protection and natural disaster monitoring, the rapid evaluation and identification of farmland growth status and disaster situation can be realized through image processing technology, which provides important support for crop production and rescue work. For the automatic driving system, image processing technology can detect and track the road environment, traffic signs and pedestrians, and improve the driving safety and comfort of vehicles. However, image processing technology also faces some challenges in application, such as complex environmental conditions and real-time requirements. In the future, through the improvement of sensor technology, algorithm optimization and data sharing, the application of image processing technology in UAV and autonomous driving will be continuously improved, bringing more innovation and development opportunities for intelligent transportation and agricultural production.
Chapter
This chapter investigates the use of computer vision in assessing various vegetation indicators for agricultural uses. Plant health, growth, and stress levels are all assessed using vegetation indices, which provide vital information for optimal crop management. This chapter explains how unmanned aerial vehicles (UAVs) and high-end computing devices may automate the process of vegetation index computation, enabling precise and rapid agricultural decision-making by using computer vision techniques and sophisticated algorithms. The chapter opens by discussing vegetation indicators and their use in monitoring plant health and growth. It emphasizes the limits of traditional methods as well as the need for automated alternatives to improve the efficiency and accuracy of vegetation index measurements. The chapter next delves into the various computer vision algorithms used in the estimation of various vegetation indicators. It discusses image processing algorithms, feature extraction approaches, and machine learning techniques used to analyze on-field or aerial pictures collected by UAV cameras. The chapter emphasizes the significance of clever algorithms in extracting useful information from photographs and, as a result, assessing vegetation indices. In addition, the chapter provides a thorough analysis of the individual vegetation indices pertinent to agriculture. It goes through popular indices including the normalized difference vegetation index (NDVI), green normalized difference vegetation index (GNDVI), red edge normalized difference vegetation index (RENDVI), and many more. The mathematical formulations of these indicators are explained, as well as their applications in crop monitoring, water stress management, weed identification, and overall plant health evaluation. Examining theoretical concerns, the chapter includes practical examples and case studies that show how computer vision techniques may be used to measure vegetation indices. It presents real-world examples of how these technologies have been effectively used to increase crop yields, optimize resource allocation, and reduce environmental dangers by emphasizing the importance of this study topic for students, researchers, scientists, and specialists in the subject. It invites readers to investigate this topic and help build models and prototypes that benefit society and the environment. Overall, this chapter serves as a thorough reference for understanding and implementing computer vision techniques in assessing various vegetation indicators, enabling agriculture stakeholders to make educated decisions for improved crop management and production.
Article
Full-text available
Inland waters pose a unique challenge for water quality monitoring by remote sensing techniques due to their complicated spectral features and small-scale variability. At the same time, collecting the reference data needed to calibrate remote sensing data products is both time consuming and expensive. In this study, we present the further development of a robotic team composed of an uncrewed surface vessel (USV) providing in situ reference measurements and an unmanned aerial vehicle (UAV) equipped with a hyperspectral imager. Together, this team is able to address the limitations of existing approaches by enabling the simultaneous collection of hyperspectral imagery with precisely collocated in situ data. We showcase the capabilities of this team using data collected in a northern Texas pond across three days in 2020. Machine learning models for 13 variables are trained using the dataset of paired in situ measurements and coincident reflectance spectra. These models successfully estimate physical variables including temperature, conductivity, pH, and turbidity as well as the concentrations of blue–green algae, colored dissolved organic matter (CDOM), chlorophyll-a, crude oil, optical brighteners, and the ions Ca2+, Cl−, and Na+. We extend the training procedure to utilize conformal prediction to estimate 90% confidence intervals for the output of each trained model. Maps generated by applying the models to the collected images reveal small-scale spatial variability within the pond. This study highlights the value of combining real-time, in situ measurements together with hyperspectral imaging for the rapid characterization of water composition.
Article
Full-text available
Vegetation indices (VIs) are the essential parameters to be considered in the analysis of the crop growth, monitoring and in development stages. In the field of Remote Sensing, these vegetation indices play an important role especially in case of data precision; classification etc.VIs consists of spectral imagery information about a scene wherein more than two bands are utilized as to obtain the effect of properties of vegetation. Vegetation Indices (VIs) have significant role in Remote Sensing as they provide good qualitative, quantitative measure of the various vegetation cover, healthiness measurements of the vegetation, growth stage measurement etc. VIs can be used as one of the major performance metrics in evaluation of many of the parameters that are essential in performance evaluation in Remote Sensing applications .With the help advancement that has taken place in satellite imaging and other advanced imaging techniques, the combined effect of VIs with relevant algorithms are found to be the best evaluation indices for the analysis and evaluation purposes. Based on the previous studies that are carriedout has shown that, there is no consolidated mathematical expression that can best define all the possible vegetation indices, the main reason for the same is considered to be the complexity seen in combining the spectra of light, due to the lesser advancement in instrumentation and low resolutions seen in the captured images. The VIs can be used as substitute for the classifiers that are available with respect to many applications. The indices are usually captured in visible range of the spectrum mainly in the green spectra region as to classify the vegetation surfaces. This paper reviews various narrow band vegetation indices and their applications. These VIs are obtained for the Hyperspectral Imagery. The VIs isobtained by making use of ENVI (Environment for Imaging) Tool for atmospherically corrected image (using FLAASH (Fast Line of Sight)). Comparison of obtained Narrow Band Vegetation Indices (VIs) isdiscussed along with the obtained value. The observation indicates that NDVI is the best VI among all as its value is almost same as that of the reference value.
Article
ABSTRAKPupuk merupakan hal yang penting bagi tanaman. Nilai NDVI dari citra multispektral lahan pertanian dapat digunakan untuk menentukan kebutuhan pupuk pada tanaman. Pada makalah ini, telah direalisasikan layanan rekomendasi pemupukan tanaman padi berdasarkan NDVI clustering. Citra sawah diambil menggunakan kamera Multispectral Mapir Survey 3W RGN yang dipasang pada DJI Mavic 2 Pro. Penentuan kebutuhan pupuk tanaman padi dilakukan dengan menggunakan metode K-Means clustering pada nilai NDVI. Hasil yang didapat dari proses clustering dimasukan ke dalam rumus rekomendasi pemupukan yang mengacu kepada BWD. Dari hasil pengujian menunjukan platform dapat memberikan rekomendasi pemupukan untuk tanaman padi. Selisih antara hasil rekomendasi jumlah pupuk menggunakan platform dan BWD yaitu 1.29% pada pukul 10.00 pagi, 3.35% pada pukul 12.00 siang, dan 2.40% pada pukul 04.00 sore. Selisih hasil perhitungan tersebut disebabkan karena adanya perbedaan intensitas cahaya matahari.Kata kunci: multispektral, K-Means Clustering, NDVI, platform, padi, pupuk ABSTRACTFertilizer is a crucial component for plants. NDVI values from multispectral imagery of agricultural land can be used to determine fertilizer requirements. In this paper, a rice plant fertilization recommendation service based on NDVI clustering has been realized. Rice field images were taken using the Multispectral Mapir Survey 3W RGN camera mounted on the DJI Mavic 2 Pro. Determination of fertilizer needs for rice plants is carried out using the K-Means clustering method on the NDVI value. The results obtained from the clustering process are entered into the fertilization recommendation formula which refers to BWD. The test results showed that the platform can provide fertilizer recommendations for rice plants. The difference between the recommended amount of fertilizer using the platform and BWD is 1.29% at 10 a.m, 3.35% at 12 noon, and 2.40% at 4 p.m. The difference in the results of these calculations is due to differences in the intensity of sunlight.Keywords: multispectral, K-Means Clustering, NDVI, platform, rice, fertilizer
Article
Full-text available
Orchard monitoring is a vital direction of scientific research and practical application for increasing fruit production in ecological conditions. Recently, due to the development of technology and the decrease in equipment cost, the use of unmanned aerial vehicles and artificial intelligence algorithms for image acquisition and processing has achieved tremendous progress in orchards monitoring. This paper highlights the new research trends in orchard monitoring, emphasizing neural networks, unmanned aerial vehicles (UAVs), and various concrete applications. For this purpose, papers on complex topics obtained by combining keywords from the field addressed were selected and analyzed. In particular, the review considered papers on the interval 2017-2022 on the use of neural networks (as an important exponent of artificial intelligence in image processing and understanding) and UAVs in orchard monitoring and production evaluation applications. Due to their complexity, the characteristics of UAV trajectories and flights in the orchard area were highlighted. The structure and implementations of the latest neural network systems used in such applications, the databases, the software, and the obtained performances are systematically analyzed. To recommend some suggestions for researchers and end users, the use of the new concepts and their implementations were surveyed in concrete applications, such as a) identification and segmentation of orchards, trees, and crowns; b) detection of tree diseases, harmful insects, and pests; c) evaluation of fruit production, and d) evaluation of development conditions. To show the necessity of this review, in the end, a comparison is made with review articles with a related theme.
Article
Onboard image data management and sharing are the foundations for achieving cooperative data processing and analysis in multiple unmanned aerial vehicles (multi-UAVs). However, various challenges, such as the lack of efficient onboard data indices, restrict the development of multi-UAV cooperative applications. Here, we propose a novel and versatile cooperative data management framework based on a discrete grid system for multi-UAV onboard image data. First, we study the image coding methodology employed within the proposed framework. This method transforms original spatiotemporal and attribute information in images into well standardized and structured coded information. Second, we introduce a grid-based onboard image data management (Grid-OIM) approach to facilitate cooperative data management among multi-UAVs using code-based index and query methods. Finally, we applied Grid-OIM to cooperative image localization tasks. Experiments were conducted using an edge-computing platform and an embedded database. The image coding method could process >12000 images/s while maintaining excellent real-time performance. Moreover, the efficiency of creating and updating the image data index and querying the image data increased by averages of 17.6, 9.3, and 66.1 times, respectively, compared to the image data management method based on $\text{R}^{\ast} $ -Tree, highlighting the substantial advantages of this proposed method. These improvements address the demands of indexing and querying highly dynamic onboard image data effectively. Furthermore, the horizontal accuracy of image localization calculated by the cooperative localization method was improved by 43.4%–81.6% compared to that of a single UAV, enhancing reliability. Overall, Grid-OIM presents a feasible and practical solution for multi-UAV cooperative applications.
Article
Full-text available
The efficient use of nitrogen fertilizer is a crucial problem in modern agriculture. Fertilization has to be minimized to reduce environmental impacts but done so optimally without negatively affecting yield. In June 2017, a controlled experiment with eight different nitrogen treatments was applied to winter wheat plants and investigated with the UAV-based hyperspectral pushbroom camera Resonon Pika-L (400-1000 nm). The system, in combination with an accurate inertial measurement unit (IMU) and precise gimbal, was very stable and capable of acquiring hyperspectral imagery of high spectral and spatial quality. Additionally, in situ measurements of 48 samples (leaf area index (LAI), chlorophyll (CHL), and reflectance spectra) were taken in the field, which were equally distributed across the different nitrogen treatments. These measurements were used to predict grain yield, since the parameter itself had no direct effect on the spectral reflection of plants. Therefore, we present an indirect approach based on LAI and chlorophyll estimations from the acquired hyperspectral image data using partial least-squares regression (PLSR). The resulting models showed a reliable predictability for these parameters (R 2 LAI = 0.79, RMSELAI [m 2 m −2 ] = 0.18, R 2 CHL = 0.77, RMSECHL [µg cm −2 ] = 7.02). The LAI and CHL predictions were used afterwards to calibrate a multiple linear regression model to estimate grain yield (R 2 yield = 0.88, RMSEyield [dt ha −1 ] = 4.18). With this model, a pixel-wise prediction of the hyperspectral image was performed. The resulting yield estimates were validated and opposed to the different nitrogen treatments, which revealed that, above a certain amount of applied nitrogen, further fertilization does not necessarily lead to larger yield.
Article
Full-text available
In the last 10 years, development in robotics, computer vision, and sensor technology has provided new spectral remote sensing tools to capture unprecedented ultra-high spatial and high spectral resolution with unmanned aerial vehicles (UAVs). This development has led to a revolution in geospatial data collection in which not only few specialist data providers collect and deliver remotely sensed data, but a whole diverse community is potentially able to gather geospatial data that fit their needs. However, the diversification of sensing systems and user applications challenges the common application of good practice procedures that ensure the quality of the data. This challenge can only be met by establishing and communicating common procedures that have had demonstrated success in scientific experiments and operational demonstrations. In this review, we evaluate the state-of-the-art methods in UAV spectral remote sensing and discuss sensor technology, measurement procedures, geometric processing, and radiometric calibration based on the literature and more than a decade of experimentation. We follow the ‘journey’ of the reflected energy from the particle in the environment to its representation as a pixel in a 2D or 2.5D map, or 3D spectral point cloud. Additionally, we reflect on the current revolution in remote sensing, and identify trends, potential opportunities, and limitations.
Article
Full-text available
Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on-board satellites and need to be transferred to the earth’s surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have been traditionally preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data-rate of the new-generation sensors is making more critical the necessity of obtaining higher compression ratios, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed for achieving high compression ratios with a good compression performance at a reasonable computational burden. An extensive amount of experiments have been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compressing hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
Article
Full-text available
Remote sensing from unmanned aircraft systems (UAS) was expected to be an important new technology to assist farmers with precision agriculture, especially crop nutrient management. There are three advantages using UAS platforms compared to manned aircraft platforms with the same sensor for precision agriculture: (1) smaller ground sample distances, (2) incident light sensors for image calibration, and (3) canopy height models created from structure-from-motion point clouds. These developments hold promise for future data products. In order to better match vendor capabilities with farmer requirements, we classify applications into three general niches: (1) scouting for problems, (2) monitoring to prevent yield losses, and (3) planning crop management operations. The three different niches have different requirements for sensor calibration and have different costs of operation. Planning crop management operations may have the most environmental and economic benefits. However, a USDA Economic Research Report showed that only about 20% of farmers in the USA have adopted variable rate applicators; so, most farmers in the USA may not have the technology to benefit from management plans. In the near-term, monitoring to prevent yield losses from weeds, insects, and diseases may provide the most economic and environmental benefits, but the costs for data acquisition need to be reduced.
Article
Traditional imagery, provided for example by RGB and/or NIR sensors, has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms that only hyperspectral sensors can provide. This kind of high-resolution spectroscopy was first used in satellites and later in manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, UAS have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. Meanwhile, hyperspectral technology developments have consistently resulted in smaller and lighter sensors that can currently be integrated in UAS for either scientific or commercial purposes. The ability of hyperspectral sensors to measure hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics, and with the goal of providing a global view of hyperspectral-based remote sensing supported by UAV platforms, a survey covering hyperspectral sensors, the inherent data processing, and applications in both agriculture and forestry, wherein the combination of UAVs and hyperspectral sensors plays a central role, is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing, by isolating the common user from its mathematical complexity, several available toolboxes that allow direct access to level-one hyperspectral data are presented. Moreover, research works focusing on the symbiosis between UAVs and hyperspectral sensors for agriculture and forestry applications are reviewed, just before the paper's conclusions.
Article
In this study, we assess two push-broom hyperspectral sensors as carried by small (10–15 kg) multi-rotor Unmanned Aircraft Systems (UAS). We used a Headwall Photonics micro-Hyperspec push-broom sensor with 324 spectral bands (4–5 nm FWHM) and a Headwall Photonics nano-Hyperspec sensor with 270 spectral bands (6 nm FWHM), both in the VNIR spectral range (400–1000 nm). A gimbal was used to stabilise the sensors in relation to the aircraft flight dynamics; for the micro-Hyperspec, a tightly coupled dual-frequency Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and a Machine Vision Camera (MVC) were used for attitude and position determination. For the nano-Hyperspec, a navigation-grade GNSS system and IMU provided position and attitude data. This study presents the geometric results of one flight over a grass oval on which a dense Ground Control Point (GCP) network was deployed, the aim being to ascertain the geometric accuracy achievable with the system. Using the PARGE software package (ReSe – Remote Sensing Applications), we ortho-rectify the push-broom hyperspectral image strips and then quantify the accuracy of the ortho-rectification by using the GCPs as check points. The orientation (roll, pitch, and yaw) of the sensor is measured by the IMU. Alternatively, imagery from an MVC running at 15 Hz, together with accurate camera position data, can be processed with Structure from Motion (SfM) software to obtain an estimated camera orientation. In this study, we examine which of these data sources yields a flight strip with the highest geometric accuracy.
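The accuracy assessment described above reduces to comparing surveyed check-point coordinates with the coordinates measured on the ortho-rectified strip. A minimal sketch of that computation is given below; the coordinate arrays, units and the CE90 statistic are illustrative assumptions rather than the exact metrics of the study.

```python
import numpy as np

def checkpoint_accuracy(gcp_xy, measured_xy):
    """RMSE and CE90-style statistics from ground control check points.

    gcp_xy, measured_xy: (n, 2) arrays of surveyed and image-derived
    easting/northing coordinates in metres.
    """
    residuals = measured_xy - gcp_xy
    dist = np.linalg.norm(residuals, axis=1)
    rmse_x = np.sqrt(np.mean(residuals[:, 0] ** 2))
    rmse_y = np.sqrt(np.mean(residuals[:, 1] ** 2))
    rmse_xy = np.sqrt(np.mean(dist ** 2))
    ce90 = np.percentile(dist, 90)          # 90th percentile of circular error
    return rmse_x, rmse_y, rmse_xy, ce90

# Synthetic check points with small planimetric offsets
gcp = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
meas = gcp + np.array([[0.1, -0.2], [0.05, 0.1], [-0.15, 0.0], [0.2, 0.1]])
print(checkpoint_accuracy(gcp, meas))
```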
Article
Correct estimation of above-ground biomass (AGB) is necessary for accurate crop growth monitoring and yield prediction. We estimated AGB based on images obtained with a snapshot hyperspectral sensor (UHD 185 Firefly, Cubert GmbH, Ulm, Baden-Württemberg, Germany) mounted on an unmanned aerial vehicle (UAV). The UHD 185 data were used to calculate the crop height and hyperspectral reflectance of winter wheat canopies from the hyperspectral and panchromatic images. We constructed several single-parameter models for AGB estimation based on spectral parameters, such as specific bands and spectral indices (e.g., the Ratio Vegetation Index (RVI), NDVI, Greenness Index (GI) and Wide Dynamic Range VI (WDRVI)), or on crop height, as well as several models combining spectral parameters with crop height. Comparison with experimental results indicated that incorporating crop height into the models improved the accuracy of AGB estimation (the average AGB is 6.45 t/ha). The estimation accuracy of single-parameter models was low (crop height only: R2 = 0.50, RMSE = 1.62 t/ha, MAE = 1.24 t/ha; R670 only: R2 = 0.54, RMSE = 1.55 t/ha, MAE = 1.23 t/ha; NDVI only: R2 = 0.37, RMSE = 1.81 t/ha, MAE = 1.47 t/ha; partial least squares regression: R2 = 0.53, RMSE = 1.69, MAE = 1.20), but accuracy increased when crop height and spectral parameters were combined (partial least squares regression modeling: R2 = 0.78, RMSE = 1.08 t/ha, MAE = 0.83 t/ha; verification: R2 = 0.74, RMSE = 1.20 t/ha, MAE = 0.96 t/ha). Our results suggest that crop height determined from the new UAV-based snapshot hyperspectral sensor can improve AGB estimation and is advantageous for mapping applications. This new method can be used to guide agricultural management.
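The spectral indices listed above are simple band combinations that can be computed directly from reflectance, and the combined models can be emulated with a partial least squares regression on indices plus crop height. The sketch below shows one way to do this; the band choices, the WDRVI weighting coefficient (a = 0.12), the synthetic data and the use of scikit-learn's PLSRegression are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def indices(nir, red, green, a=0.12):
    """Common vegetation indices from reflectance bands (band choice assumed)."""
    ndvi = (nir - red) / (nir + red)
    rvi = nir / red
    gi = green / red                        # simple greenness (green/red) ratio
    wdrvi = (a * nir - red) / (a * nir + red)
    return np.stack([ndvi, rvi, gi, wdrvi], axis=-1)

# Synthetic per-plot data: reflectance bands, crop height (m) and AGB (t/ha)
rng = np.random.default_rng(1)
n = 120
nir = rng.uniform(0.3, 0.6, n)
red = rng.uniform(0.03, 0.1, n)
green = rng.uniform(0.05, 0.15, n)
height = rng.uniform(0.2, 1.0, n)
agb = 4.0 * height + 5.0 * (nir - red) / (nir + red) + rng.normal(0, 0.3, n)

X = np.column_stack([indices(nir, red, green), height])   # spectral + structural
pls = PLSRegression(n_components=3).fit(X, agb)
print("R^2 on training data:", pls.score(X, agb))
```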
Article
Low-cost Unmanned Aerial Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade, lightweight sensors. However, the geometric fidelity of information derived from push-broom hyperspectral scanners is quite sensitive to the position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, to automate the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
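The core matching step of such a framework can be sketched with OpenCV: detect local features in the RGB image and in a single hyperspectral band, match the descriptors, and keep only the geometrically consistent pairs. Since the paper's modified SURF is not publicly specified and plain SURF only ships in OpenCV's non-free contrib module, the sketch below substitutes ORB with a RANSAC homography check; file names and parameters are placeholders.

```python
import cv2
import numpy as np

def match_bands(rgb_gray, hyper_band, max_matches=200):
    """Find conjugate points between an RGB image and one hyperspectral band.

    ORB is used here instead of the paper's modified SURF (SURF lives in the
    non-free opencv-contrib module). Returns matched point pairs filtered by
    a RANSAC homography, a common geometric consistency check.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(rgb_gray, None)
    kp2, des2 = orb.detectAndCompute(hyper_band, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    mask = inliers.ravel() == 1
    return src[mask], dst[mask], H

# Placeholder inputs: both images must be single-channel uint8 arrays
rgb_gray = cv2.imread("rgb_ortho.png", cv2.IMREAD_GRAYSCALE)
hyper_band = cv2.imread("hyper_band_50.png", cv2.IMREAD_GRAYSCALE)
pts_rgb, pts_hyper, H = match_bands(rgb_gray, hyper_band)
print(f"{len(pts_rgb)} geometrically consistent matches found")
```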
Article
Hyperspectral images are of increasing importance in remote sensing applications. Imaging spectrometers provide semi-continuous spectra that can be used for physics-based surface cover material identification and quantification. Preceding radiometric calibrations serve as a basis for the transformation of measured signals into physics-based units such as radiance. Pushbroom sensors collect incident radiation with at least one detector array that exploits the photoelectric effect. Temporal variations of the detector characteristics with respect to the foregoing radiometric calibration cause visually perceptible along-track stripes in the at-sensor radiance data, which hamper subsequent image-based analyses. In particular, variations of the thermally induced dark current dominate and have to be reduced. In this work, a new approach is presented that efficiently reduces dark-current-related stripe noise. It integrates an across-track gradient minimization principle. The performance has been evaluated using artificially degraded whiskbroom (reference) acquisitions and real pushbroom acquisitions from EO-1 Hyperion and AISA DUAL that are significantly affected by stripe noise. A set of quality indicators has been used for the accuracy assessment. They clearly show that the new approach outperforms a limited set of tested state-of-the-art approaches and achieves very high accuracy with respect to ground truth in selected tests. It may replace recent algorithms in the Reduction of Miscalibration Effects (ROME) framework, which is broadly used to reduce radiometric miscalibrations of pushbroom data takes.
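The gradient-minimization method itself is not reproduced here; instead, the sketch below illustrates the stripe phenomenon and the simplest family of corrections, in which the statistics of each across-track detector element are matched to those of its neighbourhood. The window size, the moment-matching strategy and the synthetic data are assumptions, not the reviewed algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def destripe_moment_matching(band, window=31):
    """Reduce along-track stripes in one band of a push-broom image.

    band: (along_track, across_track) array. Each detector column's mean and
    standard deviation are matched to a smoothed version of the column
    statistics, a simple alternative to gradient-minimization destriping.
    """
    col_mean = band.mean(axis=0)
    col_std = band.std(axis=0) + 1e-9
    ref_mean = uniform_filter1d(col_mean, size=window, mode="nearest")
    ref_std = uniform_filter1d(col_std, size=window, mode="nearest")
    return (band - col_mean) / col_std * ref_std + ref_mean

# Synthetic example: smooth scene plus constant per-column (stripe) offsets
rng = np.random.default_rng(2)
scene = np.outer(np.linspace(1, 2, 500), np.linspace(1, 1.5, 300)) * 1000
stripes = scene + rng.normal(0, 30, 300)              # one offset per column
clean = destripe_moment_matching(stripes)
rough = lambda img: np.abs(np.diff(img.mean(axis=0))).mean()   # across-track roughness
print(f"roughness before: {rough(stripes):.2f}, after: {rough(clean):.2f}")
```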
Article
Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in the last four decades from being a sparse research tool into a commodity product available to a broad user community. Specially, in the last 10 years, a large number of new techniques able to take into account the special properties of hyperspectral data have been introduced for hyperspectral data processing, where hyperspectral image classification, as one of the most active topics, has drawn massive attentions. Spectral-spatial hyperspectral image classification can achieve better classification performance than its pixel-wise counterpart, since the former utilizes not only the information of spectral signature but also that from spatial domain. In this paper, we provide a comprehensive overview on the methods belonging to the category of spectral-spatial classification in a relatively unified context. First, we develop a concept of spatial dependency system that involves pixel dependency and label dependency, with two main factors: neighborhood covering and neighborhood importance. In terms of the way that the neighborhood information is used, the spatial dependency systems can be classified into fixed, adaptive, and global systems, which can accommodate various kinds of existing spectral-spatial methods. Based on such, the categorizations of single-dependency, bilayer-dependency, and multiple-dependency systems are further introduced. Second, we categorize the performings of existing spectral-spatial methods into four paradigms according to the different fusion stages wherein spatial information takes effect, i.e., preprocessing-based, integrated, postprocessing-based, and hybrid classifications. Then, typical methodologies are outlined. Finally, several representative spectral-spatial classification methods are applied on real-world hyperspectral data in our experiments.