Fig. 1. The thermal imaging camera characteristics that determine the spatial resolution and field of view.

Source publication
Article
Thermal infrared video can provide essential information about bird and bat activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provid...

Context in source publication

Context 1
... basic components of a thermal camera are shown in Fig. 1. A thermal infrared camera works similarly to an optical camera in that a lens focuses energy onto an array of receptors to produce an image. A thermal image is an intensity image, where the intensity value of each pixel is related to the amount of thermal energy incident on an element in the receptor array. A thermal image contains no ...
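The snippet above describes a thermal frame as a plain intensity array, which is what makes simple automated detection possible. Below is a minimal sketch (not taken from the source paper) of that idea: pixels noticeably warmer than the background are grouped into candidate targets. The frame size, threshold factor, and minimum blob size are illustrative assumptions.

```python
# A minimal sketch (not from the source paper): treating a thermal frame as a
# plain intensity array and flagging pixels that are warmer than the background.
# The frame shape, threshold, and minimum blob size are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detect_warm_targets(frame: np.ndarray, k: float = 4.0, min_pixels: int = 3):
    """Return centroids of connected pixel groups more than k standard
    deviations above the frame's median intensity."""
    background = np.median(frame)
    spread = frame.std()
    mask = frame > background + k * spread          # warm pixels only
    labels, n = ndimage.label(mask)                 # group touching pixels
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:                   # ignore single-pixel noise
            centroids.append((xs.mean(), ys.mean()))
    return centroids

# Example: a synthetic 240x320 frame with one warm 'bird'
frame = np.random.normal(20.0, 0.5, (240, 320))
frame[100:104, 150:156] += 8.0
print(detect_warm_targets(frame))
```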

Citations

... To that end, the Pacific Northwest National Laboratory (PNNL), with funding from the U.S. Department of Energy, has spent the last decade engineering and testing a thermal camera-based system known as the ThermalTracker-3D (TT3D). The autonomous detection and tracking performance of the TT3D has previously been evaluated on land (Matzner et al., 2015, 2020). ...
... The time-series of valid positions was then smoothed using cubic spline interpolation as implemented in the SciPy interpolate module. The smoothing reduced the uncertainty in the position data due to the limited spatial resolution of the thermal data, which depends on the distance of a target from the camera (Matzner et al., 2015). ...
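The smoothing step quoted above names SciPy's interpolate module; one plausible rendering of it is a cubic smoothing spline via scipy.interpolate.UnivariateSpline. The sample trajectory and smoothing factor below are invented for illustration and are not the paper's actual parameters.

```python
# A sketch of smoothing a noisy position time series with a cubic smoothing
# spline from scipy.interpolate, in the spirit of the step described above.
# The sample data and smoothing factor s are illustrative, not from the paper.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 10.0, 50)                           # timestamps (s)
x_noisy = np.sin(t) + np.random.normal(0, 0.15, t.size)  # noisy x-position (m)

# k=3 gives a cubic spline; s > 0 trades fidelity for smoothness.
spline = UnivariateSpline(t, x_noisy, k=3, s=0.5)
x_smooth = spline(t)                                     # smoothed positions
print(np.abs(x_smooth - np.sin(t)).mean())               # residual vs. true signal
```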
Article
Planning is underway for placement of infrastructure needed to begin offshore wind (OSW) energy generation along the West Coast of the United States and elsewhere in the Pacific Ocean. In contrast to the primarily nearshore windfarms currently in the North Atlantic, the seabird communities inhabiting Pacific Wind Energy Areas (WEAs) include significant populations of species that fly by dynamic soaring, a behavior dependent on wind and in which flight height increases steeply with wind speed. Therefore, a more precise and detailed assessment of their 3D airspace use is needed to better understand the potential collision risks that OSW turbines may present to these seabirds. Toward this end, a novel technology called the ThermalTracker-3D (TT3D), which uses thermal imaging and stereo vision, was developed to render high-resolution (on average within ±5 m) flight tracks and related behavior of seabirds. The technology was developed and deployed on a wind-profiling LiDAR buoy in the Humboldt WEA, located 34 to 57 km off California’s coast. During the at-sea deployment between 24 May and 13 August 2021, the TT3D successfully tracked birds moving between 10 and 500 m from the device, around the clock, and in all weather conditions; a total of 1407 detections and their corresponding 3D flight trajectories were recorded. Mean altitudes of detections ranged from 6 to 295 m above sea level (asl). Considering the degree of overlap with anticipated rotor swept zones (RSZ), which extend 25-260 m asl, 79% of detected birds (per m³ of airspace) moved below the RSZ, 21% moved at heights overlapping the RSZ, and 0.04% occurred at heights exceeding the RSZ. The high-resolution tracks provided valuable insight into seabird space use, especially at heights that make them vulnerable to collision during various environmental conditions (e.g., darkness, strong winds). Observations made by the TT3D will be useful in filling critical knowledge gaps related to estimating collision and avoidance between seabirds and OSW facilities in the Pacific and elsewhere. Future research will focus on enhancing the TT3D’s identification capabilities to the lowest taxon through validation studies and artificial intelligence, further contributing to seabird conservation efforts associated with OSW.
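The abstract describes recovering 3D flight tracks from stereo thermal imaging. While the TT3D's actual pipeline is not given here, any rectified two-camera system ultimately relies on the standard triangulation relation Z = f·B/d; the sketch below illustrates it with hypothetical numbers (focal length in pixels, baseline, disparity) that are not TT3D parameters.

```python
# A minimal sketch of the standard rectified-stereo range calculation that any
# two-camera system like the one described must perform in some form. Values
# (focal length, baseline, disparity) are hypothetical, not TT3D parameters.

def stereo_range(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range to a target seen by both cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("target must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 1 m baseline, 8 px disparity -> 100 m range.
print(stereo_range(800.0, 1.0, 8.0))
```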
... While such fine-scale measurements are common in multi-camera 3D filming setups (e.g., Corcoran & Hedrick, 2019; Evangelista et al., 2017; Ling et al., 2018), those approaches are significantly more complicated to deploy and require at least three times the number of cameras. While calculating these values from single-camera videos has been discussed (Matzner et al., 2015) and at least partially implemented (wingbeat frequency in captivity, Atanbori et al., 2013; wingbeat frequency with posture matching, Breslav et al., 2012; and wingbeat frequency in birds, Li & Song, 2013), another specific objective of our study was to implement this complete suite with single cameras and the output from the CNNs described above at scale in the wild. ...
Article
Estimating animal populations is essential for conservation. Censusing large congregations is especially important since these are priorities for protection, but efficiently counting hundreds of thousands of moving animals remains a challenge. We developed a deep learning-based system using consumer cameras that not only counts but also records behavioral information for large numbers of flying animals in a range of lighting conditions including near darkness. We built a robust training set without human labeling by leveraging data augmentation and background subtraction. We demonstrate this approach by estimating the size of a straw-colored fruit bat (Eidolon helvum) colony in Kasanka National Park, Zambia with cameras encircling the colony to record evening emergence. Detection of bats was robust to deteriorating lighting conditions and changing backgrounds. Combined over five days, our population estimates ranged between 750,000 and 976,000 bats with a mean of 857,233. In addition to counts, we extracted wingbeat frequency, flight altitude, and local group polarity for 639,414 individuals. This open access method is an inexpensive but powerful approach that, in addition to radial emergences from central locations, can also be applied to unidirectional movements of flying groups, such as migratory streams of birds. © 2023 The Authors. Ecosphere published by Wiley Periodicals LLC on behalf of The Ecological Society of America.
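The abstract mentions building a training set by leveraging background subtraction. As a rough, hedged illustration of that step (not the authors' actual code), the sketch below uses OpenCV's MOG2 background subtractor to count moving foreground blobs per frame; the synthetic frames and area threshold are placeholders.

```python
# A sketch of the background-subtraction step the paper describes, using
# OpenCV's MOG2 subtractor on a stream of frames. The synthetic frames and
# area threshold are hypothetical placeholders.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def count_moving_blobs(frame: np.ndarray, min_area: float = 4.0) -> int:
    """Count foreground blobs larger than min_area pixels in one frame."""
    mask = subtractor.apply(frame)                       # 0 = background
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area)

# Example on synthetic frames: a small bright 'bat' moving across a dark sky.
for step in range(30):
    frame = np.full((240, 320), 10, np.uint8)
    frame[120:124, 10 + 5 * step:16 + 5 * step] = 200
    n = count_moving_blobs(frame)
print("blobs in final frame:", n)
```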
... The typical resolution of thermal (long-wave infrared) cameras is low (320x240 or 640x480 pixels) compared to digital cameras using daylight or near-infrared, which reduces a small or distant object to a few pixels and makes it harder to track. Increasing the focal length of the camera (a higher zoom factor) will increase the detection range, but at the same time the field of view (FOV) will decrease, and objects can easily be lost when they fall outside the image (Matzner et al. 2015). The optimal focal length therefore depends on the area surveyed as well as on the size and behaviour of the subject. ...
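To make the focal-length trade-off quoted above concrete, here is a short sketch using standard pinhole-camera geometry: a longer lens narrows the field of view but increases the number of pixels a target occupies at a given range. The sensor format, pixel pitch, and target size are illustrative assumptions, not values from the cited works.

```python
# A sketch of the focal-length trade-off described above, using standard
# pinhole-camera geometry. Sensor and target numbers are illustrative.
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_mm: float) -> float:
    """Full horizontal field of view of a pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

def pixels_on_target(target_m: float, range_m: float,
                     pixel_pitch_um: float, focal_mm: float) -> float:
    """Approximate pixels spanned by a target of given size at a given range."""
    ifov_rad = (pixel_pitch_um * 1e-3) / focal_mm   # angle seen by one pixel
    return target_m / (range_m * ifov_rad)

# Example: a 640x480 core with 17 um pixels. A longer lens shrinks the FOV
# but puts more pixels on a 0.3 m bird at 100 m.
for focal in (25.0, 100.0):
    print(focal, horizontal_fov_deg(640 * 17e-3, focal),
          pixels_on_target(0.3, 100.0, 17.0, focal))
```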
... Thermal cameras come in actively cooled and uncooled variants, each with its own advantages; the details are beyond the scope of this report, but in general cooled cameras produce much better image quality (Matzner et al. 2015) at the cost of higher price, greater energy use, and a "start-up time" while the sensor cools down. Uncooled cameras tend to have lower resolution and image quality. ...
... Sensitivity (NETD) for thermal cameras. The reported detection distance of bats is up to 100 m (Matzner et al. 2015), 120 m (Lagerveld et al. 2017A) and 150 m (Mollis et al. 2019), using thermal or near-infrared cameras. Medium to large birds can be detected at about 200 m (Matzner et al. 2015). ...
... The detection of bats and small birds over the full length of a rotor blade, for instance, usually requires a large focal length, although this also generally depends on the camera mounting location, such as on structures on the lower platform or nacelle. A longer focal length, however, yields a smaller field of view (as computable with online calculators such as Fulton 2018) and a smaller depth of field (Matzner et al. 2015) (Figure 6.4). Consequently, multiple infrared cameras are generally necessary for detection of bats and small birds within the rotor-swept area, coupled with a significant increase in equipment power and data storage requirements. ...
... The physical principles that determine camera characteristics, such as resolution of images and detection limits of objects. (Modified after Matzner et al. 2015) ...
Chapter
This chapter collates the scope and limitations of technology and methods to quantify the density, identity, flight height and behaviour of bats and birds in the offshore environment, including both seabirds and migratory landbirds such as passerines and waterfowl. Such information is needed because offshore wind-farm development has reached the industrial stage in European waters and is now rapidly increasing in many countries around the world, such as South-east Asia and the USA. Development poses direct and indirect threats to wildlife, particularly in a cumulative context. Many aspects of animal–turbine interactions appear to be site, season and species specific, so that uncertainties about the magnitude of threats and potential impacts remain. Quantification of bat and bird activity and risk-associated behaviour is especially challenging in the offshore environment for practical reasons, such as technical constraints on measurements (e.g. wave impact on acoustic and radar detection) as well as for reasons of remoteness and limited accessibility, thus demanding the use of elaborate remote-sensing techniques rather than observer-based visual observations. The practicability of a range of methods and techniques to achieve these aims is introduced and described, with a focus on multisensor systems. These systems are intended to maximise the quality of bird and bat data needed as input for collision risk models. The quality of existing risk models would greatly benefit from (1) improved input data through technical advances of both well-established and more recently developed methods, such as advanced radars, thermal imaging devices, video cameras (visual and short-wave infrared light), radio and satellite telemetry, and acoustic analysis software, and (2) adjusted parameterisation, such as the effects of avoidance or attraction of wind turbines and variability in these input parameters. A recommendation for what is considered to be the best available practical solution to quantify interactions of birds and bats with offshore wind farms is provided.
... In-flight classification introduces challenges around image quality, shape, and image noise, but also presents some opportunities: the flight patterns of birds are known to vary across different species (Bruderer et al., 2010; Duberstein et al., 2012) and are used by human observers to assist recognition. Very few existing studies (Matzner et al., 2015) have made use of motion features in this way, and these have only differentiated small numbers of species. None has previously combined appearance and motion features to facilitate improved classification. ...
... Colour features are generally undetectable in low light, so applications to bat monitoring have looked more closely at motion features. For example, Cullinan et al. (2015), Matzner et al. (2015), Hristov et al. (2010), Betke et al. (2008) and Lazarevic et al. (2008) censused large bat populations. Betke et al. (2008) also estimated wing beat frequencies of individuals using pose templates and applying a Fast Fourier Transform (FFT). ...
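The FFT-based wing-beat estimation credited to Betke et al. (2008) above can be illustrated generically: track some per-frame measurement of the animal and take the dominant frequency of its spectrum. The signal, frame rate, and flap rate below are invented for illustration and are not from the cited works.

```python
# A generic sketch of FFT-based wing-beat frequency estimation, in the spirit
# of the approach credited to Betke et al. (2008) above. The per-frame signal
# (here, synthetic blob area) and frame rate are illustrative assumptions.
import numpy as np

def wingbeat_frequency_hz(signal: np.ndarray, fps: float) -> float:
    """Dominant oscillation frequency of a per-frame measurement series."""
    centered = signal - signal.mean()               # drop the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Example: a bat flapping at ~8 Hz recorded at 30 frames per second.
fps, t = 30.0, np.arange(90) / 30.0
area = 20 + 5 * np.sin(2 * np.pi * 8.0 * t) + np.random.normal(0, 0.5, t.size)
print(wingbeat_frequency_hz(area, fps))             # ~8.0
```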
... Whilst Cullinan et al. (2015) and Matzner et al. (2015) used motion features to differentiate between small numbers of bird species, other existing works concerned with automated classification of birds use appearance features from a single image of an individual bird. These approaches can be further subdivided into those that make use of the physical structure of the bird (which we refer to as part-based) and those which do not. ...
Article
The monitoring of bird populations can provide important information on the state of sensitive ecosystems; however, the manual collection of reliable population data is labour-intensive, time-consuming, and potentially error-prone. Automated monitoring using computer vision is therefore an attractive proposition, which could facilitate the collection of detailed data on a much larger scale than is currently possible. A number of existing algorithms are able to classify bird species from individual high-quality, detailed images, often using manual inputs (such as a priori parts labelling). However, deployment in the field necessitates fully automated in-flight classification, which remains an open challenge due to poor image quality, high and rapid variation in pose, and similar appearance of some species. We address this as a fine-grained classification problem, and have collected a video dataset of thirteen bird classes (ten species and another with three colour variants) for training and evaluation. We present our proposed algorithm, which selects effective features from a large pool of appearance and motion features. We compare our method to others which use appearance features only, including image classification using state-of-the-art Deep Convolutional Neural Networks (CNNs). Using our algorithm we achieved a 90% correct classification rate, and we also show that using effectively selected motion and appearance features together can produce results which outperform state-of-the-art single image classifiers. We also show that the most significant motion features improve correct classification rates by 7% compared to using appearance features alone.
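As a hedged illustration of the general idea in this abstract (concatenating appearance and motion descriptors into one feature vector for a conventional classifier), the sketch below trains a random forest on invented features. The feature names, dimensions, and labels are placeholders; this is not the authors' algorithm or their feature-selection method.

```python
# A minimal sketch of the general idea in the abstract above: concatenating
# appearance and motion descriptors per track and training a conventional
# classifier. Feature names, dimensions, and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
appearance = rng.normal(size=(n, 5))        # e.g., shape/intensity descriptors
motion = rng.normal(size=(n, 2))            # e.g., wing-beat frequency, speed
labels = (motion[:, 0] > 0).astype(int)     # toy labels driven by motion

X = np.hstack([appearance, motion])         # combined feature vector per track
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))                 # training accuracy on toy data
```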
... Very little existing work has considered this, and the primary objective of this research is to combine motion features with appearance features to provide robust automated classification of birds in flight: a technology which would be immensely useful for deployment in the field. The only existing comparable works are those presented by Duberstein et al. (2012), which addresses only broad categorisation (bats, swallows, terns and gulls) using thermal sensors, and by Matzner et al. (2015). However, both of these works are very limited: the number of species/categories is small, the data sets are small, and the species used in the experiments have obviously very different flight patterns. ...
... Whilst Cullinan et al. (2015) and Matzner et al. (2015), mentioned above, have attempted to use motion features to differentiate between small numbers of species, all other existing works concerned with the automated classification of birds use appearance features derived from a single image of an individual bird. These types of approaches can be further subdivided into those that make use of information about the physical structure of individual birds (referred to as part-based) and those which do not. ...
... The most significant relevant studies using motion features with bird species include Duberstein et al. (2012), which explores wing beat frequencies of bird and bat species but does not use these features for classification. Hitherto, works concerned with flight pattern have been carried out using recordings from tracking radar (Liechti and Bruderer, 2002; Bruderer et al., 2010; Zaugg et al., 2008) during the migration seasons of bird species, while others have been carried out using thermal cameras (Duberstein et al., 2012; Cullinan et al., 2015; Matzner et al., 2015). ...
Thesis
Bird species are recognised as important biodiversity indicators: they are responsive to changes in sensitive ecosystems, whilst population-level changes in behaviour are both visible and quantifiable. They are monitored by ecologists to determine factors causing population fluctuation and to help conserve and manage threatened and endangered species. Every five years, the health of bird populations found in the UK is reviewed based on data collected from various surveys. Currently, techniques used in surveying species include manual counting, bioacoustics and computer vision. The latter is still under development by researchers. Hitherto, no computer vision technique has been fully deployed in the field for counting species, as these techniques use high-quality, detailed images of stationary birds; this makes them impractical for deployment in the field, where most species are in flight and sometimes distant from the camera's field of view. Manual and bioacoustic techniques are the most frequently used, but they can also become impractical, particularly when counting densely populated migratory species: manual techniques are labour-intensive, whilst bioacoustics may be unusable for species that emit little or no sound. There is a need for automated systems for identifying species using computer vision and machine learning techniques, specifically for surveying densely populated migratory species. However, currently, most systems are not fully automated and use only appearance-based features for identification of species. Moreover, in the field, appearance-based features like colour may fade at a distance whilst motion-based features remain discernible. Thus, to achieve full automation, existing systems will have to combine both appearance and motion features. The aim of this thesis is to contribute to this problem by developing computer vision techniques which combine appearance and motion features to robustly classify species whilst in flight. It is believed that once this is achieved, with additional development, it will be able to support the surveying of species and studies of their behaviour. The first focus of this research was to refine appearance features previously used in other related works for use in automatic classification of species in flight. The bird appearances were described using a group of seven proposed appearance features, which have not previously been used for bird species classification. The proposed features improved the classification rate when compared to state-of-the-art systems based on appearance features alone (colour features). The second step was to extract motion features from videos of birds in flight, which were used for automatic classification. The motion of birds was described using a group of six features, which have not previously been used for bird species classification. The proposed motion features, when combined with the appearance features, improved classification rates compared with appearance or motion features alone. The classification rates were further improved using feature selection techniques, with an increase of between 2% and 6% in correct classification rates across all classifiers, which may be attributable directly to the use of motion features. The only motion features selected, irrespective of the method used, were the wing beat frequency and vicinity features, which shows how important these groups of features were to species classification.
Further analysis also revealed specific improvements in identifying species with similar visual appearance, and that using the optimal motion features improves classification accuracy significantly. We attempted a further improvement in classification accuracy using majority voting, which aggregates classification results across a set of video sub-sequences and improved classification rates considerably. The results using the combined features with majority voting outperform those without majority voting by 3% and 6% on the seven-species and thirteen-class datasets, respectively. Finally, a video dataset against which future work can be benchmarked has been collated. This dataset enables evaluation against a set of 13 species, providing an effective benchmark for further work in this area of research. The key contribution of this research is a species classification system that combines motion and appearance features, evaluated against existing appearance-only methods. This is not only the first work to combine features in this way but also the first to apply a voting technique to improve classification performance across an entire video sequence.
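The majority-voting step described in the thesis can be illustrated in a few lines: per-sub-sequence class predictions for one video are aggregated into a single label. The example class names below are invented.

```python
# A sketch of the majority-voting step described above: per-sub-sequence
# predictions for one video are aggregated into a single label. The example
# labels are invented for illustration.
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Return the most common predicted class across video sub-sequences."""
    return Counter(predictions).most_common(1)[0][0]

# Example: five sub-sequences of one flight video, three agreeing.
print(majority_vote(["gull", "tern", "gull", "gull", "tern"]))  # -> "gull"
```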
... Infrared or thermal cameras work similarly to optical or common CCD cameras, in that a lens focuses energy onto an array of receptors to produce an image. By receiving and measuring infrared radiation from the surface of an object, the camera captures information on the heat that the object is emitting and then converts this to a radiant temperature reading (James et al., 2014; Matzner et al., 2015). Thus, while CCD cameras measure radiation in the visible bands, thermal cameras detect the characteristic long-wave infrared radiation (typically wavelengths of 8-12 µm) of objects (McCafferty et al., 2011). ...
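The radiation-to-temperature conversion described in this snippet can be sketched in simplified form by inverting Planck's law at a single wavelength to obtain a brightness temperature. Real cameras rely on factory calibration, optics corrections, and emissivity assumptions; the code below is a physics-only illustration.

```python
# A simplified sketch of the radiation-to-temperature conversion described
# above: inverting Planck's law at a single wavelength to get a brightness
# temperature. Real cameras use factory calibration; this is physics-only.
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def brightness_temperature(wavelength_m: float, radiance: float) -> float:
    """Invert Planck's law to recover temperature from measured radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(1.0 + a / radiance))

# Round trip at 10 um (mid long-wave band) for a 300 K target.
L = planck_radiance(10e-6, 300.0)
print(brightness_temperature(10e-6, L))   # ~300.0
```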
Article
Livestock production to provide food for a growing world population, with increasing demand for meat and milk products, has led to a rapid growth in the scale of cattle and pig enterprises globally. However, consumers and the wider society are also increasingly concerned about the welfare, health and living conditions of farm animals. Awareness of animal needs underpins new production standards for animal health and welfare. Pig and cattle behaviour can provide information about their barn environmental situation, food and water adequacy, health, welfare and production efficiency. Real-time scoring of cattle and pig behaviours is challenging, but the increasing availability and sophistication of technology makes automated monitoring of animal behaviour practicable. Machine vision techniques, as novel technologies, can provide an automated, non-contact, non-stress and cost-effective way to achieve animal behaviour monitoring requirements. This review describes the state of the art in 3D imaging systems (i.e. depth sensor and time of flight cameras) along with 2D cameras for effectively identifying livestock behaviours, and presents automated approaches for monitoring and investigation of cattle and pig feeding, drinking, lying, locomotion, aggressive and reproductive behaviours. The performance of developed systems is reviewed in terms of sensitivity, specificity, accuracy, error rate and precision. These technologies can support the farmer by monitoring normal behaviours and early detection of abnormal behaviours in large scale enterprises.
... This technique makes use of the higher body temperature of animals compared to their surroundings for detection, although it has trouble distinguishing between different animal species of similar size and shape. This limitation was recently overcome by Matzner et al. (2015), who combined thermal videography (to detect differences in temperature) with digital photography (to identify species such as birds). However, thermal videography has many limitations. ...
Article
Impacts of human civilization on ecosystems threaten global biodiversity. In a changing environment, traditional in situ approaches to biodiversity monitoring have made significant steps forward to quantify and evaluate biodiversity at many scales, but these methods are still limited to comparatively small areas. Earth observation (EO) techniques may provide a solution to overcome this shortcoming by measuring entities of interest at different spatial and temporal scales. This paper provides a comprehensive overview of the role of EO to detect, describe, explain, predict and assess biodiversity. Here, we focus on three main aspects related to biodiversity – taxonomic diversity, functional diversity and structural diversity, which integrate different levels of organization – molecular, genetic, individual, species, populations, communities, biomes, ecosystems and landscapes. In particular, we discuss the recording of taxonomic elements of biodiversity through the identification of animal and plant species. We highlight the importance of the species traits concept for EO-based biodiversity research. Furthermore, we provide examples of spectral traits used in EO applications for quantifying taxonomic diversity, functional diversity and structural diversity. We discuss the use of EO to monitor biodiversity and habitat quality using different remote-sensing techniques. Finally, we suggest specifically important steps for a better integration of EO in biodiversity research. EO methods represent an affordable, repeatable and comparable method for measuring, describing, explaining and modelling taxonomic, functional and structural diversity. Upcoming sensor developments will provide opportunities to quantify spectral traits, currently not detectable with EO, and will surely help to describe biodiversity in more detail. Therefore, new concepts are needed to tightly integrate EO sensor networks with the identification of biodiversity. This will mean taking completely new directions in the future to link complex, large data, different approaches and models.