Fig 2. Faster-RCNN architecture.

Source publication
Article
Full-text available
Determining animal distribution and density is important in conservation. The process is both time-consuming and labour-intensive. Drones have been used to help mitigate human-intensive tasks by covering large geographical areas over a much shorter timescale. In this paper we investigate this idea further using a proof of concept to detect rhinos a...

Contexts in source publication

Context 1
... the region proposal and object detection tasks were undertaken by the same CNN. Figure 2 shows the basic architecture of a Faster-RCNN. This model architecture allows us to perform image classification and object detection for counting. ...
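The source publication does not reproduce its training code here; as a minimal sketch of the two-stage detector Figure 2 describes, a COCO-pretrained Faster R-CNN from torchvision can be run on a single frame and its detections thresholded for counting (the file name and score threshold below are illustrative, not the authors' setup):

```python
# Minimal sketch: running a pretrained Faster R-CNN detector on one image
# (illustrative only; not the authors' published code or weights).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("aerial_frame.jpg").convert("RGB")  # hypothetical input frame
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Each detection has a box, a class label, and a confidence score;
# counting amounts to thresholding and tallying the surviving boxes.
keep = predictions["scores"] > 0.5
print(f"Detections above threshold: {int(keep.sum())}")
```

In the two-stage design, the same backbone features feed both the region proposal network and the box classifier, which is what allows a single model to handle localisation and classification jointly.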

Citations

... Drone identification and classification based on RF fingerprints involves capturing and analyzing the unique radio frequency signals emitted by drones during their operation [9][10][11]. Traditional methods for drone identification have primarily relied on visual cues, such as image or video analysis [12][13][14]. However, these approaches can be limited in scenarios where drones are visually obstructed or when distinguishing between similar drone models is challenging. ...
Article
The convergence of drones with the Internet of Things (IoT) has paved the way for the Internet of Drones (IoD), an interconnected network of drones, ground control systems, and cloud infrastructure that enables enhanced connectivity, data exchange, and autonomous operations. The integration of drones with the IoD has opened up new possibilities for efficient and intelligent aerial operations, facilitating advancements in sectors such as logistics, agriculture, surveillance, and emergency response. This paper introduces a novel approach for drone identification and classification by extracting radio frequency (RF) fingerprints and utilizing Mel spectrograms as distinctive patterns. The proposed approach converts RF signals to audio signals and leverages Mel spectrograms as essential features to train neural networks. The YAMNet neural network is employed, utilizing transfer learning techniques, to train the dataset and classify multiple drone models. The initial classification layer achieves an impressive accuracy of 99.6% in distinguishing between drones and non-drones. In the subsequent layer, the model achieves 96.9% accuracy in identifying drone types from three classes, including AR Drone, Bebop Drone, and Phantom Drone. At the third classification layer, the accuracy ranges between 96% and 97% for identifying the specific mode of each drone type. This research showcases the efficacy of Mel spectrogram-based RF fingerprints and demonstrates the potential for accurate drone identification and classification using pre-trained YAMNet neural networks.
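The abstract describes the feature-extraction step only at a high level; the snippet below is an illustrative sketch (not the authors' pipeline) of how a captured 1-D signal could be converted into a log-scaled Mel spectrogram with librosa, with the sample rate and STFT parameters chosen purely for demonstration:

```python
# Illustrative sketch: turning a 1-D RF capture, treated as an audio-rate
# waveform, into a Mel spectrogram feature map (parameters are assumptions).
import numpy as np
import librosa

sample_rate = 16000                      # assumed rate after down-conversion/resampling
rf_signal = np.random.randn(5 * sample_rate).astype(np.float32)  # placeholder capture

mel = librosa.feature.melspectrogram(
    y=rf_signal, sr=sample_rate, n_fft=1024, hop_length=256, n_mels=64
)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log-scaled Mel spectrogram

# log_mel (n_mels x frames) would then be fed to a classifier such as YAMNet
# via transfer learning, as described in the abstract.
print(log_mel.shape)
```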
... In recent years, automated wildlife detection has played a critical role in wildlife surveys (Chalmers et al., 2021; Delplanque et al., 2021; Peng et al., 2020), conservation (Khaemba and Stein, 2002; O'Brien, 2010), and ecosystem management (Austrheim et al., 2014; Harris et al., 2010) in tackling the accelerating worldwide biodiversity crisis. Up-to-date, detailed, and accurate wildlife data can help prevent biodiversity loss, ecosystem damage, and poaching (Norouzzadeh et al., 2018; Petso et al., 2021). ...
Article
Full-text available
Objective. Climatic instability, ecological disturbances, and human actions threaten the existence of many endangered wildlife species. An up-to-date, accurate, and detailed detection process therefore plays an important role in preventing biodiversity loss and supporting conservation and ecosystem management. Current state-of-the-art wildlife detection models, however, often lack strong feature extraction capability in complex environments, limiting the development of accurate and reliable detectors. Method. To this end, we present WilDect-YOLO, a deep learning (DL)-based automated high-performance detection model for real-time endangered wildlife detection. In the model, we introduce a residual block in the CSPDarknet53 backbone for strong, discriminative deep spatial feature extraction and integrate DenseNet blocks to better preserve critical feature information. To enhance receptive field representation, preserve fine-grained localized information, and improve feature fusion, a Spatial Pyramid Pooling (SPP) module and a modified Path Aggregation Network (PANet) have been implemented, resulting in superior detection under various challenging environments. Results. Evaluated on a custom endangered wildlife dataset with high variability and complex backgrounds, WilDect-YOLO obtains a mean average precision (mAP) of 96.89%, an F1-score of 97.87%, and a precision of 97.18% at a detection rate of 59.20 FPS, outperforming current state-of-the-art models. Significance. This research provides an effective and efficient detection framework that addresses the shortcomings of existing DL-based wildlife detection models by providing highly accurate species-level localized bounding-box predictions. The present work constitutes a step toward a non-invasive, fully automated, real-time animal observation system for in-field applications.
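To make the architectural terms above concrete, the following is a minimal PyTorch sketch of a Spatial Pyramid Pooling block of the kind used in YOLO-style backbones; the channel counts and pooling kernel sizes are illustrative and not taken from WilDect-YOLO:

```python
# Minimal SPP block sketch (illustrative sizes, not the WilDect-YOLO configuration).
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    def __init__(self, channels: int, pool_sizes=(5, 9, 13)):
        super().__init__()
        # Parallel max-pooling branches with different kernel sizes enlarge the
        # receptive field while preserving spatial resolution.
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes]
        )
        # A 1x1 convolution fuses the original features with the pooled branches.
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = [pool(x) for pool in self.pools]
        return self.fuse(torch.cat([x, *pooled], dim=1))

features = torch.randn(1, 256, 19, 19)   # placeholder feature map
print(SPPBlock(256)(features).shape)      # torch.Size([1, 256, 19, 19])
```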
... Technological advancements, such as the use of consumer-grade drones and convolutional neural networks for collecting and analyzing population-level data [46], could greatly increase the number of conservation prioritization studies for fauna. These technologies can reduce effort and cost and provide real-time detections for animal surveys and tracking, supporting prioritization decisions in fauna conservation [46,47]. ...
Article
Full-text available
Here, we synthesized research trends in conservation priorities for terrestrial fauna and flora across the globe from peer-reviewed articles published from 1990 to 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results showed India to have the highest number of studies on the topic (i.e., 12). In contrast, most megadiverse and biodiversity hotspot countries have only 1-3 studies. Studies on flora are better documented than those on fauna. Bio-ecological attributes are the most frequently used criteria for prioritizing choices in the conservation of fauna (55.42%) and flora (41.08%). The climatic/edaphic and taxonomic/genetic variables for flora had the lowest frequency (<5%). For fauna, the lowest value (<10%) was observed for socioeconomic and climatic/edaphic criteria. Moreover, the point scoring method (PSM) was the most frequently used in conservation prioritization, followed by the conservation priority index (CPI), correlation analysis, principal component analysis (PCA), species distribution models, and rule-based methods. The review also showed that a multiple-species approach is the most frequently used in prioritizing conservation choices for both flora and fauna. We highlight the need to increase not only conservation prioritization studies but also scientific efforts to improve biodiversity-related information in hotspot regions for an improved prioritization methodology, particularly for fauna.
... Although research estimating population density from drone surveys is emerging (e.g., Beaver et al. 2020), the vast majority of drone-based studies thus far have focused on determining species' presence (Linchant et al. 2015; Wich and Koh 2018; Wang et al. 2019). Drones can survey areas in a fraction of the time of other existing methods (Jiménez López and Mulero-Pázmány 2019), and observer bias (i.e., differences between observers in their ability to detect the presence of the animal of interest) can be minimized because multiple observers can review the images or video footage obtained (Vermeulen et al. 2013; Martin et al. 2015; Scarpa and Piña 2019) and machine learning algorithms can be used to automatically detect species or individuals (Seymour et al. 2017; Corcoran et al. 2019, 2020; Chalmers et al. 2021). In addition, a wide variety of sensors (e.g., multispectral or hyperspectral imaging outside of the typical RGB frequency range, LiDAR, chemical imaging) can be mounted on drones to achieve particular research objectives (Wich and Koh 2018; Jiménez López and Mulero-Pázmány 2019). ...
Article
Full-text available
Commercial, off-the-shelf, multirotor drones are increasingly employed to survey wildlife due to their relative ease of use and ability to cover areas quicker than traditional methods. Such drones fitted with high-resolution visual spectrum (RGB) cameras are an appealing tool for wildlife biologists. However, evaluations of the application of drones with RGB cameras for monitoring large-bodied arboreal mammals are largely lacking. We aimed to assess whether Geoffroy’s spider monkeys (Ateles geoffroyi) could be detected in RGB videos collected by drones in tropical forests. We performed 77 pre-programmed grid flights with a DJI Mavic 2 Pro drone at a height of 10 m above the maximum canopy height covering 45% of a 1-hectare polygon per flight. We flew the drone directly over spider monkeys who had just been sighted from the ground, detecting monkeys in 85% of 20 detection test flights. Monkeys were detected in 17% of 18 trial flights over areas of known high relative abundance. We never detected monkeys in 39 trial flights over areas of known low relative abundance. Proportion of spider monkey detections during drone flights was lower than other commonly employed survey methods. Agreement between video-coders was high. Overall, our results suggest that with some changes in our research design, multirotor drones with RGB cameras might be a viable survey method to determine spider monkey presence in closed-canopy forest, although its applicability for rapid assessments of arboreal mammal species' distributions seems currently unfeasible. We provide recommendations to improve survey design using drones to monitor arboreal mammal populations.
... In recent years, automated wildlife detection has played a critical role in wildlife surveys (Peng et al., 2020; Chalmers et al., 2021; Delplanque et al., 2021), conservation (Khaemba and Stein, 2002; O'Brien, 2010), and ecosystem management (Austrheim et al., 2014; Harris et al., 2010) in tackling the accelerating worldwide biodiversity crisis. Up-to-date, detailed, and accurate wildlife data can help prevent biodiversity loss, ecosystem damage, and poaching (Norouzzadeh et al., 2018; Petso et al., 2021). ...
... The training was done by tagging approximately 6000 aerial-view images of people, cars, and African animal species (elephants, rhinos, etc.), both TIR and RGB images, via the framework: www.conservationai.co.uk (accessed 9 April 2020) using the Visual Object Tagging Tool (VoTT) version 1.7.0. In order to classify objects within new images, the deep neural network extracts and 'learns' various parameters from these labelled images [28,38]. ...
... A downside of this method is therefore that training and using these models in near real time are extremely costly and involve a steep learning curve, owing to the complex network and computational requirements. However, research is currently being conducted to overcome these challenges [38,60]. ...
... A strategy called 'You Only Look Once' (YOLO) uses a lightweight model with a single CNN; although this provides faster inference, it comes at the cost of lower overall detection accuracy. Accuracy worsens further when the drone is flown at higher altitudes [38,71]. More advanced drones and software, with longer operating times and higher spatial resolutions, can overcome these limitations, but their cost is prohibitive for small NGOs in developing countries. ...
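As a concrete illustration of the single-CNN approach described above, a lightweight pretrained YOLO model can be run on a drone frame in a few lines; the ultralytics package, weights file, input image, and confidence threshold here are assumptions for demonstration, not the setup evaluated in the cited work:

```python
# Illustrative single-pass YOLO inference on one aerial frame
# (assumes the `ultralytics` package and a pretrained weights file).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # lightweight, single-CNN detector
results = model("aerial_frame.jpg", conf=0.25)  # hypothetical input image

# One forward pass yields all boxes at once, which is why inference is fast;
# accuracy, however, degrades for the small targets seen at high altitudes.
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```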
Article
Full-text available
Drones are increasingly used in conservation to tackle the illegal poaching of animals. An important aspect of using drones for this purpose is establishing the technological and environmental factors that increase the chances of successfully detecting poachers. Recent studies have investigated these factors; this research builds on them and also explores the efficacy of machine learning for automated detection. In an experimental setting with voluntary test subjects, various factors were tested for their effect on detection probability: camera type (visible spectrum, RGB, versus thermal infrared, TIR), time of day, camera angle, canopy density, and walking versus stationary test subjects. The drone footage was analysed both manually by volunteers and through automated detection software, and a generalised linear model with a logit link function was used to statistically analyse the data for both types of analysis. The findings showed that using a TIR camera improved detection probability, particularly at dawn and with a 90° camera angle. An oblique angle was more effective during RGB flights, and walking versus stationary test subjects did not influence detection with either camera. Probability of detection decreased with increasing vegetation cover. Machine-learning software achieved a detection probability of 0.558; however, it produced nearly five times more false positives than manual analysis, while manual analysis produced 2.5 times more false negatives than automated detection. Although manual analysis produced more true positive detections than automated detection in this study, the automated software gives promising results, and the advantages of automated methods over manual analysis make it a tool with the potential to be successfully incorporated into anti-poaching strategies.
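The statistical model named in the abstract is a standard binomial GLM; a minimal sketch with statsmodels is shown below, using made-up data and hypothetical predictor names (the study's actual covariates also include time of day, camera angle, and subject movement):

```python
# Illustrative sketch (not the authors' analysis script): fitting a binomial GLM
# with a logit link to detection outcomes, as described in the abstract.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data and column names, for demonstration only.
df = pd.DataFrame({
    "detected": [1, 0, 1, 1, 0, 1, 0, 1],
    "camera":   ["TIR", "RGB", "TIR", "TIR", "RGB", "RGB", "TIR", "RGB"],
    "canopy":   [0.1, 0.8, 0.3, 0.2, 0.9, 0.7, 0.4, 0.6],
})

# Binomial family with the default logit link models detection probability.
model = smf.glm("detected ~ camera + canopy", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())
```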
Chapter
Zoos and aquariums are culturally and historically important places where families enjoy their leisure time and scientists study exotic animals. Many contain buildings of great architectural merit. Some people consider zoos little more than animal prisons, while others believe they play an important role in conservation and education. Zoos have been the subject of a vast number of academic studies, whose results are scattered throughout the literature. This interdisciplinary volume brings together research on animal behaviour, visitor studies, zoo history, human-animal relationships, veterinary medicine, welfare, education, enclosure design, reproduction, legislation, and zoo management conducted at around 200 institutions located throughout the world. The book is neither 'pro-' nor 'anti-' zoo and attempts to strike a balance between praising zoos for the good work they have done in the conservation of some species, while recognising that they face many challenges in making themselves relevant in the modern world.
Chapter
The COVID-19 pandemic has brought new challenges to society and a constant search for solutions to control and reduce its effects, and technology has become essential to cope with the situation. This work proposes a real-time social distancing detection system that uses deep learning algorithms and carries out monitoring with a UAV. The system consists of two fundamental blocks: the first trains a convolutional neural network to detect people using the YOLO object detection system, while the second performs real-time video acquisition and analysis. Practical application involves detecting people and calculating the distances between them to determine whether social distancing measures are being obeyed. By increasing surveillance capabilities, authorities and security forces may control and prevent possible outbreaks of massive COVID-19 infections. Experiments were conducted in three flight scenarios at altitudes of 15, 30, and 50 m. The detection system's recall reaches values close to 90%, with the highest values obtained in flights at 30 m. In all three scenarios, the average relative error of the distance calculation did not exceed 5%, and video transmission performed well throughout the experiments. Hence, the system returns reliable results for monitoring compliance with measures such as social distancing.
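A minimal sketch of the distance-checking step this abstract describes is given below; the bounding boxes, the metres-per-pixel scale, and the 2 m threshold are illustrative assumptions rather than the authors' calibration:

```python
# Minimal sketch: flag pairs of detected people whose estimated ground
# distance falls below a threshold (all values here are illustrative).
from itertools import combinations
import math

METERS_PER_PIXEL = 0.05   # assumed scale for a given flight altitude
MIN_DISTANCE_M = 2.0      # social-distancing threshold

# Hypothetical person bounding boxes as (x1, y1, x2, y2) from the detector.
boxes = [(100, 200, 140, 300), (130, 205, 170, 305), (400, 220, 440, 320)]

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

violations = []
for (i, a), (j, b) in combinations(enumerate(boxes), 2):
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    dist_m = math.hypot(ax - bx, ay - by) * METERS_PER_PIXEL
    if dist_m < MIN_DISTANCE_M:
        violations.append((i, j, round(dist_m, 2)))

print(violations)   # e.g. [(0, 1, 1.52)] for the sample boxes above
```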