a) The BoniRob farming robot used in the Flourish project to acquire the datasets for the experiments. The black structure under the robot includes the sensors and the lighting devices used to acquire the data. b) An image of the ground captured by the robot. c) Result of the crop/weed segmentation process of image b), where green pixels represent crop and red pixels represent weeds.

Contexts in source publication

Context 1
... order to build effective farming robots, one of the main challenges to address is the crop and weed detection task, in which the robot must identify the vegetation and distinguish between crop and weeds (see Fig. 1). Moreover, this process has to be carried out in real time in order to trigger the proper weeding actions. Machine learning methods, specifically Convolutional Neural Networks (CNNs), have been used to accomplish the crop/weed classification task (e.g., [2]-[4]). These methods make it possible to train highly discriminative visual models that ...
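The per-pixel crop/weed labelling described above, whose output is visualized in Fig. 1c, can be sketched as a simple rendering step. The class map below is a hypothetical stand-in for a CNN's per-pixel prediction, not the paper's actual model output:

```python
# Minimal sketch of rendering a crop/weed segmentation result as a
# colour overlay, as in Fig. 1c (green = crop, red = weed).
# The prediction grid here is a toy placeholder for a CNN output.

SOIL, CROP, WEED = 0, 1, 2

# Colour assigned to each class as an (R, G, B) triple
CLASS_COLOURS = {SOIL: (0, 0, 0), CROP: (0, 255, 0), WEED: (255, 0, 0)}

def render_overlay(class_map):
    """Map a 2D grid of class ids to a 2D grid of RGB triples."""
    return [[CLASS_COLOURS[c] for c in row] for row in class_map]

# Toy 2x3 prediction: one crop pixel, one weed pixel, rest soil
pred = [[SOIL, CROP, SOIL],
        [WEED, SOIL, SOIL]]
overlay = render_overlay(pred)
print(overlay[0][1])  # crop pixel -> (0, 255, 0)
```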
Context 2
... a crop/weed segmentation system. This results in 1) a lower annotation effort, since fewer real images need to be labelled, and 2) an improvement of the network's generalization capabilities and performance. To test our method, we used the open-source Sugar Beet 2016 dataset [6]. The dataset was collected by a Bosch BoniRob farm robot (see Fig. 1a) moving over a sugar beet field across different weeks. It is composed of a set of images taken by a 1296×966-pixel, 4-channel (RGB+NIR) JAI AD-13 camera mounted on the robot and facing downward. Specifically, from the Sugar Beet 2016 dataset we took a total of 1600 images, randomly chosen among different days of acquisition ...
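Given the 4-channel RGB+NIR imagery described above, a common first step in such pipelines (not necessarily the one used in this paper) is separating vegetation from soil with a normalised difference vegetation index (NDVI). A minimal sketch, assuming per-pixel red and NIR reflectances in [0, 1] and an illustrative threshold:

```python
# Sketch of NDVI-based vegetation masking for RGB+NIR pixels.
# NDVI = (NIR - Red) / (NIR + Red); living vegetation reflects strongly
# in the near-infrared band. The threshold is illustrative, not taken
# from the paper.

def ndvi(nir, red, eps=1e-8):
    """Normalised difference vegetation index for a single pixel."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir_band, red_band, threshold=0.3):
    """Return a boolean mask: True where the pixel is likely vegetation."""
    return [[ndvi(n, r) > threshold for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]

# Toy 1x3 image: healthy plant, bare soil, plant
nir_band = [[0.8, 0.3, 0.7]]
red_band = [[0.1, 0.25, 0.15]]
print(vegetation_mask(nir_band, red_band))  # [[True, False, True]]
```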

Similar publications

Article
Full-text available
Rural farm wages are one of the most important tools for measuring the condition of rural people more accurately. Using secondary data, the present study attempts to examine both money and real wage trends for different agricultural operations for male and female agricultural labourers of Assam. It has been observed that both money and real...

Citations

... Secondly, the hardware system could be enhanced to produce distinct signals for each weed class, enabling the sprayer to employ species-specific herbicides. Lastly, there is potential for exploring additional data augmentation techniques for weed detection, such as Generative methods [11] and diffusion methods [12]. ...
... Therefore, it is important to investigate other methods that can facilitate better training of RGB-D models, such as data augmentation and transfer learning from the RGB domain. However, data augmentation [25][26][27][28][29][30] and transfer learning [31,32], commonly used in state-of-the-art RGB detection models, are not one-to-one applicable to RGB-D data. This hampers the training of deep networks for RGB-D data. ...
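The geometric data augmentation mentioned in the snippet above applies identically to colour and depth channels, which is part of why it transfers well to RGB-D data. A toy sketch of two such transforms on a 2D grid:

```python
# Sketch of two geometric augmentations that apply unchanged to colour
# and depth channels alike: horizontal flip and 90-degree rotation.

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the grid 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

img = [[1, 2],
       [3, 4]]
print(hflip(img))  # [[2, 1], [4, 3]]
print(rot90(img))  # [[2, 4], [1, 3]]
```

Because these transforms permute pixel positions without touching pixel values, the same function can be applied to an RGB image and its aligned depth map without breaking their registration.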
Article
Full-text available
Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. This dataset was made publicly available and was used to evaluate two novel plant detection models: the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. For ease of use with existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference model, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed to 2D images. It was found that geometric data augmentation and transfer learning were equally effective for both the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, it was found that the addition of depth data did not improve plant detection at high vegetation densities.
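The depth-to-2D colour encoding mentioned in the abstract can be illustrated with a simple normalisation step. This is a sketch under assumptions: raw depth values in millimetres and a plain grayscale rescaling, whereas real systems often use richer encodings (e.g., a jet colormap or HHA):

```python
# Sketch: encode a raw depth map as an 8-bit 2D image so it can be fed
# to standard 2D CNN architectures. Grayscale is used here for brevity;
# the clamp keeps out-of-range depths from overflowing the 0..255 range.

def encode_depth(depth_map, d_min, d_max):
    """Linearly rescale depth values (in mm) to 0..255 intensities."""
    span = max(d_max - d_min, 1)
    out = []
    for row in depth_map:
        out.append([round(255 * (min(max(d, d_min), d_max) - d_min) / span)
                    for d in row])
    return out

depth = [[500, 750, 1000]]             # camera-to-ground distances in mm
print(encode_depth(depth, 500, 1000))  # [[0, 128, 255]]
```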
... Additionally, several research gaps are identified that might be filled to increase transparency in identifying plant diseases even before their symptoms are plainly visible. Liu et al. [12,13] propose a novel model called Leaf GAN, based on generative adversarial networks (GANs), to produce images of four different grape leaf diseases for training identification models, addressing the shortage of training photos of grape leaf diseases. The dense connectivity strategy and instance normalisation are combined into an effective discriminator that distinguishes between real and fake disease images by exploiting its excellent feature-extraction capabilities on grape leaf lesions. ...
Chapter
Full-text available
Agriculture has long been a vital source of sustenance. More than 60% of the world's population relies mainly on agricultural sources for food, according to related statistics. Plant infections, however, are a catastrophic issue that seriously hampers agricultural productivity: plant diseases cause a loss of agricultural productivity of about 25% per year. Potato crops have many benefits for human life. One of the most valuable is their carbohydrate content, since carbohydrates are a leading food for humans, so the development of potato crop agriculture is significant for the sustainability of human life. There are several obstacles to developing potato farming, including diseases that attack potato leaves; early blight (Alternaria solani) and late blight (Phytophthora infestans de Bary) are among the most harmful to potato crops. Moreover, the percentage of crops that fail to grow increases due to the erroneous and tardy identification of plant diseases. Using a diagnostic and detection system based on a hybrid CNN (convolutional neural network) with an SVM (support vector machine), built on current deep learning (DL) technology, we offer an efficient and accurate way to identify plant illnesses early and reduce plant output losses. Potato leaf disease detection can be addressed with the help of informatics technology and digital image processing. From the input photos of the supporting training dataset, we utilised a CNN to extract the best features of the disease characteristics, and then applied an SVM to those features for classification. 1900 photos of potato leaves were used to train the deep learning model, and about 950 images were utilised for testing. We proposed the CNN model and trained it to strong accuracy on the dataset. In addition, we employed ResNet50, a transfer learning model, and obtained accuracy superior to our proposed CNN model.
Notably, the accuracy of our proposed hybrid CNN (ResNet50) with SVM is 97.3%, trained on the same potato leaf image dataset, and the test data accuracy is 95.6%.
Keywords: Plant disease detection; Potato leaf image; Deep Learning; CNN; ResNet50; SVM
... To overcome the limitation of generating sufficient training data for vegetation segmentation, we tested an approach based on image composition and domain transfer. Several earlier studies have implemented similar strategies, or components of the strategy implemented here, especially in the context of weed detection or segmentation (e.g., [45][46][47][48]), although these studies typically had a strong focus on the generation of scenes with sparse vegetation and weed plants with a rosette-like growth habit. The approach presented here was chosen because it was expected to have a high chance of success: composite images contain all the features that are typically observed in real images of wheat stands, including all components of a staygreen or senescing wheat canopy, complex lighting, and richly textured and highly variable soil background. ...
Article
Full-text available
Maintenance of sufficiently healthy green leaf area after anthesis is key to ensuring an adequate assimilate supply for grain filling. Tightly regulated age-related physiological senescence and various biotic and abiotic stressors drive overall greenness decay dynamics under field conditions. Besides direct effects on green leaf area in terms of leaf damage, stressors often anticipate or accelerate physiological senescence, which may multiply their negative impact on grain filling. Here, we present an image processing methodology that enables the monitoring of chlorosis and necrosis separately for ears and shoots (stems + leaves) based on deep learning models for semantic segmentation and color properties of vegetation. A vegetation segmentation model was trained using semisynthetic training data generated using image composition and generative adversarial neural networks, which greatly reduced the risk of annotation uncertainties and annotation effort. Application of the models to image time series revealed temporal patterns of greenness decay as well as the relative contributions of chlorosis and necrosis. Image-based estimation of greenness decay dynamics was highly correlated with scoring-based estimations ( r ≈ 0.9). Contrasting patterns were observed for plots with different levels of foliar diseases, particularly septoria tritici blotch. Our results suggest that tracking the chlorotic and necrotic fractions separately may enable (a) a separate quantification of the contribution of biotic stress and physiological senescence on overall green leaf area dynamics and (b) investigation of interactions between biotic stress and physiological senescence. The high-throughput nature of our methodology paves the way to conducting genetic studies of disease resistance and tolerance.
... This implied that there were 1000 (800+200) images for each concentration level, i.e., a total of 4,000 images to train the VGG19 network with SoftMax and SVM classifiers. The generated synthetic images were realistic and very close to the real images, and therefore they were used to train the VGG19 network with SoftMax and SVM classifiers for improved accuracy and better generalization, as shown by Giuffrida et al. (2017) and Fawakherji et al. (2020). ...
... Generative adversarial networks (GANs) were primarily used to enhance the training process by adding to the manually labeled data in a semi-supervised manner. The authors of [134,135] used semi-supervised GANs (SGAN) and cGAN, respectively. In both studies, GAN architectures were outperformed by CNNs for higher labeling rates. ...
Article
Full-text available
Unmanned aerial vehicles (UAVs) are increasingly being integrated into the domain of precision agriculture, revolutionizing the agricultural landscape. Specifically, UAVs are being used in conjunction with machine learning techniques to solve a variety of complex agricultural problems. This paper provides a careful survey of more than 70 studies that have applied machine learning techniques utilizing UAV imagery to solve agricultural problems. The survey examines the models employed, their applications, and their performance, spanning a wide range of agricultural tasks, including crop classification, crop and weed detection, cropland mapping, and field segmentation. Comparisons are made among supervised, semi-supervised, and unsupervised machine learning approaches, including traditional machine learning classifiers, convolutional neural networks (CNNs), single-stage detectors, two-stage detectors, and transformers. Lastly, future advancements and prospects for UAV utilization in precision agriculture are highlighted and discussed. The general findings of the paper demonstrate that, for simple classification problems, traditional machine learning techniques, CNNs, and transformers can be used, with CNNs being the optimal choice. For segmentation tasks, UNETs are by far the preferred approach. For detection tasks, two-stage detectors delivered the best performance. On the other hand, for dataset augmentation and enhancement, generative adversarial networks (GANs) were the most popular choice.
... Generative Adversarial Networks (GANs) were primarily used to enhance the training process by adding to the manually labelled data in a semi-supervised manner. References [125], [126] used semi-supervised GANs (SGAN) and cGAN, respectively. In both works, GAN architectures were outperformed by CNNs for higher labelling rates. ...
Preprint
Full-text available
Unmanned Aerial Vehicles (UAV) are increasingly being used in a variety of domains, and precision agriculture is no exception. Precision agriculture is the future of agriculture and will play a key role in the long-term sustainability of agricultural practices. This paper presents a survey of how image data collected using UAVs has been used in conjunction with machine learning techniques to support precision agriculture. Numerous agricultural applications are discussed, including classification of crop types and trees, crop detection, weed detection, cropland cover, and segmentation of farming fields. A variety of supervised, semi-supervised, and unsupervised machine learning techniques for image-based precision agriculture are compared. The survey showed that among traditional machine learning approaches, Random Forests performed better than Support Vector Machines (SVM) and the K-Nearest Neighbor algorithm (KNN) for crop/weed classification. While Convolutional Neural Networks (CNN) have been used extensively, U-Net-based models outperformed conventional CNN models for classification and segmentation tasks. Among Single Stage Detectors (SSD), the YOLO series performed relatively well. Two-stage detectors like R-CNN, FPN, and Mask R-CNN generally tended to outperform SSDs. Vision Transformers (ViT) showed promising results among transformer-based models, which did not generally perform better than CNNs. Finally, Generative Adversarial Networks (GANs) have been used to address the problems of smaller datasets and unbalanced data.
... The agricultural domain is part of the industrial and research areas for which the development of artificial methods for improvement of training datasets is vital [19][20][21]. This demand appears due to the high complexity and variability of the investigated system (plant) that has to be characterized by computer vision algorithms [22]. ...
Article
Full-text available
Large datasets catalyze the rapid expansion of deep learning and computer vision. At the same time, in many domains there is a lack of training data, which may become an obstacle to the practical application of deep computer vision models. To overcome this problem, it is popular to apply image augmentation. When a dataset contains instance segmentation masks, it is possible to apply instance-level augmentation, which operates by cutting an instance from the original image and pasting it onto new backgrounds. This article tackles a challenging setting where the same objects are present in various domains. We introduce the Context Substitution for Image Semantics Augmentation framework (CISA), which is focused on choosing good background images. We compare several ways to find backgrounds that match the context of the test set, including Contrastive Language–Image Pre-Training (CLIP) image retrieval and diffusion image generation. We show that our augmentation method is effective for classification, segmentation, and object detection across different dataset complexities and different model types. The average percentage increase in accuracy across all the tasks on a fruits and vegetables recognition dataset is 4.95%. Moreover, we show that the Fréchet Inception Distance (FID) metric has a strong correlation with model accuracy, and it can help to choose better backgrounds without model training. The average negative correlation between model accuracy and the FID between the augmented and test datasets is 0.55 in our experiments.
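The instance-level cut-and-paste augmentation described in the abstract can be sketched in a few lines. This is a toy version on 2D grids, assuming a binary instance mask and a same-sized background; the CISA framework's background-selection step (CLIP retrieval, FID scoring) is omitted here:

```python
# Sketch of instance-level augmentation: cut an object out of a source
# image via its segmentation mask and paste it onto a new background.
# Images are toy 2D grids of scalar pixel values; real implementations
# operate on multi-channel arrays and may blend the mask border.

def cut_and_paste(source, mask, background):
    """Where mask is 1, take the source pixel; elsewhere keep background."""
    return [[s if m else b
             for s, m, b in zip(srow, mrow, brow)]
            for srow, mrow, brow in zip(source, mask, background)]

source     = [[9, 9, 9], [9, 9, 9]]   # object pixels
mask       = [[0, 1, 0], [1, 1, 0]]   # 1 = object instance
background = [[1, 2, 3], [4, 5, 6]]   # new context

print(cut_and_paste(source, mask, background))  # [[1, 9, 3], [9, 9, 6]]
```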
... Researchers focused on a robust image segmentation method in [133], which can be used to distinguish between crops and weeds in real time. They also discussed the use of annotated images in various studies and noted that annotating images can be time-consuming. ...
Article
Full-text available
Weeds are one of the most harmful agricultural pests that have a significant impact on crops. Weeds are responsible for higher production costs due to crop waste and have a significant impact on the global agricultural economy. The importance of this problem has promoted the research community in exploring the use of technology to support farmers in the early detection of weeds. Artificial intelligence (AI) driven image analysis for weed detection and, in particular, machine learning (ML) and deep learning (DL) using images from crop fields have been widely used in the literature for detecting various types of weeds that grow alongside crops. In this paper, we present a systematic literature review (SLR) on current state-of-the-art DL techniques for weed detection. Our SLR identified a rapid growth in research related to weed detection using DL since 2015 and filtered 52 application papers and 8 survey papers for further analysis. The pooled results from these papers yielded 34 unique weed types detection, 16 image processing techniques, and 11 DL algorithms with 19 different variants of CNNs. Moreover, we include a literature survey on popular vanilla ML techniques (e.g., SVM, random forest) that have been widely used prior to the dominance of DL. Our study presents a detailed thematic analysis of ML/DL algorithms used for detecting the weed/crop and provides a unique contribution to the analysis and assessment of the performance of these ML/DL techniques. Our study also details the use of crops associated with weeds, such as sugar beet, which was one of the most commonly used crops in most papers for detecting various types of weeds. It also discusses the modality where RGB was most frequently used. Crop images were frequently captured using robots, drones, and cell phones. 
It also discusses algorithm accuracy: SVM outperformed all machine learning algorithms in many cases, with a highest accuracy of 99 percent, and CNNs and their variants also performed well with a highest accuracy of 99 percent, with only VGGNet giving the lowest accuracy of 84 percent. Finally, the study will serve as a starting point for researchers who wish to undertake further research in this area.
... A GAN was also used to generate synthetic images for crop and weed segmentation with the convolutional neural networks U-Net, Bonnet, SegNet, and UNet-ResNet; only UNet-ResNet outperformed the other networks in accuracy when trained on synthetic data compared to real data (Fawakherji et al., 2020). Despite its intensive use, GAN performance for synthetic data generation is acceptable but not high, and GANs have been applied mainly to image datasets, not to continuous tabular datasets. ...