Table 1 - available via license: Creative Commons Attribution 3.0 Unported
Content may be subject to copyright.
Crop characteristics during the measurements.

Source publication
Article
Full-text available
For robotic harvesting of sweet-pepper fruits in greenhouses a sensor system is required to detect and localize the fruits on the plants. Due to the complex structure of the plant, most fruits are (partially) occluded when an image is taken from one viewpoint only. In this research the effect of multiple camera positions and viewing angles on fruit...

Contexts in source publication

Context 1
... this procedure, images were taken from five different viewpoints of every plant in the first session, and from 14 viewpoints in the second and third sessions. In total, 330 images were taken of 30 plants during the three image-acquisition sessions, as indicated in Table 1. For every session, ground-truth data were collected so that the results could be evaluated for the fruits visible in the recorded images. ...
Context 2
... number was also affected by previous harvest operations on the same plant. Table 1 gives an overview of the crop characteristics. The total number of fruits given is the count for all plants together. ...
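The image totals in Context 1 can be checked with a quick calculation, assuming the 30 plants were split evenly into 10 plants per session (a split not stated explicitly above):

```python
# Viewpoints per plant in each of the three acquisition sessions
viewpoints_per_session = [5, 14, 14]
plants_per_session = 10  # assumption: 30 plants split evenly over 3 sessions

total_images = sum(v * plants_per_session for v in viewpoints_per_session)
print(total_images)  # 330, matching the total reported in Table 1
```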

Citations

... This effect leads to a decrease in the detection accuracy (approximately 0.80 in Figure 7). Therefore, to mitigate such occlusions, exploiting the difference in detection rate depending on the shooting direction, as in Haggag et al. [30] and Hemming et al. [44], as well as indirect estimation of fruits hidden by occlusions, is considered effective [45]. It is also desirable to develop a new cultivation system that can achieve both freedom from occlusion and high yield. ...
Article
Full-text available
This study investigated the interoperability of a tomato fruit detection model trained using nighttime images from two greenhouses. The goal was to evaluate the performance of the models in different environments, including different facilities, cultivation methods, and imaging times. An innovative imaging approach is introduced to eliminate the background, highlight the target plants, and test the adaptability of the model under diverse conditions. The results demonstrate that the tomato fruit detection accuracy improves when the domain of the training dataset contains the test environment. The quantitative results showed high interoperability, achieving an average accuracy (AP50) of 0.973 in the same greenhouse and a stable performance of 0.962 in another greenhouse. The imaging approach controlled the lighting conditions, effectively eliminating the domain-shift problem. However, training on a dataset with low diversity, or inferring on plant-appearance images absent from the training dataset, decreased the average accuracy to approximately 0.80, revealing the need for new approaches to overcome fruit occlusion. Importantly, these findings have practical implications for the application of automated tomato fruit-set monitoring systems in greenhouses to enhance agricultural efficiency and productivity.
... Traditional fruit and vegetable picking is labor-intensive and costly, prompting a need for automated harvesting. However, current technology has only achieved a 33% success rate in picking sweet pepper fruit, taking an average of 94 s per fruit [35]. Selective harvesting, which involves harvesting specific sections of the plant based on quality criteria, requires the ability to recognize quality factors before harvest and to harvest without damaging the remaining crop [15]. ...
Chapter
Full-text available
Agriculture is paramount in India, serving as a critical sector for ensuring food security, nutritional well-being, long-term development, and poverty reduction. However, in recent times, the migration of youth and individuals in search of alternative employment opportunities, coupled with urbanization, has created a labor shortage in rural areas. Moreover, the COVID-19 pandemic and extreme weather conditions have posed challenges for crop phenotyping in research fields across large areas. To address these issues, technology emerges as a crucial solution. Given the magnitude of the challenges anticipated in ensuring future food security, new technologies will play a pivotal role. Technological advancements have historically aided Indian agriculture in overcoming productivity stagnation, establishing market linkages, and improving farm management. These technologies have the potential to address critical concerns faced by Indian agriculture, including declining overall productivity, depletion and degradation of natural resources, increasing demand for high-quality food, stagnant farm incomes, fragmented land ownership, and the impacts of climate change. The adoption of technology has demonstrated the ability to modernize farmers’ production processes, leading to consistent returns, reduced risks of crop failure, and higher yields. Robotics in the twenty-first century present opportunities to tackle age-old farming challenges. This review covers the ongoing research, development, and innovations in robots for research applications and smart farming, encompassing their concepts, principles, advantages, and limitations.
... One of the challenges in designing vision systems and, in particular, object detection models for greenhouse applications is the variety and complexity of the tasks the robot needs to solve. Different tasks may require the use of different labelling methodologies (e.g., bounding boxes versus polygons), may deal with different object classes (e.g., stems, tomatoes, peppers, and leaves), may be used for different applications (such as counting [25] or tracking the main stem [22]), or may have different performance constraints, such as accuracy versus inference speed. While there are many different datasets which may be suitable for training models on general fruit and vegetable detection tasks, including CropDeep [26] (containing 30 categories of fruit and vegetables at different growing stages) and Laboro Tomato [27] which can be used for tomato segmentation and ripening classification, there is no one-size-fits-all dataset available for greenhouse harvesting. ...
Article
Full-text available
Harvesting operations in agriculture are labour-intensive tasks. Automated solutions can help alleviate some of the pressure faced by rising costs and labour shortage. Yet, these solutions are often difficult and expensive to develop. To enable the use of harvesting robots, machine vision must be able to detect and localize target objects in a cluttered scene. In this work, we focus on a subset of harvesting operations, namely, tomato harvesting in greenhouses, and investigate the impact that variations in dataset size, data collection process and other environmental conditions may have on the generalization ability of a Mask-RCNN model in detecting two objects critical to the harvesting task: tomatoes and stems. Our results show that when detecting stems from a perpendicular perspective, models trained using data from the same perspective are similar to one that combines both perpendicular and angled data. We also show larger changes in detection performance across different dataset sizes when evaluating images collected from an angled camera perspective, and overall larger differences in performance when illumination is the primary source of variation in the data. These findings can be used to help practitioners prioritize data collection and evaluation efforts, and lead to larger-scale harvesting dataset construction efforts.
... During training of the model, the softmax function was used; the number of epochs was set to 10. Table 02 presents the parameters that were assigned to our model. ...
Article
Full-text available
In recent years, artificial intelligence and image processing have played an important role in agriculture, for tasks such as plant disease detection and plant health prediction. Detecting plant quality in the early stages is a difficult task due to variations in symptoms, crop species, and climate factors. Several diseases, such as late blight and early blight, influence the quantity and quality of fruits. Manual detection of fruit quality and leaf disease is a complex and time-consuming process, requiring an expert with high skills to diagnose fruit quality at an early stage. Therefore, an automated and efficient method is required that can detect fruit quality. In this research, a novel EfficientB2 convolutional neural network model is proposed to extract deep features from the dataset. The model is evaluated on the Processed Images Fruits dataset. The results show that the proposed model achieves efficient and improved results compared to previous work.
... along the orchard without overlap between consecutive frames (Bargoti and Underwood, 2017a;Apolo-Apolo et al., 2020a). However, since the ratio of visible to occluded fruit is not always constant, the use of multi-view approaches is sometimes required to increase fruit detectability (Hemming et al., 2014). Hence, to prevent double counting, fruit need to be tracked during scanning. ...
Article
Full-text available
Fruit size at harvest is an economically important variable for high-quality table fruit production in orchards and vineyards. In addition, knowing the number and size of the fruit on the tree is essential in the framework of precise production, harvest, and postharvest management. A prerequisite for analysis of fruit in a real-world environment is detection and segmentation from the background signal. In the last five years, deep-learning convolutional neural networks have become the standard method for automatic fruit detection, achieving F1-scores higher than 90 %, as well as real-time processing speeds. At the same time, different methods have been developed for, mainly, fruit size and, more rarely, fruit maturity estimation from 2D images and 3D point clouds. These sizing methods are focused on a few species like grape, apple, citrus, and mango, resulting in mean absolute error values of less than 4 mm in apple fruit. This review provides an overview of the most recent methodologies developed for in-field fruit detection/counting and sizing, as well as a few emerging examples of maturity estimation. Challenges, such as sensor fusion, highly varying lighting conditions, occlusions in the canopy, shortage of public fruit datasets, and opportunities for research transfer, are discussed.
... Since ML and DL models rely on detection of the target of interest on the images fed into the models, pod occlusion and background clutter of images (due to soybean plant architecture) could effectively reduce the accuracy of pod counting [52]. There have been several attempts to reduce the effects of occlusion and noisy images on the accuracy of yield prediction [65][66][67][68][69][70][71][72]. However, most of these methods only work for large plant organs (e.g., fruits) or when there are minimal occlusion issues. ...
Article
Full-text available
Improving soybean (Glycine max L. (Merr.)) yield is crucial for strengthening national food security. Predicting soybean yield is essential to maximize the potential of crop varieties. Non-destructive methods are needed to estimate yield before crop maturity. Various approaches, including the pod-count method, have been used to predict soybean yield, but they often face issues with the crop background color. To address this challenge, we explored the application of a depth camera to real-time filtering of RGB images, aiming to enhance the performance of the pod-counting classification model. Additionally, this study aimed to compare object detection models (YOLOV7 and YOLOv7-E6E) and select the most suitable deep learning (DL) model for counting soybean pods. After identifying the best architecture, we conducted a comparative analysis of the model’s performance by training the DL model with and without background removal from images. Results demonstrated that removing the background using a depth camera improved YOLOv7’s pod detection performance by 10.2% precision, 16.4% recall, 13.8% mAP@50, and 17.7% mAP@0.5:0.95 score compared to when the background was present. Using a depth camera and the YOLOv7 algorithm for pod detection and counting yielded a mAP@0.5 of 93.4% and mAP@0.5:0.95 of 83.9%. These results indicated a significant improvement in the DL model’s performance when the background was segmented, and a reasonably larger dataset was used to train YOLOv7.
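The precision, recall, and mAP figures reported above all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch of that underlying computation (with illustrative boxes, not data from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at mAP@0.5 only if IoU >= 0.5;
# mAP@0.5:0.95 averages AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, truth))  # ~0.333: this detection would be a false positive at 0.5
```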
... Robots designed specifically for agriculture. Platform built according to the onboard steering scheme with two front fixed wheels (operating in drift or differential mode) and two rear caster wheels: Ladybug (Van Henten et al., 2006; Hemming et al., 2014). Omnidirectional robot powered by batteries and solar panels, using an independent steering scheme: Greenbot (We put machines to work, 2019). ...
Article
Full-text available
The article contains an analytical review and perspectives of robotic technologies in horticulture. Trends in the growth of production, implementation, and sales of robots in various regions of the world are revealed. The analysis showed a lag in the introduction of agricultural robots compared to other sectors of the economy, as well as a significant gap between the countries of the Asian region and other continents. A review of the technical means of three main components of ground agricultural robots is considered: navigation systems, sensors, and platform design. Examples of constructing a tree trajectory using the A* algorithm and using the Rviz visualization tools and the Github PathFindings graphical web service are given. As a result of the conducted research, the use of Lidar sensors is recommended, which will make it possible to design the route of robotic platforms, build maps by scanning a previously unknown surrounding space, and update the resulting map at each step of the algorithm in real time. The use of existing modern sensors with an optical rangefinder with a resolution of 4.5 million pixels, a frame rate of 25 frames per second, and the ability to automatically adapt to the light level, in combination with stereo cameras and GPS/GLONASS navigation, will improve the positioning accuracy of robotic platforms and ensure autonomous operation. To perform the basic technological operations for the care of plantings with row spacings of 2.5-4 m and tree-crown heights of up to 3-3.5 m under intensive technologies, the following design parameters of a robotic platform are required: agrotechnical clearance of at least 1200 mm, adjustable track width of 1840-2080 mm, weight of not more than 400 kg, load capacity of not less than 1000 kg, and a power plant of not less than 5 kW.
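The A* trajectory construction mentioned in the abstract above can be illustrated with a minimal grid-based sketch (a hypothetical 0/1 occupancy grid, not the orchard maps from the article):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle).
    Returns the shortest path length in steps, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if g + 1 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the route detours around the obstacle row
```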
... In agriculture, target objects have high intra-class variation. Inherently, the same object will exhibit large variations caused by changes along the growth stages (Harel et al., 2020a), different viewpoints (causing the same fruit, plant, or tree to look different depending on the viewpoint) (Hemming et al., 2014), and even different features of a single object (e.g., color) (Ringdahl et al., 2017;Kurtser & Edan, 2018); for example, apricots are green at the beginning of the growth season but their color changes to yellow-orange as they grow. Similarly, some apple cultivars (e.g., Pink Lady) have both red and yellow segments, and sweet peppers can be red on one side and green on another, since they mature in a non-homogeneous manner (Harel et al., 2020a). ...
Article
Full-text available
The number of objects is considered an important factor in a variety of tasks in the agricultural domain. Automated counting can improve farmers’ decisions regarding yield estimation, stress detection, disease prevention, and more. In recent years, deep learning has been increasingly applied to many agriculture-related applications, complementing conventional computer-vision algorithms for counting agricultural objects. This article reviews progress in the past decade and the state of the art for counting methods in agriculture, focusing on deep-learning methods. It presents an overview of counting algorithms, metrics, platforms and sensors, a list of all publicly available datasets, and an in-depth discussion of various deep-learning methods used for counting. Finally, it discusses open challenges in object counting using deep learning and gives a glimpse into new directions and future perspectives for counting research. The review reveals a major leap forward in object counting in agriculture in the past decade, led by the penetration of deep learning methods into counting platforms.
... One way to overcome occlusion is to explore the plant using multiple viewpoints for detection, as acquiring multiple measurements can help fill any missing information or resolve ambiguities [6], [7]. However, the multiple viewpoints must be chosen efficiently. ...
Preprint
Full-text available
To automate harvesting and de-leafing of tomato plants using robots, it is important to search and detect the relevant plant parts, namely tomatoes, peduncles, and petioles. This is challenging due to high levels of occlusion in tomato greenhouses. Active vision is a promising approach which helps robots to deliberately plan camera viewpoints to overcome occlusion and improve perception accuracy. However, current active-vision algorithms cannot differentiate between relevant and irrelevant plant parts, making them inefficient for targeted perception of specific plant parts. We propose a semantic active-vision strategy that uses semantic information to identify the relevant plant parts and prioritises them during view planning using an attention mechanism. We evaluated our strategy using 3D models of tomato plants with varying structural complexity, which closely represented occlusions in the real world. We used a simulated environment to gain insights into our strategy, while ensuring repeatability and statistical significance. At the end of ten viewpoints, our strategy was able to correctly detect 85.5% of the plant parts, about 4 parts more on average per plant compared to a volumetric active-vision strategy. Also, it detected 5 and 9 parts more compared to two predefined strategies and 11 parts more compared to a random strategy. It also performed reliably with a median of 88.9% correctly-detected objects per plant in 96 experiments. Our strategy was also robust to uncertainty in plant and plant-part position, plant complexity, and different viewpoint sampling strategies. We believe that our work could significantly improve the speed and robustness of automated harvesting and de-leafing in tomato crop production.
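The attention-weighted view planning described above can be caricatured as scoring candidate viewpoints by how many relevant plant parts they are expected to reveal; a toy sketch with made-up class weights and counts (not the study's actual attention mechanism):

```python
# Toy next-best-view scoring: each candidate viewpoint has an estimated count of
# newly visible regions per semantic class; attention weights favour the relevant
# plant parts (tomatoes, peduncles, petioles) over irrelevant ones (leaves).
attention = {"tomato": 1.0, "peduncle": 0.8, "petiole": 0.6, "leaf": 0.05}

candidates = {
    "view_A": {"tomato": 4, "leaf": 50},
    "view_B": {"tomato": 1, "peduncle": 3, "petiole": 2},
    "view_C": {"leaf": 120},
}

def score(visible):
    """Attention-weighted sum of expected newly visible regions."""
    return sum(attention.get(cls, 0.0) * n for cls, n in visible.items())

best = max(candidates, key=lambda v: score(candidates[v]))
print(best)  # view_A: 6.5 vs 4.6 (view_B) and 6.0 (view_C)
```

Without the attention weights, view_C (many leaves) would dominate; the weighting is what steers the planner toward viewpoints that reveal harvest-relevant parts.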
... The use of multiple viewpoints planning for an eye-in-hand robotic configuration or drone field monitoring is a widely discussed issue in agri-robotic vision applications (Barth et al., 2016;Bulanon et al., 2009;Hemming et al., 2014;Kurtser & Edan, 2018a, 2018bZaenker et al., 2021Zaenker et al., , 2020. The discussion often focuses on target visibility due to the high occlusion levels requiring multiple viewpoints to overcome the problem. ...
Article
Full-text available
Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, which is a color-based variation of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between point clouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D data set collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods that rely only on depth information fail to provide quality registration for the tested data set. The proposed color-based variation outperforms state-of-the-art methods with a root mean square error (RMSE) of 1.1–1.6 cm for NDT-6D, compared with 1.1–2.3 cm for other color-information-based methods and 1.2–13.7 cm for non-color-information-based methods. The proposed method is shown to be robust against noise using the TUM RGBD data set by artificially adding noise present in an outdoor scenario. The relative pose error (RPE) increased 14% for our method, compared to an increase of 75% for the best-performing registration method. The obtained average accuracy suggests that NDT-6D registration can be used for in-field precision agriculture applications, for example, crop detection, size-based maturity estimation, and growth modeling.
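The core idea of the abstract above — finding correspondences in a joint geometry-plus-color space while measuring alignment error in 3D only — can be sketched with a simple nearest-neighbour toy (illustrative only, not the actual NDT-6D implementation; the `color_weight` parameter is a hypothetical knob):

```python
import math

def nearest_6d(query, points, color_weight=0.5):
    """Pick the point minimising squared distance in (x, y, z, r, g, b) space."""
    def dist6(p, q):
        geo = sum((a - b) ** 2 for a, b in zip(p[:3], q[:3]))
        col = sum((a - b) ** 2 for a, b in zip(p[3:], q[3:]))
        return geo + color_weight * col
    return min(points, key=lambda p: dist6(p, query))

# Two candidates at equal 3D distance; colour disambiguates the correspondence.
query = (0.0, 0.0, 0.0, 1.0, 0.0, 0.0)          # red query point
cloud = [(0.1, 0.0, 0.0, 0.0, 1.0, 0.0),        # green neighbour
         (-0.1, 0.0, 0.0, 1.0, 0.0, 0.0)]       # red neighbour
match = nearest_6d(query, cloud)

# Alignment error is then evaluated on the 3D coordinates only, as in the RMSE
# figures quoted above.
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(match[:3], query[:3])))
print(match[:3], err)  # the red neighbour is matched; 3D error is ~0.1
```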