Fig. 6
Counting apples using the Hough circle transform. (a) Apple fruit blobs received from the previous step. (b) Hough circle transform applied, correctly estimating the apples; each green circle marks a counted apple.


Source publication
Article
Full-text available
Researchers have proposed various vision-based methods for estimating fruit quantity and performing qualitative analysis, using aerial and ground vehicles to capture fruit images in orchards. Fruit yield estimation is a challenging task owing to environmental noise such as illumination changes, color variation, overlapped fruits, cluttered environment...

Contexts in source publication

Context 1
... applying the morphological operators, we received the overlapped blobs from the previous step. In this phase, we apply the Hough circle transform to detect the overlapped and partially round objects in the image. The method successfully identifies the circular regions and helps count the apples accurately, see Fig. 6. 1) Hough circle explained: The Hough transform is a shape-localization technique in image analysis. The aim of using this technique is to obtain imperfect instances of objects within a certain class of shapes by a voting mechanism. This voting is carried out in a parameter space, from which the object candidates are obtained when ...
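A minimal sketch of this step, assuming OpenCV's HoughCircles is applied to the binary blob image; the file name and all parameter values below are illustrative placeholders rather than the settings reported in the article:

```python
# Sketch of circle detection with OpenCV's Hough circle transform.
# Parameters (dp, minDist, param1, param2, radius range) are illustrative
# placeholders, not the source article's settings.
import cv2
import numpy as np

# 'blob_mask.png' is a hypothetical binary blob image from the previous step.
mask = cv2.imread("blob_mask.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(mask, 5)  # smooth blob edges before voting

circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,   # votes accumulated in (x, y, r) parameter space
    dp=1.2,               # inverse ratio of accumulator resolution
    minDist=20,           # minimum distance between detected centres
    param1=100,           # upper Canny threshold used internally
    param2=30,            # accumulator threshold: lower -> more circles
    minRadius=10,
    maxRadius=60,
)

count = 0
output = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(output, (x, y), r, (0, 255, 0), 2)  # green circle per apple
        count += 1
print(f"Estimated apple count: {count}")
```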

Citations

... For grape picking, Rodrigo et al. selected HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern) to extract shape and texture features of grapes, and then used SVM-RBF to build a grape recognition classifier (Perez-Zavala et al., 2018). For apple harvesting, Zartash et al. used the HS color model to locate and segment apple images, and then used refinement, denoising and the Hough transform to achieve accurate localization of the apples (Kanwal et al., 2019). Gu Suhang et al. introduced the ASIFT feature to repair the hollow target regions generated by K-means clustering, and successively used the gPb contour detector and the dynamic-threshold Otsu method to generate clear and continuous target contours (Gu et al., 2017). ...
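For illustration, a rough sketch of a HOG + LBP feature pipeline feeding an RBF-kernel SVM, in the spirit of the grape classifier described above; it assumes scikit-image and scikit-learn, and the patch size, LBP settings and training data are made-up placeholders, not the cited authors' configuration:

```python
# Sketch: shape (HOG) + texture (LBP) features classified with an RBF SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def extract_features(patch):
    """Concatenate a HOG descriptor with a uniform-LBP histogram."""
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

# Hypothetical training data: 64x64 grayscale patches, fruit vs. background labels.
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=20)

X = np.array([extract_features(p) for p in patches])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print(clf.predict(X[:3]))
```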
Article
Full-text available
Pine cones are important forest products, and the picking process is complex. Aiming at the multi-object and dispersed distribution of pine cones in the forest, a machine vision detection model (EBE-YOLOv4) is designed to address the large parameter count and poor computational efficiency of the general YOLOv4, so as to achieve rapid and accurate recognition of pine cones in the forest. Taking YOLOv4 as the basic framework, the method realizes a lightweight and accurate recognition model for pine cones in the forest through optimized design of the backbone and neck networks. EfficientNet-b0 (E) is chosen as the backbone network for feature extraction to reduce parameters and improve the running speed of the model. A channel-transformation BiFPN structure (B), which improves the detection rate while ensuring the detection accuracy of the model, is introduced into the neck network for feature fusion. The neck network also adds a lightweight channel-attention module, ECA-Net (E), to counter the accuracy decline caused by the lightweight design. Meanwhile, the H-Swish activation function is used to optimize model performance, further improving accuracy at a small computational cost. 768 images of pine cones in the forest were used as experimental data, and 1536 images were obtained after data augmentation; these were divided into training and test sets at a ratio of 8:2. The CPU used in the experiment was an Intel Core i9-10885 @ 2.40 GHz, and the GPU was an NVIDIA Quadro RTX 5000. The performance of the YOLOv4 lightweight design was evaluated using precision (P), recall (R) and detection frames per second (FPS). The results showed that the precision (P) of EBE-YOLOv4 was 96.25%, the recall (R) was 82.72% and the detection speed was 64.09 FPS. Compared with the original YOLOv4, the detection precision showed no significant change, but the speed increased by 70%, demonstrating the effectiveness of the YOLOv4 lightweight design.
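A short sketch of two lightweight components named in the abstract, an ECA-style channel-attention block and the H-Swish activation, written in PyTorch for illustration; kernel size and tensor shapes are assumptions and this is not the EBE-YOLOv4 source code:

```python
# Sketch of ECA channel attention and the H-Swish activation (illustrative only).
import torch
import torch.nn as nn

class HSwish(nn.Module):
    """h-swish(x) = x * ReLU6(x + 3) / 6, a cheap approximation of swish."""
    def forward(self, x):
        return x * nn.functional.relu6(x + 3.0) / 6.0

class ECA(nn.Module):
    """Efficient Channel Attention: per-channel weights from a 1-D conv
    over the globally pooled channel descriptor (no dimensionality reduction)."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):                      # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                 # global average pooling -> (N, C)
        y = self.conv(y.unsqueeze(1))          # 1-D conv across channels -> (N, 1, C)
        w = torch.sigmoid(y).squeeze(1)        # channel weights in [0, 1]
        return x * w.unsqueeze(-1).unsqueeze(-1)

feat = torch.randn(2, 64, 32, 32)              # dummy feature map
out = HSwish()(ECA()(feat))
print(out.shape)                                # torch.Size([2, 64, 32, 32])
```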
... In the case of orchard crops like citrus, computer vision approaches are quite straightforward (Fig. 10). The yield can be estimated by directly counting the number of flowers or fruits prior to the harvesting stages (Cheng et al., 2017; Dorj et al., 2017; Kanwal et al., 2019). With the objective of estimating yield from citrus orchards, Apolo-Apolo et al. (Apolo-Apolo et al., 2020) developed a Faster R-CNN model for fruit detection. ...
Article
Full-text available
The agriculture industry is undergoing a rapid digital transformation and is growing powerful through the pillars of cutting-edge approaches like artificial intelligence and allied technologies. At the core of artificial intelligence, deep learning-based computer vision enables various agricultural activities to be performed automatically with utmost precision, making smart agriculture a reality. Computer vision techniques, in conjunction with high-quality image acquisition using remote cameras, enable non-contact and efficient technology-driven solutions in agriculture. This review contributes state-of-the-art computer vision technologies based on deep learning that can assist farmers in operations from land preparation to harvesting. Recent works in the area of computer vision were analyzed in this paper and categorized into (a) seed quality analysis, (b) soil analysis, (c) irrigation water management, (d) plant health analysis, (e) weed management, (f) livestock management and (g) yield estimation. The paper also discusses recent trends in computer vision such as generative adversarial networks (GAN), vision transformers (ViT) and other popular deep learning architectures. Additionally, this study pinpoints the challenges in implementing the solutions in the farmer’s field in real time. The overall finding indicates that convolutional neural networks are the cornerstone of modern computer vision approaches and that their various architectures provide high-quality solutions across various agricultural activities in terms of precision and accuracy. However, the success of the computer vision approach lies in building the model on a quality dataset and providing real-time solutions.
... In addition, in fruit identification, if the proposed method is applied to images with denser fruit, it becomes difficult to identify the same fruit because the distance filter may not work as intended. Furthermore, previous studies on apple detection using the Hough transform [24,25] reported that the precisions were 93.5% and 92.0%, respectively, and the proposed method shows a higher value than those reported. The proposed method is effective when applied to data with large volumes and diverse state changes, such as time-series images, because the Hough transform does not detect fruit correctly with fixed parameters. ...
Article
Full-text available
Understanding the growth status of fruits can enable precise growth management and improve product quality. Previous studies have rarely used deep learning to observe changes over time, and manual annotation is required to detect hidden regions of fruit. Thus, additional research is required for automatic annotation and tracking of fruit changes over time. We propose a system to record the growth characteristics of individual apples in real time using Mask R-CNN. To accurately detect fruit regions hidden behind leaves and other fruits, we developed a region detection model by automatically generating 3000 composite orchard images using cropped images of leaves and fruits. The effectiveness of the proposed method was verified on a total of 1417 orchard images obtained from the monitoring system, tracking the size of fruits in the images. The mean absolute percentage error between the true value manually annotated from the images and the detection value provided by the proposed method was less than 0.079, suggesting that the proposed method could extract fruit sizes in real time with high accuracy. Moreover, each prediction could capture a relative growth curve that closely matched the actual curve after approximately 150 elapsed days, even if a target fruit was partially hidden.
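For reference, a mean absolute percentage error such as the reported 0.079 can be computed as sketched below; the fruit-size arrays are made-up placeholder values, not data from the study:

```python
# MAPE between manually annotated and detected fruit sizes (placeholder values).
import numpy as np

true_sizes = np.array([52.0, 60.5, 47.3])   # hypothetical annotated sizes
pred_sizes = np.array([50.1, 63.0, 45.9])   # hypothetical detected sizes

mape = np.mean(np.abs((pred_sizes - true_sizes) / true_sizes))
print(f"MAPE = {mape:.3f}")                  # 0.036 for these placeholder values
```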
... Various research studies have been conducted in this area to make video surveillance [3] more versatile and reliable, but the detection of suspicious objects remains a challenging job in video surveillance. We need a system that distinguishes and identifies highly hazardous situations and raises alerts so that proper action can be taken. ...
Article
Full-text available
Tracking objects with fixed surveillance cameras is widely used for security purposes in public areas such as train stations, airports, parking areas, and public transportation for the prevention of terrorism. Once an object is accurately detected in the image scene, various visual algorithms can be applied for a number of applications. In this paper, we introduce a model for tracking multiple objects and detecting abandoned luggage in a real-time environment. In our model, we used the initial frames to model the background scene. Next, we used a motion model, namely background subtraction, to detect and track moving objects such as the owner and the luggage. The proposed model also maintains the position history of moving objects, followed by a frame-differencing technique to recover the luggage history and detect luggage abandoned by a human. We used the PETS2006 and PETS2007 datasets to test the proposed system in various indoor and outdoor environments with varying lighting conditions.
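A minimal sketch of the pipeline described in the abstract, using OpenCV background subtraction plus frame differencing to flag static foreground objects as candidate abandoned luggage; the video path, thresholds and area filter are illustrative assumptions, not the authors' settings:

```python
# Background subtraction + frame differencing to flag static foreground blobs.
import cv2

cap = cv2.VideoCapture("pets2006_scene.avi")   # hypothetical input clip
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break

    fg_mask = bg_subtractor.apply(frame)       # moving pixels (owner, luggage)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if prev_gray is not None:
        # Pixels that are foreground but no longer change between consecutive
        # frames are candidates for static (possibly abandoned) objects.
        frame_diff = cv2.absdiff(gray, prev_gray)
        _, moving = cv2.threshold(frame_diff, 25, 255, cv2.THRESH_BINARY)
        static_fg = cv2.bitwise_and(fg_mask, cv2.bitwise_not(moving))
        contours, _ = cv2.findContours(static_fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:       # ignore small noise blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    prev_gray = gray

cap.release()
```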