Examples of the mechanical damage class.

Source publication
Article
Agricultural losses due to post-harvest diseases can reach up to 30% of total production. Detecting diseases in fruits at an early stage is crucial to mitigate losses and ensure the quality and health of fruits. However, this task is challenging due to the different formats, sizes, shapes, and colors that the same disease can present. Convolutional...

Context in source publication

Context 1
... necessitates a detector model with a level of generalization beyond that of conventional detectors to abstract these particularities. Figure 2 illustrates this point, showcasing three examples of the "mechanical damage" class with highly distinct characteristics. To address these challenges, we first trained and tested the image base using state-of-the-art detectors commonly employed in object detection. ...

Citations

... The number of CBAM blocks integrated during the incorporation of the CBAM modules was chosen carefully to avoid any significant increase in the computational overhead of the network. Integrating CBAM modules into the YOLOv7 architecture to develop novel designs for various problems has become a highly popular research direction [49,50]. The position and number of blocks, and their integration with other network components, are therefore design decisions that vary with the specific problem under consideration. ...
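The CBAM block discussed in these citations combines channel attention with spatial attention. Below is a minimal PyTorch sketch of that idea; the class layout, reduction ratio, and 7×7 kernel follow the common CBAM design, but they are illustrative assumptions, not the exact configuration used in the cited YOLOv7 variants:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP applied to avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: conv over stacked channel-wise avg and max maps.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))          # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))           # (B, C) from max pooling
        ca = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * ca                                   # channel-refined features
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.conv(sa_in))         # (B, 1, H, W) spatial map
        return x * sa

x = torch.randn(1, 64, 32, 32)
y = CBAM(64)(x)
print(y.shape)  # same shape as the input feature map
```

Because the block preserves the feature-map shape, it can be dropped between existing layers, which is why the number and position of inserted blocks become free design choices, as the citation notes.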
Article
Objectives The aim of this study was to automatically detect and number teeth in digital bitewing radiographs obtained from patients and to evaluate the diagnostic efficiency for decayed teeth in real time, using deep learning algorithms. Methods The dataset consisted of 1170 anonymized digital bitewing radiographs randomly obtained from faculty archives. After the image evaluation and labeling process, the dataset was split into training and test datasets. This study proposed an end-to-end pipeline architecture consisting of three stages for matching tooth numbers and caries lesions to enhance treatment outcomes and prevent potential issues. Initially, a pre-trained convolutional neural network (CNN) was utilized to determine the side of the bitewing images. Then, an improved YOLOv7 CNN model was proposed for tooth numbering and caries detection. In the final stage, our developed algorithm assessed which teeth had caries by comparing the numbered teeth with the detected caries, using the intersection over union value for the matching process. Results According to the test results, the recall, precision, and F1-score values were 0.994, 0.987, and 0.99 for teeth detection, 0.974, 0.985, and 0.979 for teeth numbering, and 0.833, 0.866, and 0.822 for caries detection, respectively. For the matching of teeth numbering and caries detection, the accuracy, recall, specificity, precision, and F1-score values were 0.934, 0.834, 0.961, 0.851, and 0.842, respectively. Conclusions The proposed model achieved good performance, highlighting the potential of CNNs for concurrent tooth detection, numbering, and caries detection. Clinical significance CNNs can provide valuable support to clinicians by automating the detection and numbering of teeth, as well as the detection of caries on bitewing radiographs. By enhancing overall performance, these algorithms can save time efficiently and play a significant role in the assessment process.
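The matching stage described above pairs numbered teeth with detected caries via intersection over union (IoU). A minimal sketch of that idea in plain Python; the box format, tooth labels, coordinates, and the 0.05 threshold are illustrative assumptions, not the study's exact values:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_caries_to_teeth(teeth, caries, threshold=0.05):
    """Assign each caries box to the tooth box with the highest IoU overlap."""
    matches = {}
    for c_box in caries:
        label, t_box = max(teeth.items(), key=lambda item: iou(item[1], c_box))
        if iou(t_box, c_box) >= threshold:
            matches.setdefault(label, []).append(c_box)
    return matches

# Hypothetical tooth labels and pixel boxes, purely for illustration.
teeth = {"14": (0, 0, 50, 100), "15": (55, 0, 105, 100)}
caries = [(30, 20, 50, 40)]
matches = match_caries_to_teeth(teeth, caries)
print(matches)  # the caries box overlaps tooth "14" only
```

Because a caries lesion covers only part of a tooth, the IoU between the two boxes is necessarily small, which is why a matching threshold well below the usual detection value of 0.5 is assumed here.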
... Song et al. [28] embedded a CBAM attention mechanism into a YOLOv3 network to detect maize leaf blight infestation in a field scene. De Moraes et al. [29] integrated a CBAM attention mechanism into a YOLOv7 network to detect nine papaya fruit diseases, with an accuracy of 86.2% under complex backgrounds. However, although the above models can effectively alleviate interference from the natural background by providing channel or spatial information, they neglect model size and cannot capture long-range dependency information. ...
Article
The precise detection of diseases is crucial for the effective treatment of pear trees and to improve their fruit yield and quality. Currently, recognizing plant diseases in complex backgrounds remains a significant challenge. Therefore, a lightweight CCG-YOLOv5n model was designed to efficiently recognize pear leaf diseases in complex backgrounds. The CCG-YOLOv5n model integrates a CA attention mechanism, CARAFE up-sampling operator, and GSConv into YOLOv5n. It was trained and validated using a self-constructed dataset of pear leaf diseases. The model size and FLOPs are only 3.49 M and 3.8 G, respectively. The mAP@0.5 is 92.4%, and the FPS is up to 129. Compared to other lightweight models, the experimental results demonstrate that the CCG-YOLOv5n achieves higher average detection accuracy and faster detection speed with a smaller computation and model size. In addition, the robustness comparison test indicates that the CCG-YOLOv5n model has strong robustness under various lighting and weather conditions, including frontlight, backlight, sidelight, tree shade, and rain. This study proposed a CCG-YOLOv5n model for accurately detecting pear leaf diseases in complex backgrounds. The model is suitable for use on mobile terminals or devices.
... a YOLOv7 detector with the implementation of a convolutional block attention module (CBAM) attention mechanism [133]. Central to machine learning and artificial intelligence are artificial neural networks, which are modelled after the structure of the human brain and operate as interconnected nodes where simple processing operations are performed [112]. With the help of such models, people have been able to address a variety of practical issues that had previously proven challenging [113], [114], [115]. ...
Article
Horticulture is a versatile field encompassing a plethora of day-to-day strategic decisions: varietal selection, optimisation of resources, understanding the mechanisms of phenology, identification of plant invaders at both the micro and macro level, wise and judicious use of plant protectants, yield prediction and assessment, post-harvest handling, understanding the pulse of popular consumer demand, and efficient marketing. Fruit trees are perennial, unlike annual vegetable and cereal crops, so there is a high prerequisite for efficient modelling of canopy architecture, photosynthesis, nutrient uptake, pest forecasting, and the like, at a time when the ill effects of climate change are causing huge losses in the existing germplasm and in the annual turnover of farmers, and are driving the emergence of previously unheard-of pests and diseases. Foresight, or preparedness against such vagaries, can be achieved by efficient modelling mechanisms that combine the physiology, phenology, and vital requirements of fruit trees with the interacting ecosystem of the land where they grow. Extrapolating such models from the local level to general situations gives fruitful results and further strengthens present protocols. With the advancement of machine learning and deep learning in precision agriculture, the problems of farmers and orchardists are being solved at a faster pace, with sensors identifying problems and fast, error-free processing alleviating them at the pre-harvesting, harvesting, and post-harvesting stages of fruit crops. At the same time, the possible replacement of human judgement in crucial decision-support systems for agriculture and farming remains a major concern.
... In the CNN family of architectures, the You Only Look Once (YOLO) algorithm stands out for its remarkable balance between speed and accuracy, allowing fast and reliable object recognition [27]. Many articles and scientific studies have applied the YOLO algorithm to classify agricultural products [21][22][23][24][25][26] and achieved remarkable success. In [21], the authors used YOLOv3 to identify and classify chilli quality; under the best conditions, the classification accuracy reached 99.4%, but in the worst case (peppers stacked on top of each other), accuracy was only 75.6%. ...
... The results for YOLOv5 were superior to those of the Mask R-CNN method when real-time object detection was required. J. L. De Moraes and colleagues [26] used the YOLOv7 algorithm to identify and classify nine different diseases on papaya fruit, achieving an average mAP of 86.2%, even for classes with high internal variation, such as "mechanical damage". ...
... A highly popular real-time target detection technique, the YOLO (You Only Look Once) algorithm [19][20][21] offers very fast target recognition at the cost of only a small loss in accuracy. As seen in Figure 1, the YOLO algorithm divides the entire image into numerous grid cells and predicts bounding boxes and class probabilities for each cell. ...
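The grid-based prediction described above can be made concrete with a few lines of Python showing which cell is "responsible" for an object, namely the cell containing the object's bounding-box centre; the 7×7 grid and 448×448 image size are illustrative assumptions in the style of the original YOLO:

```python
def responsible_cell(box_center, img_size, grid=7):
    """Return the (row, col) grid cell that contains an object's
    bounding-box centre, YOLO-style: that cell predicts the object."""
    cx, cy = box_center          # centre in pixels
    w, h = img_size
    col = int(cx / w * grid)     # horizontal cell index
    row = int(cy / h * grid)     # vertical cell index
    # Clamp so a centre exactly on the right/bottom edge stays in range.
    return min(row, grid - 1), min(col, grid - 1)

# An object centred at (224, 100) in a 448x448 image falls in row 1, col 3.
print(responsible_cell((224, 100), (448, 448)))  # (1, 3)
```

Each cell then regresses box offsets relative to its own corner and outputs per-class probabilities, which is what makes a single forward pass sufficient for detection.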
Article
One of the greatest engineering feats in history is the construction of tunnels, and the management of tunnel safety depends heavily on the detection of tunnel defects. However, present tunnel defect detection techniques still suffer from real-time, portability, and accuracy issues. This study improves traditional defect detection technology using a knowledge distillation algorithm: a depth pooling residual structure is designed in the teacher network to enhance the ability to extract target features. Next, the lightweight MobileNetv3 network is built into the student network to reduce the number and volume of model parameters. The lightweight model is then trained on both features and outputs using a multidimensional knowledge distillation approach. The dataset is created by processing tunnel radar detection images. The experimental findings demonstrate that the multidimensional knowledge distillation approach greatly increases detection efficiency: the number of parameters is decreased by 81.4%, from 16.03 MB to 2.98 MB, while the accuracy is improved by 2.5%, from 83.4% to 85.9%.
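The output side of the multidimensional distillation described above softens teacher and student logits with a temperature and penalises their divergence; the feature side adds a similar loss on intermediate activations. A minimal NumPy sketch of the output term only, where the temperature value and the example logits are illustrative assumptions, not the study's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax: higher T gives a softer distribution."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # student predictions
    return float(T * T * np.sum(p * np.log(p / q)))

teacher = np.array([4.0, 1.0, 0.2])   # hypothetical teacher logits
student = np.array([3.5, 1.2, 0.3])   # hypothetical student logits
loss = distillation_loss(student, teacher)
print(f"distillation loss: {loss:.4f}")
```

In training, this term is typically mixed with the ordinary cross-entropy on hard labels, so the student learns both from the ground truth and from the teacher's softened output distribution.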
Article
This study investigates the application of Vision Transformers (ViTs) in deep learning for the accurate identification of papaya diseases. ViTs, known for their effectiveness in image classification tasks, are utilized to develop a robust model capable of precisely diagnosing various diseases that affect papaya plants. Through rigorous experimentation and validation, the study showcases the superior performance of ViTs compared to traditional convolutional neural networks (CNNs) in terms of classification accuracy and computational efficiency. The results highlight the potential of ViTs in real-world agricultural systems, enabling early and accurate disease detection to improve crop yield and ensure food security. This research contributes to the advancement of computer vision techniques in agriculture, emphasizing the importance of leveraging cutting-edge deep learning models like ViTs for enhanced disease management and sustainable agricultural practices.
Article
With the rise of the fruit processing industry, machine learning and image processing have become necessary for quality control and monitoring of fruits. Recently, strong vision-based solutions have emerged in farming industries that make inspections more accurate at a much lower cost. Advanced deep learning methods play a key role in these solutions. In this study, we built an image-based framework that uses the ResNet-101 CNN model to identify different types of papaya fruit diseases with minimal training data and processing power. A case study identifying commonly encountered papaya fruit diseases during harvesting was used to support the results of the suggested methodology. A total of 983 images of both healthy and defective papaya were considered during the experiment. In this study, we initially used the ResNet-101 CNN model for classification and then combined the deep features drawn from the activation layer (fc1000) of the ResNet-101 CNN with a multi-class Support Vector Machine (SVM) to detect papaya fruit defects. After comparing the performance of both approaches, it was found that the cubic SVM is the best classifier on the deep features of the ResNet-101 CNN, achieving an accuracy of 99.5% and an area under the curve (AUC) of 1 without any classification error. The findings of this experiment reveal that the ResNet-101 CNN with the cubic SVM model can categorize good, diseased, and defective papaya pictures. Moreover, the suggested model also performed strongly in terms of F1-score (0.99), sensitivity (99.50%), and precision (99.71%). The present work not only assists the end user in determining the type of disease but also makes it possible for them to take corrective measures during farming.
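The deep-features-plus-SVM pipeline described above can be sketched with scikit-learn, where a polynomial kernel of degree 3 corresponds to the "cubic SVM". Since extracting real fc1000 activations requires the pretrained ResNet-101, the 1000-dimensional features below are synthetic stand-ins, and the three-class layout (good / diseased / defective) is taken from the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins for fc1000 activations of ResNet-101: 50 samples per
# class, with class means shifted apart so the toy problem is separable.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 1000)) for c in range(3)])
y = np.repeat([0, 1, 2], 50)   # 0 = good, 1 = diseased, 2 = defective

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A "cubic SVM": SVC with a degree-3 polynomial kernel.
clf = SVC(kernel="poly", degree=3)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy on synthetic features: {acc:.2f}")
```

The design point is that the CNN serves purely as a fixed feature extractor, so only the SVM has to be trained, which is what keeps the training-data and compute requirements minimal.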