Figure (available from Molecular Breeding): Heat maps of different layers with the proposed model

Source publication
Article
Peanut is an essential food and oilseed crop. One of the most critical factors contributing to low yield and impaired peanut plant growth is leaf disease, which directly reduces the yield and quality of peanut plants. The existing works have shortcomings such as strong subjectivity and insufficient generalization a...

Citations

... Leveraging the PlantVillage dataset, comprising ninety images, the study employs a methodology centred on a Support Vector Machine (SVM) classifier. Impressively, the proposed model achieves an accuracy surpassing 95% on the PlantVillage dataset, showcasing the efficacy of the developed apple leaf disease recognition method (Arathi and Dulhare, 2023; Bin Naeem et al., 2023; Xu et al., 2023). This work holds substantial promise for advancing computer vision applications in precision agriculture and plant health monitoring (Vengaiah and Konda, 2023; Terentev et al., 2023; Liu et al., 2023). ...
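As a rough illustration of the kind of pipeline this citing work describes, the sketch below trains an RBF-kernel SVM on flattened PlantVillage-style leaf images. The directory layout, image size, and raw-pixel features are assumptions for illustration, not the cited method.

```python
# Hypothetical sketch of an SVM-based leaf disease classifier on PlantVillage-style
# images; paths, image size, and feature choice are assumptions, not the cited method.
import numpy as np
from pathlib import Path
from skimage.io import imread
from skimage.transform import resize
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def load_dataset(root="plantvillage_apple", size=(64, 64)):
    """Load images from per-class subfolders and flatten them into feature vectors."""
    X, y = [], []
    class_dirs = sorted(p for p in Path(root).iterdir() if p.is_dir())
    for label, class_dir in enumerate(class_dirs):
        for img_path in class_dir.glob("*.jpg"):
            img = resize(imread(img_path), size, anti_aliasing=True)
            X.append(img.ravel())          # raw pixels as a simple feature vector
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```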
Article
Detecting plant leaf diseases accurately and promptly is essential for reducing economic consequences and maximizing crop yield. However, farmers' dependence on conventional manual techniques makes it difficult to accurately pinpoint particular diseases. This research investigates the use of the YOLOv4 algorithm for detecting and identifying plant leaf diseases. The study uses the comprehensive Plant Village dataset, which includes over fifty thousand photos of healthy and diseased plant leaves from fourteen different species, to develop advanced disease prediction systems in agriculture. Data augmentation techniques, including histogram equalization and horizontal flipping, were used to enrich the dataset and strengthen the model's resilience. A comprehensive assessment of the YOLOv4 algorithm was conducted, comparing its performance with established target identification methods including DenseNet, AlexNet, and conventional neural networks. When applied to the Plant Village dataset, YOLOv4 achieved an accuracy of 99.99%. The evaluation criteria, including accuracy, precision, recall, and F1-score, consistently showed high performance with a value of 0.99, confirming the effectiveness of the proposed methodology. The results demonstrate substantial advances in plant disease detection and underscore the capabilities of YOLOv4 as a sophisticated tool for accurate disease prediction. These developments have significant implications for agricultural stakeholders, researchers, and farmers, providing improved capacities for disease control and crop protection.
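The abstract names histogram equalization and horizontal flipping as the augmentations. A minimal sketch of those two operations with OpenCV is given below; the luminance-channel choice and file path are assumptions, not the paper's code.

```python
# Hypothetical sketch of the two augmentations named in the abstract:
# histogram equalization (on the luminance channel) and horizontal flip.
import cv2
import numpy as np

def equalize_histogram(bgr_img: np.ndarray) -> np.ndarray:
    """Equalize only the Y (luminance) channel so colors are not distorted."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)
    return cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)

def horizontal_flip(bgr_img: np.ndarray) -> np.ndarray:
    """Mirror the image left-to-right."""
    return cv2.flip(bgr_img, 1)

img = cv2.imread("leaf.jpg")  # placeholder path (assumed)
augmented = [img, equalize_histogram(img), horizontal_flip(img)]
```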
... The loss does not keep leveling off, but gradually increases [35]. The use of residual blocks allows the depth of the network to be further deepened under the premise of ensuring normal operation [36]. The residual operation is formulated as shown by Xu et al. [36]. ...
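The residual operation cited as [36] is not reproduced in the excerpt; in its standard form (as introduced for ResNet), a residual block computes:

```latex
% Standard residual block: the stacked layers learn a residual mapping F(x, {W_i})
% that is added to the identity shortcut x to give the block output y.
y = \mathcal{F}(x, \{W_i\}) + x
```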
Article
Pests and diseases significantly impact the quality and yield of maize. As a result, it is crucial to conduct disease diagnosis and identification for timely intervention and treatment of maize pests and diseases, ultimately enhancing the quality and economic efficiency of maize production. In this study, we present an enhanced maize pest identification model based on ResNet50. The objective was to achieve efficient and accurate identification of maize pests and diseases. By utilizing convolution and pooling operations for extracting shallow-edge features and compressing data, we introduced additional effective channels (environment-cognition-action) into the residual network module. This step addressed the issue of network degradation, established connections between channels, and facilitated the extraction of crucial deep features. Finally, experimental validation was performed, achieving 96.02% recognition accuracy with the improved ResNet50-based model. This study successfully achieved the recognition of various maize pests and diseases, including maize leaf blight, Helminthosporium maydis, gray leaf spot, rust disease, stem borer, and corn armyworm. These results offer valuable insights for the intelligent control and management of maize pests and diseases.
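As a hedged sketch of how a ResNet50 backbone can be adapted to the six maize pest and disease classes listed above, the code below fine-tunes a pretrained model with a replaced classification head; it does not implement the paper's added environment-cognition-action channels.

```python
# Hypothetical fine-tuning sketch of a ResNet50 backbone for maize pest/disease
# classification; it does not implement the paper's added channel mechanism.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # leaf blight, Helminthosporium maydis, gray leaf spot, rust, stem borer, armyworm

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classification head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```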
... The fruit body period of edible fungi is a key period for disease prevention and control. Accurate diagnosis of disease types during the fruit body period of edible fungi can provide theoretical guidance for farmers to apply treatments precisely, which is significant for improving the yield of edible fungi [11,12] and ensuring the healthy and sustainable development of the edible fungi industry [13-15]. ...
Article
Early recognition of fruit body diseases in edible fungi can effectively improve the quality and yield of edible fungi. This study proposes a method based on an improved ShuffleNetV2 for edible fungi fruit body disease recognition. First, the ShuffleNetV2+SE model is constructed by deeply integrating the SE module with the ShuffleNetV2 network, making the network pay more attention to the target area and improving the model's disease classification performance. Second, the network model is optimized and improved: to simplify the convolution operation, the 1 × 1 convolution layer after the 3 × 3 depthwise convolution layer is removed, yielding the ShuffleNetV2-Lite+SE model. The experimental results indicate that the accuracy, precision, recall, and macro-F1 of the ShuffleNetV2-Lite+SE model on the test set are 96.19%, 96.43%, 96.07%, and 96.25%, respectively, which are 4.85, 4.89, 3.86, and 5.37 percentage points higher than before the improvement. Meanwhile, the model size and the average iteration time are 1.6 MB and 41 s, which are 0.2 MB larger and 4 s shorter than before the improvement, respectively. Compared with the common lightweight convolutional neural networks MobileNetV2, MobileNetV3, DenseNet, and EfficientNet, the proposed model achieves higher recognition accuracy with a significantly smaller number of parameters. In addition, the average iteration time is reduced by 37.88%, 31.67%, 33.87%, and 42.25%, respectively. The ShuffleNetV2-Lite+SE model proposed in this paper strikes a good balance among performance, number of parameters, and real-time performance. It is suitable for deployment on resource-limited devices such as mobile terminals and helps realize real-time, accurate recognition of fruit body diseases of edible fungi.
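The SE module the abstract integrates into ShuffleNetV2 is the standard squeeze-and-excitation block; a minimal PyTorch sketch follows. The reduction ratio and where the block is inserted are assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a standard squeeze-and-excitation (SE) block, the kind of
# channel attention module the abstract describes adding to ShuffleNetV2.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average pooling
        self.fc = nn.Sequential(              # excitation: two FC layers + sigmoid gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                           # reweight channels

# Usage: append after a ShuffleNetV2 stage output with a matching channel count (assumed shape).
features = torch.randn(2, 116, 28, 28)
out = SEBlock(116)(features)
```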
Article
Accurate detection of tea diseases is essential for optimizing tea yield and quality, improving production, and minimizing economic losses. In this paper, we introduce TeaDiseaseNet, a novel disease detection method designed to address the challenges in tea disease detection, such as variability in disease scales and dense, obscuring disease patterns. TeaDiseaseNet utilizes a multi-scale self-attention mechanism to enhance disease detection performance. Specifically, it incorporates a CNN-based module for extracting features at multiple scales, effectively capturing localized information such as texture and edges. This approach enables a comprehensive representation of tea images. Additionally, a self-attention module captures global dependencies among pixels, facilitating effective interaction between global information and local features. Furthermore, we integrate a channel attention mechanism, which selectively weights and combines the multi-scale features, eliminating redundant information and enabling precise localization and recognition of tea disease information across diverse scales and complex backgrounds. Extensive comparative experiments and ablation studies validate the effectiveness of the proposed method, demonstrating superior detection results in scenarios characterized by complex backgrounds and varying disease scales. The presented method provides valuable insights for intelligent tea disease diagnosis, with significant potential for improving tea disease management and production.
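To illustrate the "global dependencies among pixels" captured by the self-attention module, the sketch below applies multi-head self-attention over the spatial positions of a CNN feature map. The channel count, head count, and feature-map shape are assumptions, not TeaDiseaseNet's actual design.

```python
# Hypothetical sketch of spatial self-attention over a CNN feature map, illustrating
# global pixel-to-pixel interaction; dimensions and head count are assumptions.
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (N, H*W, C): one token per pixel
        attended, _ = self.attn(tokens, tokens, tokens)
        return attended.transpose(1, 2).reshape(n, c, h, w)

feature_map = torch.randn(1, 256, 32, 32)           # assumed backbone output
globally_mixed = SpatialSelfAttention(256)(feature_map)
```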
Article
Introduction Semantic segmentation is effective in dealing with complex environments. However, the most popular semantic segmentation methods are usually based on a single structure, which makes them inefficient and inaccurate. In this work, we propose a mix structure network called MixSeg, which fully combines the advantages of convolutional neural network, Transformer, and multi-layer perceptron architectures. Methods Specifically, MixSeg is an end-to-end semantic segmentation network consisting of an encoder and a decoder. In the encoder, the Mix Transformer is designed to model globally and inject local bias into the model with less computational cost. The position indexer is developed to dynamically index absolute position information on the feature map. The local optimization module is designed to optimize the segmentation effect of the model on local edges and details. In the decoder, shallow and deep features are fused to output accurate segmentation results. Results Taking the apple leaf disease segmentation task in a real scene as an example, the segmentation performance of MixSeg is verified. The experimental results show that MixSeg achieves the best segmentation performance with the fewest parameters and floating-point operations compared with mainstream semantic segmentation methods on small datasets. On apple alternaria blotch and apple grey spot leaf image datasets, the most lightweight variant, MixSeg-T, achieves 98.22% and 98.09% intersection over union for leaf segmentation and 87.40% and 86.20% intersection over union for disease segmentation. Discussion Thus, the performance of MixSeg demonstrates that it can provide a more efficient and stable method for accurate segmentation of leaves and diseases in complex environments.
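The reported metric is intersection over union (IoU) for leaf and disease masks; a minimal sketch of per-class IoU computed from integer-labeled prediction and ground-truth masks follows. The label encoding and mask shapes are assumptions for illustration.

```python
# Hypothetical sketch of per-class intersection-over-union (IoU) for segmentation masks,
# the metric reported for MixSeg; label encoding and shapes are assumptions.
import numpy as np

def class_iou(pred: np.ndarray, target: np.ndarray, class_id: int) -> float:
    """IoU for one class given integer-labeled prediction and ground-truth masks."""
    pred_mask = pred == class_id
    target_mask = target == class_id
    intersection = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    return float(intersection) / union if union > 0 else float("nan")

# Example with three labels: 0 = background, 1 = leaf, 2 = disease (assumed encoding)
pred = np.random.randint(0, 3, size=(512, 512))
gt = np.random.randint(0, 3, size=(512, 512))
print("leaf IoU:", class_iou(pred, gt, 1), "disease IoU:", class_iou(pred, gt, 2))
```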