Figure - available from: Annals of Data Science
Confusion Matrix of the proposed Xception CNN architecture on the hex-nut dataset
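For readers reproducing a figure like this, the sketch below shows one common way to compute and plot a confusion matrix for a trained Keras classifier using scikit-learn. The class names, model file, and dataset directory are illustrative assumptions, not details taken from the source article.

```python
# Minimal sketch: confusion matrix for a trained Keras image classifier.
# The class names, model file, and dataset directory are illustrative
# assumptions, not details from the source article.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

CLASS_NAMES = ["defective", "non_defective"]              # assumed labels

model = tf.keras.models.load_model("hexnut_xception.h5")  # hypothetical file

test_ds = tf.keras.utils.image_dataset_from_directory(
    "hexnut_dataset/test",       # hypothetical directory layout
    image_size=(299, 299),       # Xception's default input resolution
    shuffle=False,
)

y_true = np.concatenate([labels.numpy() for _, labels in test_ds])
y_pred = np.argmax(model.predict(test_ds), axis=1)  # assumes softmax outputs

cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm, display_labels=CLASS_NAMES).plot(cmap="Blues")
plt.title("Confusion matrix on the hex-nut test set")
plt.show()
```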

Source publication
Article
Full-text available
Industrial inspection systems are an essential part of Industry 4.0. An automated inspection system can significantly improve product quality and reduce human labor, making workers' lives easier. However, a deep learning-based camera inspection system requires a large amount of data to classify defective products accurately. In this paper, a f...
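The figure's reference to a "proposed Xception CNN architecture" suggests a transfer-learning setup. Below is a minimal, hedged sketch of how such a classifier could be assembled in Keras with an ImageNet-pretrained Xception backbone; the dataset layout, class count, and hyperparameters are assumptions for illustration, not values reported in the paper.

```python
# Minimal transfer-learning sketch with an ImageNet-pretrained Xception
# backbone. Dataset layout, class count, and hyperparameters are assumptions
# made for illustration, not values reported in the paper.
import tensorflow as tf

IMG_SIZE = (299, 299)
NUM_CLASSES = 2                              # e.g. defective vs. non-defective

train_ds = tf.keras.utils.image_dataset_from_directory(
    "hexnut_dataset/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False                       # freeze the pretrained features

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Freezing the backbone first and training only the new classification head is the usual way to get good results from a small dataset; the backbone can be unfrozen later for fine-tuning at a lower learning rate.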

Citations

... These methods are lightweight and fast but offer relatively lower accuracy in practical scenarios [5]. Another approach is Deep Learning (DL)-based automatic feature selection [6], which is highly accurate in practical scenarios but comes at the expense of high computational complexity [7], [8]. The last CV module focuses on identifying defects or anomalies within the images by analyzing the extracted features to determine the presence and location of defects. ...
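To make the contrast drawn in this excerpt concrete, here is a small sketch of the handcrafted-feature route: HOG descriptors fed into a linear SVM. It is lightweight and fast but usually less accurate than learned CNN features; the image-loading helper is hypothetical.

```python
# Sketch of the handcrafted-feature route: HOG descriptors plus a linear SVM.
# Lightweight and fast, but usually less accurate than learned CNN features.
# The image-loading helper is hypothetical.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def hog_features(images):
    # images: iterable of grayscale arrays of shape (H, W)
    return np.array([hog(img, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for img in images])

X_imgs, y = load_inspection_images()          # hypothetical data loader
X_tr, X_te, y_tr, y_te = train_test_split(
    hog_features(X_imgs), y, test_size=0.2, random_state=0)

clf = LinearSVC().fit(X_tr, y_tr)
print("Handcrafted-feature baseline accuracy:", clf.score(X_te, y_te))
```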
... Additionally, the choice of YOLOv4 and YOLACT, while capable, presented certain drawbacks, notably their large size (244 MB) and large number of parameters. These factors not only demanded substantial computational power but also led to slower detection rates. Monowar et al. [8] devised an industrial inspection framework leveraging DL technology. In their study, they explored various CNN architectures, employing the principle of transfer learning. ...
... We have conducted a comprehensive analysis comparing various DL-based methods for detecting surface defects [7], [8], [10], [26], [28], [40]. In Table 5, we present a detailed comparison involving key aspects such as the applied model, model size in MB, dataset size, classification category, testing accuracy, and ASR value. ...
Article
Full-text available
This paper presents a deep learning-based framework for automating the visual inspection of plastic bottles in an Industry 4.0 context, detecting surface defects to enhance product quality. Our contributions include accelerating model development through transfer learning, an inventive data generation strategy that combines physical samples with synthetic data augmentation, an extensive evaluation of pre-trained deep convolutional neural networks, and a user-friendly interface for real-time quality inspection reporting that makes the information easily accessible and actionable. Compared to existing methods, the proposed method achieves a higher Accuracy to Size Ratio of 7.0, underscoring its capacity to classify and detect defects across multiple classes efficiently and accurately while maintaining low area utilization. This positions it as a practical solution for real-world scenarios with resource constraints.
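The Accuracy to Size Ratio reported above rewards models that are both accurate and small. The excerpt does not spell out the exact formula, so the sketch below assumes the straightforward reading of test accuracy (in percent) divided by model size (in MB); the numbers are made up for illustration.

```python
# Illustrative only: one plausible reading of the Accuracy to Size Ratio,
# assuming ASR = test accuracy (%) / model size (MB). Check the cited paper
# for the exact definition; the numbers below are made up.
def accuracy_to_size_ratio(accuracy_pct: float, model_size_mb: float) -> float:
    """Higher is better: accurate models that stay small score well."""
    return accuracy_pct / model_size_mb

print(accuracy_to_size_ratio(98.0, 14.0))   # -> 7.0
```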
... The software and its development framework for visual detection were designed in [19], which specifies the resources involved in AVI. In addition, DL-based detection pipelines for specific parts can be found in [20,21]. A general scheme was proposed in [22] for the appearance quality inspection of various types of small electronic products. ...
Article
Full-text available
The industrial manufacturing model is undergoing a transformation from a product-centric model to a customer-centric one. Driven by customized requirements, the complexity of products and the requirements for quality have increased, which pose a challenge to the applicability of traditional machine vision technology. Extensive research demonstrates the effectiveness of AI-based learning and image processing on specific objects or tasks, but few publications focus on the composite task of the integrated product, the traceability and improvability of methods, as well as the extraction and communication of knowledge between different scenarios or tasks. To address this problem, this paper proposes a common, knowledge-driven, generic vision inspection framework, targeted at standardizing product inspection into a process of information decoupling and adaptive metrics. Task-related object perception is planned into a multi-granularity and multi-pattern progressive alignment based on industry knowledge and structured tasks. Inspection is abstracted as a reconfigurable process of multi-sub-pattern space combination mapping and difference metric under appropriate high-level strategies and experiences. Finally, strategies for knowledge improvement and accumulation based on historical data are presented. The experiment demonstrates the process of generating a detection pipeline for complex products and continuously improving it through failure tracing and knowledge improvement. Compared to the (, 69.802 mm) and 0.883 obtained by state-of-the-art deep learning methods, the generated pipeline achieves a pose estimation ranging from (, 153.584 mm) to (, 52.308 mm) and a detection rate ranging from 0.462 to 0.927. Through verification of other imaging methods and industrial tasks, we prove that the key to adaptability lies in the mining of inherent commonalities of knowledge, multi-dimensional accumulation, and reapplication.
Article
Full-text available
There is unrealized potential in using automation to alleviate the visual inspection associated with non-destructive testing in manufacturing facilities. Identifying defects during production can help avoid substantial manufacturing errors by indicating when preventative maintenance should be introduced. Using an autoencoder for this application reduces the need to generate datasets for various defect types; instead, only one training dataset is needed. To address this, this paper proposes a Convolutional Neural Network (CNN) autoencoder approach to detect surface defects on cast components during production. The proposed method categorizes the data into damaged and undamaged components by clustering based on the loss associated with the reconstructed image. The average F1-score and accuracy from retraining the model 10 times were 89.14% and 88.52%, respectively. Although previous studies have obtained higher metrics, they have focused their efforts on supervised training techniques, whereas this research proposes an unsupervised training method with results comparable to previous studies.
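A minimal sketch of the reconstruction-error idea described in this abstract: a small convolutional autoencoder is trained on undamaged samples only, and test images with unusually high reconstruction error are flagged as defective. The architecture, the .npy data files, and the simple thresholding rule (standing in for the paper's clustering step) are assumptions for illustration.

```python
# Minimal sketch of the reconstruction-error idea: a small convolutional
# autoencoder trained on undamaged samples only; test images with unusually
# high reconstruction error are flagged as defective. The architecture, the
# .npy files, and the thresholding rule (standing in for the paper's
# clustering step) are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_autoencoder(input_shape=(128, 128, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical preprocessed data: grayscale images scaled to [0, 1]
x_train = np.load("undamaged_train.npy")       # undamaged samples only
x_test = np.load("mixed_test.npy")             # damaged and undamaged samples

ae = build_autoencoder()
ae.fit(x_train, x_train, epochs=20, batch_size=32)

# Per-image mean squared reconstruction error, then a simple threshold
errors = np.mean((x_test - ae.predict(x_test)) ** 2, axis=(1, 2, 3))
threshold = errors.mean() + 2 * errors.std()   # assumed heuristic
is_defective = errors > threshold
```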
Article
Full-text available
Automatic vision-based inspection systems have played a key role in product quality assessment for decades through the segmentation, detection, and classification of defects. Historically, machine learning frameworks based on hand-crafted feature extraction, selection, and validation relied on a combination of parameterized image processing algorithms and explicit human knowledge. The outstanding performance of deep learning (DL) for vision systems, in automatically discovering a feature representation suitable for the corresponding task, has exponentially increased the number of scientific articles and commercial products aiming at industrial quality assessment. In this context, this article reviews more than 220 relevant articles from the related literature published until February 2023, covering the recent consolidation and advances in the field of fully automatic DL-based surface defect inspection systems deployed in various industrial applications. The analyzed papers have been classified according to a two-dimensional taxonomy that considers both the specific defect recognition task and the employed learning paradigm. The dependency on large, high-quality labeled datasets and the different neural architectures employed to achieve an overall perception of both well-visible and subtle defects, through the supervision of fine and/or coarse data annotations, have been assessed. The results of our analysis highlight a growing research interest in enriching defect representation power, especially by transferring pre-trained layers to an optimized network and by explaining the network decisions to suggest trustworthy retention or rejection of the products being evaluated.