Schematic diagram of the algorithm process.

Source publication
Article
Full-text available
Based on the good feature learning ability of the pyramid scene parsing network, a method for extracting the centerline of structured light stripes of weld lines based on the pyramid scene parsing network and Steger algorithm is proposed. This method avoids the traditional complex weld image preprocessing technology, and simplifies the operation st...

Context in source publication

Context 1
... this paper, the method first uses PSPNet to segment the line structured light stripe in the weld image and then applies the Steger algorithm to extract the stripe centerline. The algorithm thus consists of two parts; the overall process is shown in Figure 1. To reduce the time required for training the model and accelerate convergence, a pre-trained ResNet-50 is used for parameter fine-tuning. ...
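The two-stage structure described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: a fixed threshold plays the role of the trained PSPNet, and a plain per-column intensity centroid stands in for the full Steger step; all function names are illustrative.

```python
def segment_stripe(img, thresh=0.5):
    # Stand-in for stage 1 (PSPNet): a trained segmentation network would
    # predict this binary stripe mask; here a fixed threshold plays that role.
    return [[1 if v > thresh else 0 for v in row] for row in img]

def extract_centerline(img, mask):
    # Stand-in for stage 2 (Steger): per-column intensity-weighted centre
    # over the masked pixels. Steger proper refines this with the Hessian.
    centers = []
    for x in range(len(img[0])):
        w = [img[y][x] * mask[y][x] for y in range(len(img))]
        s = sum(w)
        centers.append(sum(y * wy for y, wy in enumerate(w)) / s if s else None)
    return centers
```

The point of the sketch is the division of labour: segmentation isolates the stripe against a noisy weld image, and only then is the sub-pixel centerline computed on the retained pixels.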

Similar publications

Article
Full-text available
In this paper, we propose a multi-class image segmentation method based on uncertainty management by weak continuity constraints and neutrosophic set (NS). To manage the uncertainties in the segmentation process, an image is mapped into the NS domain. In the NS domain, the image is represented as true, false, and indeterminate subsets. In the propo...

Citations

... Although the IGGM exhibits versatility in accommodating stripes of varying shapes, it remains susceptible to image noise. Yu [8] combined PSPNet with the Steger algorithm for improved weld structured light stripe extraction, at the cost of high computational demands. Kamanli et al. [9] developed a method that integrates multi-scale cross-patch attention with dilated convolution. ...
Article
Full-text available
Due to the reflective surfaces of battery cells, which introduce ambiguities during visual inspection, segmenting light strip edges accurately and extracting the center of the light strip become challenging. These tasks are crucial for measuring the parallelism between cells. To tackle this issue, this paper introduces a novel neural network model named Edge-Aware Dynamic Re-weighted U-Net (EADRU-Net). This model significantly improves edge detection and segmentation by incorporating an Edge Emphasis Loss. Moreover, we integrate a Context-Aware Cross-Dimensional Adaptive Attention mechanism. This mechanism optimizes the capture and expression of key features of light strips through context-aware layers and cross-dimensional learning strategies. EADRU-Net features a dynamic re-weighting mechanism that adaptively adjusts the weight of each pixel, optimizing the recognition and segmentation of reflective light strips on cell surfaces. Experimental results demonstrate EADRU-Net’s superior performance in noise suppression and precise edge segmentation of light strips, achieving a Mean Intersection over Union of 90.95% and a Mean Pixel Accuracy of 93.89%. This represents a 3.94% improvement over the enhanced U-Net, highlighting EADRU-Net’s effectiveness and superiority in detecting and segmenting light strips on cell surfaces.
... (2) Semantic segmentation networks (SSNs) offer superior precision in classifying laser stripe pixels against noisy backgrounds, outperforming conventional thresholding methods. Networks such as the pyramid scene parsing network (PSPNet), DeepLab, and U-Net effectively learn features against strong image noise [15,16]. Lightweight networks like segmenting objects by locations (SOLO v2) reduce the computational demand of laser stripe segmentation [3]. ...
Article
Full-text available
This paper proposes an efficient approach for extracting feature points from weld images in noisy construction environments. Inspired by the human pose estimation, the proposed method reformulates the weld feature point extraction as a skeletal keypoint detection task. A quick object detector locates the weld region amidst complex backgrounds, followed by efficient feature point extraction via two coordinate classification tasks. This approach achieves sub-pixel accuracy at a low computational cost and confines the annotation within one bounding box and four keypoints per image, eliminating pixel-level labeling. Test results demonstrate real-time, accurate feature point extraction with superior efficiency and robustness compared to traditional methods. The proposed approach thus facilitates the quality control for automated welding in real-world construction scenarios.
... Applying deep learning technology can effectively extract data features and improve classification accuracy [33]. With the development of deep learning, many neural network algorithms for semantic segmentation have been proposed, such as fully convolutional networks (FCNs) [34], pyramid scene parsing networks (PSPNets) [35], and SegNet [36]. Many scholars have introduced these algorithms into the meteorological field [37]; for example, Zhou Kanghui [38] introduced a deep semantic segmentation model that extracts multi-source observation data from satellites, radar, and lightning detectors. ...
Article
Full-text available
This study explores the application of the fully convolutional network (FCN) algorithm to the field of meteorology, specifically for the short-term nowcasting of severe convective weather events such as hail, convective wind gust (CG), thunderstorms, and short-term heavy rain (STHR) in Gansu. The training data come from the European Center for Medium-Range Weather Forecasts (ECMWF) and real-time ground observations. The performance of the proposed FCN model, based on 2017 to 2021 training datasets, demonstrated a high prediction accuracy, with an overall error rate of 16.6%. Furthermore, the model exhibited an error rate of 18.6% across both severe and non-severe weather conditions when tested against the 2022 dataset. Operational deployment in 2023 yielded an average critical success index (CSI) of 24.3%, a probability of detection (POD) of 62.6%, and a false alarm ratio (FAR) of 71.2% for these convective events. It is noteworthy that the predicting performance for STHR was particularly effective with the highest POD and CSI, as well as the lowest FAR. CG and hail predictions had comparable CSI and FAR scores, although the POD for CG surpassed that for hail. The FCN model’s optimal performances in terms of hail prediction occurred at the 4th, 8th, and 10th forecast hours, while for CG, the 6th hour was most accurate, and for STHR, the 2nd and 4th hours were most effective. These findings underscore the FCN model’s ideal suitability for short-term forecasting of severe convective weather, presenting extensive prospects for the automation of meteorological operations in the future.
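The CSI, POD, and FAR figures quoted above follow the standard forecast-verification definitions over a contingency table of hits, misses, and false alarms. A small helper (function and parameter names are illustrative) makes the relationships explicit:

```python
def contingency_scores(hits, misses, false_alarms):
    # Standard verification scores from a 2x2 contingency table:
    #   POD = hits / (hits + misses)                 probability of detection
    #   FAR = false_alarms / (hits + false_alarms)   false alarm ratio
    #   CSI = hits / (hits + misses + false_alarms)  critical success index
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi
```

For example, 10 hits, 5 misses, and 5 false alarms give POD = 10/15, FAR = 5/15, and CSI = 10/20 = 0.5; note that a high POD with a high FAR (as in the reported results) means most events are caught but many alarms are spurious.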
... Therefore, the stripe line with a certain width must be transformed into a single-pixel stripe line to accurately obtain the center position information. This process is called the extraction of the centerline of the structured light stripe [6][7][8][9][10][11][12]. ...
Article
Full-text available
In this study, we proposed a fast line-structured light stripe center extraction algorithm based on an improved barycenter algorithm, addressing the problem that conventional stripe center extraction algorithms cannot meet the speed and accuracy requirements of a structured light 3D measurement system. First, the algorithm preprocesses the structured light image and obtains the approximate position of the stripe center through skeleton extraction. Next, the normal direction of each pixel on the skeleton is solved using the gray gradient method. Then, the weighted gray center-of-gravity method is used to solve the stripe center coordinates along the normal direction. Finally, a smooth stripe centerline is fitted using the least squares method. The experimental results show that the improved algorithm achieved a significant improvement in speed, sub-pixel accuracy, and a good structured light stripe center extraction effect; its repeated measurement accuracy is within 0.01 mm, demonstrating good repeatability.
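In the simplified case where the stripe normal is taken as vertical, the weighted grey centre-of-gravity step described above reduces to a per-column weighted mean of the row coordinate. A sketch under that assumption (the cited algorithm computes the same weighted mean along the per-pixel normal direction instead):

```python
def gray_gravity_centers(img):
    # Column-wise grey centre of gravity:
    #   centre(x) = sum_y(y * I(y, x)) / sum_y(I(y, x))
    # Assumption: the normal is vertical; the full method evaluates this
    # along the normal found by the gray gradient at each skeleton pixel.
    h = len(img)
    centers = []
    for x in range(len(img[0])):
        col = [img[y][x] for y in range(h)]
        total = sum(col)
        centers.append(sum(y * v for y, v in enumerate(col)) / total)
    return centers
```

Because every pixel's intensity contributes to the mean, this estimator reaches sub-pixel accuracy on a smooth stripe profile, which is why it outperforms integer-valued skeleton positions alone.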
... Recently, using deep learning techniques to extend the performance of traditional algorithms has become popular among researchers. Liu et al. [2], Zhao et al. [3], and Yu et al. [4] each proposed neural networks for noise reduction before centerline extraction. Learning-based methods perform automatic laser stripe region detection and segmentation by learning the distribution properties of the noise from a large data set, which has improved the performance of subsequent centerline extraction algorithms. ...
Article
Full-text available
To overcome stray light noise in centerline extraction during line structured light 3D reconstruction, an end-to-end trainable neural network for laser stripe centerline extraction, based on a Convolutional Neural Network and a Multi-Layer Perceptron, is proposed. The proposed network can self-adapt to a variety of lighting (brightness) conditions and overcome the interference of different stray lights. In addition, unlike prior deep learning work that enhances centerline extraction accuracy only through noise reduction in pre-processing, the proposed network unifies the noise reduction and prediction processes, so it can be optimized end-to-end directly on centerline extraction performance. The network learns an intermediate feature representation of noise reduction, which requires less complex data annotation, reduces training difficulty, and is more scalable. Experiments show that the proposed method can extract centerlines with relatively high accuracy for laser stripes of different widths, brightness, and inclination, thus obtaining a smooth and stable reconstructed surface in the structured light 3D reconstruction process.
... Scholars have explored and researched these aspects. Representative works include: Alwaheba et al. [4,5], who applied scanning contact potentiometry to detect defects and determine their location coordinates in welded joints; Shen et al. [6], who proposed a water-flooding segmentation algorithm for weld defect detection; Chen et al. [7], who extracted X-ray weld image defects based on the SUSAN algorithm; Li et al. [8], who identified weld defects based on independent component analysis; Yu et al. [9,10], who extracted weld centerlines based on a pyramid scene parsing network; Abdelkader et al. [11], who studied weld defect extraction from X-ray images, considering their low contrast, poor quality, and uneven illumination; Ding et al. [12], who proposed a compromise between wavelet soft- and hard-threshold denoising; Patil et al. [13], who used local binary patterns (a local binary code describing a region, generated by thresholding each neighboring pixel and summing the specified weights) together with grey-level co-occurrence matrices to extract statistical texture features; and Boaretto et al. [14], who extracted potential defects with a feedforward multilayer perceptron trained with the back-propagation learning algorithm. ...
Article
Full-text available
To solve the problems of low precision of weak feature extraction, heavy reliance on labor and low efficiency of weak feature extraction in X-ray weld detection image of ultra-high voltage (UHV) equipment key parts, an automatic feature extraction algorithm is proposed. Firstly, the original weld image is denoised while retaining the characteristic information of weak defects by the proposed monostable stochastic resonance method. Then, binarization is achieved by combining Laplacian edge detection and Otsu threshold segmentation. Finally, the automatic identification of weld defect area is realized based on the sequential traversal of binary tree. Several characteristic analysis dimensions are established for weld defects of UHV key parts, including defect area, perimeter, slenderness ratio, duty cycle, etc. The experiment using the weld detection image of the actual production site shows that the proposed method can effectively extract the weak feature information of weld defects and further provide reference for decision-making.
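The Otsu step mentioned above selects the threshold that maximises the between-class variance over the grey-level histogram. A self-contained sketch of that selection (the Laplacian edge-detection step and the binarisation itself are omitted; names are illustrative):

```python
def otsu_threshold(pixels, levels=256):
    # Exhaustive Otsu search: choose t maximising the between-class variance
    #   w_b * w_f * (mu_b - mu_f)^2
    # where pixels <= t are "background" and pixels > t are "foreground".
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_b = 0.0
    w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]            # background weight: count of pixels <= t
        if w_b == 0:
            continue
        w_f = total - w_b         # foreground weight: count of pixels > t
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (mu_b - mu_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a weld image with weak defect features, Otsu alone can miss low-contrast detail, which is why the paper pairs it with Laplacian edge detection and a stochastic-resonance denoising stage beforehand.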
... Du et al. [19] studied several weld-image algorithms to combat strong noise in robot GMAW, adopting fast image segmentation, feature region recognition, and feature search technology in a convolutional neural network (CNN) to accurately identify weld features. Yu et al. [20] proposed a method for extracting the structured light stripe centerline of welds based on a pyramid scene parsing network and the Steger algorithm, which improves recognition of the structured light centerline under reflection interference. ...
... The mathematical model of the welding torch angle [26] is shown in Figure 14. The first set of data, P_A1(X_A1, Y_A1, Z_A1), P(X_1, Y_1, Z_1), and P_A2(X_A2, Y_A2, Z_A2), together with the normal vectors of sheet A and sheet B, are given in Equations (18)-(20): ...
Article
Full-text available
Fillet welds of highly reflective materials are common in industrial production, and accurately locating them is a great challenge. Therefore, this paper proposes a fillet weld identification and location method that can overcome the negative effects of high reflectivity. The proposed method improves the semantic segmentation performance of the DeeplabV3+ network for structured light under reflective noise and replaces the backbone network with MobileNetV2 to improve the model's detection efficiency. To solve the problem of the irregular and discontinuous structured light skeletons extracted by traditional methods, an improved closing operation using dilation, combined with the Zhang-Suen algorithm, is proposed for structured light skeleton extraction. Then, a three-dimensional reconstruction mathematical model of the system is established to obtain the coordinates of the weld feature points and the welding-torch angle. Finally, extensive experiments on highly reflective stainless steel fillet welds were carried out. The experimental results show that the average detection errors of the system on the Y-axis and Z-axis are 0.3347 mm and 0.3135 mm, respectively, and the average detection error of the welding-torch angle is 0.1836° in tests on an irregular stainless steel fillet weld. The method is robust, universal, and accurate for highly reflective irregular fillet welds.
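The Zhang-Suen thinning referred to above iteratively peels boundary pixels in two alternating sub-passes until only a one-pixel-wide skeleton remains. A plain-Python sketch of the classic algorithm (the paper's improved closing operation that precedes it is not reproduced here):

```python
def zhang_suen_thin(img):
    # img: binary image as a list of lists of 0/1, assumed to have an
    # empty one-pixel border. Two alternating sub-iterations delete
    # south-east / north-west boundary pixels until nothing changes.
    def nbrs(y, x):
        # Neighbours P2..P9, clockwise from the pixel directly above.
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            deletions = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] != 1:
                        continue
                    n = nbrs(y, x)
                    b = sum(n)                       # non-zero neighbour count
                    a = sum(1 for i in range(8)      # 0 -> 1 transitions
                            if n[i] == 0 and n[(i + 1) % 8] == 1)
                    p2, _, p4, _, p6, _, p8, _ = n
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        deletions.append((y, x))
            for y, x in deletions:
                img[y][x] = 0
            changed = changed or bool(deletions)
    return img
```

On a thick structured light stripe this yields a thin skeleton, but end pixels can erode slightly and reflective gaps leave discontinuities, which is the motivation for the paper's preceding dilation-based closing operation.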
... The former method is mainly used for laser scattering and reflection of the uneven reflective metal surface. In contrast, the latter is a more typical method to extract the centreline with a high accuracy and stability based on the Hessian matrix; thus, it is widely used in the stripe extraction of various structured lights [28,29]. ...
Article
Laser scanning technology has been increasingly utilised for the detection of pavement three-dimensional (3D) surface texture. This paper aims to develop a 3D laser scanning system and corresponding methods for the precise extraction and evaluation of the mean texture depth (MTD). The contributions are as follows: 1) an improved Steger method is developed to extract the centreline of the laser stripe; 2) 3D point cloud data of the pavement surface are processed using the coordinate transformation and cubic spline interpolation; 3) Monte Carlo expectation method is employed to evaluate pavement MTD, and an appropriate subblock size is obtained based on the optimal reference surface. Based on the sand-patch testing with 35 testing samples, the mean absolute error and Pearson correlation coefficient are 0.017 mm and 0.9864, respectively, indicating the accuracy of the proposed methods for MTD evaluation. Thus, the proposed methods provide a reference for autonomous detection of pavement performance.
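Steger's centreline step evaluates the Hessian's dominant eigenvector at each stripe point to obtain the normal, then takes a second-order Taylor step along it for sub-pixel accuracy. In the simplified case of a roughly horizontal stripe, the normal can be fixed to the vertical axis and the construction collapses to t = -r_y / r_yy per column. A sketch under that assumption (the full method, and the improved variant cited above, estimate the normal from the Hessian):

```python
def stripe_centers_1d(img):
    # Per-column sub-pixel centre of a roughly horizontal bright stripe.
    # At the row of maximum intensity y0, the Taylor refinement
    #     t = -r_y / r_yy
    # is the 1-D special case of Steger's Hessian construction with the
    # stripe normal fixed to the vertical axis (an assumption made here).
    h = len(img)
    centers = []
    for x in range(len(img[0])):
        col = [img[y][x] for y in range(h)]
        y0 = max(range(1, h - 1), key=lambda y: col[y])
        r_y = (col[y0 + 1] - col[y0 - 1]) / 2.0           # first derivative
        r_yy = col[y0 + 1] - 2.0 * col[y0] + col[y0 - 1]  # second derivative
        t = -r_y / r_yy if r_yy != 0 else 0.0
        centers.append(y0 + t)
    return centers
```

Steger proper first smooths the image with a Gaussian matched to the stripe width, which is what gives the method its noted stability; the finite differences above stand in for those Gaussian derivatives to keep the sketch short.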