
Minimum weighted norm interpolation of seismic records

Abstract

In seismic data processing, we often need to interpolate and extrapolate data at missing spatial locations. The reconstruction problem can be posed as an inverse problem where, from inadequate and incomplete data, we attempt to reconstruct the seismic wavefield at locations where measurements were not acquired. We propose a wavefield reconstruction scheme for spatially band‐limited signals. The method entails solving an inverse problem where a wavenumber‐domain regularization term is included. The regularization term constrains the solution to be spatially band‐limited and imposes a prior spectral shape. The numerical algorithm is quite efficient since the method of conjugate gradients in conjunction with fast matrix–vector multiplications, implemented via the fast Fourier transform (FFT), is adopted. The algorithm can be used to perform multidimensional reconstruction in any spatial domain.
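The scheme described in the abstract can be sketched in a few lines. The toy below is a hypothetical 1-D illustration, not the authors' code: the function name, the spectral weights, and all parameters are invented for the example. It solves the regularized normal equations by conjugate gradients, with the forward and adjoint operators applied via the FFT so no matrix is ever formed:

```python
import numpy as np

def mwni_reconstruct(d_obs, mask, weights, lam=1e-8, n_iter=100):
    """Reconstruct a 1-D spatially band-limited signal from irregular samples.

    Solves min_m ||d - S F^{-1}(w * m)||^2 + lam ||m||^2 by conjugate
    gradients on the normal equations; the weights w impose the prior
    spectral shape (illustrative sketch, not the published algorithm).
    """
    n = mask.size

    def forward(m):                      # wavenumber model -> sampled data
        return np.fft.ifft(weights * m)[mask]

    def adjoint(r):                      # sampled data -> wavenumber model
        full = np.zeros(n, dtype=complex)
        full[mask] = r
        return weights * np.fft.fft(full) / n   # adjoint of ifft is fft/n

    def normal(m):                       # A^H A + lam I, matrix-free
        return adjoint(forward(m)) + lam * m

    b = adjoint(d_obs)
    m = np.zeros(n, dtype=complex)
    r = b - normal(m)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal(p)
        alpha = rs / np.vdot(p, Ap).real
        m = m + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-24:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return np.fft.ifft(weights * m).real
```

Because every operator application costs one FFT, the same structure extends to multidimensional grids by swapping in `fftn`/`ifftn`, which is what makes the approach practical at seismic data sizes.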
... Spitz (1991) proposes an interpolation in the f-k domain, and Gülünay (2003) uses a low-frequency-derived mask function to interpolate data. Later, f-k domain interpolation developed rapidly (Liu and Sacchi, 2004; Zwartjes and Sacchi, 2007; Naghizadeh and Sacchi, 2010; Gao et al., 2010; Hennenfent et al., 2010; Curry, 2010; Gan et al., 2015). Moreover, minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004; Trad, 2009), the antileakage Fourier transform (ALFT) (Xu et al., 2005), and multicomponent matching pursuit algorithms (Özbek et al., 2010) are applied in the f-x domain. ...
Conference Paper
Full-text available
This work deals with the interpolation of seismic data in the discrete cosine transform (DCT) domain. Interpolation is a necessary step before applying pre-stack migration. In the past, the geophysics community focused on developing interpolation techniques that exploit the simplicity of the signals in the Fourier domain. For example, for a single linear event, the Hankel matrix built for each frequency realization has rank one, and missing traces increase that rank. Hence, by reducing the rank of the Hankel matrices, we can reconstruct the missing traces. This method has been used extensively in industry; however, rank-based reconstruction suffers from the computational cost of the singular value decompositions (SVDs) required for rank reduction. Moreover, it is difficult to fine-tune the correct rank for each dataset and each patch. To alleviate these issues, we introduce a sparsity-based reconstruction method in the DCT domain. The missing traces, regardless of the geometry of the events (e.g., linear or hyperbolic), appear in the 2D discrete cosine transform domain as background noise. We therefore design an optimization method and solve for a model that removes this noise from the DCT coefficients by promoting sparsity. Accordingly, our proposed method is SVD-free and does not require knowledge of the appropriate rank of the Hankel matrices.
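A minimal sketch of the sparsity-promoting idea follows. This is a 1-D ISTA (iterative soft thresholding) illustration under invented parameters; the paper's actual 2-D algorithm and its tuning are not reproduced here:

```python
import numpy as np
from scipy.fft import dct, idct

def dct_sparse_interp(d_obs, mask, lam=0.02, n_iter=300):
    """Fill missing samples by promoting sparsity of the DCT coefficients.

    ISTA sketch: with the orthonormal DCT (norm='ortho'), the adjoint of
    the inverse transform is the forward transform, and the composite
    sampling operator has norm <= 1, so a unit gradient step is safe.
    """
    n = mask.size
    x = np.zeros(n)                      # DCT-domain model
    for _ in range(n_iter):
        r = np.zeros(n)
        r[mask] = d_obs - idct(x, norm='ortho')[mask]   # residual at observed points
        x = x + dct(r, norm='ortho')                    # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft threshold
    return idct(x, norm='ortho')
```

Missing samples leave the few large DCT coefficients of the events intact while spreading low-amplitude "noise" across the spectrum; the threshold removes that leakage, which is the mechanism the abstract describes.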
... Trace regularization can be classified into two categories based on how accurately it reflects the irregular distribution of the input data. The first technique involves an alignment process that projects data at irregular points onto the nearest regular grid and then interpolates the missing data within that grid (Liu and Sacchi, 2004; Abma and Kabir, 2006; Trad, 2009; Chiu, 2014; Trad, 2014; Kim et al., 2015; Wang et al., 2020b; Yeeh et al., 2020b, 2023b). ...
... It consists of an initial step that projects the data onto the nearest grid points and a second step that fills in the missing points on the regular grid. Representative trace interpolation methods that realize this strategy include Minimum Weighted Norm Interpolation (MWNI; Liu and Sacchi, 2004; Chiu, 2014; Yeeh et al., 2020b) and Projection Onto Convex Sets (POCS; Abma and Kabir, 2006; Kim et al., 2015). This strategy is favored for its compatibility with efficient computational methods such as the FFT and convolutional neural networks (CNNs). ...
Thesis
This thesis proposes a novel method for regularizing seismic traces in multidimensional spaces using a simplex-based algorithm. A trace-based seismic trace interpolator is developed using machine learning that can be trained using only the observed seismic data, eliminating the need to construct an additional training dataset. In addition, the proposed strategy for selecting optimal input data during inference enables an efficient and logical inference process, which can improve the performance of detailed analysis and interpretation of seismic data. Validation with synthetic and field data confirms the effectiveness of the proposed method. Experiments with SEAM (Society of Exploration Geophysicists Advanced Modeling) Phase 1 synthetic data demonstrate effective regularization of common-shot-gather data that are irregularly distributed in areas where towed streamers are likely to be located. Analysis of these results suggests that the presence of an observed trace close to the query point reduces the complexity of the regularization and leads to more accurate predictions. For the field-data case, pre-stack time-migrated data from the Vincent oil field in Western Australia were employed. The regularization was performed using x and y coordinates, including an examination of the distribution of barycentric coordinates as a function of the R-value, a critical hyperparameter in the construction of input-label pairs. A comparison with a model-constrained MWNI (minimum weighted norm interpolation) method demonstrates the superiority of the proposed method in terms of computational efficiency and accuracy. In addition, the analysis of the inferred field data against the input parameters is consistent with the synthetic results, suggesting that increased trace correlation due to proximity leads to better regularization performance.
The method proposed in this thesis has several advantages over existing techniques. It accurately reflects the coordinates of irregularly located seismic traces, overcoming a limitation common to many fast-Fourier-transform-based or image-processing-derived machine learning methods. In addition, the reduced computational demand and the flexibility of the developed algorithm are expected to make seismic data processing and interpretation more efficient. Keywords: Simplex, Delaunay tessellation, Trace regularization, Trace interpolation, Machine learning
... Several multidimensional methods for regularization and interpolation of seismic data are currently available. Most popular methods are those based on Fourier kernels, such as Minimum Weighted Norm Interpolation (MWNI) (Liu and Sacchi, 2004), Anti-Leakage Fourier Transform (ALFT) (Xu et al., 2005), Matching Pursuit Fourier Interpolation (MPFI) (Nguyen and Winnett, 2011), and Projection Onto Convex Sets (POCS) interpolation (Abma and Kabir, 2006). ...
... The MWNI algorithm entails solving an inverse problem that includes a wavenumber-domain regularization term. It minimizes a wavenumber-weighted norm that incorporates a prior spectral signature of the unknown k-space data spectrum (Liu and Sacchi, 2004). The problem is solved via the Iteratively Reweighted Least-Squares (IRLS) method, with the Conjugate Gradient (CG) method as the inner solver. ...
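The IRLS structure mentioned in the snippet can be illustrated on a small 1-D problem. In the sketch below (illustrative only; the function name, parameters, and weight-update exponent are my own choices following the generic IRLS recipe for an lp target), `np.linalg.lstsq` stands in for the CG inner solver, and explicit matrices are used only because the example is tiny:

```python
import numpy as np

def irls_fourier_interp(d_obs, mask, p=1.0, n_outer=8, eps=1e-4):
    """IRLS sketch for minimum-weighted-norm Fourier reconstruction.

    Each outer pass re-estimates the spectral weights from the current
    model, so the weighted l2 norm progressively mimics an lp norm and
    the recovered spectrum sharpens (a FOCUSS-style loop, not the
    published implementation).
    """
    n = mask.size
    F_inv = np.fft.ifft(np.eye(n), axis=0)   # inverse-DFT basis (small n only!)
    A0 = F_inv[mask]                          # rows at the observed locations
    w = np.ones(n)
    spec = np.zeros(n, dtype=complex)
    for _ in range(n_outer):
        A = A0 * w                            # A0 @ diag(w) via broadcasting
        m, *_ = np.linalg.lstsq(A, d_obs.astype(complex), rcond=None)
        spec = w * m                          # current wavenumber estimate
        w = (np.abs(spec) + eps) ** (1.0 - p / 2.0)   # concentrate the weights
    return (F_inv @ spec).real
```

At realistic sizes the inner solve is done matrix-free with CG and FFTs, exactly as the snippet describes; the outer loop is unchanged.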
Article
Full-text available
The reconstruction based on the partial CRS stacking operator yields a higher signal-to-noise ratio and better continuity of events. However, irregularly sampled land data often introduce errors in the CRS attributes, creating artifacts that contaminate the seismic data. Recently, the combination of Fourier- and CRS-based reconstruction algorithms has largely solved these problems. The approach applies a Fourier-based interpolation method as a regularization operator to the original data, and the CRS attributes are then searched for in the reconstructed data. The CRS attributes determined in this way are more accurate, and they can be applied in two different forms: in the interpolation and regularization of the original data, or in the denoising of the Fourier-based reconstructed data. We propose to compare the combination of the Fourier-based interpolation methods MWNI and MPFI with the CRS-based interpolation method in order to evaluate which is the better preconditioner of the prestack data for the CRS attribute search. We applied the proposed flowcharts, combining the interpolation methods mentioned above, to land seismic data from the Tacutu basin, which is vintage data with very low fold and strong noise. The reconstructed data obtained by the combinations show significant improvements compared to the data reconstructed using the algorithms separately; in other words, the weaknesses and limitations of each method are overcome when they are applied in combination. The MWNI+CRS combination produces the best results, with the stacked section of reconstructed data showing better noise removal, enhancement of coherent events, and better definition and continuity of steeply dipping events.
... To solve the sparsity problem, various methods to interpolate crossline data have been studied. The first approach for crossline trace interpolation is based on mathematical assumptions using mathematical transformations (Liu and Sacchi, 2004; Abma and Kabir, 2006; Özbek et al., 2010; Vassallo et al., 2010; Xu et al., 2010; Wang et al., 2014). The second approach is to train the interpolation function using machine learning (ML) to find connections between subsampled data and complete data and to predict traces that fill the gaps using the trained model (Mandelli et al., 2018; Yoon et al., 2020). ...
Article
Recently, machine learning (ML) techniques have been actively applied for seismic trace interpolation. However, because most research is based on training-inference strategies that treat missing trace gather data as a 2D image with a blank area, a sufficient number of fully sampled data are required for training. This study proposes trace interpolation using ML, which uses only irregularly sampled field data, both in training and inference, by modifying the training-inference strategies of trace-based interpolation techniques. In this study, we describe a method for constructing networks that vary depending on the maximum number of consecutive gaps in seismic field data and the training method. To verify the applicability of the proposed method to field data, we applied our method to time-migrated seismic data acquired from the Vincent oilfield in the Exmouth Sub-basin area of Western Australia and compared the results with those of the conventional trace interpolation method. Both methods showed high interpolation performance, as confirmed by quantitative indicators, and the interpolation performance was uniformly good at all frequencies.
... As reservoir geology becomes more complex and monitor fields differ from the baseline, it is crucial to ensure that the areas are equalized before implementing current seismic imaging methods. To achieve this, current regularization methods can be applied to baseline and monitor seismic-response datasets (e.g., Liu and Sacchi, 2004;Coimbra et al., 2016;Camargo et al., 2021, in prestack dataset). Even after regularization, there can still be significant differences between the fields, which can lead to errors in the imaging process. ...
Conference Paper
Imaging methods in the time-migration domain often rely on less accurate velocity models than those in the depth domain. This study proposes a wave-type-equation approach to derive Reverse Time Migration (RTM) and Full-Waveform Inversion (FWI) methods in the time-migration domain. We start from the wave equation and derive the imaging condition from the adjoint equation, which enables us to perform an iterative least-squares RTM or FWI, depending on the context. We also modify the imaging operator in the inversion method to update the velocity model in time. Based on synthetic data and a time-lapse case, our results reveal changes in the simulated reservoir and demonstrate the feasibility of these imaging techniques as alternatives to seismic processing in the depth-migration domain, as well as a new capability for seismic processing in the time-migration domain.
... For example, wave-equation-based methods (Ronen, 1987; Fomel, 2003) depend on a known subsurface velocity distribution, which is usually not available. Transform-domain methods assume that the restored seismic data can be represented sparsely, or by low-rank tensors, after a transformation such as the Fourier transform (Sacchi et al., 1998; Liu and Sacchi, 2004; Trad, 2009; Naghizadeh and Innanen, 2011), the Radon transform (Thorson and Claerbout, 1985; Ibrahim and Sacchi, 2014), the curvelet transform (Kim et al., 2012; Shahidi et al., 2013), or a texture patch-based transform (Ma, 2013). The prediction filter methods (Spitz, 1991; Naghizadeh and Sacchi, 2009) rely on the linearity assumption of seismic events in the f-x domain. ...
Article
Seismic data interpolation is essential in a seismic data processing workflow, recovering data from sparse sampling. Traditional and deep-learning-based methods have been widely used in the seismic data interpolation field and have achieved remarkable results. In this paper, we propose a seismic data interpolation method through the novel application of diffusion probabilistic models (DPM). DPM transform the complex end-to-end mapping problem into a progressive denoising problem, enhancing the ability to reconstruct difficult cases of missing data, such as large proportions of missing traces and large gaps. The interpolation process begins with a standard Gaussian distribution and seismic data with missing traces, then removes noise iteratively with a U-net trained for different noise levels. Our proposed DPM-based interpolation method handles various missing-data cases, including regularly missing, irregularly missing, consecutively missing, and noisy missing traces, at different missing ratios. The generalization ability to different seismic datasets is also discussed in this article. Numerical results on synthetic and field data show satisfactory interpolation performance of the DPM-based method in comparison with the f-x prediction filtering method, the curvelet transform method, the low-dimensional manifold method (LDMM), and the coordinate attention (CA)-based U-net method, particularly in cases with large proportions of missing traces and large gaps. Diffusion is all we need for seismic data interpolation.
Article
Seismic data interpolation is a vital technology for improving seismic data density. In recent years, deep learning approaches have demonstrated significant potential in this field, yielding impressive results. Nonetheless, challenges still persist and have not been adequately addressed. First, the lack of reliable labeled training datasets induces concerns about the network’s adaptiveness under supervised learning schemes. Additionally, due to inadequate spatial sampling, aliasing frequently poses considerable difficulties for deep neural networks. In this study, we tackle the issue of aliased seismic data interpolation through self-supervised learning. A novel dip-informed neural network (DINN) is introduced to explicitly integrate local dip information into the neural network and regularize the reconstruction of missing traces. To address the training challenges associated with regularly sampled seismic data interpolation under self-supervised learning schemes, a randomized mix training algorithm is developed. The experimental results along with comparisons to existing methods using both synthetic and field datasets demonstrate the effectiveness and robustness of our approach.
Article
The increasing use of sparse acquisitions in seismic data acquisition offers advantages in cost and time savings. However, it results in irregularly sampled seismic data, adversely impacting the quality of the final images. In this paper, we propose the ResFFT-CAE network, a convolutional neural network with residual blocks based on the Fourier transform. Incorporating residual blocks allows the network to extract both high- and low-frequency features from the seismic data. The high-frequency features capture detailed information, while the low-frequency features integrate the overall data structure, facilitating superior recovery of irregularly sampled seismic data in the trace and shot domains. We evaluated the performance of the ResFFT-CAE network on both synthetic and field data. On synthetic data, we compared the ResFFT-CAE network with the compressive sensing (CS) method utilizing the curvelet transform. For field data, we conducted comparisons with other neural networks, including the convolutional autoencoder (CAE) and U-Net. The results demonstrated that the ResFFT-CAE network consistently outperformed the other approaches in all scenarios. It produced images of superior quality, characterized by lower residuals and reduced distortions. Furthermore, tests of model generalization using networks trained on synthetic data also exhibited promising results. In conclusion, the ResFFT-CAE network shows great promise as a highly efficient tool for regularizing irregularly sampled seismic data. Its excellent performance suggests potential applications in the preconditioning of seismic data analysis and processing flows.
Article
Marine vibrators have been favored in seismic acquisition in recent years because of their greater waveform control, repeatability, and lower environmental impact. However, they present a processing challenge not found with air guns: the Doppler effect. The current industry-standard method for source-motion correction is based on spatiotemporal filtering, or frequency-wavenumber (F-K) domain division; however, both correction methods generate spatial aliasing when the shot interval is coarse. This paper presents a deconvolution-interpolation method implemented in the F-K domain to correct moving marine-vibrator data. By deploying a linear composite operator within a sparse inversion framework (a mask function, an F-K domain convolution operator, a sampling matrix, and a dictionary mapping seismic data to basis functions), the method simultaneously achieves interpolation, correction, and noise attenuation of noisy Doppler-shifted marine-vibrator data acquired with a coarse shot interval. A power-function threshold model is deployed in the fast iterative soft-thresholding algorithm (FISTA) used for the inversion, leading to a substantial saving in iterations. Furthermore, the mask function preserves the effective spectrum during beyond-alias interpolation and denoising. Finally, the amount of observed data involved in the inversion can be halved by exploiting the conjugate symmetry of the Fourier transform of real signals. We demonstrate the impact of the Doppler effect and its correction, under a coarse shot interval, on seismic data and structural imaging, while considering the interference of noise. Synthetic and field data examples verify the effectiveness of our method in mitigating the aforementioned disturbances.
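The FISTA-with-decaying-threshold idea can be shown on a stripped-down problem. The sketch below is not the paper's composite operator: it uses a plain masked Fourier synthesis in 1-D, and the power-function schedule, function name, and parameters are invented stand-ins:

```python
import numpy as np

def fista_interp(d_obs, mask, n_iter=150, thr_max=4.0, thr_min=1e-3, gamma=2.0):
    """FISTA with a power-function threshold schedule (illustrative sketch).

    The threshold decays from thr_max to thr_min following a power law in
    the iteration index, so large coefficients are admitted first and
    weak leakage is suppressed, cutting the iteration count versus a
    fixed small threshold.
    """
    n = mask.size
    x = np.zeros(n, dtype=complex)       # Fourier-domain model
    y = x.copy()
    t = 1.0
    for k in range(n_iter):
        thr = thr_max * (thr_min / thr_max) ** ((k / (n_iter - 1)) ** gamma)
        r = np.zeros(n, dtype=complex)
        r[mask] = d_obs - np.fft.ifft(y)[mask]      # residual at acquired traces
        g = y + np.fft.fft(r)                       # gradient step (step = 1/L, L = 1/n)
        mag = np.abs(g)
        x_new = g * (np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-30))  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return np.fft.ifft(x).real
```

The conjugate-symmetry saving mentioned in the abstract corresponds here to replacing `fft`/`ifft` with `rfft`/`irfft` for real data, halving the stored spectrum.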
Chapter
Preface. Symbols and Acronyms.
1. Setting the Stage: Problems with Ill-Conditioned Matrices; Ill-Posed and Inverse Problems; Prelude to Regularization; Four Test Problems.
2. Decompositions and Other Tools: The SVD and Its Generalizations; Rank-Revealing Decompositions; Transformation to Standard Form; Computation of the SVE.
3. Methods for Rank-Deficient Problems: Numerical Rank; Truncated SVD and GSVD; Truncated Rank-Revealing Decompositions; Truncated Decompositions in Action.
4. Problems with Ill-Determined Rank: Characteristics of Discrete Ill-Posed Problems; Filter Factors; Working with Seminorms; The Resolution Matrix, Bias, and Variance; The Discrete Picard Condition; L-Curve Analysis; Random Test Matrices for Regularization Methods; The Analysis Tools in Action.
5. Direct Regularization Methods: Tikhonov Regularization; The Regularized General Gauss-Markov Linear Model; Truncated SVD and GSVD Again; Algorithms Based on Total Least Squares; Mollifier Methods; Other Direct Methods; Characterization of Regularization Methods; Direct Regularization Methods in Action.
6. Iterative Regularization Methods: Some Practicalities; Classical Stationary Iterative Methods; Regularizing CG Iterations; Convergence Properties of Regularizing CG Iterations; The LSQR Algorithm in Finite Precision; Hybrid Methods; Iterative Regularization Methods in Action.
7. Parameter-Choice Methods: Pragmatic Parameter Choice; The Discrepancy Principle; Methods Based on Error Estimation; Generalized Cross-Validation; The L-Curve Criterion; Parameter-Choice Methods in Action; Experimental Comparisons of the Methods.
8. Regularization Tools.
Bibliography. Index.
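Several of the chapter topics above (Tikhonov regularization, filter factors) reduce to one well-known SVD construction, which the sketch below implements. It is the standard textbook formula, not code from any particular toolbox:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized least-squares solution via SVD filter factors.

    x_lam = V diag(f_i / s_i) U^T b, with filter factors
    f_i = s_i^2 / (s_i^2 + lam^2): close to 1 for singular values well
    above lam, close to 0 for those below, which damps the noise-dominated
    components of an ill-conditioned problem.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s ** 2 / (s ** 2 + lam ** 2)          # filter factors
    return Vt.T @ (f * (U.T @ b) / s)
```

The same solution can be obtained from the regularized normal equations (A^T A + lam^2 I) x = A^T b; the SVD form makes the filtering of small singular values explicit, which is the viewpoint the filter-factor and L-curve chapters build on.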
Article
The compressional-wave reflection coefficient R(θ) given by the Zoeppritz equations is simplified. The result is arranged into three terms which contribute to three distinct features of the R(θ) curve: (1) the normal-incidence magnitude, (2) the behavior at intermediate angles of about 30 degrees, and (3) the approach to the critical angle. Thus the author approximately diagonalizes the multivariate relationship between elastic properties and curve features. The coefficient for intermediate angles has two terms: one is proportional to Δσ, the contrast in Poisson's ratio, and the other is A₀, which describes the bland decrease of R(θ) in the absence of a contrast in Poisson's ratio. When angles approaching critical are not included, R(θ) may be adequately approximated by a parabola.
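The three-term structure described above can be written out numerically. The sketch below uses the coefficients as they are commonly quoted from this approximation (the function name and test values are invented, and the expressions should be checked against the original paper before being relied on):

```python
import numpy as np

def shuey_reflectivity(theta, vp1, vp2, rho1, rho2, sigma1, sigma2):
    """Three-term approximation of R(theta) across an interface (sketch).

    R(theta) ~ R0 + [A0*R0 + dsigma/(1-sigma)^2] sin^2(theta)
                  + (1/2)(dVp/Vp)(tan^2(theta) - sin^2(theta)),
    with averages taken across the interface; coefficients as commonly
    quoted, not verified against the original derivation.
    """
    theta = np.asarray(theta, dtype=float)
    dvp, vp = vp2 - vp1, 0.5 * (vp1 + vp2)
    drho, rho = rho2 - rho1, 0.5 * (rho1 + rho2)
    dsig, sig = sigma2 - sigma1, 0.5 * (sigma1 + sigma2)
    r0 = 0.5 * (dvp / vp + drho / rho)               # normal-incidence term
    b = (dvp / vp) / (dvp / vp + drho / rho)
    a0 = b - 2.0 * (1.0 + b) * (1.0 - 2.0 * sig) / (1.0 - sig)  # "bland decrease"
    grad = a0 * r0 + dsig / (1.0 - sig) ** 2          # intermediate-angle term
    curv = 0.5 * (dvp / vp)                           # near-critical term
    s2 = np.sin(theta) ** 2
    return r0 + grad * s2 + curv * (np.tan(theta) ** 2 - s2)
```

Dropping the third term gives the two-term parabola-like form the abstract mentions for pre-critical angles.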
Article
Spatio-temporal analysis of seismic records is of particular relevance in many geophysical applications, e.g., vertical seismic profiles, plane-wave slowness estimation in seismographic array processing, and sonar array processing. The goal is to estimate, from a limited number of receivers, the 2-D spectral signature of a group of events recorded on a linear array. When the spatial coverage of the array is small, conventional f-k analysis based on the Fourier transform leads to f-k panels that are dominated by sidelobes. An algorithm that uses a Bayesian approach to design an artifact-reduced Fourier transform has been developed to overcome this shortcoming. A by-product of the method is a high-resolution periodogram. This extrapolation gives the periodogram that would have been recorded with a longer array of receivers, provided the data are a limited superposition of monochromatic plane waves. The technique is useful in array processing for two reasons: first, it provides spatial extrapolation of the array (subject to the above data assumption); second, missing receivers within and outside the aperture are treated as unknowns rather than as zeros. The performance of the technique is illustrated with synthetic examples for both broadband and narrowband data. Finally, the applicability of the procedure is assessed by analyzing the f-k spectral signature of a vertical seismic profile (VSP).
Article
Interpolation of seismic traces is an effective means of improving migration when the data set exhibits spatial aliasing. A multichannel interpolation method is described which requires neither a priori knowledge of the directions of lateral coherence of the events, nor estimation of these directions. The method is based on the fact that linear events present in a section made of equally spaced traces may be interpolated exactly, regardless of the original spatial interval, without any attempt to determine their true dips. The predictability of linear events in the f-x domain allows the missing traces to be expressed as the output of a linear system, the input of which consists of the recorded traces. Synthetic examples show that this method is insensitive to random noise and that it correctly handles curvature and lateral amplitude variations. - from Author
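The f-x predictability this abstract relies on can be demonstrated with a one-tap prediction coefficient per frequency. The toy below is a drastic simplification (the actual method estimates multichannel prediction filters to interpolate aliased multi-dip data; here a single unaliased linear event and an invented function name are assumed) and inserts one trace midway between each pair of input traces:

```python
import numpy as np

def fx_interp_midpoints(data):
    """Double the trace density of a single linear event via f-x prediction.

    For a linear event, each trace at frequency f is the previous trace
    times a constant phase shift; estimating that one-tap coefficient by
    least squares and applying half its phase predicts the midpoint traces.
    """
    nt, nx = data.shape
    D = np.fft.rfft(data, axis=0)                      # f-x domain
    num = np.sum(D[:, 1:] * np.conj(D[:, :-1]), axis=1)
    den = np.sum(np.abs(D[:, :-1]) ** 2, axis=1) + 1e-12
    a = num / den                                      # trace-to-trace shift per frequency
    a_half = np.sqrt(np.abs(a)) * np.exp(0.5j * np.angle(a))  # half-step coefficient
    mid = D[:, :-1] * a_half[:, None]                  # predicted midpoint spectra
    out = np.empty((nt, 2 * nx - 1))
    out[:, 0::2] = data                                # interleave originals...
    out[:, 1::2] = np.fft.irfft(mid, n=nt, axis=0)     # ...and predicted traces
    return out
```

Because the prediction is purely a per-frequency phase operation, it needs no dip estimate, which is exactly the property the abstract emphasizes.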