Five-dimensional interpolation: Recovering from acquisition constraints
Daniel Trad¹
¹CGGVeritas, Calgary, Alberta, Canada. E-mail: daniel.trad@cggveritas.com.
GEOPHYSICS, VOL. 74, NO. 6 (NOVEMBER-DECEMBER 2009); P. V123-V132, 12 FIGS., 1 TABLE. DOI: 10.1190/1.3245216
Manuscript received by the Editor 12 August 2008; revised manuscript received 29 May 2009; published online 25 November 2009.
© 2009 Society of Exploration Geophysicists. All rights reserved.
ABSTRACT
Although 3D seismic data are being acquired in larger volumes than ever before, the spatial sampling of these volumes is not always adequate for certain seismic processes. This is especially true of marine and land wide-azimuth acquisitions, leading to the development of multidimensional data interpolation techniques. Simultaneous interpolation in all five seismic data dimensions (inline, crossline, offset, azimuth, and frequency) has great utility in predicting missing data with correct amplitude and phase variations. Although there are many techniques that can be implemented in five dimensions, this study focused on sparse Fourier reconstruction. The success of Fourier interpolation methods depends largely on two factors: (1) having efficient Fourier transform operators that permit the use of large multidimensional data windows and (2) constraining the spatial spectrum along dimensions where seismic amplitudes change slowly so that the sparseness and band-limitation assumptions remain valid. Fourier reconstruction can be performed by enforcing a sparseness constraint on the 4D spatial spectrum obtained from frequency slices of five-dimensional windows. Binning spatial positions into a fine 4D grid facilitates the use of the FFT, which helps the convergence of the inversion algorithm and improves both the results and the computational efficiency. The 5D interpolation can successfully interpolate sparse data, improve AVO analysis, and reduce migration artifacts. Target geometries for optimal interpolation and regularization of land data can be classified in terms of whether they preserve the original data and whether they are designed to achieve surface or subsurface consistency.
INTRODUCTION
All current 3D seismic acquisition geometries have poor sam-
pling along at least one dimension. This affects migration quality,
which is based on the principle of constructive and destructive inter-
ference of data and thus is sensitive to irregular and coarse sampling
(Abma et al., 2007). Analysis of amplitude variations with offset and azimuth (AVO, AVAz), which we want to observe in the migrated domain, is also affected by the presence of gaps and undersampling.
There are many different approaches to tackling this problem. The
only perfect solution is to acquire well-sampled data; all other ap-
proaches deal with the symptoms of the problem rather than the
problem itself, and there is no guarantee that they can adequately
solve it. However, given that, in the real world, we usually cannot go
back to the field and fix the actual problem, we need to address this
issue using the processing tools at our disposal.
Most seismic algorithms implicitly apply some sort of interpola-
tion because they assume correctly sampled data. Typically, missing
samples are assumed to be zero or similar to neighboring values. The
advantage of using a separate interpolation algorithm is that more in-
telligent assumptions can be made by using a priori information. For
example, sinc interpolation uses the constraint that there is no energy
at frequencies above Nyquist. This is more reasonable than assum-
ing that the unrecorded data are zeros. Interpolation algorithms can
then be viewed as methods to precondition the data with intelligent
constraints.
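As a concrete illustration of this idea, the sketch below fills gaps in a regularly sampled trace by alternately enforcing a band limit in the Fourier domain and honoring the recorded samples, which is the same constraint that sinc interpolation exploits. The signal, gap positions, cutoff, and iteration count are hypothetical choices made only for this example; this is not the production algorithm described later in the paper.

```python
import numpy as np

def bandlimited_infill(data, known_mask, cutoff_fraction=0.25, n_iter=50):
    """Fill missing samples by iteratively enforcing a band limit (no energy
    above a chosen cutoff) and re-inserting the recorded samples.
    data: 1D array with arbitrary values at missing positions.
    known_mask: boolean array, True where samples were actually recorded."""
    n = len(data)
    filled = np.where(known_mask, data, 0.0)              # start with zeros in the gaps
    keep = np.abs(np.fft.fftfreq(n)) <= cutoff_fraction   # pass band (cycles/sample)
    for _ in range(n_iter):
        spectrum = np.fft.fft(filled)
        spectrum[~keep] = 0.0                              # band-limitation constraint
        filled = np.real(np.fft.ifft(spectrum))
        filled[known_mask] = data[known_mask]              # data-consistency constraint
    return filled

# Hypothetical example: a smooth signal with a gap of missing samples.
t = np.linspace(0, 1, 200, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)
mask = np.ones(200, dtype=bool)
mask[80:100] = False                                       # simulate unrecorded traces
recovered = bandlimited_infill(np.where(mask, signal, 0.0), mask)
print("max error in gap:", np.max(np.abs(recovered[~mask] - signal[~mask])))
```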
Interpolation of wide-azimuth land data presents many challeng-
es, some quite different from those of interpolating narrow-azimuth
marine data sets. The most familiar interpolation algorithms have
been developed for marine streamer surveys. Marine data are usual-
ly well sampled in the inline direction and coarsely sampled in the
crossline direction. Many algorithms based on Fourier interpolation
are quite successful at infilling the crossline direction, even in the presence of aliasing and complex structure (Schonewille et al., 2003; Xu et al., 2005; Abma and Kabir, 2006; Poole and Herrmann, 2007; Zwartjes and Sacchi, 2007). Land data interpolation brings additional complications because of noise, topography, and the wide-azimuth nature of the data. In particular, the azimuth distribution requires interpolation to use information from all spatial dimensions at the same time because sampling along any particular subset of the four spatial dimensions is usually very poor.
Multidimensional interpolation algorithms have become feasible even for five dimensions (Trad et al., 2005). This capability raises
new possibilities but also brings new challenges and questions. The
general principle is the same: Missing data are assumed to have a
similar nature to data recorded in their neighborhood, but the term
“neighborhood” can have different meanings in multiple dimen-
sions. An additional complication for wide-azimuth data interpola-
tion in five dimensions is that these data are always very irregular
and sparse in at least two of the four spatial dimensions because of
acquisition and processing costs.
Interpolation implementations have two different aspects: the general interpolation strategy (choice of spatial dimensions, window size, and target geometry) and the mathematical engine used to predict the new traces from some kind of model. A discussion of these two aspects follows.
INTERPOLATION STRATEGIES
Interpolation methods differ in complexity, assumptions, and operator size. Local methods (e.g., short-length prediction filters) use simple models (usually linear events) to represent the data in small windows. Therefore, they tend to be robust, fast, adaptable, and easy to implement. Their shortcoming is an inability to interpolate large gaps because the local information they need does not exist (there are no data around the trace to interpolate).
Global methods use all of the data simultaneously (up to some aperture limit defined by the physics of the problem) and use models with many degrees of freedom because they cannot assume simple data events at a large scale. They are slower, less adaptable, and harder to implement. However, they can, at least in theory, interpolate large gaps by using information supplied from distant data. Most practical methods fall between these two extremes, but the sparser the sampling, the larger the operator size needs to be. If the geology is complex, some methods with a large operator can smear geologic features and decrease resolution. A safe choice is to work with global interpolation methods that behave like local interpolators when local information is available.
A related distinction is the number of dimensions that the algo-
rithm can handle simultaneously. Usually, the time dimension is well
sampled, so only spatial dimensions need be interpolated. Although
3D seismic data have four spatial dimensions, many traditional
methods use data along one spatial dimension only. If the method is
cascaded through the different dimensions, the order of these opera-
tions becomes extremely important. However, interpolation of
sparse wide-azimuth data is more likely to succeed in a full 5D space
because often at every point there is at least one spatial direction
along which seismic amplitudes change slowly. Information along
this direction helps to constrain the problem along the other dimen-
sions where events are harder to predict.
Also, seismic amplitude variations are smoother in five dimen-
sions than they are in any projection into a lower dimensional space.
To see why, consider an analogy: Imagine the shadow of an airplane
flying over a mountain range. The shadow of the airplane is a com-
plex path even if the airplane follows a simple trajectory. Interpola-
tion of the airplane flight path is much more difficult on the 2D sur-
face shadow than in the original 3D space. A similar argument can
be made about seismic wavefield variations in the full 5D space.
My approach to interpolation is to work with large operators in 5D windows. In practice, the data window size is often constrained by the processing system capabilities, particularly when using clusters in busy computing networks. I normally apply windows of 30 × 30 lines, 1000-m offsets, and all azimuths. Larger windows are occasionally required to deal with very sparse data. The spatial dimensions in these windows are chosen so that the data look as simple as possible along each dimension. After extensive testing in different domains (shot, receiver, cross spreads, and common-offset vector domains), I have chosen the inline-crossline-offset-azimuth-frequency domain (i.e., midpoint, offset, and azimuth with NMO-corrected data; a coordinate sketch follows the list below) for the following reasons:
1) These are the dimensions where amplitude variations are most important (structure, AVO, and AVAz). Interpolation is always an approximation of the truth, and that approximation is better along the dimensions where the algorithm is applied.

2) AVO and AVAz variations are usually slow after NMO; therefore, data have limited bandwidth in the Fourier spectra along these dimensions. The azimuth dimension also has the advantage of being cyclic in nature, making it particularly fit for discrete Fourier transform representation.

3) The interval between samples in the inline/crossline dimensions (i.e., midpoints) is on the order of the common-midpoint (CMP) bin size. In the shot or receiver domain, the sampling can be as coarse as the shot/receiver line sampling (several CMP bins).
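To make the chosen coordinates concrete, the sketch below computes the midpoint (inline/crossline), offset, and azimuth of a trace from its shot and receiver positions. The grid origin, rotation, and bin sizes are hypothetical survey parameters chosen only for illustration.

```python
import numpy as np

def trace_coordinates(sx, sy, rx, ry, origin=(0.0, 0.0), grid_azimuth_deg=0.0,
                      bin_x=25.0, bin_y=25.0):
    """Return (inline, crossline, offset, azimuth) for one trace.
    sx, sy, rx, ry: shot and receiver easting/northing in meters.
    origin, grid_azimuth_deg, bin_x, bin_y: hypothetical grid definition."""
    mx, my = 0.5 * (sx + rx), 0.5 * (sy + ry)           # midpoint
    hx, hy = rx - sx, ry - sy                           # offset vector
    offset = np.hypot(hx, hy)
    azimuth = np.degrees(np.arctan2(hx, hy)) % 180.0    # azimuth from north, folded to 0-180
    # Rotate the midpoint into the inline/crossline frame and bin it.
    theta = np.radians(grid_azimuth_deg)
    dx, dy = mx - origin[0], my - origin[1]
    il = ( dx * np.cos(theta) + dy * np.sin(theta)) / bin_x
    xl = (-dx * np.sin(theta) + dy * np.cos(theta)) / bin_y
    return int(round(il)), int(round(xl)), offset, azimuth

print(trace_coordinates(sx=1000.0, sy=2000.0, rx=1600.0, ry=2800.0))
```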
Figure 1 shows a simple synthetic experiment to demonstrate the
advantage of 5D interpolation over 3D interpolation. The original
traces from an orthogonal survey were replaced by synthetic seismic
events while preserving the original recording geometry of the trac-
es. The distance between receiver lines was 500 m, with 12 receiver
lines per shot. Every second receiver line of this synthetic data set
was removed, simulating a 1000-m line interval, and then predicted
with Fourier reconstruction.
In the first case, I interpolate on a shot-by-shot basis (three dimensions), and in the second case in the inline-crossline-offset-azimuth-frequency domain (five dimensions).
Figure 1. Synthetic data: comparison of 5D versus 3D interpolation. (a) Synthetic data, one shot window. (b) After removing every second line. (c) Interpolation in 5D (inline/crossline/offset/azimuth/frequency). (d) Interpolation in 3D (receiver x, receiver y, frequency). Traces are sorted according to receiver numbers.
It is evident in Figure 1 that the algorithm can reproduce all data complexity when using five interpolation dimensions, but it is unable to repeat this using only three dimensions. Because the algorithm is exactly the same, this example shows the importance of the additional information supplied by the extra dimensions for Fourier interpolation.
The actual location of the newly created traces is an important issue for interpolation. I can distinguish six cases, of which only four are used for land data wide-azimuth surveys:

1) Preserving original data (interpolation)
   a) Decrease shot and receiver interval (decrease bin size).
   b) Decrease shot and receiver line interval (increase offset and azimuth sampling).
   c) Make the shot and receiver line interval equal to the shot and receiver interval (fully sampled). This is a particular case of 1b.

2) Replacing data totally with predicted traces (regularization)
   a) Target geometry regular on shot and receiver locations (surface consistency).
   b) Target geometry regular on CMP, offset, and azimuth (subsurface consistency).
   c) Target geometry regular on the surface and in the subsurface.
Possibilities 1a, 1b, 2a, and 2b each have important applications (see Table 1). Adapting to the acquired data by adding new shots and receivers following the original design (types 1a and 1b) has the advantage that original data can be preserved and interpolation is well constrained. Preserving the original data is generally safer than replacing all of the acquisition with interpolated data, particularly for complex noisy data from structured areas in the presence of topography. This approach works well for Kirchhoff time and depth migration. By adding new shots and receivers, the subsurface sampling can be improved according to well-understood acquisition concepts (e.g., Cordsen et al., 2000).
Type 2a, surface-consistent interpolation with perfectly regular
shot and receiver lines, is useful for wave equation migration, inter-
polation of very irregular surveys, and time-lapse applications. Type
2b, subsurface-consistent uniform coverage of offsets and azimuths
for each CMP, is desirable for migration in general. However, this
design implies a large number of shots and receivers with nonuni-
form shot and receiver fold. This is a problem for ray-tracing meth-
ods and any kind of shot or receiver processing. Therefore, its application seems to be limited to time migration and, because of the large size of the resulting data sets, to small surveys. Probably, it can also be applied well to common-offset Gaussian beam migration.
Finally, types 1c and 2c, complete coverage of shots and receiv-
ers, are desirable for all seismic processing, but the resulting large size of the data makes them impractical.
Any of these interpolation types can be used for infilling acquisition gaps. A modification of type 2b from polar to Cartesian coordinates can be used to produce common-offset vector gathers.
Table 1. Types of land data interpolation and main benefits. Usage in production (based on use from 2005 to 2008) ranges from heavy use, through occasional use, to possible but never used.

Types (columns):
1a - Increasing inline-crossline sampling
1b - Increasing offset-azimuth sampling
2a - Regularizing shot/receiver positions
2b - Regularizing CMP/offset/azimuth positions
1c and 2c - Full sampling

Main applications (rows): interpolation of large gaps; time Kirchhoff migration; depth Kirchhoff migration; wave-equation migration; merging surveys with different bin size and/or design (2D and 3D, parallel and orthogonal, etc.); increasing resolution for steep dips (relaxing antialias filters during migration); improving CIGs (AVO, AVAz, velocity analysis); 4D applications (matching time-lapse surveys).

Main use: 1a - merging; 1b - time/depth Kirchhoff migration; 2a - wave-equation migration; 2b - time migration (small surveys); 1c and 2c - not used because of high cost.

Main advantage and disadvantage: 1a - reliable, but sometimes produces time-slice artifacts; 1b - reliable, but difficult for topography; 2a - good sampling for any migration, but less reliable; 2b - great sampling, but expensive for shot-receiver processing; 1c and 2c - best possible sampling, but expensive for all processing.
Types 1c, 2b, and 2c are fully implemented and have been used in internal tests but have not yet been used in production projects. Notice that one case missing in the table is to replace all data with predictions onto a given geometry. This is the situation in 4D time lapse, where it is usual to interpolate the monitor locations to match the baseline.
For typical wide-azimuth land data surveys in a complex environment, the safest choice seems to be surface-consistent interpolation (1a and 1b). This allows one to preserve the original data untouched and to apply careful quality control (QC) to the new traces. Some QC parameters can be added to the headers, making it possible to discard new traces with high risk or low confidence after the interpolation. There are several possible quality parameters. Two QC parameters that complement each other and are often useful are (1) the distance along the four spatial dimensions between the new and the original traces and (2) the ratio of original to interpolated traces.
For the 5D configuration discussed in this paper, a meaningful cal-
culation of the first parameter requires a weighted average of the dis-
tance along inline, crossline, offset, and azimuth. The weights de-
pend on the structural complexity, residual moveout, and anisotropy.
The second parameter refers to the ratio of the number of original
traces to the number of sampling points on the 4D spatial grid used in
the numerical algorithm. This ratio is usually much smaller than the
ratio of input to output traces for a given area.
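As a rough sketch of how such QC attributes might be computed for one interpolated trace, the snippet below evaluates a weighted 4D distance to the nearest original trace and the local ratio of original traces to grid points. The weights, grid counts, and trace coordinates are hypothetical and, as noted above, would in practice depend on structural complexity, residual moveout, and anisotropy.

```python
import numpy as np

def qc_distance(new_trace, original_traces, weights):
    """Weighted distance (inline, crossline, offset, azimuth) from an
    interpolated trace to its nearest recorded neighbor.
    new_trace: array of shape (4,); original_traces: array of shape (n, 4);
    weights: array of shape (4,), hypothetical relative importance of each axis."""
    diff = original_traces - new_trace
    # Azimuth differences wrap around 180 degrees.
    diff[:, 3] = np.minimum(np.abs(diff[:, 3]), 180.0 - np.abs(diff[:, 3]))
    dist = np.sqrt(np.sum((weights * diff) ** 2, axis=1))
    return dist.min()

def qc_fill_ratio(n_original_traces, grid_shape):
    """Ratio of recorded traces to sampling points on the 4D spatial grid."""
    return n_original_traces / np.prod(grid_shape)

# Hypothetical example values.
originals = np.array([[100, 200, 850.0, 35.0],
                      [101, 200, 910.0, 40.0]])   # inline, crossline, offset (m), azimuth (deg)
new = np.array([100.0, 201.0, 880.0, 150.0])
w = np.array([1.0, 1.0, 0.01, 0.05])              # hypothetical weights per dimension
print("QC distance:", qc_distance(new, originals.astype(float), w))
print("fill ratio:", qc_fill_ratio(n_original_traces=40000, grid_shape=(30, 30, 20, 12)))
```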
INTERPOLATION ENGINE
The second major component of the interpolation problem is the
choice of a mathematical algorithm to predict new information giv-
en a set of recorded traces. One method with the flexibility to adapt to
the requirements for multidimensional global interpolation is minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004), which extends the work of Sacchi and Ulrych (1996) to multiple dimensions. MWNI is a constrained inversion algorithm. The actual data d are the result of a sampling matrix T acting on an unknown fully sampled data set m (m and d are containers for multidimensional data, and T is a mapping between these two containers).
The unknown interpolated data are constrained to have the same
multidimensional spectrum as the original data. Enforcing this con-
straint requires a multidimensional Fourier transform, which is the
most expensive part of the algorithm. To solve for the unknown data,
a cost function is defined for every frequency slice and is minimized
using standard optimization techniques. The cost function J is de-
fined, frequency by frequency, as
J = \|d - Tm\|_2^2 + \lambda \|m\|_W ,   (1)

where \|\cdot\|_2 indicates an \ell_2-norm and \|\cdot\|_W indicates an \ell_2-weighted norm calculated as

\|m\|_W = m^H F_n^{-1} p_k^{-2} F_n m .   (2)
In equation 2, F_n is the multidimensional Fourier transform, with n indicating the number of spatial dimensions of the data, m^H is the transpose conjugate of the model m, and p_k is the multidimensional spectrum of the unknown data.
The multidimensional vector p_k contains weight factors that give freedom to the model to be large where it needs to be large. They can be obtained by bootstrapping from the previous temporal frequency in a manner similar to that done for Radon transforms (Herrmann et al., 2000). These weights are defined in the \omega-k domain, where \omega is the temporal frequency and k is the wavenumber vector along each spatial dimension. They link the frequency slices, making the frequency axis behave as the fifth interpolation dimension, although frequencies are not really interpolated.
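A minimal sketch of this bootstrapping idea is shown below: the spectral weights for the current temporal frequency are taken from a smoothed, normalized version of the amplitude spectrum estimated at the previous (lower) frequency. The smoothing length, stabilization constant, and starting spectrum are hypothetical choices, not values prescribed by the method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bootstrap_weights(prev_spectrum, smooth=3, eps=1e-3):
    """Build spectral weights p_k for the current frequency slice from the
    wavenumber-domain solution of the previous (lower) frequency.
    prev_spectrum: complex nD array of Fourier coefficients from the previous slice."""
    amp = np.abs(prev_spectrum)
    amp = uniform_filter(amp, size=smooth)   # smooth to avoid overly sharp masks
    amp /= amp.max() + eps
    return amp + eps                          # keep weights strictly positive

# Hypothetical 2D example standing in for the 4D spatial spectrum.
rng = np.random.default_rng(0)
prev = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
p_k = bootstrap_weights(prev)
print(p_k.shape, bool(p_k.min() > 0.0))
```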
The model m is in the \omega-x domain (x is a vector representing all spatial directions). If k_max is the maximum wavenumber on each dimension for the maximum dip of the data, then the case of p_k = 1 for k \le k_max and p_k = 0 for k > k_max corresponds to sinc interpolation.
The variable \lambda in equation 1 is a hyperparameter that controls the balance between fitting the data and enforcing sparseness on the spectrum. This parameter is eliminated by changing the cost function in equation 1 to the standard form and using the residuals to define the number of iterations (Trad et al., 2003). The actual geophysical meaning of the spatial dimensions is irrelevant to the algorithm. However, for the method to work well, at least one of these dimensions must have a sparse spectrum or a band-limited spectrum.
The multidimensional spectrum can be calculated using discrete Fourier transforms (DFTs), which exactly honor spatial locations, or fast Fourier transforms (FFTs), which require binning the data into a grid with exactly one trace per position. In practice, I define m to be a regular supersampled 4D grid that contains many more traces than the target geometry. This allows us to use FFTs but forces us to bin the data during the interpolation.
The bin intervals along the spatial dimensions are kept small to avoid smearing and data distortion. The binning errors along the inline/crossline directions can be made negligible by subdividing CMP bins into subbins if necessary, but the CMP grid bin size usually is adequate. The binning errors along the offset and azimuth dimensions are kept small by applying NMO and static corrections before interpolation. However, data with significant residual NMO and strong anisotropy require small bin intervals along offset and azimuth. Large bins reduce computation time and improve numerical stability but reduce precision. There is a trade-off between precision and numerical stability that requires careful parameterization and implementation. A good rule of thumb for land surveys is to use, as the offset bin interval, a fraction of the receiver group interval (e.g., 1/2 or 1/4), decreasing from near to far offsets and with geologic complexity. Azimuth intervals are usually chosen in the range between 20° and 45°, decreasing with offset and anisotropy.
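The snippet below sketches how one trace might be assigned to the supersampled 4D grid under these rules of thumb: inline/crossline binned at the CMP grid size, offset binned at half the receiver group interval, and azimuth binned in 30° sectors. All numerical choices here are hypothetical illustrations rather than recommended production parameters.

```python
import numpy as np

def bin_index(inline_m, crossline_m, offset_m, azimuth_deg,
              cmp_bin=25.0, group_interval=50.0, azimuth_bin=30.0):
    """Map one trace's 4D spatial coordinates to indices on a regular grid.
    Offset bin = half the receiver group interval (one possible rule of thumb);
    azimuth is folded into 0-180 degrees and binned in fixed sectors."""
    offset_bin = 0.5 * group_interval
    i_il = int(np.floor(inline_m / cmp_bin))
    i_xl = int(np.floor(crossline_m / cmp_bin))
    i_off = int(np.floor(offset_m / offset_bin))
    i_az = int(np.floor((azimuth_deg % 180.0) / azimuth_bin))
    return i_il, i_xl, i_off, i_az

# Hypothetical trace: midpoint at 1012.5 m inline, 487.5 m crossline,
# 760-m offset, 147-degree azimuth.
print(bin_index(1012.5, 487.5, 760.0, 147.0))
```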
DFTs can also be used for the spectrum, with the advantage that they do not require binning. The problem with DFTs is computational cost. For N variables, a 1D FFT requires computation time proportional to N log N, but a DFT requires computation time proportional to N^2. This constraint makes the DFT cost proportional to N^4 in two spatial dimensions and to N^8 in four spatial dimensions. Although numerical tricks such as nonuniform FFTs (Duijndam and Schonewille, 1999) can improve these numbers dramatically, a 4D DFT algorithm is quite expensive in terms of computer time and has been unfeasible for production demands until now. Very recently, this has become possible (Gordon Poole, personal communication, 2009), although it demands large computer resources.
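To give a rough sense of this scaling (with purely illustrative numbers, ignoring constant factors and the cost of the frequency loop), the snippet below compares approximate operation counts for a 4D transform with N = 32 samples per spatial axis: an FFT applied along each axis versus a direct 4D DFT.

```python
import math

N = 32                       # hypothetical samples per spatial axis
n_points = N ** 4            # size of the 4D grid

# FFT applied along each of the 4 axes: ~ n_points * log2(N) per axis.
fft_ops = 4 * n_points * math.log2(N)

# Direct 4D DFT: every output wavenumber couples to every input point.
dft_ops = n_points ** 2      # = N ** 8

print(f"grid points: {n_points:.3e}")
print(f"FFT  ~ {fft_ops:.3e} operations")
print(f"DFT  ~ {dft_ops:.3e} operations (about {dft_ops / fft_ops:.0f}x more)")
```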
There are many differences between working with FFTs or DFTs.
On the negative side, working with FFTs has the potential to distort
data because of the binning. However, binning spatial coordinates is
often applied in seismic processing, even by methods that can use
exact spatial coordinates. For example, when working on common-
offset volumes, a binning along offset is applied. On the positive
side, when working with FFTs, the results improve because the in-
creased speed of the iterations permits us to obtain a solution close to
the one that would have been obtained after full convergence.
Furthermore, the nature of the system of equations solved at every
frequency changes, depending on whether we use regular sampling,
irregular sampling, or regular sampling as a result of binning. To un-
derstand why, let us incorporate the sparseness constraint into the
operator by transforming equation 1 from the general form to the
standard form (Hansen, 1998). By defining a new model u_k,

u_k = p_k^{-1} F_n m_x ,   (3)

which is m after transforming to the \omega-k domain and inverse weighting with p_k, equation 1 becomes

J = \|d - T F_n^{-1} p_k u_k\|_2^2 + \lambda \|u_k\|_2^2 .   (4)
The weighted norm \|\cdot\|_W now becomes an \ell_2-norm, and the operator absorbs the spectral weights. This allows us to include the sparseness constraint in the operator, i.e., to modify the basis functions of the transformation to include the sparseness constraint (Trad et al., 2003). The mapping between d and the new model u is now performed by the operator

L = T F_n^{-1} p_k .   (5)
Solving this equation for u_k requires solving the following system of equations:

(p_k^H F_n T^H T F_n^{-1} p_k + \lambda I) u_k = p_k^H F_n T^H d ,   (6)

where I is the identity matrix and the superscript H means conjugate transpose.
Because of the large size of the system of equations in our problem, on the order of 10^5 equations, the final full solution u_k is never achieved. Instead, an approximate solution is obtained by using an iterative algorithm and running only a few iterations. Components of u_k that have a weak mapping through operator L (such as low-amplitude spectral components) can be resolved with this limited number of iterations only if the system of equations 6 has good convergence. This convergence improves as the operator L = T F_n^{-1} p_k becomes closer to orthogonal, i.e., as

L^H L \approx I .   (7)
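The sketch below illustrates, in a deliberately small 1D setting, how the standard-form problem of equations 4-6 might be attacked with a few conjugate-gradient iterations on the operator L = T F_n^{-1} p_k: T is a sampling mask, F_n is a unitary FFT, and p_k are the spectral weights. The grid size, mask, flat weights, damping, and iteration count are all hypothetical, and the production algorithm operates on 4D frequency slices rather than on this toy example.

```python
import numpy as np

def forward(u_k, mask, p_k):
    """L u_k = T F^{-1} (p_k u_k): weight the spectrum, inverse FFT, sample."""
    m = np.fft.ifft(p_k * u_k, norm="ortho")
    return m[mask]

def adjoint(d, mask, p_k, n):
    """L^H d = conj(p_k) F T^H d: spray data back, FFT, apply conjugate weights."""
    full = np.zeros(n, dtype=complex)
    full[mask] = d
    return np.conj(p_k) * np.fft.fft(full, norm="ortho")

def cg_solve(d, mask, p_k, n, lam=1e-3, n_iter=10):
    """A few conjugate-gradient iterations on (L^H L + lam I) u = L^H d."""
    u = np.zeros(n, dtype=complex)
    r = adjoint(d, mask, p_k, n)            # residual of the normal equations at u = 0
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = adjoint(forward(p, mask, p_k), mask, p_k, n) + lam * p
        alpha = rs_old / np.vdot(p, Ap).real
        u += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return u

# Hypothetical 1D frequency slice: 64 grid points, roughly half of them recorded.
n = 64
rng = np.random.default_rng(1)
mask = rng.random(n) < 0.5
true_m = np.cos(2 * np.pi * 3 * np.arange(n) / n)
d = true_m[mask].astype(complex)
p_k = np.ones(n)                             # flat spectral weights for simplicity
u_k = cg_solve(d, mask, p_k, n)
m_rec = np.fft.ifft(p_k * u_k, norm="ortho").real
print("rms error at missing samples:",
      np.sqrt(np.mean((m_rec[~mask] - true_m[~mask]) ** 2)))
```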
The operator p_k is usually a diagonal operator; therefore, convergence depends mainly on the two operators F_n^{-1} and T, which in turn depend on the spatial axes and the missing samples, respectively. The operator F_n^{-1} maps the 4D spatial wavenumber k to the interpolated 4D spatial axis x. If x and k are perfectly regular, then F_n F_n^{-1} = I. If, in addition, there are no missing traces, the left side of equation 6 is diagonal and the system converges in one iteration. The wavenumber axes k (one axis per spatial dimension) can always be made regular, but the axes x depend on the input data. Binning the input data makes x regular.
The sampling operator T that connects interpolated data to acquired data depends on the missing traces. It is orthogonal when there are no missing traces. Binning the data without decreasing the sampling interval does not affect T but can introduce data distortion. Making bin intervals small to avoid data distortion introduces nonorthogonality into the system of equations 6, making convergence more difficult.
As we see from this analysis, there is a trade-off between nonorthogonality in F_n^{-1} and T. Moreover, there is a trade-off between binning and data distortion on one side and convergence of the system of equations on the other. Precisely honoring spatial coordinates slows down convergence because of the nonorthogonality of F_n^{-1}. Alternatively, F_n^{-1} can be made regular by fine binning of x, practically without loss of precision, but T then becomes nonorthogonal. Increasing the bin interval does not affect F_n^{-1} and decreases nonorthogonality in T, but it introduces data distortion.
Figure 2 illustrates the effects of sampling on the matrix distribution for the left side of the system of equations 6. Let us consider two different cases: coarse regular sampling (left column) and irregular sampling (right column). The matrix distribution for these two cases is shown when applying three different methods: coarse binning, true locations, and fine binning.
The first row, Figure 2a and b, shows the structure of the system of equations when coarse binning is used (which allows the use of FFTs). The system of equations is quite sparse, with most elements along the main diagonal; therefore, the optimization converges very quickly. In the decimation case on the left (Figure 2a), the secondary peaks produced by operator aliasing are as strong as the nonaliased component. In practice, they can be taken care of by filters and by bootstrapping weights from low to high frequency.
The second row, Figure 2c and d, shows the same for irregular sampling (true spatial locations). The system of equations is fully populated because the irregularly sampled Fourier transform introduces cross-terms between the model elements (the basis functions are nonorthogonal), and convergence is slower (Figure 2c and d). Operator aliasing, on the other hand, becomes less strong (Figure 2c).
The third row, Figure 2e and f, shows the same for fine binning. The system of equations becomes almost fully populated again, but in this case not because of F as before but because of T. The multidimensional case is more difficult to visualize, but the same ideas apply. In that case, the nonaliased directions help to constrain the solution and attenuate the effect of aliasing.
In my experience, if the bin size is not made too small, the large computational advantage of FFT algorithms over DFTs is more beneficial than the consequent increase in nonorthogonality of T. This is possible when working along spatial dimensions where the data look simple. In this case, the method can preserve localized amplitude variations better than inversion using irregularly sampled spatial locations because it is possible to iterate more and to obtain a solution closer to the one obtained with full convergence.
Figure 2. Matrix distributions A = F T^H T F^{-1} for the left side of the system of equations 6 for irregularly sampled data in two different cases, coarse sampling and gaps (columns), using coarse binning, true locations, and fine binning (rows). (a) Coarse binning on decimated data. (b) Coarse binning on data with gaps. (c) True locations on decimated data. (d) True locations on data with gaps. (e) Fine binning on decimated data. (f) Fine binning on data with gaps. The color represents amplitudes in absolute numbers, with dark blue representing zeros.
This high fidelity for localized events makes the algorithm very useful for land data, where amplitude changes very quickly. At the same time, it makes the method less useful in removing noise.
Pseudorandom noise (noise that looks random but has a coherent source) can be propagated into the new traces, becoming locally co-
herent and therefore very difficult to remove. Although this is a dis-
advantage, it is important to realize that interpolation and noise at-
tenuation have different and sometimes conflicting goals. Noise at-
tenuation should predict only signal and should filter out noncoher-
ent events. Interpolation should predict all of the data, even if the
events are very weak or badly sampled. Undersampled events can
look noncoherent; therefore, their preservation depends on the algo-
rithm not being too selective in terms of coherence. Although simul-
taneous interpolation and noise attenuation is a very desirable goal,
better chances of success are achieved by applying noise attenuation
and interpolation iteratively in sequence rather than in a single pass.
On the other hand, using exact positions has advantages: it eliminates operator aliasing and avoids the difficulties of binning in complex structure. The first aspect is balanced by working in the full 5D space of the data. The second can be addressed in most cases by using small binning intervals.
Some problems appear often, however, when binning long offsets
in structured data because of rapid amplitude variations caused by
anisotropy and residual moveout. This is a problem for land and ocean-bottom (OBC) data, where far offsets usually have poor sampling because of the rectangular shape of shot patches. Also, this issue makes binning more difficult for marine data, where residual moveout can be very significant at long offsets. A possible solution is to use larger bins along inline and crossline for far offsets, taking advantage of the fact that the Fresnel zone increases in size with offset. A practical combination would be a hybrid method in which binning is applied for near and middle offsets and exact locations are used for long offsets.
A complete discussion of the topic is beyond the scope of this paper. The comments above are intended to point out the effect of sampling on the system of equations 6 and the impact this has on predicting localized amplitude variations in the data.
APPLICATIONS AND DATA EXAMPLES
Applications for land data interpolation usually involve increasing inline and crossline sampling (decreasing bin size) and/or increasing offset and azimuth sampling (increasing fold). This classification is too broad, however, because there are many possible ways to increase the sampling, just as there are many possible geometry designs. Table 1 shows several applications classified according to the six types defined earlier. All of these cases have been used in practice, but only a few of them are often required in production projects. In this section, we review examples for the most common cases:
- increasing offset and azimuth sampling by decreasing shot and receiver line interval to improve migration (type 1b)
- increasing offset and azimuth sampling for better velocity, AVO, and AVAz estimates after migration (type 1b)
- increasing inline and crossline sampling to improve imaging of steep reflectors by relaxing antialias filters in migration algorithms (type 1a)
- increasing inline and crossline sampling to change the natural bin size, as in merging surveys acquired with different geometries (type 1a)
- infilling missing shots and receivers in acquisition gaps (type 1b in this example)
Increasing offset and azimuth sampling for imaging
The first example shows the benefits of interpolation for aniso-
tropic 3D depth migration in a Canadian Foothills area with signifi-
cant structure, topography, and noise. These surveys often can bene-
fit from interpolation because usually they have shot and receiver
lines acquired quite far apart because of high acquisition costs on to-
pographic areas. Foothills acquisitions on structurally complex ar-
eas, however, are difficult to interpolate because small imperfections
in static corrections affect coherence in the space domain. Also,
these data often are severely affected by ground roll noise, which
makes interpolation difficult particularly for shallow structures.
Figure 3a shows an orthogonal geometry (vertical lines are shots, horizontal lines are receivers). CMPs are overlain on the shot/receiver locations, with their color indicating the fold. Figure 3b shows the
target geometry that contains all of the same shots and receivers as in
Figure 3a, along with the new interpolated shot and receiver lines.
Notice that these new lines follow the geometry of the original lines,
permitting us to preserve all original data because original shots and
receivers do not need to be moved.
By halving the shot and receiver line intervals, the CMP fold increases by a factor of four, giving better sampling of offsets and azimuths. This can be seen in Figure 4a and b, which shows the offset/azimuth distribution for a group of CMPs before and after interpolation. The increased sampling benefits migration because imaging
Figure 3. Foothills survey: orthogonal geometry. Shots are located along vertical lines; receivers are located along horizontal lines. Color represents fold. (a) Before interpolation. (b) After interpolation (shot and receiver line spacing decreased by a factor of two).
algorithms rely on interference to form the correct image and therefore require proper sampling (at least two samples per cycle) to work correctly.
These benefits can be observed in the final stacked image, but they are more obvious in common-image gathers (CIGs). Figure 5 compares the CIGs with and without interpolation. The better continuity of the events will certainly bring improvements to the results of gather processing, especially for AVAz and AVO analysis (which are very sensitive to acquisition footprint) and automated processes such as tomography based on residual curvature analysis. Figure 6 shows the image stack from 0 to 1000-m offsets. The continuity of events has been improved over nearly all of the section.
Increasing offset-azimuth sampling for AVO
Prestack migration of the seismic data prior to performing AVO inversion has been advocated for more than 10 years (Mosher et al., 1996). However, the poor sampling typical of land seismic acquisition makes practical implementation of this concept quite difficult. Downton et al. (2008) demonstrate that these problems can be addressed by performing interpolation prior to prestack migration, resulting in better AVO estimates. These workers performed a series of comparisons of processing flows for AVO, including the 5D interpolation method presented in this paper. By calculating the correlation between AVO estimates and well-log information for the Viking Formation in Alberta, they concluded that the combination of
Figure 4. Foothills survey: offset/azimuth distribution in an area of (a) the original and (b) the interpolated surveys.
Figure 5. Foothills survey: migrated gathers (3D anisotropic depth migration) from (a) original data and (b) with interpolation before migration.
Figure 6. Foothills survey: migrated stack section, 0-1000 m, (a) without interpolation and (b) with interpolation before migration.
interpolation and prestack time migration provided the best AVO estimates, achieving a correlation increase from 0.39 for migration without interpolation to 0.57 for migration after interpolation. This improvement can be taken as evidence of amplitude preservation during interpolation.
Figure 7 shows CIGs from prestack time migration with and without interpolation before migration. The interpolation applied in this case was type 1b, decreasing the shot and receiver line intervals by half and increasing fold by four times. Hunt et al. (2008) give a complete description of the experiment.
Increasing inline-crossline sampling for steep dips
This example, also described in Gray et al. (2006), shows the benefits of reducing the bin size (increasing inline-crossline sampling) before migration rather than afterward. The land data set in this example was acquired over a structured area in Thailand using an orthogonal layout. The objective of the interpolation was to obtain more information on steep dips by including moderate- to high-frequency energy that the migration antialias filter removed from the original, more coarsely sampled data. For this purpose, the shot spacing along lines was halved to reduce the bin size from 12.5 × 50 m to 12.5 × 25 m. Figure 8 shows the shot locations after interpolation. The red dots indicate the locations of the original shots, and the blue dots indicate the locations of the new shots.
As a comparison, a prestack time migration stack was produced using the original acquired data; then the stack was interpolated, as shown in Figure 9a. In Figure 9b, the prestack data were interpolated before migration using 5D interpolation. The prestack interpolation produced an input data set for migration that was better sampled than the noninterpolated data set. This allowed the migration to operate with greater fidelity on the steep-dip events (in this case, applying fewer antialiasing constraints). The prestack interpolation did not add information to the data, but it did allow the migration to make better use of the information that was already in the data, producing an image with greater structural detail.
Increasing inline-crossline sampling for survey merging
Often, surveys acquired with different natural bin sizes need to be
merged into a common grid. If a survey is gridded into a bin size
Figure 7. Foothills II: CIGs from prestack time migration (a) without interpolation and (b) with interpolation before migration.
Figure 8. Thailand: shot locations after interpolation for an orthogonal geometry. Red dots are original shots; blue dots are new interpolated shots. The two large gaps are 1000-1500 m in diameter before interpolation.
Figure 9. Thailand: prestack time migration stacks. (a) Interpolation performed after stacking the migrated images. (b) Interpolation performed before migration. The improved imaging of the steep-dip event in the center of the section is evident in (b). Data courtesy of PTT Exploration and Production.
smaller than it was designed for, the CMP coverage becomes very
poor, affecting further interpretation even after migration. A solution
is to use prestack interpolation to reduce the natural bin size to match
the merge grid. This can be achieved by increasing sampling in the
inline-crossline domain or, alternatively, by using the surface-con-
sistent approach to decrease the distance between shots and receiv-
ers along lines.
Trad et al. (2008) show a case history from the Surmont bitumen project in northern Alberta. In this area, nine surveys had to be merged into a common 10 × 10-m CMP grid. Of the nine surveys in the project, one was acquired with a natural bin size of 15 × 30 m, giving poor coverage when binned in the 10 × 10-m CMP grid used for the merge. Furthermore, this survey was the only one in the project with a parallel design (the other surveys were acquired with an orthogonal geometry). By adding new shots and receivers using the method presented in this paper, the coarser survey was transformed from a parallel geometry with 15 × 30-m bin size to an orthogonal survey with 10 × 10-m bin size and twice the original fold. The original data were fully preserved, and the numbers of shots and receivers were each increased by a factor of three, so the final size was nine times the original size. The interpolation allowed this survey to merge with the other surveys in the Surmont area, avoiding the need for reshooting.
Figure 10a and b shows one CMP before and after interpolation. Figure 11a shows a time slice from the stack of the original data in the 10 × 10-m grid. Figure 11b shows the same time slice from the stack of the interpolated data.
Infilling large gaps
It is common for 3D acquisitions to have large gaps with missing shots or receivers because of inaccessibility in some areas (lakes, hills, population, etc.). Although it usually is impossible to infill
large gaps completely, decreasing their size has a large impact on mi-
gration results. The following example shows the infilling of a large
gap produced by an open-pit coal mine in an area with structured ge-
ology. This obstacle prevented shots and receivers from being de-
ployed at this location during the 3D acquisition. New shots and re-
ceivers were added on the border of the gap. Time migration of the
Figure 10. Surmont: comparison of a CMP (a) before and (b) after interpolation. Empty traces have been added to the CMP before interpolation to match the traces obtained after interpolation. The CMP in (b) was created by using information from many other CMPs not shown in the figure.
Figure 11. Surmont: time slice comparison from (a) the stack of the original data and (b) the stack of the interpolated data.
original seismic data produced the image in Figure 12a. After inter-
polation, time migration produced the image in Figure 12b. The in-
terpolated image shows an anticline underneath the open-pit mine
that was confirmed by 2D seismic and well logs acquired before the
existence of the mine.
DISCUSSION
Interpolation fills missing traces by incorporating information in
the data using a priori assumptions. This provides, for standard pro-
cesses, information that is already in the data but is not accessible
without these constraints. On the negative side, results from interpo-
lated data sometimes can be worse than results without interpola-
tion. This can happen because some processes (such as stacking) might work better for zero traces than for wrong traces. In addition,
interpolation can add spurious information in a coherent manner, a
problem that stacking is unable to fix. Interpolation must be applied
very carefully to ensure this does not happen.
Several factors play against interpolation because it is by nature
an ill-conditioned problem. Not only do unrecorded samples have to
be estimated, but they also must be located in a manner consistent
with the rest of the data, for example, at the proper elevations with
proper static information. Usually, geometries that can benefit from
interpolation do not lend themselves to good noise attenuation. Ac-
curate interpolation becomes more difficult as the structure becomes
more complex, as gaps become larger or sampling poorer, and as the
signal-to-noise ratio gets lower. Therefore, careful QC is necessary
to select interpolated data according to some quality criteria. A use-
ful criterion is the minimum distance between a trace and its original
neighbors, but many other QC parameters can be estimated and
saved into headers. After interpolation, these parameters can be used
together to decide if a new trace is acceptable.
CONCLUSIONS
Wide-azimuth geometries often are undersampled along one or
more dimensions, and interpolation is a very useful tool to precondi-
tion the data for prestack processes such as migration, AVO, and
AVAz. I have discussed a 5D interpolation technique to create new
shots and receivers for 3D land seismic data that honors amplitude
variations along inline, crossline, offset, and azimuth. Although not
intended to replace acquiring adequate data for processing, this tool
is useful for overcoming acquisition constraints and for obtaining
benefits of tighter acquisition sampling patterns, higher fold, and/or
smaller bin size.
By working in five dimensions, this interpolation method can in-
crease sampling in surveys that are problematic for lower dimen-
sional interpolators. The technique might be applied to overcome ac-
quisition constraints at a fraction of field acquisition costs, merge
data sets with different bin sizes, and eliminate differences caused by
acquisition, avoiding the need to reshoot surveys. Benefits include
more reliable prestack processes: velocity analysis, prestack migra-
tion, AVO and AVAz analyses, reduction of migration artifacts, and
improved imaging of steep dips.
ACKNOWLEDGMENTS
I would like to thank CGGVeritas for permission to publish this
paper and CGGVeritas Library Canada, PTT Exploration and Pro-
duction, PetroCanada, ConocoPhillips Canada Ltd., and Total E&P
Canada Ltd. for data examples. Special thanks are owed to several
colleagues who helped produce the interpolation examples and who
provided useful ideas and discussions on interpolation over the
years. In particular, my thanks to Bin Liu and Mauricio Sacchi,
whose work constitutes the cornerstone of this method.
REFERENCES
Abma, R., and N. Kabir, 2006, 3D interpolation of irregular data with POCS
algorithm: Geophysics, 71, no. 6, E91–E97.
Abma, R., C. Kelley, and J. Kaldy, 2007, Sources and treatments of migra-
tion-introduced artifacts and noise: 77th Annual International Meeting,
SEG, Expanded Abstracts, 2349–2353.
Cordsen, A., M. Galbraith, and J. Peirce, 2000, Planning land 3-D seismic
surveys: SEG.
Downton, J., B. Durrani, L. Hunt, S. Hadley, and M. Hadley, 2008, 5D inter-
polation, PSTM and AVO inversion for land seismic data: 70th Annual
Conference and Technical Exhibition, EAGE, Extended Abstracts, G029.
Duijndam, A. J. W., and M. A. Schonewille, 1999, Nonuniform fast Fourier
transform: Geophysics, 64, 539–551.
Gray, S., D. Trad, B. Biondi, and L. Lines, 2006, Towards wave-equation im-
aging and velocity estimation: CSEG Recorder, 31, 47–53.
Hansen, P., 1998, Rank-deficient and discrete ill-posed problems: Numerical
aspects of linear inversion: Society for Industrial and Applied Mathemat-
ics.
Herrmann, P., T. Mojesky, M. Magesan, and P. Hugonnet, 2000, De-aliased,
high-resolution Radon transforms: 70th Annual International Meeting,
SEG, Expanded Abstracts, 1953–1957.
Hunt, L., J. Downton, S. Reynolds, S. Hadley, M. Hadley, D. Trad, and B.
Durrani, 2008, Interpolation, PSTM, & AVO for Viking and Nisku targets
in West Central Alberta: CSEG Recorder, 33, 7–19.
Liu, B., and M. D. Sacchi, 2004, Minimum weighted norm interpolation of
seismic records: Geophysics, 69, 1560–1568.
Mosher, C. C., T. H. Keho, A. B. Weglein, and D. J. Foster, 1996, The impact
of migration on AVO: Geophysics, 61, 1603–1615.
Poole, G., and P. Herrmann, 2007, Multidimensional data regularization for modern acquisition geometries: 77th Annual International Meeting, SEG, Expanded Abstracts, 2585–2589.
Sacchi, M. D., and T. J. Ulrych, 1996, Estimation of the discrete Fourier transform, a linear inversion approach: Geophysics, 61, 1128–1136.
Schonewille, M. A., R. Romijn, A. J. W. Duijndam, and L. Ongkiehong,
2003, A general reconstruction scheme for dominant azimuth 3D seismic
data: Geophysics, 68, 2092–2105.
Trad, D., J. Deere, and S. Cheadle, 2005, Understanding land data interpola-
tion: 75th Annual International Meeting, SEG, Expanded Abstracts,
2158–2161.
Trad, D., M. Hall, and M. Cotra, 2008, Merging surveys with multidimen-
sional interpolation: CSPG CSEG CWLS Conference, Expanded Ab-
stracts, 172–176.
Trad, D., T. Ulrych, and M. Sacchi, 2003, Latest views of the sparse Radon
transform: Geophysics, 68, 386–399.
Xu, S., Y. Zhang, D. Pham, and G. Lambare, 2005, Antileakage Fourier trans-
form for seismic data regularization: Geophysics, 70, no. 4, V87–V95.
Zwartjes, P. M., and M. D. Sacchi, 2007, Fourier reconstruction of nonuni-
formly sampled, aliased seismic data: Geophysics, 72, no. 1, V21–V32.
Figure 12. Coal mine: comparison of time migration images (a) without and (b) with interpolation for a 3D survey acquired on top of a large gap.
... Trace regularization can be classified into two categories based on how accurately it reflects the irregular distribution of the input data. The first technique involves an alignment process that projects data at irregular points onto the nearest regular grid and then interpolate the missing data within that grid (Liu and Sacchi, 2004;Abma and Kabir, 2006;Trad, 2009;Chiu, 2014;Trad, 2014;Kim et al., 2015;Wang et al., 2020b;Yeeh et al., 2020bYeeh et al., , 2023b. ...
... Second, the proposed approach can be extended to trace regularization in higherdimensional spaces, effectively solving the regularization problems of 5-dimensional seismic data. The development of a 5-D interpolator is particularly crucial for prestack seismic trace interpolation (Trad, 2009(Trad, , 2014, which must simultaneously interpolate in the offset x, y and spatial x, y directions. Chapter 2 of this thesis mathematically introduces the interpolators for dimensions five or higher, and through the experiments in Chapters 3 and 4, it has been confirmed that regularization in the offset and spatial directions is possible. ...
Thesis
This thesis proposes a novel method for regularizing seismic traces in multidimensional spaces using a simplex-based algorithm. A trace-based seismic trace interpolator is developed using machine learning that can be trained using only observed seismic data, eliminating the need to construct the additional training dataset. In addition, the proposed strategy for selecting optimal input data during the inference strategy enables an efficient and logical inference process. This strategy can be used to improve the performance of detailed analysis and interpretation of seismic data. Validation with synthetic and field data confirms the effectiveness of the proposed method. Experiments with SEAM (Society of Exploration Geophysicists Advanced Modeling) phase 1 synthetic data demonstrated the effective regularization of observed data in common shot gather, that are irregularly distributed in areas where towed streamers are likely to be located. Analysis of the results of this experiment suggests that the presence of an observation trace close to the query point for the regularization reduces the complexity of the process and leads to more accurate predictions. In the case of field data, pre-stack time migrated data from the Vincent oil field in Western Australia was employed. The regularization was performed using x and y coordinates, including an examination of the distribution of barycentric coordinates as a function of R-value, a critical hyperparameter in the construction of input-label pairs. A comparison with model-constrained MWNI (Minimum weighted norm interpolation) method demonstrates the superiority of the proposed method in terms of computational efficiency and accuracy. In addition, the analysis between inferred field data and input parameters provides the consistent results with those from synthetic data, suggesting that increased trace correlation due to proximity leads to better regularization performance. The method proposed in this thesis has several advantages over existing techniques. It accurately reflects the coordinates of irregular locations of seismic traces, overcoming a limitation common to many fast Fourier transform-based or image processing-derived machine learning methods. In addition, the reduced computational demand and flexibility of the developed algorithm is expected to make seismic data processing and interpretation more efficient. Keywords: Simplex, Delaunay tessellation, Trace regularization, Trace interpolation, Machine learning
... The implementation steps are as follows: conduct a discrete Fourier transform (DFT) along the time axis of the input seismic data to acquire the Fourier spectrum in the frequency domain. Compute the energy curves at various angles for the Fourier spectrum and utilize these energy curves as weighting factors applied to the entire Fourier spectrum [30][31][32]. Identify the Fourier spectrum component with the highest energy after weighting (effective signal) and integrate this component into the original Fourier spectrum without weighting. Perform an inverse Fourier transform on the Fourier spectrum component obtained in the preceding step and output the result to the corresponding original input positions. ...
Article
Full-text available
Existing goaves (e.g., shafts and roadways) in mines represent important hidden dangers during the production of underlying coal seams. In this view, the accurate identification, analysis, and delimitation of the scope of goaves have become important in the 3D seismic exploration of mines. In particular, an accurate identification of the boundary swing position of goaves for 3D seismic data volumes within a certain depth interval is key and difficult at the same time. Here, a wide-band and wide-azimuth observation system was used to obtain high-resolution 3D seismic data. The complex structure of a mine was analyzed, and a seismic double processing system was applied to verify the fine processing effect of a goaf and improve the resolution of the 3D seismic data. Based on the seismic attribute identification characteristics of the goaf structure, we decided to adopt multi-attribute comprehensive identification and data fusion technologies to accurately determine the position of the goaf and of its boundary. Combining this information with the mine roadway engineering layout, we verified the accurateness and correctness of the goaf boundary location. Our study provides a good example of the accurate identification of the 3D seismic data of a roadway goaf.
... Later, f-k domain interpolation developed rapidly (Liu and Sacchi, 2004; Zwartjes and Sacchi, 2007; Naghizadeh and Sacchi, 2010; Gao et al., 2010; Hennenfent et al., 2010; Curry, 2010; Gan et al., 2015). Moreover, minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004; Trad, 2009), the antileakage Fourier transform (ALFT) (Xu et al., 2005), and multicomponent matching pursuit algorithms (Özbek et al., 2010) are applied in the f-x domain. ...
Conference Paper
Full-text available
This work deals with the interpolation of seismic data in the discrete cosine transform (DCT) domain. Interpolation is a necessary step before applying pre-stack migration. In the past, the geophysics community focused on developing interpolation techniques by exploring the simplicity of the signals in the Fourier domain. For example, for a single linear event, the Hankel matrix built for each frequency realization has a rank equal to one. However, the missing traces increase the rank of the Hankel matrix. Hence, by reducing the rank of the Hankel matrices, we can reconstruct the missing traces. This method has been extensively used in industry; however, rank-based reconstruction suffers from the computational costs of calculating the singular value decompositions (SVD) of matrices for rank reduction. Moreover, it is difficult to fine-tune the correct rank for each dataset and each patch. To alleviate these issues, we introduce a sparsity-based reconstruction method in the DCT domain. The missing traces, regardless of the geometry of the events (e.g., linear or hyperbolic), appear in the 2D discrete cosine transform domain as background noise. We therefore design an optimization method and solve for a model that reduces this noise from the DCT coefficients by promoting sparsity. Accordingly, our proposed method is SVD-free and does not require knowledge of the appropriate rank of the Hankel matrices.
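The rank-one property of the Hankel matrix mentioned in this abstract is easy to verify numerically. The snippet below (Python, with an arbitrary trace count, frequency, and moveout chosen only for illustration) builds the Hankel matrix for one frequency slice of a single linear event and shows the truncated-SVD rank reduction that the DCT-domain approach seeks to avoid.

import numpy as np
from scipy.linalg import hankel

nx, f, dt_per_trace = 21, 30.0, 0.002             # traces, frequency (Hz), moveout per trace (s)
slice_f = np.exp(-2j * np.pi * f * dt_per_trace * np.arange(nx))   # one linear event, one frequency

H = hankel(slice_f[: nx // 2 + 1], slice_f[nx // 2:])
print(np.linalg.matrix_rank(H))                    # -> 1 for a single linear event

# rank reduction by truncated SVD: the step whose cost the DCT method avoids
U, s, Vh = np.linalg.svd(H)
H_rank1 = s[0] * np.outer(U[:, 0], Vh[0])
print(np.allclose(H, H_rank1))                     # -> True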
... These methods transform data into a sparse domain and perform data reconstruction through constrained conditions. Some commonly used sparse transforms include the Fourier transform [3], non-uniform Fourier transform [4], Radon transform [5][6][7][8], Curvelet transform [9,10] and Seislet transform [11][12][13]. Among them, RT-based reconstruction approaches are very popular as they have two main advantages. ...
Article
Full-text available
The apex-shifted hyperbolic Radon transform (ASHRT) based on the Stolt-stretch operator can be implemented in the frequency domain, which improves the computational efficiency of ASHRT. However, the Stolt-stretch operator has limitations when it comes to velocity variations. Therefore, this paper introduces a new ASHRT approach based on post-stack phase shift plus interpolation (PSPI) imaging and modeling operators. This new approach is designed to better adapt to changes in medium velocity and enhance the quality of data reconstruction. When this novel transform is combined with sparsity constraints in model tests and real-data applications, the experimental results indicate that it is an effective data reconstruction tool, with superior reconstruction results compared to the traditional ASHRT based on the Stolt-stretch operator.
... With respect to the processing, visualization, and interpretation of common-offset GPR data, it is almost always necessary to have measurements spaced at a regular interval along the profile line, where this interval is small enough such that important details regarding the subsurface structure are not missed and spatial aliasing is avoided. Indeed, migration algorithms involving Fourier or finite-difference methods generally require regularly sampled data as input, and although Kirchhoff migration may be applied to irregularly sampled data (e.g., [5]), it is well known that strong artifacts can result when performing migration in the presence of aliasing and/or gaps in the data [6]- [7]. In practice, however, one or more of the following situations commonly arise when collecting GPR data: (i) there exist locations along the profile where data are missing, most commonly because the corresponding region could not be surveyed; (ii) recorded traces are spaced at irregular intervals, usually because the GPR system was set to acquire traces continuously in time but the survey speed along the profile varied; and (iii) the spacing of traces is too large to avoid spatial aliasing, often because a choice was made to favor data coverage over quality at the field site. ...
Article
Ground-Penetrating Radar (GPR) is a powerful geophysical tool for efficient, high-resolution mapping of the shallow subsurface. Because of physical and economic limitations, a commonly encountered issue is that the corresponding profiles are incomplete, in the sense that measurements needed for data visualization, interpretation, and imaging do not exist at all desired locations. Such missing data may result, for example, from (i) regions along the profile where surveying is not possible; (ii) measurements being collected at a regular interval in time but not in space; or (iii) the choice of a large measurement spacing to favor data coverage over quality. Although a number of methods have been proposed for the interpolation of GPR data to tackle this problem, they typically suffer from rather simplistic assumptions that are not satisfied for many GPR datasets. To address these shortcomings, we consider in this paper a novel GPR data reconstruction strategy based on multiple-point geostatistics, where missing GPR data are stochastically simulated conditioned on existing measurements and patterns observed in a representative training image. A key feature of our approach is the consideration of a multivariate image containing both continuous and categorical GPR reflection amplitude data, which helps to guide the simulations towards realistic structures. To demonstrate the power of this single strategy for multiple data reconstruction needs, we show its successful application to a variety of examples in the context of three problems: gap-filling, trace-spacing regularization, and trace densification.
... Towed streamer systems have been widely used in marine seismic exploration, and typically have dense inline data and sparse crossline data, which may cause a problem in obtaining high resolution multidimensional seismic images in the crossline direction (Trad, 2009). To solve the sparsity problem, various methods to interpolate crossline data have been studied. ...
Article
Recently, machine learning (ML) techniques have been actively applied for seismic trace interpolation. However, because most research is based on training-inference strategies that treat missing-trace gather data as a 2D image with a blank area, a sufficient number of fully sampled data are required for training. This study proposes trace interpolation using ML that uses only irregularly sampled field data, both in training and inference, by modifying the training-inference strategies of trace-based interpolation techniques. In this study, we describe a method for constructing networks that vary depending on the maximum number of consecutive gaps in the seismic field data, together with the corresponding training method. To verify the applicability of the proposed method to field data, we applied our method to time-migrated seismic data acquired from the Vincent oilfield in the Exmouth Sub-basin area of Western Australia and compared the results with those of a conventional trace interpolation method. Both methods showed high interpolation performance, as confirmed by quantitative indicators, and the interpolation performance was uniformly good at all frequencies.
Article
Convolutional neural networks (CNNs) have emerged as a primary method for seismic data interpolation. These networks are trained on a large dataset, and interpolation can be accomplished by inputting incomplete data into the learned network without the need for parameter selection, as in traditional methods. However, most studies have focused on two-dimensional (2D)/three-dimensional (3D) interpolation cases, leaving five-dimensional (5D) situations not properly handled. To address this issue, we propose a novel 5D convolutional block designed to capture features of seismic records in five dimensions, leading to the construction of a simple and effective 5D fully convolutional network (5D-FCN). Specifically, leveraging the linearity of convolution, we represent the 5D convolution operator with the aid of a number of 3D convolution operators. This allows for efficient implementation of 5D convolution using 3D convolution on popular deep learning platforms such as PyTorch and TensorFlow. We conduct experiments on both synthetic and field examples. Compared with the 3D-CNN-based (3D-Unet) method, which interpolates 5D data split into a combination of 3D blocks, and the damped rank-reduction (DRR) method, the 5D-FCN demonstrates superior interpolation performance. In the synthetic example, the signal-to-noise ratio (SNR) values for data recovered by DRR, 3D-Unet, and 5D-FCN are 13.54 dB, 12.08 dB, and 15.00 dB, respectively. In the field example, our method exhibits superior spatial coherence in the reconstruction results.
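One way to realize the decomposition described here, exploiting the linearity of convolution, is to unroll the kernel's last two axes into a sum of shifted 3D convolutions. The PyTorch sketch below is an illustration under that reading, with "valid" boundaries and made-up tensor sizes; it is not necessarily the 5D-FCN authors' exact construction.

import torch
import torch.nn.functional as F

def conv5d_via_conv3d(x, weight):
    """x: (N, C_in, D1, D2, D3, D4, D5); weight: (C_out, C_in, k1, k2, k3, k4, k5).
    Returns a 'valid' 5-D cross-correlation of shape (N, C_out, O1, ..., O5)."""
    N, Cin, D1, D2, D3, D4, D5 = x.shape
    Cout, _, k1, k2, k3, k4, k5 = weight.shape
    O4, O5 = D4 - k4 + 1, D5 - k5 + 1
    out = None
    for p in range(k4):
        for q in range(k5):
            # window along the 4th/5th axes, fold them into the batch dimension
            xs = x[..., p:p + O4, q:q + O5]                       # (N, Cin, D1, D2, D3, O4, O5)
            xs = xs.permute(0, 5, 6, 1, 2, 3, 4).reshape(N * O4 * O5, Cin, D1, D2, D3)
            y = F.conv3d(xs, weight[..., p, q])                   # 3-D conv over D1..D3
            y = y.reshape(N, O4, O5, Cout, *y.shape[-3:]).permute(0, 3, 4, 5, 6, 1, 2)
            out = y if out is None else out + y                   # linearity: sum the kernel taps
    return out

# tiny smoke test with arbitrary sizes
x = torch.randn(1, 1, 8, 8, 8, 6, 6)
w = torch.randn(2, 1, 3, 3, 3, 3, 3)
print(conv5d_via_conv3d(x, w).shape)   # torch.Size([1, 2, 6, 6, 6, 4, 4])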
Article
We propose an unsupervised framework to reconstruct missing data from noisy and incomplete five-dimensional (5D) seismic data. The proposed method comprises two main components: a deep learning network and a projection onto convex sets (POCS) method. The model works iteratively, passing the data between the two components and splitting the data into a group of patches using a patching scheme. Specifically, the patching scheme breaks the input data into small segments, which are then reshaped into one-dimensional vectors that feed the deep learning model. Afterward, POCS is utilized to optimize the output of the deep learning model, which is designed to denoise and interpolate the extracted patches. The proposed deep learning model consists of several blocks, namely fully connected layers, an attention block, and several skip connections. The output of the POCS algorithm is then used as the input to the deep learning model in the following iteration. The proposed model works iteratively in an unsupervised scheme in which labeled data are not required. A performance comparison with benchmark methods on several synthetic and field examples shows that the proposed method outperforms traditional methods.
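The patching scheme described in this abstract, in which the volume is cut into small blocks and each block is flattened into a one-dimensional vector for a fully connected model, can be sketched as follows (Python/NumPy, shown on a 3D volume for brevity; the patch size and the identity stand-in for the trained network are assumptions, not the authors' settings).

import numpy as np

def to_patches(volume, patch=4):
    """Cut a volume into non-overlapping cubes and flatten each cube to one row."""
    nz, ny, nx = volume.shape
    blocks = (volume
              .reshape(nz // patch, patch, ny // patch, patch, nx // patch, patch)
              .transpose(0, 2, 4, 1, 3, 5))
    return blocks.reshape(-1, patch ** 3), blocks.shape   # one row per patch

def from_patches(rows, blocks_shape, volume_shape, patch=4):
    """Fold the (possibly model-processed) rows back into the original volume."""
    blocks = rows.reshape(blocks_shape).transpose(0, 3, 1, 4, 2, 5)
    return blocks.reshape(volume_shape)

vol = np.random.rand(16, 16, 16)
rows, bshape = to_patches(vol)
# a trained fully connected network would map each row (flattened patch) to a cleaned row here
assert np.allclose(from_patches(rows, bshape, vol.shape), vol)   # folding is the exact inverse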
Article
Seismic data processing, specifically tasks like denoising and interpolation, often hinges on sparse solutions of linear systems. Group sparsity plays an essential role in this context by enhancing sparse inversion. It introduces more refined constraints, which preserve the inherent relationships within seismic data. To this end, we propose a robust Orthogonal Matching Pursuit algorithm, combined with Radon operators in the frequency-slowness (f-p) domain, to tackle the strong group-sparsity problem. This approach is vital for interpolating seismic data and attenuating erratic noise simultaneously. Our algorithm takes advantage of group sparsity by selecting the dominant slowness group in each iteration and fitting the Radon coefficients with a robust ℓ1-ℓ1 norm using an alternating direction method of multipliers (ADMM) solver. Its ability to resist erratic noise, along with its superior performance in applications such as simultaneous-source deblending and reconstruction of noisy onshore datasets, underscores the importance of group sparsity. Both synthetic and real comparative analyses further demonstrate that strong group-sparsity inversion consistently outperforms corresponding traditional methods without the group-sparsity constraint. These comparisons emphasize the necessity of integrating group sparsity in these applications, thereby showing its indispensable role in optimizing seismic data processing.
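A compact, generic version of the group-selection step can be written as below (Python/NumPy). An ordinary least-squares refit stands in for the paper's robust ℓ1-ℓ1 ADMM solve, and the dictionary, the group partition, and the iteration count are placeholders, so this illustrates group-wise matching pursuit in general rather than the published algorithm.

import numpy as np

def group_omp(A, y, groups, n_groups_pick=3):
    """A: (m, n) dictionary; groups: list of index arrays partitioning the columns of A
    (e.g., all coefficients belonging to one slowness form one group)."""
    residual = y.copy()
    chosen = []
    for _ in range(n_groups_pick):
        corr = A.conj().T @ residual
        energy = [np.linalg.norm(corr[g]) for g in groups]      # residual energy per group
        chosen.append(int(np.argmax(energy)))                   # pick the dominant group
        idx = np.concatenate([groups[g] for g in sorted(set(chosen))])
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)    # refit all selected groups together
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[idx] = coef
    return x                                                    # coefficients are zero outside chosen groups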
Article
Amplitude variation with offset (AVO) analysis is often limited to areas where multidimensional propagation effects such as reflector dip and diffractions from faults can be ignored. Migration-inversion provides a framework for extending the use of seismic amplitudes to areas where structural or stratigraphic effects are important. In this procedure, sources and receivers are downward continued into the earth using uncollapsed prestack migration. Instead of stacking the data as in normal migration, the prestack migrated data are used in AVO analysis or other inversion techniques to infer local earth properties. The prestack migration can take many forms. In particular, prestack time migration of common-angle sections provides a convenient tool for improving the lateral resolution and spatial positioning of AVO anomalies. In this approach, a plane-wave decomposition is first applied in the offset direction, separating the wavefield into different propagating angles. The data are then gathered into common-angle sections and migrated one angle at a time. The common-angle migrations have a simple form and are shown to adequately preserve amplitude as a function of angle. Normal AVO analysis is then applied to the prestack migrated data. Examples using seismic lines from the Gulf of Mexico show how migration improves AVO analysis. In the first set of examples, migration is shown to improve imaging of subtle spatial variations in bright spots. Subsequent AVO analysis reveals dim spots associated with dry-hole locations that were not resolvable using traditional processing techniques, including both conventional AVO and poststack migration. A second set of examples shows improvements in AVO response after migration is used to reduce interference from coherent noise and diffractions. A final example shows the impact of migration on the spatial location of dipping AVO anomalies. In all cases, migration improves both the signal-to-noise ratio and spatial resolution of AVO anomalies.
Chapter
Preface. Symbols and Acronyms.
1. Setting the Stage: Problems With Ill-Conditioned Matrices; Ill-Posed and Inverse Problems; Prelude to Regularization; Four Test Problems.
2. Decompositions and Other Tools: The SVD and its Generalizations; Rank-Revealing Decompositions; Transformation to Standard Form; Computation of the SVE.
3. Methods for Rank-Deficient Problems: Numerical Rank; Truncated SVD and GSVD; Truncated Rank-Revealing Decompositions; Truncated Decompositions in Action.
4. Problems with Ill-Determined Rank: Characteristics of Discrete Ill-Posed Problems; Filter Factors; Working with Seminorms; The Resolution Matrix, Bias, and Variance; The Discrete Picard Condition; L-Curve Analysis; Random Test Matrices for Regularization Methods; The Analysis Tools in Action.
5. Direct Regularization Methods: Tikhonov Regularization; The Regularized General Gauss-Markov Linear Model; Truncated SVD and GSVD Again; Algorithms Based on Total Least Squares; Mollifier Methods; Other Direct Methods; Characterization of Regularization Methods; Direct Regularization Methods in Action.
6. Iterative Regularization Methods: Some Practicalities; Classical Stationary Iterative Methods; Regularizing CG Iterations; Convergence Properties of Regularizing CG Iterations; The LSQR Algorithm in Finite Precision; Hybrid Methods; Iterative Regularization Methods in Action.
7. Parameter-Choice Methods: Pragmatic Parameter Choice; The Discrepancy Principle; Methods Based on Error Estimation; Generalized Cross-Validation; The L-Curve Criterion; Parameter-Choice Methods in Action; Experimental Comparisons of the Methods.
8. Regularization Tools.
Bibliography. Index.
Article
Seismic surveys generally have irregular areas where data cannot be acquired. These data should often be interpolated. A projection onto convex sets (POCS) algorithm using Fourier transforms allows interpolation of irregularly populated grids of seismic data with a simple iterative method that produces high-quality results. The original 2D image restoration method, the Gerchberg-Saxton algorithm, is extended easily to higher dimensions, and the 3D version of the process used here produces much better interpolations than typical 2D methods. The only parameter that makes a substantial difference in the results is the number of iterations used, and this number can be overestimated without degrading the quality of the results. This simplicity is a significant advantage because it relieves the user of extensive parameter testing. Although the cost of the algorithm is several times the cost of typical 2D methods, the method is easily parallelized and still completely practical.
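The iterative method this abstract describes can be sketched for a regularly binned volume with zeroed missing traces as follows (Python/NumPy). The linearly decaying threshold schedule is one common choice and is an assumption here; it is consistent with the observation that the number of iterations is the main parameter.

import numpy as np

def pocs_interpolate(data, mask, n_iter=50):
    """data: regularly binned volume with zeros at missing traces; mask: True where recorded."""
    est = data.copy()
    amax = np.abs(np.fft.fftn(data)).max()
    for it in range(n_iter):
        spec = np.fft.fftn(est)
        tau = amax * (1.0 - (it + 1) / n_iter)     # threshold decays towards zero
        spec[np.abs(spec) < tau] = 0.0             # keep only the strong wavenumber components
        est = np.fft.ifftn(spec).real
        est = np.where(mask, data, est)            # recorded traces are always honoured
    return est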
Article
Spatio-temporal analysis of seismic records is of particular relevance in many geophysical applications, e.g., vertical seismic profiles, plane-wave slowness estimation in seismographic array processing, and sonar array processing. The goal is to estimate, from a limited number of receivers, the 2-D spectral signature of a group of events that are recorded on a linear array of receivers. When the spatial coverage of the array is small, conventional f-k analysis based on the Fourier transform leads to f-k panels that are dominated by sidelobes. An algorithm that uses a Bayesian approach to design an artifact-reduced Fourier transform has been developed to overcome this shortcoming. A by-product of the method is a high-resolution periodogram. This extrapolation gives the periodogram that would have been recorded with a longer array of receivers if the data were a limited superposition of monochromatic plane waves. The technique is useful in array processing for two reasons. First, it provides spatial extrapolation of the array (subject to the above data assumption) and second, missing receivers within and outside the aperture are treated as unknowns rather than as zeros. The performance of the technique is illustrated with synthetic examples for both broadband and narrowband data. Finally, the applicability of the procedure is assessed by analyzing the f-k spectral signature of a vertical seismic profile (VSP).
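As a simplified stand-in for the Bayesian design described above, the fragment below (Python/NumPy, with arbitrary receiver positions, frequency, and damping) estimates a wavenumber spectrum from a short, irregular array by damped least squares, so that missing receivers enter as unknowns rather than zeros; the actual method uses a different, Bayesian weighting.

import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 500.0, 24))           # irregular receiver positions (m), with gaps
k = np.linspace(-0.02, 0.02, 201)                  # trial wavenumbers (1/m)
d = np.exp(2j * np.pi * 0.008 * x)                 # one monochromatic plane wave at this frequency

A = np.exp(2j * np.pi * np.outer(x, k))            # maps wavenumber coefficients to the receivers
mu = 0.1                                           # damping; plays the role of prior information
m = np.linalg.solve(A.conj().T @ A + mu * np.eye(k.size), A.conj().T @ d)
# |m| peaks near k = 0.008; compare with the plain adjoint spectrum A.conj().T @ d,
# whose sidelobes reflect the short, gappy array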
Article
To address the issue of inadequate sampling, typical of land seismic data, an AVO processing flow should include interpolation and prestack migration prior to the AVO inversion. It is well established that seismic data should be prestack migrated before AVO, yet the irregular sampling inherent in land data can introduce migration artifacts which distort the estimates of the AVO inversion. By performing 5D minimum weighted norm interpolation prior to the PSTM, the wavefield is better sampled leading to better migration and AVO results. By working in five dimensions, the algorithm can interpolate through gaps that are problematic for lower dimensional interpolators. The 5D interpolation is amplitude preserving and appears to improve the signal-to-noise ratio with minimal evidence of smearing. In order to support these assertions, a series of parallel processing test flows were performed and compared on a 3D seismic survey from Alberta, Canada with extensive well control. For each of these flows, Ostrander gathers at key wells, AVO attributes, and their ties to 29 wells were examined. The interpolation PSTM flow prior to AVO inversion produced the best correlation to the well control.
Article
Artifacts in migration images may be caused by noise in the input, by the geometry of the acquisition, or by the migration algorithm itself. In this paper, we will cover some of the sampling aspects of Kirchhoff migration algorithms that produce artifacts and ignore the issue of the input traces containing anything but pure signal. Particular focus will be on the parameterization of anti-aliasing filtering and some aspects of constructive and destructive summation of signals that cause artifacts. We will also consider some of the effects of the geometry of the acquisition, or the acquisition footprint, that may cause significant artifacts. In this presentation we will be concentrating on the effects of irregularities generated by the migration algorithm and the effects of irregular azimuth distributions and holes in the acquisition. We will largely ignore such issues as the position of the traces within bins and amplitude fidelity.