Five-dimensional interpolation: Recovering from acquisition constraints
Daniel Trad¹
ABSTRACT
Although 3D seismic data are being acquired in larger vol-
umes than ever before, the spatial sampling of these volumes
is not always adequate for certain seismic processes. This is
especially true of marine and land wide-azimuth acquisi-
tions, leading to the development of multidimensional data
interpolation techniques. Simultaneous interpolation in all
five seismic data dimensions (inline, crossline, offset, azimuth, and frequency) has great utility in predicting missing
data with correct amplitude and phase variations. Although
there are many techniques that can be implemented in five di-
mensions, this study focused on sparse Fourier reconstruc-
tion. The success of Fourier interpolation methods depends
largely on two factors: (1) having efficient Fourier transform operators that permit the use of large multidimensional data windows and (2) constraining the spatial spectrum along dimensions where seismic amplitudes change slowly so that the sparseness and band-limitation assumptions remain valid.
Fourier reconstruction can be performed when enforcing a
sparseness constraint on the 4D spatial spectrum obtained
from frequency slices of five-dimensional windows. Binning
spatial positions into a fine 4D grid facilitates the use of the
FFT, which helps the convergence of the inversion algorithm. This improves the results and computational efficiency. The 5D interpolation can successfully interpolate sparse data, improve AVO analysis, and reduce migration artifacts.
Target geometries for optimal interpolation and regulariza-
tion of land data can be classified in terms of whether they
preserve the original data and whether they are designed to
achieve surface or subsurface consistency.
INTRODUCTION
All current 3D seismic acquisition geometries have poor sam-
pling along at least one dimension. This affects migration quality,
which is based on the principle of constructive and destructive inter-
ference of data and thus is sensitive to irregular and coarse sampling
(Abma et al., 2007). Analysis of amplitude variations with offset and azimuth (AVO, AVAz), which we want to observe in the migrated domain, is also affected by the presence of gaps and undersampling.
There are many different approaches to tackling this problem. The
only perfect solution is to acquire well-sampled data; all other ap-
proaches deal with the symptoms of the problem rather than the
problem itself, and there is no guarantee that they can adequately
solve it. However, given that, in the real world, we usually cannot go
back to the field and fix the actual problem, we need to address this
issue using the processing tools at our disposal.
Most seismic algorithms implicitly apply some sort of interpola-
tion because they assume correctly sampled data. Typically, missing
samples are assumed to be zero or similar to neighboring values. The
advantage of using a separate interpolation algorithm is that more in-
telligent assumptions can be made by using a priori information. For
example, sinc interpolation uses the constraint that there is no energy
at frequencies above Nyquist. This is more reasonable than assum-
ing that the unrecorded data are zeros. Interpolation algorithms can
then be viewed as methods to precondition the data with intelligent
constraints.
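As a toy illustration of this idea, the sinc-interpolation constraint mentioned above can be written in a few lines (a minimal Whittaker-Shannon sketch; the function name and the sampling values are illustrative only, not from the paper):

```python
import numpy as np

def sinc_interpolate(samples, t_samples, t_query):
    """Band-limited interpolation: assumes no energy above Nyquist."""
    dt = t_samples[1] - t_samples[0]  # regular sampling interval
    # Whittaker-Shannon reconstruction from the recorded samples
    return np.array([np.sum(samples * np.sinc((tq - t_samples) / dt))
                     for tq in t_query])

t = np.arange(0.0, 1.0, 0.05)        # 20 samples at 20 Hz
signal = np.sin(2 * np.pi * 3 * t)   # 3 Hz tone, well below Nyquist (10 Hz)
t_new = np.arange(0.0, 0.9, 0.01)    # finer grid to interpolate onto
recovered = sinc_interpolate(signal, t, t_new)
```

At the recorded sample times this reconstruction returns the recorded values exactly; between samples it fills in values consistent with the band-limitation constraint rather than with zeros.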
Interpolation of wide-azimuth land data presents many challeng-
es, some quite different from those of interpolating narrow-azimuth
marine data sets. The most familiar interpolation algorithms have
been developed for marine streamer surveys. Marine data are usual-
ly well sampled in the inline direction and coarsely sampled in the
crossline direction. Many algorithms based on Fourier interpolation
are quite successful at infilling the crossline direction, even in the
presence of aliasing and complex structure (Schonewille et al., 2003; Xu et al., 2005; Abma and Kabir, 2006; Poole and Herrmann, 2007; Zwartjes and Sacchi, 2007). Land data interpolation brings additional complications because of noise, topography, and the wide-azimuth nature of the data. In particular, the azimuth distribution requires interpolation to use information from all spatial dimensions at
the same time because sampling along any particular subset of the
four spatial dimensions is usually very poor.
Multidimensional interpolation algorithms have become feasible
even for five dimensions (Trad et al., 2005). This capability raises
Manuscript received by the Editor 12 August 2008; revised manuscript received 29 May 2009; published online 25 November 2009.
1. CGGVeritas, Calgary, Alberta, Canada. E-mail: daniel.trad@cggveritas.com.
© 2009 Society of Exploration Geophysicists. All rights reserved.
GEOPHYSICS, VOL. 74, NO. 6 (NOVEMBER-DECEMBER 2009); P. V123–V132, 12 FIGS., 1 TABLE.
10.1190/1.3245216
Downloaded 26 Nov 2009 to 206.174.202.253. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/
new possibilities but also brings new challenges and questions. The
general principle is the same: Missing data are assumed to have a
similar nature to data recorded in their neighborhood, but the term
“neighborhood” can have different meanings in multiple dimen-
sions. An additional complication for wide-azimuth data interpola-
tion in five dimensions is that these data are always very irregular
and sparse in at least two of the four spatial dimensions because of
acquisition and processing costs.
Interpolation implementations have two different aspects: the
general interpolation strategy (choice of spatial dimensions, window size, and target geometry) and the mathematical engine used to
predict the new traces from some kind of model. A discussion of
these two aspects follows.
INTERPOLATION STRATEGIES
Interpolation methods differ in complexity, assumptions, and op-
erator size. Local methods (e.g., short-length prediction filters) use simple models (usually linear events) to represent the data in small
windows. Therefore, they tend to be robust, fast, adaptable, and easy
to implement. Their shortcoming is an inability to interpolate large
gaps because the local information they need does not exist (there are no data around the trace to interpolate).
Global methods use all of the data simultaneously (up to some aperture limit defined by the physics of the problem) and models with
many degrees of freedom because they cannot assume simple data
events at a large scale. They are slower, less adaptable, and harder to
implement. However, they can, at least in theory, interpolate large
gaps by using information supplied from distant data. Most practical
methods fall between these two extremes; but the sparser the sampling, the larger the operator size needs to be. If the geology is complex, some methods with a large operator can smear geologic features and decrease resolution. A safe choice is to work with global interpolation methods that behave like local interpolators when local information is available.
A related distinction is the number of dimensions that the algo-
rithm can handle simultaneously. Usually, the time dimension is well
sampled, so only spatial dimensions need be interpolated. Although
3D seismic data have four spatial dimensions, many traditional
methods use data along one spatial dimension only. If the method is
cascaded through the different dimensions, the order of these opera-
tions becomes extremely important. However, interpolation of
sparse wide-azimuth data is more likely to succeed in a full 5D space
because often at every point there is at least one spatial direction
along which seismic amplitudes change slowly. Information along
this direction helps to constrain the problem along the other dimen-
sions where events are harder to predict.
Also, seismic amplitude variations are smoother in five dimen-
sions than they are in any projection into a lower dimensional space.
To see why, consider an analogy: imagine the shadow of an airplane flying over a mountain range. The shadow traces a complex path even if the airplane follows a simple trajectory. Interpolating the airplane's flight path is much more difficult on the 2D surface (shadow) than in the original 3D space. A similar argument can be made about seismic wavefield variations in the full 5D space.
My approach to interpolation is to work with large operators in 5D
windows. In practice, the data window size is often constrained by
the processing system capabilities, particularly when using clusters
in busy computing networks. I normally apply windows of 30 × 30 lines, 1000-m offsets, and all azimuths. Larger windows are occasionally required to deal with very sparse data. The spatial dimensions in these windows are chosen so that the data look as simple as possible along each dimension. After extensive testing in different domains (shot, receiver, cross-spread, and common-offset-vector domains), I have chosen the inline-crossline-azimuth-offset-frequency domain (i.e., midpoint, offset, and azimuth) with NMO-corrected data for the following reasons:
1) These are the dimensions where amplitude variations are most important (structure, AVO, and AVAz). Interpolation is always an approximation of the truth, and that approximation is better along the dimensions where the algorithm is applied.
2) AVO and AVAz variations are usually slow (after NMO); therefore, data have limited bandwidth in the Fourier spectra along these dimensions. The azimuth dimension also has the advantage of being cyclic in nature, making it particularly fit for discrete Fourier transform representation.
3) The interval between samples in the inline and crossline dimensions (i.e., midpoints) is on the order of the common-midpoint (CMP) bin size. In the shot or receiver domain, the sampling can be as coarse as the shot/receiver line sampling (several CMP bins).
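For illustration, the mapping from shot and receiver coordinates to the midpoint-offset-azimuth coordinates described above can be sketched as follows (the function name and the azimuth convention, measured clockwise from north, are my own assumptions):

```python
import numpy as np

def midpoint_offset_azimuth(sx, sy, gx, gy):
    """Map shot (sx, sy) and receiver (gx, gy) coordinates to the
    midpoint/offset/azimuth coordinates used for the 5D windows."""
    mx, my = (sx + gx) / 2.0, (sy + gy) / 2.0  # CMP (midpoint) coordinates
    dx, dy = gx - sx, gy - sy
    offset = np.hypot(dx, dy)                  # source-receiver distance
    azimuth = np.degrees(np.arctan2(dx, dy)) % 360.0  # clockwise from north
    return mx, my, offset, azimuth

# a receiver 100 m due north of the shot: offset 100 m, azimuth 0 degrees
print(midpoint_offset_azimuth(0.0, 0.0, 0.0, 100.0))
```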
Figure 1 shows a simple synthetic experiment to demonstrate the
advantage of 5D interpolation over 3D interpolation. The original
traces from an orthogonal survey were replaced by synthetic seismic
events while preserving the original recording geometry of the trac-
es. The distance between receiver lines was 500 m, with 12 receiver
lines per shot. Every second receiver line of this synthetic data set
was removed, simulating a 1000-m line interval, and then predicted
with Fourier reconstruction.
In the first case, I interpolate on a shot-by-shot basis (three dimensions) and in the second case in the inline-crossline-offset-azimuth-frequency domain (five dimensions). It is evident in Figure 1 that the algorithm can reproduce all data complexity when using five interpolation dimensions, but it is unable to repeat this using only three dimensions. Because the algorithm is exactly the same, this example shows the importance of the additional information supplied by the extra dimensions for Fourier interpolation.

Figure 1. Synthetic data: comparison of 5D versus 3D interpolation. (a) Synthetic data, one shot (window). (b) After removing every second line. (c) Interpolation in 5D (inline/crossline/offset/azimuth/frequency). (d) Interpolation in 3D (receiver x, receiver y, frequency). Traces are sorted according to receiver numbers.
The actual location of the newly created traces is an important issue for interpolation. I can distinguish six cases, of which only four are used for land data wide-azimuth surveys:

1) Preserving original data (interpolation)
   a) Decrease shot and receiver interval (decrease bin size).
   b) Decrease shot and receiver line interval (increase offset and azimuth sampling).
   c) Make shot and receiver line interval equal to shot and receiver interval (fully sampled). This is a particular case of 1b.
2) Replacing data totally with predicted traces (regularization)
   a) Target geometry regular on shot and receiver locations (surface consistency).
   b) Target geometry regular on CMP, offset, and azimuth (subsurface consistency).
   c) Target geometry regular on surface and in subsurface.
Possibilities 1a, 1b, 2a, and 2b each have important applications (see Table 1). Adapting to the acquired data by adding new shots and receivers following the original design (types 1a and 1b) has the advantage that original data can be preserved and interpolation is well constrained. Preserving the original data is generally safer than replacing all of the acquisition with interpolated data, particularly for complex noisy data from structured areas in the presence of topography. This approach works well for Kirchhoff time and depth migration. By adding new shots and receivers, the subsurface sampling can be improved according to well-understood acquisition concepts (e.g., Cordsen et al., 2000).
Type 2a, surface-consistent interpolation with perfectly regular
shot and receiver lines, is useful for wave equation migration, inter-
polation of very irregular surveys, and time-lapse applications. Type
2b, subsurface-consistent uniform coverage of offsets and azimuths
for each CMP, is desirable for migration in general. However, this
design implies a large number of shots and receivers with nonuni-
form shot and receiver fold. This is a problem for ray-tracing methods and any kind of shot or receiver processing. Therefore, its application seems to be limited to time migration and, because of the large size of the resulting data sets, to small surveys. It can probably also be applied to common-offset Gaussian beam migration.
Finally, types 1c and 2c, complete coverage of shots and receivers, are desirable for all seismic processing, but the resulting large data size makes them impractical.
Any of these interpolation types can be used for infilling acquisi-
tion gaps. A modification of type 2b from polar to Cartesian coordi-
nates can be used to produce common-offset vector gathers. Types
Table 1. Types of land data interpolation and main benefits. The size of the circle is proportional to the real use in production (based on use from 2005 to 2008). The font style in the bottom row reflects a positive (bold) or negative (italic) remark. Legend: ● heavy use; • occasional use; ○ possible, but never used.

Types (columns): increasing inline-crossline sampling (1a); increasing offset-azimuth sampling (1b); regularizing shot/receiver positions (2a); regularizing CMP/offset/azimuth positions (2b); full sampling (1c and 2c).

Main applications:
- Interpolation of large gaps: ● ○ ○
- Time Kirchhoff migration: ● ○ ○
- Depth Kirchhoff migration: ●
- Wave-equation migration: ●
- Merging surveys with different bin size and/or design (2D and 3D, parallel and orthogonal, etc.): ● ○ ○
- Increase resolution for steep dips (relax antialias filters during migration): ● ○ ○
- Improve CIGs (AVO, AVAz, velocity analysis): ● ○ ○
- 4D applications (matching time-lapse surveys): ○ ○

Main use: merging (1a); time/depth Kirchhoff migration (1b); wave-equation migration (2a); time migration on small surveys (2b); not used because of high cost (1c and 2c).

Main advantage-disadvantage: reliable, but sometimes produces time-slice artifacts (1a); reliable, but difficult for topography (1b); good sampling for any migration, but less reliable (2a); great sampling, but expensive for shot-receiver processing (2b); best possible sampling, but expensive for all processing (1c and 2c).
1c, 2b, and 2c are fully implemented and have been used in internal
tests but have not yet been used in production projects. Notice that
one case missing in the table is to replace all data with predictions
onto a given geometry. This is the situation in 4D time lapse, where it
is usual to interpolate the monitor locations to match the baseline.
For typical wide-azimuth land data surveys in a complex environment, the safest choice seems to be surface-consistent interpolation (1a and 1b). This allows one to preserve the original data untouched and to apply careful quality control (QC) to the new traces. Some QC parameters can be added to the headers, making it possible to discard new traces with high risk or low confidence after the interpolation.
There are several possible quality parameters. Two QC parameters that complement each other and are often useful are (1) the distance along the four spatial dimensions between the new and the original traces and (2) the ratio of original to interpolated traces.
For the 5D configuration discussed in this paper, a meaningful cal-
culation of the first parameter requires a weighted average of the dis-
tance along inline, crossline, offset, and azimuth. The weights de-
pend on the structural complexity, residual moveout, and anisotropy.
The second parameter refers to the ratio of the number of original
traces to the number of sampling points on the 4D spatial grid used in
the numerical algorithm. This ratio is usually much smaller than the
ratio of input to output traces for a given area.
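A minimal sketch of these two QC values, assuming trace positions are available as inline/crossline/offset/azimuth tuples (the function name, the example weights, and the inputs are illustrative, not the production implementation):

```python
import numpy as np

def qc_for_new_trace(new_pos, recorded_pos, weights, n_orig, n_grid):
    """Two complementary QC values for one interpolated trace.

    new_pos      : (4,) inline, crossline, offset, azimuth of the new trace
    recorded_pos : (n, 4) same coordinates for nearby recorded traces
    weights      : (4,) relative importance of each spatial dimension
                   (in practice chosen from structure, residual moveout,
                   and anisotropy)
    n_orig       : number of original traces in the window
    n_grid       : number of sampling points on the 4D spatial grid
    """
    # (1) weighted 4D distance to the nearest recorded trace
    diff = recorded_pos - new_pos
    dist = np.sqrt((weights * diff**2).sum(axis=1)).min()
    # (2) ratio of original traces to 4D grid points
    ratio = n_orig / n_grid
    return dist, ratio

recorded = np.array([[10.0, 20.0, 500.0, 30.0],
                     [11.0, 20.0, 520.0, 35.0]])
new = np.array([10.0, 21.0, 510.0, 32.0])
w = np.array([1.0, 1.0, 0.01, 0.1])
print(qc_for_new_trace(new, recorded, w, n_orig=2, n_grid=1000))
```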
INTERPOLATION ENGINE
The second major component of the interpolation problem is the
choice of a mathematical algorithm to predict new information given a set of recorded traces. One method with the flexibility to adapt to the requirements for multidimensional global interpolation is minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004), which extends the work of Sacchi and Ulrych (1996) to multiple dimensions. MWNI is a constrained inversion algorithm. The actual data d are the result of a sampling matrix T acting on an unknown fully sampled data set m (m and d are containers for multidimensional data, and T is a mapping between these two containers).
The unknown (interpolated) data are constrained to have the same
multidimensional spectrum as the original data. Enforcing this con-
straint requires a multidimensional Fourier transform, which is the
most expensive part of the algorithm. To solve for the unknown data,
a cost function is defined for every frequency slice and is minimized
using standard optimization techniques. The cost function J is de-
fined, frequency by frequency, as
J = ‖d − Tm‖^2 + λ‖m‖_W,  (1)

where ‖·‖^2 indicates an ℓ2-norm and ‖·‖_W indicates an ℓ2-weighted norm calculated as

‖m‖_W = m^H F_n^{−1} |p_k|^{−2} F_n m.  (2)
In equation 2, F_n is the multidimensional Fourier transform, with n indicating the number of spatial dimensions of the data; m^H is the transpose conjugate of the model m; and p_k is the multidimensional spectrum of the unknown data.
The multidimensional vector p_k contains weight factors that give the model freedom to be large where it needs to be large. They can be obtained by bootstrapping from the previous temporal frequency, in a manner similar to that done for Radon transforms (Herrmann et al., 2000). These weights are defined in the ω-k domain, where ω is the temporal frequency and k is the wavenumber vector along each spatial dimension. They link the frequency slices, making the frequency axis behave as the fifth interpolation dimension, although frequencies are not really interpolated.
The model m is in the ω-x domain (x is a vector representing all spatial directions). If k_max is the maximum wavenumber on each dimension for the maximum dip of the data, then the case of p_k = 1 for k ≤ k_max and p_k = 0 for k > k_max corresponds to sinc interpolation.
The variable λ in equation 1 is a hyperparameter that controls the balance between fitting the data and enforcing sparseness on the spectrum. This parameter is eliminated by changing the cost function in equation 1 to the standard form and using the residuals to define the number of iterations (Trad et al., 2003). The actual geophysical meaning of the spatial dimensions is irrelevant to the algorithm. However, for the method to work well, at least one of these dimensions must have a sparse or band-limited spectrum.
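To make the formulation concrete, here is a minimal 1D sketch of one frequency-slice solve (the paper's case is 4D; the unitary FFT convention, the conjugate-gradient loop, the damping value, and the band-limiting choice of p_k are illustrative assumptions, not the production code):

```python
import numpy as np

def mwni_slice_1d(d, mask, pk, lam=1e-6, n_iter=30):
    """Solve one frequency slice of equation 6 in 1D by conjugate gradients.

    d    : binned data for this frequency (zeros at missing traces)
    mask : 1.0 where a trace was recorded, 0.0 where missing (operator T)
    pk   : spectral weights p_k (here a simple band-limiting mask)
    """
    N = len(d)
    F = lambda m: np.fft.fft(m) / np.sqrt(N)    # unitary forward FFT (F_n)
    Fi = lambda u: np.fft.ifft(u) * np.sqrt(N)  # unitary inverse FFT
    L = lambda u: mask * Fi(pk * u)             # L = T F^{-1} p_k (eq. 5)
    LH = lambda r: np.conj(pk) * F(mask * r)    # adjoint L^H
    u = np.zeros(N, dtype=complex)
    r = LH(d)                                   # residual of eq. 6 at u = 0
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = LH(L(p)) + lam * p                 # (L^H L + lam*I) p
        alpha = rs / np.vdot(p, Ap).real
        u = u + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-28:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return Fi(pk * u)                           # m = F^{-1} p_k u

# band-limited signal decimated by two, recovered on the fine grid
N = 64
x = np.arange(N)
m_true = np.cos(2 * np.pi * 2 * x / N) + 0.5 * np.cos(2 * np.pi * 5 * x / N)
mask = np.tile([1.0, 0.0], N // 2)              # every second trace missing
k = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
pk = (np.abs(k) <= 8).astype(float)             # band limit
m_rec = mwni_slice_1d(mask * m_true, mask, pk)
```

With p_k chosen as a fixed band-limiting mask, this reduces to sinc interpolation; in the 5D implementation the weights would instead be bootstrapped from the previous temporal frequency.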
The multidimensional spectrum can be calculated using discrete Fourier transforms (DFTs), which exactly honor space locations, or fast Fourier transforms (FFTs), which require binning the data into a grid with exactly one trace per position. In practice, I define m to be a
regular supersampled 4D grid that contains many more traces than
the target geometry. This allows us to use FFTs but forces us to bin
the data during the interpolation.
The bin intervals along the spatial dimensions are kept small to
avoid smearing and data distortion. The binning errors along the in-
line/crossline directions can be made negligible by subdividing
CMP bins into subbins if necessary, but CMP grid bin size usually is
adequate. The binning errors along offset and azimuth dimensions
are kept small by applying NMO and static corrections before inter-
polation. However, data with significant residual NMO and strong
anisotropy require small bin intervals along offset and azimuth.
Large bins reduce computation time and improve numerical stability
but reduce precision. There is a trade-off between precision and numerical stability that requires careful parameterization and implementation. A good rule of thumb for land surveys is to use, as offset bin interval, a fraction of the receiver group interval (e.g., 1/2 or 1/4), decreasing from near to far offsets and with geologic complexity. Azimuth intervals are usually chosen between 20° and 45°, decreasing with offset and anisotropy.
DFTs can also be used for the spectrum, with the advantage that they do not require binning. The problem with DFTs is computational cost. For N variables, a 1D FFT requires computation time proportional to N log N, but a DFT requires computation time proportional to N^2. This makes the cost in two spatial dimensions proportional to N^4 and in four spatial dimensions proportional to N^8. Although numerical tricks such as nonuniform FFTs (Duijndam and Schonewille, 1999) can improve these numbers dramatically, a 4D DFT algorithm is quite expensive in terms of computer time and has been unfeasible for production demands until now. Very recently, this has become possible (Gordon Poole, personal communication, 2009), although it demands large computer resources.
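The cost gap can be checked with a quick back-of-the-envelope count (orders of magnitude only; constant factors are ignored, and N = 100 is an arbitrary choice):

```python
import math

N = 100                               # grid points per spatial dimension
n_grid = N**4                         # points in a 4D spatial grid
fft_ops = n_grid * math.log2(n_grid)  # 4D FFT: ~N^4 log N operations
dft_ops = n_grid**2                   # brute-force 4D DFT: ~N^8 operations
print(f"DFT/FFT cost ratio: {dft_ops / fft_ops:.1e}")
```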
There are many differences between working with FFTs or DFTs.
On the negative side, working with FFTs has the potential to distort
data because of the binning. However, binning spatial coordinates is
often applied in seismic processing, even by methods that can use
exact spatial coordinates. For example, when working on common-
offset volumes, a binning along offset is applied. On the positive
side, when working with FFTs, the results improve because the in-
creased speed of the iterations permits us to obtain a solution close to
the one that would have been obtained after full convergence.
Furthermore, the nature of the system of equations solved at every
frequency changes, depending on whether we use regular sampling, irregular sampling, or regular sampling as a result of binning. To understand why, let us incorporate the sparseness constraint into the operator by transforming equation 1 from the general form to the standard form (Hansen, 1998). By defining a new model u_k,

u_k = p_k^{−1} F_n m_x,  (3)
which is m after transforming to the ω-k domain and inverse weighting with p_k, equation 1 becomes

J = ‖d − T F_n^{−1} p_k u_k‖^2 + λ‖u_k‖^2.  (4)
The weighted norm ‖·‖_W now becomes an ℓ2-norm, and the operator absorbs the spectral weights. This allows us to include the sparseness constraint into the operator, i.e., to modify the basis functions of the transformation to include the sparseness constraint (Trad et al., 2003). The mapping between d and the new model u is now performed by the operator

L = T F_n^{−1} p_k.  (5)
Solving this equation for u_k requires solving the following system of equations:

(p_k^H F_n T^H T F_n^{−1} p_k + λI) u_k = p_k^H F_n T^H d,  (6)

where I is the identity matrix and the superscript H denotes conjugate transpose.
Because of the large size of the system of equations in our problem, on the order of 10^5 equations, the final full solution u_k is never achieved. Instead, an approximate solution is obtained by using an iterative algorithm and running only a few iterations. Components of u_k that have a weak mapping through operator L (such as low-amplitude spectral components) can be resolved with this limited number of iterations only if the system of equation 6 has good convergence. This convergence improves as the operator L = T F_n^{−1} p_k becomes closer to orthogonal, i.e., as

L^H L → I.  (7)
The operator p_k is usually a diagonal operator; therefore, convergence depends mainly on the two operators F_n^{−1} and T, which in turn depend on the spatial axes and the missing samples, respectively. The operator F_n^{−1} maps the 4D spatial wavenumber k to the interpolated 4D spatial axis x. If x and k are perfectly regular, then F_n F_n^{−1} = I. If, in addition, there are no missing traces, the left side of equation 6 is diagonal and the system converges in one iteration. The wavenumber axes k (one axis per spatial dimension) can always be made regular, but the axes x depend on the input data. Binning the input data makes x regular.
The sampling operator T that connects interpolated data to ac-
quired data depends on the missing traces. It is orthogonal when
there are no missing traces. Binning the data without decreasing
sampling interval does not affect T but can introduce data distortion.
Making bin intervals small to avoid data distortion introduces nonor-
thogonality into the system of equations 6, making convergence
more difficult.
As we see from this analysis, there is a trade-off between nonorthogonality in F_n^{−1} and in T. Moreover, there is a trade-off between binning (and data distortion) on one side and convergence of the system of equations on the other. Precisely honoring spatial coordinates slows down convergence because of the nonorthogonality of F_n^{−1}. Alternatively, F_n^{−1} can be made regular by fine binning of x, practically without loss of precision, but then T becomes nonorthogonal. Increasing the bin interval does not affect F_n^{−1} and decreases nonorthogonality in T but introduces data distortion.
Figure 2 illustrates the effect of sampling on the matrix distribution for the left side of the system of equation 6. Let us consider two different cases: coarse regular sampling (left column) and irregular sampling (right column). The matrix distribution for these two cases is shown when applying three different methods: coarse binning, true locations, and fine binning.
The first row, Figure 2a and b, shows the structure of the system of equations when coarse binning is used (which allows the use of FFTs). The system of equations is quite sparse, with most elements along the main diagonal; therefore, the optimization converges very quickly. In the decimation case on the left (Figure 2a), the secondary peaks produced by operator aliasing are as strong as the nonaliased component. In practice, they can be taken care of by filters and by bootstrapping weights from low to high frequency.

The second row, Figure 2c and d, shows the same for irregular sampling (true spatial locations). The system of equations is fully populated because the irregularly sampled Fourier transform introduces cross-terms between the model elements (the basis functions are nonorthogonal), and convergence is slower (Figure 2c and d). Operator aliasing, on the other hand, becomes weaker (Figure 2c). The third row, Figure 2e and f, shows the same for fine binning. The system of equations becomes almost fully populated again, but in this case because of T rather than F. The multidimensional case is more difficult to visualize, but the same ideas apply. In that case, the nonaliased directions help to constrain the solution and attenuate the effect of aliasing.
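The structure of these matrices is easy to reproduce for a 1D toy problem (a small numpy sketch with p_k set to the identity; the grid size and the masks are arbitrary choices):

```python
import numpy as np

def normal_matrix(mask):
    """A = F T^H T F^{-1} for a 1D sampling mask (p_k = identity)."""
    N = len(mask)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
    return F @ np.diag(mask) @ np.conj(F).T  # T^H T = T for a 0/1 mask

# all bins occupied: the system is the identity, one-iteration convergence
full = normal_matrix(np.ones(8))
# every second bin empty (fine binning of decimated data): the diagonal
# halves and aliasing cross-terms appear at wavenumber offset N/2
deci = normal_matrix(np.array([1.0, 0.0] * 4))
print(np.round(np.abs(deci), 2))
```

With every bin occupied the matrix is the identity, so the solver converges in one iteration; emptying every second bin halves the diagonal and creates the aliasing cross-terms that slow convergence.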
In my experience, if the bin size is not made too small, the large computational advantage of FFT algorithms over DFTs is more beneficial than the consequent increase of nonorthogonality in T. This is possible when working along spatial dimensions where the data look simple. In this case, the method can preserve localized amplitude variations better than inversion using irregularly sampled spatial locations because it is possible to iterate more and to obtain a solution closer to the one obtained with full convergence.

Figure 2. Matrix distributions for the left side of the system of equations 6 for irregularly sampled data in two different cases, coarse sampling and gaps (columns), using coarse binning, true locations, and fine binning (rows); the quantity shown is A = F T^H T F^{−1}. (a) Coarse binning on decimated data. (b) Coarse binning on data with gaps. (c) True locations on decimated data. (d) True locations on data with gaps. (e) Fine binning on decimated data. (f) Fine binning on data with gaps. The color represents amplitudes in absolute value, with dark blue representing zeros.

This high fidelity for localized events makes the algorithm very useful for land
data, where amplitude changes very quickly. At the same time, it
makes the method less useful in removing noise.
Pseudorandom noise (noise that looks random but has a coherent source) can be propagated into the new traces, becoming locally coherent and therefore very difficult to remove. Although this is a disadvantage, it is important to realize that interpolation and noise attenuation have different and sometimes conflicting goals. Noise attenuation should predict only signal and should filter out noncoherent events. Interpolation should predict all of the data, even if the events are very weak or badly sampled. Undersampled events can look noncoherent; therefore, their preservation depends on the algorithm not being too selective in terms of coherence. Although simultaneous interpolation and noise attenuation is a very desirable goal, better chances of success are achieved by applying noise attenuation and interpolation iteratively in sequence rather than in a single pass.
On the other hand, using exact positions has the advantage of eliminating operator aliasing, whereas binning has difficulties with complex structure. The first aspect is balanced by working in the full 5D space of the data; the second can be addressed in most cases by using small binning intervals.
Some problems appear often, however, when binning long offsets in structured data because of rapid amplitude variations caused by anisotropy and residual moveout. This is a problem for land and ocean-bottom cable (OBC) data, where far offsets usually have poor sampling because of the rectangular shape of shot patches. This issue also makes binning more difficult for marine data, where residual moveout can be very significant at long offsets. A possible solution is to use larger bins along inline and crossline for far offsets, taking advantage of the fact that the Fresnel zone increases in size with offset. A practical combination would be a hybrid method where binning is applied for near and middle offsets and exact locations are used for long offsets.
A complete discussion of this topic is beyond the scope of this paper. The comments above are intended to point out the effect of sampling in the system of equation 6 and the impact this has on predicting localized amplitude variations in the data.
APPLICATIONS AND DATA EXAMPLES
Applications for land data interpolation usually involve increasing inline and crossline sampling (decreasing bin size) and/or increasing offset and azimuth sampling (increasing fold). This classification is too broad, however, because there are many possible ways to increase the sampling, just as there are many possible geometry designs. Table 1 shows several applications classified according to the six types defined earlier. All of these cases have been used in practice, but only a few of them are often required in production projects. In this section, we review examples for the most common cases:
• increasing offset and azimuth sampling by decreasing shot and receiver line intervals to improve migration (type 1b)
• increasing offset and azimuth sampling for better velocity, AVO, and AVAz estimates after migration (type 1b)
• increasing inline and crossline sampling to improve imaging of steep reflectors by relaxing antialias filters in migration algorithms (type 1a)
• increasing inline and crossline sampling to change the natural bin size, as in merging surveys acquired with different geometries (type 1a)
• infilling missing shots and receivers in acquisition gaps (type 1b in this example)
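The fold increase obtained in the type 1b cases follows from a density argument: fold is proportional to the product of shot density and receiver density, so halving both line intervals doubles each density and quadruples the fold. A minimal sketch under an all-azimuth recording assumption (the function name and all spacings below are illustrative, not values from any survey in this paper):

```python
def relative_fold(shot_line_int, rec_line_int, side=1000.0,
                  shot_station=50.0, rec_station=50.0):
    """Relative fold for an orthogonal patch of size side x side:
    proportional to (number of shots) x (number of receivers),
    assuming every shot is recorded into every receiver."""
    n_shots = (side / shot_line_int) * (side / shot_station)
    n_recs = (side / rec_line_int) * (side / rec_station)
    return n_shots * n_recs

# halving both line intervals (400 m -> 200 m) quadruples the fold
print(relative_fold(200.0, 200.0) / relative_fold(400.0, 400.0))  # -> 4.0
```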
Increasing offset and azimuth sampling for imaging
The first example shows the benefits of interpolation for anisotropic 3D depth migration in a Canadian Foothills area with significant structure, topography, and noise. These surveys often can benefit from interpolation because their shot and receiver lines usually are acquired quite far apart, owing to the high cost of acquisition in topographic areas. Foothills acquisitions over structurally complex areas, however, are difficult to interpolate because small imperfections in static corrections affect coherence in the space domain. Also, these data often are severely affected by ground-roll noise, which makes interpolation difficult, particularly for shallow structures.
Figure 3a shows an orthogonal geometry (vertical lines are shots, horizontal lines are receivers). CMPs are overlain on the shot/receiver locations, with their color indicating the fold. Figure 3b shows the target geometry, which contains all of the same shots and receivers as in Figure 3a along with the new interpolated shot and receiver lines. Notice that these new lines follow the geometry of the original lines, permitting us to preserve all original data because original shots and receivers do not need to be moved.
By halving shot and receiver line intervals, the CMP fold increases by a factor of four, giving a better sampling of offsets and azimuths. This can be seen in Figure 4a and b, which shows the offset/azimuth distribution for a group of CMPs before and after interpolation. The increased sampling benefits migration because imaging algorithms rely on interference to form the correct image and therefore require proper sampling (at least two samples per cycle) to work correctly.
Figure 3. Foothills survey: orthogonal geometry. Shots are located along vertical lines; receivers are located along horizontal lines. Color represents fold. (a) Before interpolation. (b) After interpolation (shot and receiver line spacing decreased by a factor of two).
V128 Trad
Downloaded 26 Nov 2009 to 206.174.202.253. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/
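The two-samples-per-cycle requirement can be turned into a spacing rule of thumb. For zero-offset data, a standard estimate (not taken from this paper) is Δx ≤ v / (4 f_max sin θ), where the factor of four accounts for two-way traveltime. The sketch below evaluates it with illustrative values:

```python
import math

def max_unaliased_spacing(v, f_max, dip_deg):
    """Largest trace spacing (m) that keeps a dipping event unaliased
    on zero-offset data: dx <= v / (4 * f_max * sin(dip))."""
    return v / (4.0 * f_max * math.sin(math.radians(dip_deg)))

# illustrative values: 3000 m/s, 60 Hz maximum frequency, 30-degree dip
dx = max_unaliased_spacing(3000.0, 60.0, 30.0)
print(f"required spacing: {dx:.1f} m")   # -> required spacing: 25.0 m
```

Steeper dips and higher frequencies tighten the requirement, which is why the antialias filters discussed later remove high-frequency energy from steep events on coarsely sampled data.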
These benefits can be observed in the final stacked image, but they are more obvious in common-image gathers (CIGs). Figure 5 compares the CIGs with and without interpolation. The better continuity of the events will certainly bring improvements to the results of gather processing, especially for AVAz and AVO analysis (which are very sensitive to acquisition footprint) and automated processes such as tomography based on residual curvature analysis. Figure 6 shows the image stack from 0 to 1000-m offsets. The continuity of events has been improved over nearly all of the section.
Increasing offset-azimuth sampling for AVO
Prestack migration of the seismic data prior to performing AVO inversion has been advocated for more than 10 years (Mosher et al., 1996). However, the poor sampling typical of land seismic acquisition makes practical implementation of this concept quite difficult. Downton et al. (2008) demonstrate that these problems can be addressed by performing interpolation prior to prestack migration, resulting in better AVO estimates. These workers performed a series of comparisons of processing flows for AVO, including the 5D interpolation method presented in this paper. By calculating the correlation between AVO estimates and well-log information for the Viking Formation in Alberta, they concluded that the combination of interpolation and prestack time migration provided the best AVO estimates, achieving a correlation increase from 0.39 for migration without interpolation to 0.57 for migration after interpolation. This improvement can be taken as evidence of amplitude preservation during interpolation.
Figure 4. Foothills survey: offset/azimuth distribution in an area of the (a) original and (b) interpolated surveys.
Figure 5. Foothills survey: migrated gathers (3D anisotropic depth migration) from (a) original data and (b) with interpolation before migration.
Figure 6. Foothills survey: migrated stack section, 0–1000 m, (a) without interpolation and (b) with interpolation before migration.
Figure 7 shows CIGs from prestack time migration with and without interpolation before migration. The interpolation applied in this case was type 1b, decreasing shot and receiver line intervals by half and increasing fold by a factor of four. Hunt et al. (2008) give a complete description of the experiment.
Increasing inline-crossline sampling for steep dips
This example, also described in Gray et al. (2006), shows the benefits of reducing the bin size (increasing inline-crossline sampling) before migration rather than afterward. The land data set in this example was acquired over a structured area in Thailand using an orthogonal layout. The objective of the interpolation was to obtain more information on steep dips by including moderate- to high-frequency energy that the migration antialias filter removed from the original, more coarsely sampled data. For this purpose, the shot spacing along lines was halved to reduce the bin size from 12.5 × 50 m to 12.5 × 25 m. Figure 8 shows the shot locations after interpolation. The red dots indicate the locations of the original shots, and the blue dots indicate the locations of the new shots.
As a comparison, a prestack time migration stack was produced using the original acquired data; then the stack was interpolated, as shown in Figure 9a. In Figure 9b, the prestack data were interpolated before migration using 5D interpolation. The prestack interpolation produced an input data set for migration that was better sampled than the noninterpolated data set. This allowed the migration to operate with greater fidelity on the steep-dip events, in this case by applying fewer antialiasing constraints. The prestack interpolation did not add information to the data, but it did allow the migration to make better use of the information that was already in the data, producing an image with greater structural detail.
Increasing inline-crossline sampling for survey merging
Often, surveys acquired with different natural bin sizes need to be
merged into a common grid. If a survey is gridded into a bin size
Figure 7. Foothills II: CIGs from prestack time migration (a) without interpolation and (b) with interpolation before migration.
Figure 8. Thailand: shot locations after interpolation for an orthogonal geometry. Red dots are original shots; blue dots are new (interpolated) shots. The two large gaps are 1000–1500 m in diameter (before interpolation).
Figure 9. Thailand: prestack time migration stacks. (a) Interpolation performed after stacking the migrated images. (b) Interpolation performed before migration. The improved imaging of the steep-dip event in the center of the section is evident in (b). (Data courtesy of PTT Exploration and Production.)
smaller than it was designed for, the CMP coverage becomes very poor, affecting further interpretation even after migration. A solution is to use prestack interpolation to reduce the natural bin size to match the merge grid. This can be achieved by increasing sampling in the inline-crossline domain or, alternatively, by using the surface-consistent approach to decrease the distance between shots and receivers along lines.
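Regridding onto a common merge grid amounts to remapping midpoint coordinates onto the target bin lattice. A minimal sketch of that bookkeeping step for a rectangular grid (the helper and its coordinates are illustrative, not the implementation used in this paper):

```python
def cmp_bin_index(mx, my, origin=(0.0, 0.0), dx=10.0, dy=10.0):
    """Map a midpoint (mx, my) to integer (inline, crossline) bin
    indices on a 10 x 10-m target grid."""
    ix = int((mx - origin[0]) // dx)
    iy = int((my - origin[1]) // dy)
    return ix, iy

# midpoint of a shot at (0, 0) and a receiver at (130, 70)
mx, my = (0.0 + 130.0) / 2.0, (0.0 + 70.0) / 2.0
print(cmp_bin_index(mx, my))  # -> (6, 3)
```

When a survey's natural bin is much larger than the target grid, most of these fine bins remain empty, which is the coverage problem that prestack interpolation addresses.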
Trad et al. (2008) show a case history from the Surmont bitumen project in northern Alberta. In this area, nine surveys had to be merged into a common 10 × 10-m CMP grid. Of the nine surveys in the project, one was acquired with a natural bin size of 15 × 30 m, giving poor coverage when binned in the 10 × 10-m CMP grid used for the merge. Furthermore, this survey was the only one in the project with a parallel design (the other surveys were acquired with an orthogonal geometry). By adding new shots and receivers using the method presented in this paper, the coarser survey was transformed from a parallel geometry with 10 × 30-m bin size to an orthogonal survey with 10 × 10-m bin size and twice the original fold. The original data were fully preserved, and the numbers of shots and receivers were each increased by a factor of three, so the final data set was nine times the original size. The interpolation allowed this survey to be merged with the other surveys in the Surmont area, avoiding the need for reshooting.
Figure 10a and b shows one CMP before and after interpolation. Figure 11a shows a time slice from the stack of the original data in the 10 × 10-m grid. Figure 11b shows the same time slice from the stack of the interpolated data.
Infilling large gaps
It is common for 3D acquisitions to have large gaps with missing shots or receivers because of inaccessibility in some areas (lakes, hills, population, etc.). Although it usually is impossible to infill large gaps completely, decreasing their size has a large impact on migration results. The following example shows the infilling of a large gap produced by an open-pit coal mine in an area with structured geology. This obstacle prevented shots and receivers from being deployed at this location during the 3D acquisition. New shots and receivers were added on the border of the gap. Time migration of the
Figure 10. Surmont: comparison of a CMP (a) before and (b) after interpolation. Empty traces have been added to the CMP before interpolation to match the traces obtained after interpolation. The CMP in (b) was created by using information from many other CMPs (not shown in the figure).
Figure 11. Surmont: time slice comparison from (a) the stack of the original data and (b) the stack of the interpolated data.
original seismic data produced the image in Figure 12a. After interpolation, time migration produced the image in Figure 12b. The interpolated image shows an anticline underneath the open-pit mine that was confirmed by 2D seismic and well logs acquired before the existence of the mine.
DISCUSSION
Interpolation fills missing traces by combining information already in the data with a priori assumptions. This provides, for standard processes, information that is already in the data but is not accessible without these constraints. On the negative side, results from interpolated data sometimes can be worse than results without interpolation. This can happen because some processes, such as stacking, might work better with zero traces than with wrong traces. In addition, interpolation can add spurious information in a coherent manner, a problem that stacking is unable to fix. Interpolation must be applied very carefully to ensure this does not happen.
Several factors work against interpolation because it is by nature an ill-conditioned problem. Not only do unrecorded samples have to be estimated, but they also must be located in a manner consistent with the rest of the data, for example, at the proper elevations and with proper static information. Usually, geometries that can benefit from interpolation do not lend themselves to good noise attenuation. Accurate interpolation becomes more difficult as the structure becomes more complex, as gaps become larger or sampling poorer, and as the signal-to-noise ratio gets lower. Therefore, careful QC is necessary to select interpolated data according to some quality criteria. A useful criterion is the minimum distance between a trace and its original neighbors, but many other QC parameters can be estimated and saved into headers. After interpolation, these parameters can be used together to decide whether a new trace is acceptable.
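The distance criterion can be sketched as a nearest-neighbor check in the inline/crossline plane. The function name and the threshold below are illustrative assumptions, not values from this paper:

```python
import math

def min_neighbor_distance(new_xy, original_xy):
    """Distance from a new (interpolated) trace to its nearest
    original neighbor in the inline/crossline plane."""
    return min(math.hypot(new_xy[0] - x, new_xy[1] - y)
               for x, y in original_xy)

originals = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
d = min_neighbor_distance((30.0, 40.0), originals)
accept = d <= 60.0  # hypothetical QC threshold, e.g. one line interval
print(d, accept)    # -> 50.0 True
```

In production, this distance would be computed per interpolated trace, saved to a header, and combined with other QC attributes before deciding which traces to keep.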
CONCLUSIONS
Wide-azimuth geometries often are undersampled along one or more dimensions, and interpolation is a very useful tool to precondition the data for prestack processes such as migration, AVO, and AVAz. I have discussed a 5D interpolation technique to create new shots and receivers for 3D land seismic data that honors amplitude variations along inline, crossline, offset, and azimuth. Although not intended to replace acquiring adequate data for processing, this tool is useful for overcoming acquisition constraints and for obtaining the benefits of tighter acquisition sampling patterns, higher fold, and/or smaller bin size.
By working in five dimensions, this interpolation method can increase sampling in surveys that are problematic for lower-dimensional interpolators. The technique might be applied to overcome acquisition constraints at a fraction of field acquisition costs, to merge data sets with different bin sizes, and to eliminate differences caused by acquisition, avoiding the need to reshoot surveys. Benefits include more reliable prestack processes: velocity analysis, prestack migration, AVO and AVAz analyses, reduction of migration artifacts, and improved imaging of steep dips.
ACKNOWLEDGMENTS
I would like to thank CGGVeritas for permission to publish this paper and CGGVeritas Library Canada, PTT Exploration and Production, PetroCanada, ConocoPhillips Canada Ltd., and Total E&P Canada Ltd. for data examples. Special thanks are owed to several colleagues who helped produce the interpolation examples and who provided useful ideas and discussions on interpolation over the years. In particular, my thanks to Bin Liu and Mauricio Sacchi, whose work constitutes the cornerstone of this method.
REFERENCES
Abma, R., and N. Kabir, 2006, 3D interpolation of irregular data with a POCS algorithm: Geophysics, 71, no. 6, E91–E97.
Abma, R., C. Kelley, and J. Kaldy, 2007, Sources and treatments of migration-introduced artifacts and noise: 77th Annual International Meeting, SEG, Expanded Abstracts, 2349–2353.
Cordsen, A., M. Galbraith, and J. Peirce, 2000, Planning land 3-D seismic surveys: SEG.
Downton, J., B. Durrani, L. Hunt, S. Hadley, and M. Hadley, 2008, 5D interpolation, PSTM and AVO inversion for land seismic data: 70th Annual Conference and Technical Exhibition, EAGE, Extended Abstracts, G029.
Duijndam, A. J. W., and M. A. Schonewille, 1999, Nonuniform fast Fourier transform: Geophysics, 64, 539–551.
Gray, S., D. Trad, B. Biondi, and L. Lines, 2006, Towards wave-equation imaging and velocity estimation: CSEG Recorder, 31, 47–53.
Hansen, P., 1998, Rank-deficient and discrete ill-posed problems: Numerical aspects of linear inversion: Society for Industrial and Applied Mathematics.
Herrmann, P., T. Mojesky, M. Magesan, and P. Hugonnet, 2000, De-aliased, high-resolution Radon transforms: 70th Annual International Meeting, SEG, Expanded Abstracts, 1953–1957.
Hunt, L., J. Downton, S. Reynolds, S. Hadley, M. Hadley, D. Trad, and B. Durrani, 2008, Interpolation, PSTM, and AVO for Viking and Nisku targets in West Central Alberta: CSEG Recorder, 33, 7–19.
Liu, B., and M. D. Sacchi, 2004, Minimum weighted norm interpolation of seismic records: Geophysics, 69, 1560–1568.
Mosher, C. C., T. H. Keho, A. B. Weglein, and D. J. Foster, 1996, The impact of migration on AVO: Geophysics, 61, 1603–1615.
Poole, G., and P. Herrmann, 2007, Multidimensional data regularization for modern acquisition geometries: 77th Annual International Meeting, SEG, Expanded Abstracts, 2585–2589.
Sacchi, M. D., and T. J. Ulrych, 1996, Estimation of the discrete Fourier transform, a linear inversion approach: Geophysics, 61, 1128–1136.
Schonewille, M. A., R. Romijn, A. J. W. Duijndam, and L. Ongkiehong, 2003, A general reconstruction scheme for dominant azimuth 3D seismic data: Geophysics, 68, 2092–2105.
Trad, D., J. Deere, and S. Cheadle, 2005, Understanding land data interpolation: 75th Annual International Meeting, SEG, Expanded Abstracts, 2158–2161.
Trad, D., M. Hall, and M. Cotra, 2008, Merging surveys with multidimensional interpolation: CSPG CSEG CWLS Conference, Expanded Abstracts, 172–176.
Trad, D., T. Ulrych, and M. Sacchi, 2003, Latest views of the sparse Radon transform: Geophysics, 68, 386–399.
Xu, S., Y. Zhang, D. Pham, and G. Lambare, 2005, Antileakage Fourier transform for seismic data regularization: Geophysics, 70, no. 4, V87–V95.
Zwartjes, P. M., and M. D. Sacchi, 2007, Fourier reconstruction of nonuniformly sampled, aliased seismic data: Geophysics, 72, no. 1, V21–V32.
Figure 12. Coal mine: comparison of time migration images (a) without and (b) with interpolation for a 3D survey acquired on top of a large gap.