Article

How Observations and Structure Affect the Geostatistical Solution to the Steady‐State Inverse Problem


Abstract

The solution to the steady-state inverse problem can be expanded into a series of spline functions with weights adjusted to reproduce the observations within the observation error. The splines depend on the model spatial structure, the ground water flow model, and the location of the observations. This representation of the solution, which is a rigorous and exact expansion, provides insight into the form of the best estimate and explicitly shows how observations and the conceptual model may affect the solution.
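The expansion the abstract describes can be illustrated in the simplest linear case, where the observations are noisy point values of the unknown function itself: each observation location contributes one covariance-based "spline" basis function, and the weights are chosen so the estimate reproduces the data within the observation error. A hedged numpy sketch, in which the exponential covariance model, locations, and values are all illustrative assumptions:

```python
import numpy as np

def exp_cov(a, b, var=1.0, ell=0.3):
    # exponential covariance model (the assumed "spatial structure")
    return var * np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

x_obs = np.array([0.1, 0.4, 0.7, 0.9])   # observation locations
y = np.array([0.5, -0.2, 0.3, 0.1])      # noisy observations
R = 0.01 * np.eye(len(x_obs))            # observation-error covariance

# weights nu solve (Q_yy + R) nu = y; each column of Q(x, x_obs) is one
# "spline" basis function attached to an observation location
nu = np.linalg.solve(exp_cov(x_obs, x_obs) + R, y)

x_grid = np.linspace(0, 1, 101)
s_hat = exp_cov(x_grid, x_obs) @ nu      # best estimate = weighted splines
```

Because of the error covariance R, the estimate honors the data only up to the noise level, which is exactly the "within the observation error" behavior the abstract describes.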


... The above cross-correlation analysis is also similar to the interpolation splines used by Kitanidis [1998], Snodgrass and Kitanidis [1998], and Fienen et al. [2008]. Note that the cross correlation is the foundation of the cokriging approach [e.g., Kitanidis and Vomvoris, 1983; Hoeksema and Kitanidis, 1984; Yeh et al., 1995b; Yeh and Zhang, 1996; Li and Yeh, 1999], the nonlinear geostatistical inverse approach [e.g., Kitanidis, 1995; Zhang and Yeh, 1997; Hanna and Yeh, 1998; Yeh, 1998, 1999; Hughson and Yeh, 2000], the HT inverse model [e.g., Yeh and Liu, 2000; Zhu and Yeh, 2005], and the geostatistical inverse modeling of electrical resistivity tomography. ...
Article
Using cross-correlation analysis, we demonstrate that flux measurements at observation locations during hydraulic tomography (HT) surveys carry nonredundant information about heterogeneity that is complementary to head measurements at the same locations. We then hypothesize that a joint interpretation of head and flux data, even when the same observation network as for head has been used, can enhance the resolution of HT estimates. Subsequently, we use numerical experiments to test this hypothesis and investigate the impact of flux conditioning and prior information (such as correlation lengths and initial mean models (i.e., uniform mean or distributed means)) on the HT estimates of a nonstationary, layered medium. We find that the addition of flux conditioning to HT analysis improves the estimates in all of the prior models tested. While prior information on geologic structures could be useful, its influence on the estimates reduces as more nonredundant data (i.e., flux) are used in the HT analysis. Lastly, recommendations for conducting HT surveys and analysis are presented.
... We estimate s by the continuous function $\hat{s} = X\hat{\beta} + QH^T\hat{\nu}$ [18] ...
... in which $\hat{\beta}$ is the $p \times 1$ vector of the $\beta$-estimate and $\hat{\nu}$ is an $n \times 1$ vector of weights associated with the measurements. We find $\hat{\beta}$ and $\hat{\nu}$ by solving the function-estimate form of the linearized cokriging system [18]: $\begin{bmatrix} HQH^T + R & HX \\ (HX)^T & 0 \end{bmatrix}\begin{bmatrix} \hat{\nu} \\ \hat{\beta} \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix}$ ...
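The snippet above quotes the linearized cokriging (function-estimate) system of the geostatistical approach. A hedged numpy sketch of assembling and solving that saddle-point system, where all sizes, the covariance model, and the synthetic data are illustrative assumptions:

```python
import numpy as np

# Sketch of the linearized cokriging system:
#   [ H Q H^T + R   H X ] [ nu   ]   [ y ]
#   [ (H X)^T       0   ] [ beta ] = [ 0 ]
# followed by the function estimate s_hat = X beta + Q H^T nu.

rng = np.random.default_rng(0)
m, n, p = 30, 8, 1                      # unknowns, measurements, drift terms

X = np.ones((m, p))                      # drift matrix (constant mean)
H = rng.standard_normal((n, m)) / m      # linearized forward model (Jacobian)
xg = np.linspace(0, 1, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)  # prior covariance
R = 0.01 * np.eye(n)                     # measurement-error covariance
y = rng.standard_normal(n)               # synthetic measurements

# assemble and solve the saddle-point (cokriging) system
A = np.block([[H @ Q @ H.T + R, H @ X],
              [(H @ X).T, np.zeros((p, p))]])
rhs = np.concatenate([y, np.zeros(p)])
sol = np.linalg.solve(A, rhs)
nu, beta = sol[:n], sol[n:]

s_hat = X @ beta + Q @ H.T @ nu          # continuous-function estimate
```

The second block row enforces the unbiasedness constraint $(HX)^T\hat{\nu} = 0$, which is why the drift coefficients appear as Lagrange-multiplier-like unknowns.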
Article
Including tracer data into geostatistically based methods of inverse modeling is computationally very costly when all concentration measurements are used and the sensitivities of many observations are calculated by the direct differentiation approach. Harvey and Gorelick (Water Resour Res 1995;31(7):1615–26) have suggested the use of the first temporal moment instead of the complete concentration record at a point. We derive a computationally efficient adjoint-state method for the sensitivities of the temporal moments that requires the solution of the steady-state flow equation and two steady-state transport equations for the forward problem, and the same number of equations for each first-moment measurement. The efficiency of the method makes it feasible to evaluate the sensitivity matrix many times in large domains. We incorporate our approach for the calculation of sensitivities in the quasi-linear geostatistical method of inversing (“iterative cokriging”). The application to an artificial example of a tracer introduced into an injection well shows good convergence behavior when both head and first-moment data are used for inversing, whereas inversing of arrival times alone is less stable.
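The data reduction this abstract builds on replaces a full concentration record c(t) at a point by its normalized first temporal moment, m1 = ∫ t c(t) dt / ∫ c(t) dt, i.e., the mean arrival time of the tracer. A minimal sketch of that computation (the breakthrough curve below is synthetic, not from the paper):

```python
import numpy as np

def first_temporal_moment(t, c):
    # normalized first temporal moment of a breakthrough curve c(t)
    dt = t[1] - t[0]                 # uniform time step assumed
    m0 = np.sum(c) * dt              # zeroth moment (total recorded mass)
    m1 = np.sum(t * c) * dt          # first moment
    return m1 / m0

t = np.linspace(0.0, 20.0, 2001)
c = np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)   # Gaussian pulse centred at t = 5
arrival = first_temporal_moment(t, c)        # mean arrival time, ≈ 5.0
```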
... Figure 13(a) also shows the RMSE of estimated hydraulic conductivity for each case, confirming that the RMSE value decreases dramatically and the uncertainty is significantly reduced as the pumping rate increases. This is expected since the sensitivity of the data, i.e., the Jacobian of the forward model ∂F/∂k, increases as the pumping rate increases; thus the measurement information that contains important features becomes less contaminated by the measurement error (Kitanidis, 1998). Still, the inversion with data at the small pumping rate can identify the structure of underlying channels by incorporating the meaningful data-driven prior from WGAN-GP in the inversion. ...
Preprint
Key Points: • Subsurface characterization using the Wasserstein generative adversarial network with gradient penalty. • Gaussian, channelized, and fractured fields are tested to demonstrate the accuracy and efficiency of our approach. • The ensemble-based and optimization-based approaches are compared to demonstrate why the ensemble-based approach performs better with GANs. Abstract Estimating spatially distributed properties such as hydraulic conductivity (K) from available sparse measurements is a great challenge in subsurface characterization. However, the use of inverse modeling is limited for ill-posed, high-dimensional applications due to computational costs and poor prediction accuracy with sparse datasets. In this paper, we combine the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP), a deep generative model that can accurately capture complex subsurface structure, and Ensemble Smoother with Multiple Data Assimilation (ES-MDA), an ensemble-based inversion method, for accurate and accelerated subsurface characterization. The WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional latent space, and ES-MDA then updates the latent variables by assimilating available measurements. Several subsurface examples are used to evaluate the accuracy and efficiency of the proposed method, and the main features of the unknown K fields are characterized accurately with reliable uncertainty quantification. Furthermore, the estimation performance is compared with a widely used variational, i.e., optimization-based, inversion approach, and the proposed approach outperforms the variational inversion method, especially for the channelized and fractured field examples.
We explain such superior performance by visualizing the objective function in the latent space: because of nonlinear and aggressive dimension reduction via generative modeling, the objective function surface becomes extremely complex, while the ensemble approximation can smooth out the multi-modal surface during the minimization. This suggests that the ensemble-based approach works better than the variational approach when combined with deep generative models, at the cost of forward model runs, unless convergence-ensuring modifications are implemented in the variational inversion.
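The ES-MDA update step the abstract relies on can be sketched in a few lines of numpy. In the paper the ensemble members are WGAN-GP latent vectors and the forward model includes the generator; here a linear toy forward model, the sizes, and the inflation schedule are all illustrative assumptions:

```python
import numpy as np

def esmda_update(M, D, d_obs, R, alpha, rng):
    """One ES-MDA analysis step.
    M: (n_param, n_ens) parameter (or latent-variable) ensemble
    D: (n_obs, n_ens) corresponding simulated data
    """
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    n_ens = M.shape[1]
    C_md = dM @ dD.T / (n_ens - 1)          # cross-covariance param/data
    C_dd = dD @ dD.T / (n_ens - 1)          # data covariance
    # perturb observations with inflated noise, one draw per member
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * R, n_ens).T
    K = C_md @ np.linalg.solve(C_dd + alpha * R, d_obs[:, None] + noise - D)
    return M + K

rng = np.random.default_rng(1)
truth = np.array([1.0, -0.5])
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy linear forward model
R = 0.01 * np.eye(3)
d_obs = G @ truth

M = rng.standard_normal((2, 200))            # prior ensemble
for alpha in [4.0, 4.0, 4.0, 4.0]:           # inflation factors, sum(1/a) = 1
    D = G @ M                                # forward run for each member
    M = esmda_update(M, D, d_obs, R, alpha, rng)
```

The inflation factors must satisfy Σ 1/αᵢ = 1 so that, in the linear-Gaussian limit, the multiple small assimilations are equivalent to a single full one.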
... A statistical approach was developed by Neuman and Yakowitz (1979) and Carrera and Neuman (1986) and applied to a real aquifer in the Cortaro Basin (Neuman et al., 1980). Kitanidis (1998) employed a geostatistical approach, i.e., the logarithms of transmissivity, random functions, and variograms, to solve the inverse problem. A more computationally efficient scheme based on variational methods was introduced by Neuman (1980) that eliminated computing derivatives of hydraulic conductivity. ...
Article
Full-text available
In geothermal reservoir characterization and basin modeling, conclusions are often drawn and decisions made using uncertain or incomplete data sets. In particular, there are limited hydrogeological data in the Berlin area in the North German Basin. The groundwater in this sedimentary basin is divided into a shallow freshwater aquifer (to about 500 m depth) and a brackish to saline groundwater aquifer within deeper sedimentary layers. Between these two different groundwater compartments, a natural hydrogeological boundary is provided by the presence of an impervious clay-enriched layer (Rupelian Clay), which is discontinuous, eroded, or not deposited in some local areas. Thus, the distribution of hydraulic conductivity of the Rupelian Clay aquitard, which represents a vertical and horizontal partitioning of the aquifers below Berlin, is of central importance in groundwater management. We use an inverse modeling approach to estimate the spatial distribution of hydraulic conductivity of the Rupelian Clay aquitard, using available local data within the Berlin province. We use a commercial finite element fluid flow simulator that interfaces to a parameter estimation package. A Gauss–Levenberg–Marquardt algorithm is used to adjust the hydraulic conductivity of the aquitard such that the hydraulic head observations are reproduced. Subsequently, the updated hydraulic conductivity of the Rupelian Clay is used as input to the forward modeling, in order to estimate the pressure and temperature fields. The results of the inverse modeling suggest a more continuous distribution of the Rupelian Clay layer below the Berlin area in comparison with previously published studies. Hence, the convective heat and fluid flow are more restricted, and there is less interaction between shallow and deep aquifers. Change in the predicted temperature field is more pronounced for deeper strata.
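The Gauss–Levenberg–Marquardt step this abstract refers to (as used by parameter-estimation packages such as PEST) solves (JᵀJ + λI) Δp = Jᵀr at each iteration, where J is the Jacobian of simulated observations with respect to the parameters and r the residual vector. A hedged sketch with a toy forward model standing in for the groundwater simulator (the model, values, and fixed damping are illustrative assumptions):

```python
import numpy as np

def forward(p, x):
    # toy nonlinear "simulator": amplitude * exp(-decay * x)
    return p[0] * np.exp(-p[1] * x)

def jacobian(p, x):
    # analytic Jacobian of the toy model w.r.t. the two parameters
    return np.column_stack([np.exp(-p[1] * x),
                            -p[0] * x * np.exp(-p[1] * x)])

x = np.linspace(0, 2, 20)
obs = forward(np.array([2.0, 1.5]), x)   # synthetic observations

p, lam = np.array([1.0, 1.0]), 1e-2      # initial guess, damping factor
for _ in range(20):
    r = obs - forward(p, x)              # residuals
    J = jacobian(p, x)
    # damped Gauss-Newton (Levenberg-Marquardt) update
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    p = p + dp
```

In practice the damping λ is adapted between iterations (raised when a step fails, lowered when it succeeds); it is held fixed here only to keep the sketch short.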
... This problem can be more severe if an optimal parameter set instead of parameter distribution is used. As bias is expected in estimating spatial properties from inversion [Kitanidis, 1998], MAD embraces this bias with distributions of parameters. ...
Article
Full-text available
Tracer tests performed under natural or forced gradient flow conditions can provide useful information for characterizing subsurface properties, through monitoring, modeling, and interpretation of the tracer plume migration in an aquifer. Nonreactive tracer experiments were conducted at the Hanford 300 Area, along with constant-rate injection tests and electromagnetic borehole flow meter tests. A Bayesian data assimilation technique, the method of anchored distributions (MAD) (Rubin et al., 2010), was applied to assimilate the experimental tracer test data with the other types of data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the Bayesian prior information on the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using constant-rate injection and borehole flow meter test data. The posterior distribution of the conductivity field was obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. MAD was implemented with the massively parallel three-dimensional flow and transport code PFLOTRAN to cope with the highly transient flow boundary conditions at the site and to meet the computational demands of MAD. A synthetic study proved that the proposed method could effectively invert tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. Application of MAD to actual field tracer data at the Hanford 300 Area demonstrates that inverting for spatial heterogeneity of hydraulic conductivity under transient flow conditions is challenging and more work is needed.
... Giudici et al. [1995] examined the value of combining multiple stimulations of an aquifer to refine distributed transmissivity estimates. Multiple pumping tests with a small network of wells can be much more informative than a single pumping test with a much larger network of observation wells [Snodgrass and Kitanidis, 1998]. Even with only two wells, Kunstmann et al. [1997] showed improved characterization when performing unequal strength dipole pumping tests using one well as a source and the other as a sink, and then reversing the configuration. ...
Article
Full-text available
Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.
... Once large-scale variations are established, other methods may be used to assess the influence of important variations that may be smaller than the grid scale. To date, methods of determining large-scale variations and methods of characterizing the likely effects of small-scale variations have been integrated very little (Kitanidis, 1998). Many aspects of the guidelines presented here are applicable regardless of how a model is calibrated. ...
Article
Full-text available
Fourteen guidelines are described which are intended to produce calibrated groundwater models likely to represent the associated real systems more accurately than typically used methods. The 14 guidelines are discussed in the context of the calibration of a regional groundwater flow model of the Death Valley region in the southwestern United States. This groundwater flow system contains two sites of national significance from which the subsurface transport of contaminants could be or is of concern: Yucca Mountain, which is the potential site of the United States high-level nuclear-waste disposal; and the Nevada Test Site, which contains a number of underground nuclear-testing locations. This application of the guidelines demonstrates how they may be used for model calibration and evaluation, and also to direct further model development and data collection.
Article
Bathymetry (i.e., depth) imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and plentiful computational resources, various types of indirect measurements can be used to estimate high-resolution river bed topography. In this work, we image river bed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2-D shallow water equations. The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by low-rank representation of covariance matrix structure, is presented and applied to two riverine bathymetry identification problems. To compare the efficiency and effectiveness of the proposed method, an ensemble-based approach is also applied to the test problems. It is demonstrated that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy because of the successive linearization of the forward model and the optimal low-rank representation of the prior covariance matrix. To investigate how different low-rank covariance matrix representation by the two approaches can affect the solution accuracy, we analyze the direct survey data of the river bottom topography in the test problem and show that PCGA utilizes a more efficient and parsimonious choice of the solution basis than the ensemble-based approach. Geostatistical analysis performed on the direct survey data also confirms the validity of the chosen covariance model and its structural parameters.
Article
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow.
Article
Full-text available
We review the main stages of the evolution of ideas and methods for solving the inverse problem in hydrogeology; i.e., the identification of the transmissivity field in single-phase flow from piezometric data, in mainly steady-state and, occasionally, transient flow conditions. We first define the data needed to solve an inverse problem in hydrogeology, then describe the numerous approaches that have been developed over the past 40 years to solve it, emphasizing the major contributions made by Shlomo P. Neuman. Finally, we briefly discuss fitting processes that start by defining the unknown field as geological images (generated by Boolean or geostatistical methods). The early attempts at solving the inverse problem were direct, i.e., the transmissivity field was directly determined by using stream lines of the flow and inverting the flow equation along these lines. Faced with the poor results obtained in this manner, hydrogeologists have tried many different ways of minimizing the balance error representing an integral of the mass-balance error for each mesh for a given transmissivity field. These attempts were accompanied by constraints imposed on the transmissivity field in order to avoid instabilities. The idea then emerged that the unknown field should reproduce the local observations of the pressure at the measurement points instead of minimizing a balance error. Second, it should also satisfy a condition of plausibility, which means that the transmissivity field obtained through the inverse solution should not deviate too far from an a priori estimate of the real transmissivity field. This a priori notion led to the inclusion of a Bayesian approach resulting in the search for an optimal solution by maximum likelihood, as expounded later. 
Simultaneously, the existence of locally measured values in the transmissivity field (obtained by pumping tests) allowed geostatistical methods to be used in the formulation of the problem; the result of this innovation was that three major approaches came into being: (1) the definition of the a priori transmissivity field by kriging; (2) the method of cokriging; (3) the pilot point method. Furthermore, geostatistics made it possible to pose the inverse problem in a stochastic framework and to solve an ensemble of possible and equally probable fields, each of them equally acceptable as a solution.
Article
Papers addressing fate and transport processes for analyzing groundwater quality are presented. These papers are separated into water-saturated systems, vadose zone systems, and systems containing nonaqueous phase liquids. Within each of these categories, the articles are further divided based on physical, chemical, and biological processes relevant to each system. Papers dealing with groundwater monitoring and groundwater remediation are then treated, with papers grouped by technology. Finally, papers describing risk assessment and groundwater protection are presented.
Conference Paper
Full-text available
This paper briefly describes nonlinear regression methods, a set of 14 guidelines for model calibration, how they are implemented in and supported by two public domain computer programs, and a demonstration and a test of the methods and guidelines.
Article
The stochastic geostatistical inversion approach is widely used in subsurface inverse problems to estimate unknown parameter fields and corresponding uncertainty from noisy observations. However, the approach requires a large number of forward model runs to determine the Jacobian or sensitivity matrix, thus the computational and storage costs become prohibitive as the number of unknowns, m, and the number of observations, n, increase. To overcome this challenge in large-scale geostatistical inversion, the Principal Component Geostatistical Approach (PCGA) has recently been developed as a “matrix-free” geostatistical inversion strategy that avoids the direct evaluation of the Jacobian matrix through the principal components (low-rank approximation) of the prior covariance and the drift matrix with a finite-difference approximation. As a result, the proposed method requires about K runs of the forward problem in each iteration independently of m and n, where K is the number of principal components and can be much less than m and n for large-scale inverse problems. Furthermore, the PCGA is easily adaptable to different forward simulation models and various data types for which the adjoint-state method may not be implemented suitably. In this paper, we apply the PCGA to representative subsurface inverse problems to illustrate its efficiency and scalability. The low-rank approximation of the large-dimensional dense prior covariance matrix is computed through a randomized eigendecomposition. A hydraulic tomography problem in which the number of observations is typically large is investigated first to validate the accuracy of the PCGA compared with the conventional geostatistical approach. Then the method is applied to a large-scale hydraulic tomography with 3 million unknowns, and it is shown that underlying subsurface structures are characterized successfully through an inversion that involves an affordable number of forward simulation runs.
Lastly, we present a joint inversion of head and tracer test data using MODFLOW and MT3DMS as coupled black-box forward simulation solvers. These applications demonstrate the advantages of the PCGA, i.e., the scalability to high-dimensional inverse problems and the ability to utilize multiple forward models as black boxes.
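The randomized low-rank covariance approximation at the heart of the PCGA can be sketched with a Halko-style range finder: sample the covariance with random vectors, orthogonalize, and eigendecompose the small projected matrix. The covariance model, sizes, rank K, and oversampling below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, K = 500, 20                            # unknowns, retained components

x = np.linspace(0, 1, m)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # dense prior covariance

# randomized range finder: sample, orthogonalize, project
Omega = rng.standard_normal((m, K + 10))              # small oversampling
Y = Q @ Omega
Qb, _ = np.linalg.qr(Y)                               # orthonormal range basis
B = Qb.T @ Q @ Qb                                     # small projected matrix
d, U = np.linalg.eigh(B)
idx = np.argsort(d)[::-1][:K]                         # top-K eigenpairs
V = Qb @ U[:, idx]                                    # leading eigenvectors
Z = V * np.sqrt(d[idx])                               # Q ≈ Z @ Z.T

rel_err = np.linalg.norm(Q - Z @ Z.T) / np.linalg.norm(Q)
```

Only products with Q are needed, so in a true matrix-free setting the dense Q never has to be formed; it is built explicitly here just to check the approximation error.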
Article
In nonlinear geostatistical inverse problems, it often takes a significant amount of computational cost to form linear geostatistical inversion systems by linearizing the forward model. More specifically, the storage cost associated with the sensitivity matrix H (m × n, where m and n are the numbers of measurements and unknowns, respectively) is high, especially when both m and n are large in, for instance, 3-D tomography problems. In this research, instead of explicitly forming and directly solving the linear geostatistical inversion system, we use MINRES, a Krylov subspace method, to solve it iteratively. During each iteration in MINRES, we only compute the products Hx and H^Tx for any appropriately sized vectors x, for which we solve the forward problem twice. As a result, we reduce the memory requirement from O(mn) to O(m) + O(n). This iterative methodology is combined with the Bayesian inverse method in Kitanidis (1996) to solve large-scale inversion problems. The computational advantages of our methodology are demonstrated using a large-scale 3-D numerical hydraulic tomography problem with transient pressure measurements (250,000 unknowns and ~100,000 measurements). In this case, ~200 GB of memory would otherwise be required to fully compute and store the sensitivity matrix H at each Newton step during optimization. The CPU cost can also be significantly reduced in terms of the total number of forward simulations. In the end, we discuss potential extension of the methodology to other geostatistical methods such as the Successive Linear Estimator.
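The matrix-free idea can be sketched with scipy: the symmetric, indefinite linearized geostatistical system is handed to MINRES through a LinearOperator whose matvec uses only products with H and Hᵀ. For brevity, H is an explicit random matrix here (an assumption; in the paper those products come from forward and adjoint runs), and all sizes and values are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
m, n = 200, 40                          # unknowns, measurements
H = rng.standard_normal((n, m)) / m     # stand-in for the sensitivity matrix
xg = np.linspace(0, 1, m)
Q = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 0.2)  # prior covariance
R = 0.01 * np.eye(n)                    # measurement-error covariance
X = np.ones((m, 1))                     # drift matrix (constant mean)
y = rng.standard_normal(n)              # synthetic measurements

def matvec(v):
    # apply [[H Q H^T + R, H X], [(H X)^T, 0]] using only H @ x / H.T @ x
    nu, beta = v[:n], v[n:]
    top = H @ (Q @ (H.T @ nu)) + R @ nu + H @ (X @ beta)
    bot = X.T @ (H.T @ nu)
    return np.concatenate([top, bot])

A = LinearOperator((n + 1, n + 1), matvec=matvec, dtype=float)
rhs = np.concatenate([y, np.zeros(1)])
sol, info = minres(A, rhs)              # symmetric indefinite Krylov solve
nu, beta = sol[:n], sol[n:]
s_hat = X @ beta + Q @ (H.T @ nu)       # estimate recovered from nu, beta
```

Note the careful parenthesization in `matvec`: the big products H Q Hᵀ are never formed, only applied right-to-left, which is exactly what reduces the memory footprint from O(mn) to O(m) + O(n).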
Article
Reduced-order models (ROMs) approximate the high-dimensional state of a dynamic system with a low-dimensional approximation in a subspace of the state space. Properly constructed, they are used to significantly reduce the computational cost associated with the simulation of complex dynamic systems such as flow and transport in the subsurface. A key component in model reduction is to construct the subspace where we look for approximate solutions. In this work, we apply model reduction in inverse modeling and use the solution parameter space of underdetermined geostatistical inverse problems to construct the subspace in which we seek approximate solutions for any given parameters needed in the inversion process. The subspace is constructed by collecting state variable (e.g., pressure) distributions in the flow domain. Each of the distributions, called snapshots, contains the result of full forward model simulation for a given test with a basis vector in the solution parameter space as input parameters. We then use linear combinations of the snapshots to approximate the forward model solution for any parameters needed in inverse modeling. In geostatistical inverse modeling, the solution parameter space is spanned by the cross-covariance of measurements and parameters; hence, we name the ROM as the geostatistical reduced-order model (GROM). We also show that with minor loss of accuracy in the forward model, the accuracy in parameter estimation is still high, and the saving in computational cost is significant, especially for large-scale inverse problems where the number of unknowns is enormous.
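The snapshot idea can be illustrated minimally: run the full model once per basis vector of the solution parameter space, store the resulting state snapshots, then approximate the state for any other parameter vector by a linear combination of snapshots. A linear toy model is used below so the reduction is exact; in the GROM the forward model is a flow simulator and the basis comes from the cross-covariance of measurements and parameters. All names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_p = 100, 5                    # state and parameter dimensions
A = np.eye(n_state) + 0.05 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_p))

def full_model(p):
    # "expensive" forward simulation: solve A u = B p for the state u
    return np.linalg.solve(A, B @ p)

# offline stage: one snapshot per basis vector of the parameter space
snapshots = np.column_stack([full_model(e) for e in np.eye(n_p)])

# online stage: cheap approximation for a new parameter vector
p_new = rng.standard_normal(n_p)
u_rom = snapshots @ p_new                # linear combination of snapshots
```

For a nonlinear forward model the linear combination is only approximate, which is the "minor loss of accuracy" the abstract refers to; here it reproduces the full solve exactly.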
Article
This work examines which generalized covariance function when used in the stochastic approach produces the flattest possible estimate of an unknown function that is consistent with the data. Such an estimate is the plainest possible continuous function, thus in a sense eliminating details that are irrelevant or unsupported by data. The answer is found from the solution of the following variational problem: Determine the function that reproduces the data, has the smallest gradient (in the square norm sense), and has a gradient that vanishes at large distances from the observations. The generalized covariance functions are shown to be the Green's functions for the free-space Laplace equation: the linear distance, in one dimension; the logarithmic distance in two dimensions; and the inverse distance in three dimensions. It is demonstrated that they are appropriate covariance functions for intrinsic random fields, a modification is proposed to facilitate numerical implementation, and a couple of examples are presented to illustrate the applicability of the methodology.
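A one-dimensional sketch of this result: using the linear generalized covariance K(h) = -|h| (the 1-D free-space Green's function) in a kriging system with an unknown constant drift yields the flattest exact interpolator, which in one dimension is the piecewise-linear interpolant of the data, flat outside the data range. The locations and values below are illustrative assumptions:

```python
import numpy as np

def gc_krige(x_obs, y, x_new):
    n = len(x_obs)
    K = -np.abs(x_obs[:, None] - x_obs[None, :])   # linear generalized covariance
    # universal kriging system with an unknown constant mean
    A = np.block([[K, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    sol = np.linalg.solve(A, np.concatenate([y, [0.0]]))
    nu, beta = sol[:n], sol[n]
    k_new = -np.abs(x_new[:, None] - x_obs[None, :])
    return k_new @ nu + beta

x_obs = np.array([0.0, 1.0, 3.0])
y = np.array([1.0, 2.0, 0.0])
x_new = np.array([0.5, 2.0])
est = gc_krige(x_obs, y, x_new)   # piecewise-linear values: [1.5, 1.0]
```

The estimate's second derivative is a sum of point masses at the observation locations, so the squared-gradient norm is as small as data reproduction allows, matching the variational characterization in the abstract.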
Thesis
Full-text available
Managing an aquifer relies on the study of its hydraulic properties. We propose a computational approach to characterize the physical structure of an aquifer and the associated hydraulic properties, focusing on heterogeneous fractured media in which flow is strongly channelized. A first part seeks to define the fracture model underlying two natural Swedish sites from observations corrected for sampling biases. The chosen model relies on a power-law distribution of fracture lengths and on an orientation-dependent density term. The hydraulic properties of heterogeneous media, porous and fractured, are then characterized through two statistical indicators describing the degree of flow channeling, based on the distance between two main channels and on the effective channel length. These indicators relate the physical properties of the medium (heterogeneity, organization) to its hydraulic properties. Finally, we attempt to identify the principal components of the permeability structure from hydraulic head data by solving the associated inverse problem. We show that it is possible to identify the main flow-controlling structures, provided the data contain information on the permeability that is relevant at the site scale, the parameterization represents the main structures, and an appropriate methodology is used. Together, these approaches make it possible to define a relevant model of the hydraulic properties of a complex medium.
Article
Full-text available
The optimal control theory has been applied to the problem of determining permeability distribution by matching the history of pressure in a single-phase field, given flow production data. The method consists of minimizing a nonquadratic criterion by the steepest descent method; use of an adjoint equation enables the gradient to be obtained numerically in minimum computing time. The method has been tested on a semirealistic example of a field model included in a 9 x 19 grid, with 10 producing wells, using a 5-year pressure and production history. Since the storage capacities are assumed to be known, the transmissivities have been backed up at each grid block (no zonation being needed) in 20 iterations and 100 seconds of CDC 7600 computer time, giving an over-all pressure fit of less than 1 kg/cm2 (equivalent to 14 psi), with the pressures to be adjusted ranging from 482 to 300 kg/cm2 (equivalent to 6,850 to 4,350 psi). As usual, the fitting procedure was continued, for investigation purposes, well beyond the point where satisfactory results from the engineering standpoint had been obtained, which is here about eight iterations. The stability of the procedure with respect to the choice of initial values has been established by numerical experimentation. Moreover, due to the use of the gradient method, no unrealistic value of transmissivities has been generated at any point of the computation. The method is very flexible and is able to take directly into account other types of boundary conditions in monophasic production situations. Extension of the method is currently being tested on the case of multiphase flow problems.
Introduction As a corollary to the current progress in numerical simulations and also as a necessity by itself, development of history-matching techniques has been newsworthy and internationally widespread in recent years.
A number of significant papers on the subject, both review papers and ones expounding new techniques, have been published in the literature of the last decade. In this respect, let us cite Dougherty's review paper that provides valuable guides on the matter. Up to 1972, most of the work done had followed the lines of the perturbation method (according to Dougherty's classification) and had been referred to some arbitrary zonation of the reservoir model transmissivity, or storage coefficient, grid. In such an approach, the solution method pertains to the realm of multiple regression by least squares and the problem is treated as a nonlinear form of classical adjustment by same; alternatively, it may be attacked as a nonlinear programming problem, when a maximum absolute deviation norm is substituted for a squared average deviation norm to control the adjustment of the model. In this group of work, the required sensitivity coefficients are obtained by multiple simulation, varying the parameters one at a time, which clearly enough precludes consideration of somewhat refined zonations; a dozen zones appears to be an accepted limit in this regard. In the course of the iterative procedure implementing the nonlinear least-squares adjustment process, oscillations and other convergence difficulties have been frequently reported, attributable, to a large extent, to a latent quasisingularity of the solution matrix; description of this appears in Jahns. The final explanation behind this resides presumably in the inadequacy of the zonation to the problem (here again, see Jahns). The work of Jacquard was revolutionary in the domain, inasmuch as it gave access within a reasonable amount of computer time to the full set of sensitivity coefficients respective to each node of the transmissivity, or storage coefficient, grid.
The possibility of removing the zonation constraints was thus basically opened. It was not used, however, due to adherence to the general least-squares reduction concept. Although substantial progress was achieved in this manner, the stability difficulties were not mastered in every case. SPEJ P. 74
Article
Full-text available
This paper presents a functional formulation of the groundwater flow inverse problem that is sufficiently general to accommodate most commonly used inverse algorithms. Unknown hydrogeological properties are assumed to be spatial functions that can be represented in terms of a (possibly infinite) basis function expansion with random coefficients. The unknown parameter function is related to the measurements used for estimation by a “forward operator” which describes the measurement process. In the particular case considered here, the parameter of interest is the large-scale log hydraulic conductivity, the measurements are point values of log conductivity and piezometric head, and the forward operator is derived from an upscaled groundwater flow equation. The inverse algorithm seeks the “most probable” or maximum a posteriori estimate of the unknown parameter function. When the measurement errors and parameter function are Gaussian and independent, the maximum a posteriori estimate may be obtained by minimizing a least squares performance index which can be partitioned into goodness-of-fit and prior terms. When the parameter is a stationary random function the prior portion of the performance index is equivalent to a regularization term which imposes a smoothness constraint on the estimate. This constraint tends to make the problem well-posed by limiting the range of admissible solutions. The Gaussian maximum a posteriori problem may be solved with variational methods, using functional generalizations of Gauss-Newton or gradient-based search techniques. Several popular groundwater inverse algorithms are either special cases of, or variants on, the functional maximum a posteriori algorithm. These algorithms differ primarily with respect to the way they describe spatial variability and the type of search technique they use (linear versus nonlinear). 
The accuracy of estimates produced by both linear and nonlinear inverse algorithms may be measured in terms of a Bayesian extension of the Cramer-Rao lower bound on the estimation error covariance. This bound suggests how parameter identifiability can be improved by modifying the problem structure and adding new measurements.
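For a linear forward operator, the Gaussian maximum a posteriori estimate described above has a closed form: it minimizes a least-squares index with a goodness-of-fit term and a prior (regularization) term. A minimal sketch, assuming a linear operator `H` and invented prior and error covariances (the paper treats the general functional, possibly nonlinear, case):

```python
import numpy as np

def map_estimate(H, y, m, Q, R):
    """Minimize (y - H s)^T R^-1 (y - H s) + (s - m)^T Q^-1 (s - m).

    H: forward operator, y: measurements, m: prior mean,
    Q: prior covariance, R: measurement-error covariance.
    """
    A = H.T @ np.linalg.solve(R, H) + np.linalg.inv(Q)   # normal-equation matrix
    b = H.T @ np.linalg.solve(R, y - H @ m)              # data-misfit term
    return m + np.linalg.solve(A, b)

# Illustrative two-parameter problem: each unknown observed once, noisily.
H = np.eye(2)
y = np.array([1.0, 3.0])
m = np.zeros(2)          # prior mean
Q = np.eye(2)            # prior covariance (acts as the smoothness constraint)
R = 0.5 * np.eye(2)      # measurement-error covariance
s_hat = map_estimate(H, y, m, Q, R)   # shrinks y toward the prior mean
```

The prior term is exactly the regularization the abstract mentions: tightening `Q` pulls the estimate toward `m` and keeps the problem well-posed even when the data alone would not determine `s`.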
Article
Full-text available
Local hydraulic conductivity and head data are used to quantify the uncertainty which is traced through to target a reliable remediation design. The management procedure is based on the stochastic approach to groundwater flow and contaminant transport modeling, in which the log-hydraulic conductivity is represented as a random field. The remediation design procedure has two steps. The first is solution of the stochastic inverse model. Maximum likelihood and Gaussian conditional mean estimation are used to characterize the random conductivity field based on the hydraulic conductivity and hydraulic head measurements. Based on this statistical characterization, conditional simulation is used to generate numerous realizations (maps) of spatially variable hydraulic conductivity that honor the head and conductivity data. The second step is solution of the groundwater quality management models. The first model, termed the multiple realization management model, simultaneously solves the nonlinear simulation-optimization problem for a sampling of hydraulic conductivity realizations. The second model, termed the Monte Carlo management model, solves the nonlinear simulation-optimization problem individually for a sampling of hydraulic conductivity realizations. -from Authors
Article
A new nonlinear least squares solution for the Hydrogeologic parameters, sources and sinks, and boundary fluxes contained in the equations approximately governing two-dimensional or radial steady state groundwater motion was developed through use of a linearization and iteration procedure applied to the finite element discretization of the problem. Techniques involving (1) use of an iteration parameter to interpolate or extrapolate the changes in computed parameters and head distribution at each iteration and (2) conditioning of the least squares coefficient matrix through use of ridge regression techniques were proven to induce convergence of the procedure for virtually all problems. Because of the regression nature of the solution for the parameter estimation problem, classical methods of regression analysis are promising as an aid to establishing approximate reliability of computed parameters and predicted values of hydraulic head. Care must be taken not to compute so many parameters that the stability of the estimates is destroyed. Reduction of the error variance by adding parameters is desirable provided that the number of degrees of freedom for error remains large.
Article
A geostatistical approach is developed for the prediction of log-transmissivity, hydraulic head, and ultimately seepage velocities in a two-dimensional model of a confined aquifer under steady state conditions. The primary goal is to assess the uncertainty in model predictions associated with the uncertainty and scarcity of input data. The method uses cokriging to predict the most likely values for these functions and uses conditional simulations to generate equally probable realizations of these functions. The method allows for model uncertainty in the prescription of the boundary heads. The method is successfully applied to an artificial aquifer.
Article
The problem of estimating Hydrogeologic parameters, in particular, permeability, from input-output measurements is reexamined in a geostatistical framework. The field of the unknown parameters is represented as a `random field' and the estimation procedure consists of two main steps. First, the structure of the parameter field is identified, i.e., mathematical representations of the variogram and the trend are selected and their parameters are established by using all available information, including measurements of hydraulic head and permeability. Second, linear estimation theory is applied to provide minimum variance and unbiased point estimates of hydrogeologic parameters (`kriging'). Structure identification is achieved iteratively in three substeps: structure selection, maximum likelihood estimation, and model validation and diagnostic checking. The methodology was extensively tested through simulations on a simple one-dimensional case. The results are remarkably stable and well behaved. The estimated field is smooth, while small-scale variability is statistically described. As the quality of measurements improves, the procedure reproduces more features of the original field. The results are also shown to be rather insensitive to deviations from assumptions about the geostatistical structure of the field.
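The second step above, minimum-variance linear estimation (kriging), can be sketched in one dimension. This is a simple-kriging toy with an assumed exponential covariance; the variance and correlation length are illustrative, not from the paper:

```python
import numpy as np

def exp_cov(x1, x2, var=1.0, length=2.0):
    """Exponential covariance between two sets of 1-D locations."""
    return var * np.exp(-np.abs(np.subtract.outer(x1, x2)) / length)

def simple_krige(x_obs, y_obs, x_new, mean=0.0):
    """Simple kriging: minimum-variance linear estimate with known mean."""
    C = exp_cov(x_obs, x_obs)           # covariance among observations
    c = exp_cov(x_new, x_obs)           # covariance to estimation points
    w = np.linalg.solve(C, y_obs - mean)
    return mean + c @ w

x_obs = np.array([0.0, 4.0])
y_obs = np.array([1.0, -1.0])
est = simple_krige(x_obs, y_obs, np.array([0.0, 2.0]))
```

With no measurement error (no nugget), the estimate reproduces the datum exactly at an observation point, and at the midpoint of two antisymmetric observations it returns the mean — the smooth, well-behaved interpolation the abstract describes.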
Article
The inverse problem is defined here as follows: determine the transmissivity at various points, given the shape and boundary of the aquifer and recharge intensity and given a set of measured log-transmissivity Y and head H values at a few points. The log-transmissivity distribution is regarded as a realization of a random function of normal and stationary unconditional probability density function (pdf). The solution of the inverse problem is the conditional normal pdf of Y, conditioned on measured H and Y, which is expressed in terms of the unconditional joint pdf of Y and H. The problem is reduced to determining the unconditional head-log-transmissivity covariance and head variogram for a selected Y covariance which depends on a few unknown parameters. This is achieved by solving a first-order approximation of the flow equations. The method is illustrated for an exponential Y covariance, and the effect of head and transmissivity measurements upon the reduction of uncertainty of Y is investigated systematically. It is shown that measurement of H has a lesser impact than those of Y, but a judicious combination may lead to significant reduction of the predicted variance of Y. Possible applications to real aquifers are outlined.
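The conditional normal pdf above follows from the standard Gaussian conditioning formula: for jointly Gaussian log-transmissivity Y and head H, the conditional mean of Y is the prior mean plus a cross-covariance correction. A minimal sketch with invented covariance blocks (in the paper these come from the first-order flow equations):

```python
import numpy as np

def condition_on_heads(mu_Y, mu_H, C_YH, C_HH, h_obs):
    """E[Y | H = h_obs] = mu_Y + C_YH C_HH^{-1} (h_obs - mu_H) for a joint Gaussian."""
    return mu_Y + C_YH @ np.linalg.solve(C_HH, h_obs - mu_H)

# Two Y points, one head observation; the cross covariances are illustrative.
mu_Y = np.array([0.0, 0.0])
mu_H = np.array([10.0])
C_YH = np.array([[0.4], [-0.4]])   # head-log-transmissivity cross covariance
C_HH = np.array([[1.0]])
y_cond = condition_on_heads(mu_Y, mu_H, C_YH, C_HH, np.array([11.0]))
```

A head measured above its prior mean raises the estimate of Y where the cross covariance is positive and lowers it where it is negative, which is how head data reduce the variance of Y in the abstract's sensitivity study.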
Article
Two separate applications of the geostatistical solution to the inverse problem in groundwater modeling are presented. Both applications estimate the transmissivity field for a two-dimensional model of a confined aquifer under steady flow conditions. The estimates are based on point observations of transmissivity and hydraulic head and also on a model of the aquifer which includes prescribed head boundaries, leakage, and steady state pumping. The model used to describe the spatial variability of the log-transmissivity describes large-scale fluctuations through a linear mean or drift, and intermediate and small-scale fluctuations through a two-parameter covariance function. The first application presented estimates the log-transmissivities using Gaussian conditional mean estimation. The second application uses an extended form of cokriging. The two methods are compared and their relative merits discussed. The extended cokriging application is applied to the Jordan Aquifer of Iowa. A comparison is also made between the conditional mean application and an analytical approach.
Article
A first-order analytical solution of the inverse problem for aquifer steady flow, presented in paper 1 (Rubin and Dagan, this issue), is applied to the Avra Valley aquifer (Clifton and Neuman, 1982). First, the parameters characterizing the statistical structure of the log-transmissivity Y and water head H fields are estimated by a maximum likelihood procedure. The results for Y are in good agreement with those of (Clifton and Neuman, 1982), in spite of the different methodologies. The incorporation of head measurements is shown to have definite advantages in reducing the estimation variances of Y parameters. Next, the best estimates of Y at various points are obtained by simultaneous conditioning on the measurements of Y and H. It is shown that a substantial reduction in the variance of the conditioned Y is achieved by accounting for H measurements, justifying a posteriori the solution of the inverse problem. Finally, the effective recharge, which is assumed to be uniform, but random, is estimated as part of the process. Although the latter is relatively small for Avra Valley, it might be a parameter of considerable interest in other cases. Further applications of the methodology are suggested.
Article
A quasi-linear theory is presented for the geostatistical solution to the inverse problem. The archetypal problem is to estimate the log transmissivity function from observations of head and log transmissivity at selected locations. The unknown is parameterized as a realization of a random field, and the estimation problem is solved in two phases: structural analysis, where the random field is characterized, followed by estimation of the log transmissivity conditional on all observations. The proposed method generalizes the linear approach of Kitanidis and Vomvoris (1983). The generalized method is superior to the linear method in cases of large contrast in formation properties but informative measurements, i.e., there are enough observations that the variance of estimation error of the log transmissivity is small. The methodology deals rigorously with unknown drift coefficients and yields estimates of covariance parameters that are unbiased and grid independent. The applicability of the methodology is demonstrated through an example that includes structural analysis, determination of best estimates, and conditional simulations.
Article
The purpose of this survey is to review parameter identification procedures in groundwater hydrology and to examine computational techniques which have been developed to solve the inverse problem. Parameter identification methods are classified under the error criterion used in the formulation of the inverse problem. The problem of ill-posedness in connection with the inverse problem is addressed. Typical inverse solution techniques are highlighted. The review also includes the evaluation of methods used for computing the sensitivity matrix. Statistics which can be used to estimate the parameter uncertainty are outlined. Attempts have been made to compare and contrast representative inverse procedures, and direction for future research is suggested.
Article
An iterative stochastic approach is developed to estimate transmissivity and head distributions in heterogeneous aquifers. This approach is similar to the classical cokriging technique; it uses a linear estimator that depends on the covariances of transmissivity and hydraulic head and their cross covariance. The linear estimator is, however, improved successively by solving the governing flow equation and by updating the covariances and cross-covariance function of the transmissivity and hydraulic head fields in an iterative manner. As a result the nonlinear relationship between transmissivity and head is incorporated in the estimation, and the estimated fields are approximate conditional means. The ability of the iterative approach is tested with some deterministic and stochastic inverse problems. The results show that the estimated transmissivity and hydraulic head fields have smaller mean square errors than those obtained by classical cokriging, even in an aquifer with a variance of transmissivity up to 3.
Article
The study is a continuation and extension of a previous work (Dagan, 1985a) whose aim was to identify the values of the log-transmissivity Y for steady flow. The common basic assumptions are that Y is a normal and stationary random space function, the aquifer is unbounded, and a first-order approximation of the flow equation is adopted. The expected value of the water head H, as well as the Y unconditional autocovariance, are supposed to have analytical expressions which depend on a parameters vector θ. The proposed solution of the inverse problem consists of identifying θ with the aid of the model and of the measurements of Y and H and subsequently computing the statistical moments of Y conditioned on the same data, The additional features of the present study are (1) incorporation of a constant, but random, effective recharge and its identification and (2) accounting for the fact that θ estimation is associated with some uncertainty, whereas before θ was assumed to be identified with certainty. Analytical expressions are derived for the Y and H covariances for an exponential autocovariance of Y. Paper 2 (Rubin and Dagan, this issue) of the study illustrates the applications of the method to a real-life case.
Article
The geostatistical approach to the estimation of transmissivity from head and transmissivity measurements is developed for two-dimensional steady flow. The field of the logarithm of transmissivity (log-transmissivity) is represented as a zero-order intrinsic random field; its spatial structure is described in this application through a two-term covariance function that is linear in the parameters θ1 and θ2. Linearization of the discretized flow equations allows the construction of the joint covariance matrix of the head and log transmissivity measurements as a linear function of θ1 and θ2. In this particular application the coefficient matrices are calculated numerically in a noniterative fashion. Maximum likelihood estimation is employed to estimate θ1 and θ2 as well as additional parameters from measurements. Linear estimation theory (cokriging) then yields point or block-averaged estimates of transmissivity. The approach is first applied to a test case with favorable results. It is shown that the application of the methodology gives good estimates of transmissivities. It is also shown that when the transmissivities are used in a numerical model they reproduce the head measurements quite well. Results from the application of the methodology to the Jordan aquifer in Iowa are also presented.
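The maximum likelihood step above fits the covariance parameters by maximizing the Gaussian log-likelihood of the data. A hedged one-parameter sketch (a single correlation length θ of an exponential covariance, fitted by grid search over synthetic data; all values are illustrative):

```python
import numpy as np

def neg_log_like(theta, x, z):
    """Negative Gaussian log-likelihood (up to a constant) for correlation length theta."""
    C = np.exp(-np.abs(np.subtract.outer(x, x)) / theta)
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + z @ np.linalg.solve(C, z))

# Synthetic zero-mean data drawn from an exponential covariance with theta = 2.
x = np.linspace(0.0, 5.0, 6)
rng = np.random.default_rng(0)
C_true = np.exp(-np.abs(np.subtract.outer(x, x)) / 2.0)
z = np.linalg.cholesky(C_true) @ rng.standard_normal(6)

# Grid search over candidate correlation lengths.
thetas = np.linspace(0.5, 4.0, 36)
best = min(thetas, key=lambda t: neg_log_like(t, x, z))
```

In the paper the likelihood is built from the linearized head-log-transmissivity covariance matrix, which is linear in θ1 and θ2; the fitted parameters then feed directly into the cokriging step.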
Article
Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.
Article
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
Article
The development of stochastic methods for groundwater flow representation has undergone enormous expansion in recent years. The calibration of groundwater models, the inverse problem, has lately received comparable attention especially and almost exclusively from the stochastic perspective. In this review we trace the evolution of the methods to date with a specific view toward identifying the most important issues involved in the usefulness of the approaches. The methods are critiqued regarding practical usefulness, and future directions for requisite study are discussed.
Chapter
A computer program for simulating ground-water flow in three dimensions is presented. This report includes detailed explanations of physical and mathematical concepts on which the model is developed. Ground-water flow within the aquifer is simulated by using a block-centered finite-difference approach. The program is written in Fortran 77 and has a modular structure, which permits the addition of new packages to the program without modifying existing packages.
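The block-centered finite-difference idea above can be illustrated in one dimension: each cell carries a head unknown, and steady confined flow with fixed heads at both ends reduces to a tridiagonal linear system. A minimal sketch, with grid size and conductivity invented for illustration (the actual program handles three dimensions and many boundary packages):

```python
import numpy as np

def steady_heads(n, K, h_left, h_right):
    """Steady 1-D flow, K * d2h/dx2 = 0, on n interior cells with Dirichlet ends."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = -2.0 * K              # cell's own conductance
        if i > 0:
            A[i, i - 1] = K             # flow from left neighbor
        if i < n - 1:
            A[i, i + 1] = K             # flow from right neighbor
    b[0] -= K * h_left                  # fixed-head boundary cells enter
    b[-1] -= K * h_right                # the right-hand side
    return np.linalg.solve(A, b)

h = steady_heads(n=3, K=1.0, h_left=10.0, h_right=6.0)
```

With no sources or sinks, the computed heads fall on the straight line between the two boundary values, the expected steady-state solution for homogeneous conductivity.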