Article

Effect of Model Structure on the Accuracy and Uncertainty of Results from Water Quality Models

Abstract

Eight one-dimensional steady-state models with different complexity, which describe the phosphate concentration as a function of the distance along a river, were examined with respect to accuracy and uncertainty of the model results and identifiability of the model parameters by means of combined calibration and sensitivity analysis using Monte Carlo simulations. In addition, the models were evaluated by the Akaike information criterion (AIC). All eight models were calibrated on the same data set from the Biebrza River, Poland. Although the accuracy increases with model complexity, the percentage of explained variance is not significantly improved in comparison with the model that describes the phosphate concentration by means of three parameters. This model also yields the minimum value of the AIC and the parameters could be well identified. Identification of the model parameters becomes poorer with increasing model complexity; in other words the parameters become increasingly correlated. This scarcely affects the uncertainty of the model results if correlation is taken into account. If correlation is not taken into account, the uncertainty of model results increases with model complexity. © 1997 by John Wiley & Sons, Ltd.
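The workflow in this abstract — fitting nested models of increasing complexity to one data set and ranking them by the Akaike information criterion — can be sketched in a few lines. The three model forms, the synthetic "observations" and all parameter values below are illustrative assumptions, not the paper's actual eight models or the Biebrza data:

```python
# Illustrative sketch: three nested steady-state phosphate profiles of
# increasing complexity, fitted to the same synthetic data and ranked by
# AIC (least-squares form: AIC = n*ln(RSS/n) + 2k).
import numpy as np
from scipy.optimize import curve_fit

def m1(x, c0):                 # 1 parameter: constant concentration
    return np.full_like(x, c0)

def m2(x, c0, k):              # 2 parameters: first-order decay
    return c0 * np.exp(-k * x)

def m3(x, c0, k, cb):          # 3 parameters: decay towards background cb
    return cb + (c0 - cb) * np.exp(-k * x)

def aic(obs, sim, n_par):
    rss = np.sum((obs - sim) ** 2)
    return len(obs) * np.log(rss / len(obs)) + 2 * n_par

rng = np.random.default_rng(1)
x = np.linspace(0.0, 20.0, 25)                    # km along the river
obs = 0.05 + 0.25 * np.exp(-0.2 * x) + rng.normal(0.0, 0.01, x.size)

for name, model, p0 in [("M1", m1, [0.1]),
                        ("M2", m2, [0.3, 0.1]),
                        ("M3", m3, [0.3, 0.1, 0.05])]:
    popt, _ = curve_fit(model, x, obs, p0=p0)
    print(name, "AIC =", round(aic(obs, model(x, *popt), len(popt)), 1))
```

As in the paper, the lowest AIC flags the model whose extra parameters are actually supported by the data rather than merely absorbing noise.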


... While additional model complexity might be expected to improve the precision of model results, this has proven to be unfounded in a variety of studies (e.g. Gardner et al. 1980, Van der Perk 1997; also see Young et al. 1996). ...
... On the other hand, if there are justified alternatives then ideally, from an analytical point of view, the implications of these also should be considered (e.g. Gardner et al. 1980, van der Perk 1997). This raises two issues. ...
... 4) Results of complex models are more easily misinterpreted, while not necessarily being any more reliable (e.g. Gardner et al. 1980, Van der Perk 1997). Therefore, there remains a need to promote uncertainty estimation, and to continue to develop models and analytical tools which permit such analysis, and which reflect the resource constraints of users. ...
... (e.g. Gardner et al. 1980; Van der Perk 1997). This raises two issues. ...
... While additional model complexity might be expected to improve the precision of model results, this has proven to be unfounded in a variety of studies (e.g. Gardner et al. 1980; Van der Perk 1997; also see Young et al. 1996). Furthermore, future driving forces such as climate (Parker 1993) and distributed pollution sources (Shepherd et al. 1999) are poorly defined and themselves cannot be modelled with much precision. ...
... Often in water quality modelling, the prior parameter uncertainty is not conditioned by observations using one of the aforementioned methods, but is propagated to predictive results by Monte Carlo sampling of independent prior distributions of values, or through first-order approximation. Examples include Van der Perk et al. (1997), who apply Monte Carlo simulation to a steady-state river quality model, and Aalderink et al. (1996), who evaluate the effect of input uncertainties on a heavy metal model (interestingly, they conclude that distinguishing between the effects of different pollution control scenarios is impossible due to high uncertainty). In the most widely used river water quality models, formal investigation of model uncertainty is very rare. ...
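The propagation strategy described in this excerpt — Monte Carlo sampling of independent prior parameter distributions pushed through a steady-state model — reduces to a short sketch. The first-order decay profile and the prior distributions below are illustrative assumptions:

```python
# Minimal sketch of forward Monte Carlo uncertainty propagation through a
# steady-state first-order decay profile C(x) = C0 * exp(-k*x).
import numpy as np

rng = np.random.default_rng(42)
n_runs = 5000
x = np.linspace(0.0, 20.0, 50)                   # km downstream
k = rng.lognormal(np.log(0.2), 0.3, n_runs)      # decay rate (1/km), uncertain
c0 = rng.normal(0.30, 0.03, n_runs)              # upstream conc. (mg P/L)

profiles = c0[:, None] * np.exp(-k[:, None] * x[None, :])
p5, p50, p95 = np.percentile(profiles, [5, 50, 95], axis=0)
print("90% band width at ~10 km:", round(p95[25] - p5[25], 3), "mg P/L")
```

Drawing the parameters jointly from a distribution with a calibrated correlation structure, instead of independently as here, is exactly the distinction the original abstract draws when comparing result uncertainty with and without parameter correlation.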
Article
Full-text available
The case is presented for increasing attention to the evaluation of uncertainty in water quality modelling practice, and for this evaluation to be extended to risk management applications. A framework for risk-based modelling of water quality is outlined and presented as a potentially valuable component of a broader risk assessment methodology. Technical considerations for the successful implementation of the modelling framework are discussed. The primary arguments presented are as follows. (1) For a large number of practical applications, deterministic use of complex water quality models is not supported by the available data and/or human resources, and is not warranted by the limited information contained in the results. Modelling tools should be flexible enough to be employed at levels of complexities which suit the modelling task, data and available resources. (2) Monte Carlo simulation has largely untapped potential for the evaluation of model performance, estimation of model uncertainty and identification of factors (including pollution sources, environmental influences and ill-defined objectives) contributing to the risk of failing water quality objectives. (3) For practical application of Monte Carlo methods, attention needs to be given to numerical efficiency, and for successful communication of results, effective interfaces are required. A risk-based modelling tool developed by the authors is introduced.
... Clearly conceptual (physically based) models also suffer from errors. These errors are caused by the various assumptions, discretisations and simplifications that are purposely made to make the model manageable (Gilchrist, 1984; Van Geer et al., 1991; Beven and Binley, 1992; Grayson et al., 1992a,b; Van Der Perk, 1997). In many cases the error in a conceptual model will be represented by an additive noise term (Kuczera, 1988), also referred to as the system noise (Van Geer et al., 1991), which can be estimated by means of validation (Willmott, 1981; Luis and McLaughlin, 1992; Heuvelink, 1998; Van Der Perk, 1997). ...
... These errors are caused by the various assumptions, discretisations and simplifications that are purposely made to make the model manageable (Gilchrist, 1984; Van Geer et al., 1991; Beven and Binley, 1992; Grayson et al., 1992a,b; Van Der Perk, 1997). In many cases the error in a conceptual model will be represented by an additive noise term (Kuczera, 1988), also referred to as the system noise (Van Geer et al., 1991), which can be estimated by means of validation (Willmott, 1981; Luis and McLaughlin, 1992; Heuvelink, 1998; Van Der Perk, 1997). Note that model error may also include a systematic component (Moore and Rowland, 1990; Van Geer et al., 1991). ...
... Although the use of error propagation in GIS is as yet far from a routine exercise, uncertainty analyses are by now quite common within the environmental sciences. Recent examples from various branches within the environmental sciences are Kros et al. (1992), Rossi et al. (1993), Bolstad and Stowe (1994), De Jong (1994, chapter 9), Gotway (1994), Jansen et al. (1994), Leenhardt (1995), Finke et al. (1996), Woldt et al. (1996), Binley et al. (1997), Dobermann and Oberthür (1997), Hunter and Goodchild (1997) and Van Der Perk (1997). ...
Chapter
Full-text available
GIS users and professionals are aware that the accuracy of GIS results cannot be naively based on the quality of the graphical output. Data stored in a GIS will have been collected or measured, classified, generalised, interpreted or estimated, and in all cases this allows the introduction of errors. With the processing or translation of this data into the GIS itself further propagation or amplification of errors also occur. It is essential that GIS professionals understand these issues systematically if they are to build ever more accurate systems. In this book the author's decade of study into these problems is brought into focus with an authoritative account of the development, application and implementation of error propagation techniques for use in environmental modelling with GIS. Its purpose is to provide a methodology for handling error and error propagation, for which the author is already well-respected internationally. The book is set to immediately become the classic reference source in its field and will be an essential read for GIS and environmental modelling professionals at both the practitioner and research levels.
... These differences in model evaluation statistics indicate an overfitting of the parameter values onto the calibration period, which often occurs in spatially-distributed and parameter-rich hydrological models (e.g., Beven, 2006; Schoups et al., 2008). Such an overfitting is usually connected with high uncertainties in the model prediction (van der Perk, 1997; Pande et al., 2009), which is discussed in more detail in Section 3.4. ...
... The discrepancy between the model evaluation statistics for calibration and validation indicates an overfitting of calibration parameters on the observed time series. This indicates a high degree of prediction uncertainty, which is induced not by parameter uncertainty but rather by an uncertainty in the process representation (van der Perk, 1997; Beven, 2006). In the present study, such uncertainty can be induced by the simplifications performed for the terrace extrapolation and the aggregation of terrace conditions on the subbasin level. ...
... The calibration procedure consists of finding a set of acceptable parameter values with the best agreement between the model outcome and the measurements. This can be achieved by minimising a given objective function (Werner, 2004; Van der Perk, 1997). Hydraulic-morphological river models are often calibrated on high discharges (e.g. for the Rhine those of 1993 and 1995) and the roughness coefficient is applied as the main calibration parameter (Werner, 2004; Pappenberger et al., 2005; Abbott et al., 2001). ...
... Calibration is often performed with parameters of which the values are uncertain. However, an additional source of uncertainty results from the fact that the roughness coefficient is used as a calibration parameter (Van der Perk, 1997). ...
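The calibration idea in the two preceding excerpts — minimising an objective function with the roughness coefficient as the tuning knob — can be illustrated as follows. The wide-channel Manning stage model and all observations are hypothetical stand-ins, not WAQUA or Rhine data:

```python
# Hedged sketch: calibrate Manning's n by minimising a sum-of-squares
# objective between modelled and observed stages at high discharges.
import numpy as np
from scipy.optimize import minimize_scalar

q_obs = np.array([800.0, 1500.0, 3000.0, 6000.0])   # discharge (m3/s)
h_obs = np.array([2.1, 3.0, 4.4, 6.2])              # observed stage (m)

def stage(q, n, width=250.0, slope=1e-4):
    # Manning's equation for a wide rectangular channel:
    # Q = (1/n) * B * h**(5/3) * sqrt(S)  =>  h = (n*Q / (B*sqrt(S)))**(3/5)
    return (n * q / (width * np.sqrt(slope))) ** 0.6

def objective(n):
    return np.sum((stage(q_obs, n) - h_obs) ** 2)

res = minimize_scalar(objective, bounds=(0.01, 0.10), method="bounded")
print("calibrated Manning n:", round(res.x, 4))
```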
... Since field data are important inputs to models, but also crucial for model calibration and validation, it is necessary that the process descriptions in the model are tailored to the available data. Models ignoring important processes which can be fed with input data will be too simple, while models representing processes for which no data are available will be unnecessarily complex (Van der Perk, 1997). In addition, the model process descriptions need to be tailored to the aims of modelling. ...
... Most modelling studies involve such a model development cycle (Figure 2.1), although it is mostly not described in reports. But there are examples of studies describing or explicitly focusing on such a comparison between different model structures (e.g., Van der Perk, 1997; Donnelly-Makowecki and Moore, 1999; Grayson et al., 1992). The model development cycle involves three phases per candidate model (Figure 2.1). ...
Article
An evaluation is made of the suitability of programming languages for hydrological modellers to create distributed, process-based hydrological models. Both system programming languages and high-level environmental modelling languages are evaluated based on a list of requirements for the optimal programming language for such models. This is illustrated with a case study, implemented using the PCRaster environmental modelling language to create a distributed, process-based hydrological model based on the concepts of KINEROS-EUROSEM. The main conclusion is that system programming languages are not ideal for hydrologists who are not computer programmers because the level of thinking of these languages is too strongly related to specialized computer science. A higher level environmental modelling language is better in the sense that it operates at the conceptual level of the hydrologist. This is because it contains operators that identify hydrological processes that operate on hydrological entities, such as two-dimensional maps, three-dimensional blocks and time-series. The case study illustrates the advantages of using an environmental modelling language as compared with system programming languages in fulfilling requirements on the level of thinking applied in the language, the reusability of the program code, the lack of technical details in the program, a short model development time and learnability. The study shows that environmental modelling languages are equally good as system programming languages in minimizing programming errors, but are worse in generic application and performance. It is expected that environmental modelling languages will be used in future mainly for development of new models that can be tailored to modelling aims and the field data available. Copyright © 2002 John Wiley & Sons, Ltd.
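The abstract's "level of thinking" argument is easy to make concrete: in a map-algebra style language (emulated here with NumPy arrays rather than PCRaster itself), a hydrological statement reads like the concept it expresses. The maps and the infiltration threshold below are synthetic assumptions:

```python
# One map-algebra statement on synthetic 2D maps: runoff is rainfall in
# excess of infiltration capacity, evaluated for every cell at once.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.uniform(0.0, 10.0, (100, 100))     # rainfall map (mm)
infcap = np.full((100, 100), 4.0)             # infiltration capacity map (mm)

runoff = np.maximum(rain - infcap, 0.0)       # reads like the hydrology itself
print("mean runoff depth:", round(runoff.mean(), 2), "mm")
```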
... For example, Li et al. (2019) analyzed the relationship between the standardized precipitation evapotranspiration index (SPEI) and the NDVI in southwestern China and found that drought greatly hindered plant growth. Additionally, vegetation with high transpiration rates, such as eucalyptus trees, can exacerbate drought conditions by consuming significant amounts of groundwater (Tong et al., 2020). The impacts of drought on vegetation activity are complex and are influenced by multiple climatic factors (Ding et al., 2020). ...
Article
Under climate warming, extreme drought events (EDEs) in southwestern China have become more frequent and severe and have had significant impacts on vegetation growth. Clarifying the influence of soil and meteorological droughts on the vegetation photosynthetic rate (PHR) and respiration rate (RER) can help policymakers to anticipate the impacts of drought on vegetation and take measures to reduce losses. In this study, the frequency and features of EDEs from 1990 to 2021 were analyzed using the standardized precipitation evapotranspiration index, and the longest-lasting and most severe EDE was chosen to assess the effects of drought on vegetation activity. Then, a land surface model was used to simulate the vegetation PHR and RER. Finally, the effects of the EDE on the vegetation PHR and RER were analyzed from the perspectives of soil and meteorological droughts. The results revealed that from 1990 to 2021, a total of 11 EDEs were observed in southwestern China, and the longest-lasting and most severe EDE occurred in 2009-2010 (EDE2009/2010). EDE2009/2010 significantly reduced the monthly mean PHR and RER by 9.82 g C m⁻² month⁻¹ and 0.80 g C m⁻² month⁻¹, respectively, causing a cumulative reduction of approximately 5.61 × 10¹³ g C. Soil and meteorological droughts had a driving force of 39 % on the PHR changes and an explanatory force of 42 % on the RER reduction. In particular, the soil drought had an average explanatory force of 25 % on the PHR and made a contribution of 24 % to the RER. The drought affected different types of vegetation differently, and crops were more susceptible than grassland and forests on the monthly time scale. The vegetation exhibited resilience to drought, returning to normal PHR and RER levels 2 months after the end of EDE2009/2010. This research contributes to understanding and predicting the impact of EDEs on vegetation growth in southwestern China.
... Lindenschmidt et al. (2007) investigated two empirical equations in the Water Quality Analysis Simulation Program (WASP) modeling framework and concluded that structure uncertainty was more significant than parameter and input uncertainty. Van der Perk (1997) applied eight one-dimensional river phosphorus models to the same dataset to explore the impact of model complexity on model accuracy/uncertainty. Radwan et al. (2004) assessed structural uncertainty by applying two river nutrient models and found that model structure uncertainty was smaller than input and parameter uncertainty estimates; however, they acknowledged that the two models used in the study were very similar. Xia and Jiang (2016) applied a eutrophication model using two unstructured computational grid sizes and showed that the finer resolution improved model performance. ...
Article
Full-text available
Model structure uncertainty is seldom calculated because of the difficulty and time required to perform such analyses. Here we explore how a coastal model using the Monod versus Droop formulations and a 6 km × 6 km versus 2 km × 2 km computational grid size predict primary production and hypoxic area in the Gulf of Mexico. Results from these models were compared to each other and to observations, and sensitivity analyses were performed. The different models fit the observations almost equally well. The 6k-model calculated higher rates of production and settling, and especially a larger hypoxic area, in comparison to the 2k-model. The Monod-based model calculated higher production, especially close to the river delta regions, but smaller summer hypoxic area, than the model using the Droop formulation. The Monod-based model was almost twice as sensitive to changes in nutrient loads in comparison to the Droop model, which can have management implications.
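The structural choice this study examines — Monod versus Droop growth kinetics — boils down to two one-line formulations. The parameter values below are illustrative, not those of the Gulf of Mexico model:

```python
# Sketch of the two phytoplankton growth formulations compared in the study:
# Monod ties growth to the external nutrient concentration, Droop to an
# internal cell quota.
import numpy as np

mu_max = 1.2        # maximum growth rate (1/d)
ks = 0.05           # Monod half-saturation constant (mg P/L)
q0 = 0.001          # Droop minimum cell quota (g P / g C)

def monod(s):       # s: external dissolved nutrient concentration
    return mu_max * s / (ks + s)

def droop(q):       # q: internal phosphorus quota of the cells
    return mu_max * (1.0 - q0 / q)

s = np.array([0.01, 0.05, 0.20])
print("Monod mu(s):", np.round(monod(s), 2))
print("Droop mu(q=0.004):", round(droop(0.004), 2))
```

Because Monod growth responds instantly to the external concentration while Droop growth is buffered by stored nutrient, the two structures can diverge strongly under changing loads — consistent with the study's finding that the Monod-based model was nearly twice as sensitive to nutrient load changes.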
... For instance, to assess the effect of an uncertainty due to model structure with a knowledge nature and scenario level, we can use the scenario analysis technique. However, sensitivity analysis [e.g., Van der Perk, 1997] or Monte-Carlo-based methods [e.g., Pappenberger et al., 2006] are also applicable. Even expert elicitation might be used to quantify the uncertainty due to model structure error [Warmink et al., 2011]. ...
Chapter
Modeling can play a critical role in assessing and mitigating risks posed by natural hazards. Uncertainties surrounding the modeling process can have important implications for the development, application, evaluation, and interpretation of models. This chapter focuses on the application of uncertainty analysis methods and tools to the context of natural hazard modeling. It introduces a framework for identifying and classifying uncertainties, and then provides practical guidance for implementing that framework. The chapter reviews terminology and offers examples of application to natural hazard modeling, culminating in an abbreviated illustration of uncertainty analysis in the context of wildfire and debris flow modeling. The chapter introduces a typology to categorically describe sources of uncertainty along three dimensions, and presents an “uncertainty matrix” as a graphical tool to illustrate the essential features of the typology. It also presents a decision tree to facilitate proper application of the uncertainty matrix.
... simulated with greater detail, the number of parameters in physically-based models will accordingly increase. This issue is exacerbated when physically-based models are employed at large scales such as catchments, where the limited available data do not support the embedded complexity and can therefore have an adverse effect on prediction uncertainty (Uhlenbrook et al., 1999; van der Perk, 1997). ...
... While additional model complexity might be expected to improve the precision of model results, this has proven to be unfounded in a variety of studies (e.g. Gardner et al. 1980, Van der Perk 1997, Lees et al. 2000; also see Young et al. 1996). Furthermore, future driving forces such as climate (Hulme et al. 2002) and distributed pollution sources (Shepherd et al. 1999) are poorly defined and they cannot be modelled with much precision. ...
Article
Full-text available
One-dimensional flow transport was used to monitor Methylococcus in the Chokocho River through a mathematical modelling approach. The study examined the transport of these microbes from non-point sources in the surrounding localities, and investigations were carried out to monitor concentration rates under the various conditions expressed in the system. The derived solutions produce a model for the study that expresses the rates of vertical and longitudinal dispersion; the flow velocity was also examined, and these parameters were found to exert various influences on the Methylococcus concentration in the Chokocho River.
... paired, than those models with a smaller number of parameters. In any case, more complex models are not always the best in terms of precision or accuracy, owing to their poor applicability in settings with few available data (van der Perk, 1997; Wagenet et al., 1998). Moreover, data availability usually decreases at larger scales, and the process of aggregating and averaging values can lead to serious misinterpretations (Stoorvogel et al., 1999). ...
Article
Full-text available
Introduction. Scientific efforts to meet the population's growing demand for food while avoiding continued environmental deterioration require a method that recognizes the complexity of the real world. This complexity arises from physical, chemical and biological considerations as well as from socioeconomic, cultural and political factors. The complicated and intricate nature of reality has often confused both decision makers and scientists, frequently making it impossible to define problems clearly and to seek solutions. Success has been achieved within some specific disciplines in understanding basic processes and concepts, but this dispersed knowledge has not been integrated; moreover, little progress has been made in generating tools for estimating the consequences of agricultural technologies for the environment or for productivity. A method that increases understanding of basic concepts and at the same time organizes this knowledge within a dynamic, quantitative framework is commonly known as Systems Analysis or Systems Research. One part of this methodology, a consequence of technological advances in computing and information science, comprises the support tools for integrating knowledge acquired at the disciplinary level. These tools include simulation models of plant growth and soil processes, models of social and economic systems, Geographic Information Systems (GIS), and database management systems. When all these computer-based means are used to assist decision makers, they are often called Decision Support Systems.
... Overparameterisation leads to underdetermination as is well known (e.g. van der Perk, 1997). Oversimplified models may beg the question, and are sometimes harder to apply because much needs to be specified. ...
Article
Full-text available
From an outsider's perspective, hydrology combines field work with modelling, but mostly ignores the potential for gaining understanding and conceiving new hypotheses from controlled laboratory experiments. Sivapalan (2009) pleaded for a question- and hypothesis-driven hydrology where data analysis and top-down modelling approaches lead to general explanations and understanding of general trends and patterns. We discuss why and how such understanding is gained very effectively from controlled experimentation in comparison to field work and modelling. We argue that many major issues in hydrology are open to experimental investigations. Though experiments may have scale problems, these are of similar gravity as the well-known problems of fieldwork and modelling and have not impeded spectacular progress through experimentation in other geosciences.
... Also, the definition evolved from many years of research (Janssen et al., 1990; Van Asselt and Rotmans, 1996; Harremoës and Madsen, 1999; Walker, 2000). Following Beck (1987) and Van der Perk (1997), uncertainty consists of both inaccuracy and imprecision (which Van der Perk referred to as uncertainty). Inaccuracy is defined as the difference between a simulated value and an observation, while imprecision refers to the possible variation around the average simulated and observed values. ...
Article
Full-text available
Flooding is a serious threat in many regions in the world and is a problem of international interest. Hydrodynamic models are used for the prediction of flood water levels to support flood safety and are often applied in a deterministic way. However, the modelling of river processes involves numerous uncertainties. Previous research has shown that the hydraulic roughness is one of the main sources of in hydrodynamic computations. Knowledge of the type and magnitude of uncertainties is crucial for a meaningful interpretation of the model outcomes and the usefulness of model outcomes in decision making. The objective of this thesis is to quantify the uncertainties in the hydraulic roughness that contribute most to the uncertainty in the water levels and quantify their contribution to the uncertainty for the 2D hydrodynamic WAQUA model for the river Waal under design conditions. This research showed that the uncertainty of a complex model factor, such as the hydraulic roughness, can be quantified explicitly. The hydraulic roughness has been unravelled in separate components, which have been quantified separately and then combined and propagated through the model. In chapter 2, a method is presented to identify the sources of uncertainty in an environmental model. In chapter 3, expert opinion is used to determine the sources of uncertainty that contributed most to the uncertainty in the design water levels. Chapter 4 describes the quantification of the uncertainty in the bedform roughness and in chapter 5 the uncertainty in bedform roughness is combined with the uncertainty in the vegetation roughness. The results show a best estimate of the uncertainty range under design conditions, due to roughness, given that we did not account for the effect of calibration. The final uncertainty range is significant in view of Dutch river management practise. The research demonstrates that the uncertainties in a modelling study can be made explicit. The process of uncertainty analysis helps in raising the awareness of the uncertainties and enhances communication about the uncertainties among both scientists and decision makers.
... There are many different definitions of uncertainty (e.g. Walker et al., 2003; Refsgaard et al., 2007). Here, uncertainty is assumed to consist of inaccuracy and imprecision, following Van der Perk (1997). Inaccuracy is defined as the difference between a simulated value and an observation, while imprecision refers to the possible variation around the average simulated values and observed values. ...
Conference Paper
Full-text available
This study investigates the selection of an appropriate low flow forecast model for the Meuse River based on the comparison of output uncertainties of different models. For this purpose, three data driven models have been developed for the Meuse River: a multivariate ARMAX model, a linear regression model and an Artificial Neural Network (ANN) model. The uncertainty in these three models is assumed to be represented by the difference between observed and simulated discharge. The results show that the ANN low flow forecast model with one or two input variables(s) performed slightly better than the other statistical models when forecasting low flows for a lead time of seven days. The approach for the selection of an appropriate low flow forecast model adopted in this study can be used for other lead times and river basins as well.
... Measuring the robustness of the model is the target of sensitivity testing methods, which test whether the model response changes significantly after changing the model parameters and/or the structural formulation of the model. Roselle (1994), Sistla (1991), Vieux (1993) and Van der Perk (1997) used this approach for different applications, such as sensitivity analysis. ...
... Roselle [173] used this approach to study the effect of biogenic emission uncertainties by performing model simulations at three biogenic emission estimate levels. Vieux et al. [207] and Van der Perk [206] have used a similar approach in water quality modeling. ...
... The application of Akaike's IC is more common in hydrology-related studies, as in, for example, bank-full discharge modeling (e.g., Wilkerson, 2008) and in the assessment of numerous artificial neural network models for runoff and flood prediction (see Chapter 2.6). The method has also been applied by Van der Perk (1997) to evaluate eight different models of water quality along a river profile, identifying the best balance between complexity and explanatory power for a model limited to just three parameters. Similarly, Cox et al. (2006) found, using IC, the best performance of a plant nutrient-uptake model with some simplified components. ...
Chapter
Full-text available
In this chapter, three broad categories of geomorphological models are considered: (1) traditional physically based computer models; (2) cellular-automata models; and (3) statistical models of observations or simulated data. Nine considerations for constructing and running geomorphological models within these categories are then explored: (1) suitability of the model for the question and observational data at hand; (2) model parsimony; (3) dimensional analysis; (4) benchmarks; (5) sensitivity analysis; (6) calibration; (7) observation and model data exploration; (8) uncertainty assessment; and (9) alternative models, data, and questions. For each consideration, good practices within the context of the literature are highlighted.
... The first six measures of "goodness-of-fit" were calculated for the training, testing, and cross-validation data sets and assessed to judge the candidate model's performance. One would expect that a model with more parameters would match data better than a model with fewer parameters; however, increasing model complexity does not necessarily lead to proportionate increases in model accuracy (Van der Perk 1997). Therefore, the last two performance measures were used because they penalize the models with more parameters and can thus provide a good evaluation of model parsimony when models are to be compared. ...
Article
Full-text available
This study is an effort to incorporate low-cost time-variant remote sensing (RS) information in watershed-scale total phosphorus (TP) modelling. Four watershed subdivisions were delineated to assess the impact of watershed subdivision on the prediction accuracy of TP concentration in stream water. Four TP artificial neural network (ANN) models were designed to incorporate RS data into a semi-distributed approach. The remotely derived enhanced vegetation index and the normalized difference water index were successful in representing vegetation dynamics in the devised models. The models were applied to a 15.6 km² watershed in the Canadian Boreal Plain. Eight measures of goodness-of-fit statistics were used for model evaluation. Although statistical model evaluation did favour the finest resolution in this case study, the differences in performance indicators among the four models were insignificant for any practical application. The encouraging results from this exercise demonstrate the applicability of the ANN semi-distributed modelling approach and the usefulness of RS data in simulating TP dynamics. Such models can potentially serve as valuable tools for watershed-scale forest management.
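A minimal sketch of the study's modelling idea — an ANN regression of TP on remotely sensed indices — can be written with scikit-learn on synthetic data. The predictors, their assumed relationship to TP and all settings below are invented for illustration, not the Boreal Plain data set:

```python
# Toy ANN regression of total phosphorus on EVI, NDWI and discharge.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([rng.uniform(0.0, 1.0, n),      # EVI
                     rng.uniform(-0.5, 0.5, n),     # NDWI
                     rng.lognormal(0.0, 0.5, n)])   # discharge
tp = 0.02 + 0.05 * X[:, 2] + 0.03 * X[:, 0] + rng.normal(0.0, 0.005, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, tp, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, ann.predict(X_te)), 3))
```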
... Readings from the referenced stations were then averaged to develop a regional flow, and flow was used to convert decay rates to distances traveled downstream. The fate of TP was modeled using Equation (8) to represent the flow rate and sinks, but more complex one-dimensional phosphorus models are available (Van Der Perk, 1997). ...
Article
Recent developments in water quality monitoring have generated interest in combining non-probability and probability data to improve water quality assessment. The Interagency Task Force on Water Quality Monitoring has taken the lead in exploring data combination possibilities. In this paper we take a developed statistical algorithm for combining the two data types and present an efficient process for implementing the desired data augmentation. In a case study simulated Environmental Protection Agency (EPA) Environmental Monitoring and Assessment Program (EMAP) probability data are combined with auxiliary monitoring station data. Auxiliary stations were identified on the STORET water quality database. The sampling frame is constructed using ARC/INFO and EPA's Reach File-3 (RF3) hydrography data. The procedures for locating auxiliary stations, constructing an EMAP-SWS sampling frame, simulating pollutant exposure, and combining EMAP and auxiliary stations were developed as a decision support system (DSS). In the case study with EMAP, the DSS was used to quantify the expected increases in estimate precision. The benefit of using auxiliary stations in EMAP estimates was measured as the decrease in standard error of the estimate.
... Later, modelling studies analysed the effects of model structure in more detail (e.g., Van der Perk, 1997; Butts et al., 2004; Sieber and Uhlenbrook, 2005) and the implied uncertainty, followed by model comparison projects such as the Distributed Model Intercomparison Project (DMIP; Smith et al., 2004). Recently, the methods have been developed further to explicitly analyse uncertainties that are caused by model structure and model parameters (e.g., GLUE; Beven and Binley, 1992). ...
Article
Full-text available
This paper presents modelling of the effects of input data resolution and classification of a regionally applied soil-vegetation-atmosphere-transfer (SVAT) scheme. Most SVAT schemes were developed at local scales but often are applied at regional scales to simulate regional water balances and to predict effects of environmental changes on catchment hydrology. Applying models at different scales requires investigating sensitivity to the available input data. In this study, investigated input data include soil maps, vegetation classifications, topographic information and weather data of varying temporal and spatial resolutions. Target quantities are simulated water fluxes such as evapotranspiration rates, groundwater recharge and runoff generation rates. Model sensitivity is estimated with respect to water balances and water flows, focusing on different time periods (months, years). The soil vegetation atmosphere transfer scheme SIMULAT is applied to two different catchments representing different environments where data sets of varying data quality and resolution are available. Results show that, on an annual time scale, SIMULAT is most sensitive to aggregation of soil information and mis-classification in vegetation data. On the monthly time scale, SIMULAT is also very sensitive to disaggregation of precipitation data. The sensitivity to spatial distribution of land-use data and spatio-temporal resolution of weather data is low. Based on the investigations, a ranking of the sensitivity of the model to resolution and classification of different input data sets is proposed. Minimum requirements concerning data resolution for regional scale SVAT applications are derived.
... Also, the definition evolved from many years of research (Janssen et al., 1990; Van Asselt and Rotmans, 1996; Harremoës and Madsen, 1999; Walker, 2000). Following Beck (1987) and Van der Perk (1997), uncertainty consists of both inaccuracy and imprecision (which Van der Perk (1997) refers to as uncertainty). Inaccuracy is defined as the difference between a simulated value and an observation, while imprecision refers to the possible variation around the average simulated and observed values. ...
... This was enhanced by the fact that only three pesticide profiles were available for evaluation, so that the identifiability of model parameters was limited. Future studies on comparison of model concepts could use input mapping (Rose et al., 1991) and consider the effect of model complexity on parameter identifiability (Van der Perk, 1997). Evaluation of model concepts themselves is bound to be most successful when using one model in combination with a library of modules with different complexity of the included process descriptions (Tiktak et al., 1994a). ...
Article
Full-text available
The performance of nine deterministic, one-dimensional, dynamic pesticide leaching models with different complexity was evaluated using a field experiment with bentazone and ethoprophos on a humic sandy soil with a shallow groundwater table. All modelers received an extensive description of the experimental data. Despite this fact, the interpretation of the experimental data was ambiguous, leading to tremendous user-dependent variability of selected model inputs. Together with the fact that most modelers calibrated at least part of their model, the possibility for evaluating model concepts was limited. In the case of bentazone, most model predictions were within the 95% confidence intervals of the observations. In the case of ethoprophos, model performance was often poor due to the ignorance of volatilization, kinetic sorption and adaptation of the microbial population. Most models were calibrated using on-site measured data, limiting the possibility for extrapolation for policy-oriented applications.
... (3) Human resource constraints often preclude intricate, data-intensive modelling exercises (Reckhow, 1994). (4) Results of complex models are more easily misinterpreted, while not necessarily being any more reliable (Gardner et al., 1980; Van der Perk, 1997). Therefore, there remains a need to promote uncertainty estimation, and to continue to develop models and analytical tools which permit such analysis, and which reflect the resource constraints of users. ...
Article
A model of phytoplankton, dissolved oxygen and nutrients is presented and applied to the Charles River, Massachusetts within a framework of Monte Carlo simulation. The model parameters are conditioned using data from eight sampling stations along a 40 km stretch of the Charles River, during a (supposed) steady-state period in the summer of 1996, and the conditioned model is evaluated using data from later in the same year. Regional multi-objective sensitivity analysis is used to identify the parameters and pollution sources most affecting the various model outputs under the conditions observed during that summer. The effects of Monte Carlo sampling error are included in this analysis, and the observations which have least contributed to model conditioning are indicated. It is shown that the sensitivity analysis can be used to speculate about the factors responsible for undesirable levels of eutrophication, and to speculate about the risk of failure of nutrient reduction interventions at a number of strategic control sections. The analysis indicates that phosphorus stripping at the CRPCD wastewater treatment plant on the Charles River would be a high-risk intervention, especially for controlling eutrophication at the control sections further downstream. However, as the risk reflects the perceived scope for model error, it can only be recommended that more resources are invested in data collection and model evaluation. Furthermore, as the risk is based solely on water quality criteria, rather than broader environmental and economic objectives, the results need to be supported by detailed and extensive knowledge of the Charles River problem.
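The regional sensitivity analysis used in this study (in the Hornberger-Spear tradition) can be sketched as a behavioural/non-behavioural split of Monte Carlo runs. The toy two-parameter model and the 20% acceptance threshold below are assumptions for illustration:

```python
# Regional sensitivity analysis sketch: a parameter is influential when its
# distribution differs between behavioural and non-behavioural runs
# (large Kolmogorov-Smirnov distance).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
n = 2000
k_up = rng.uniform(0.0, 1.0, n)       # hypothetical nutrient uptake rate
k_sed = rng.uniform(0.0, 1.0, n)      # hypothetical settling rate
error = np.abs(0.4 * k_up + 0.05 * k_sed - 0.25) + rng.normal(0.0, 0.02, n)

behavioural = error < np.percentile(error, 20)      # best 20% of runs
for name, par in [("k_up", k_up), ("k_sed", k_sed)]:
    res = ks_2samp(par[behavioural], par[~behavioural])
    print(name, "KS distance =", round(res.statistic, 3))
```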
... Indeterminacy, heterogeneity and extremes in water quality processes represent some of the most important and challenging environmental research topics ([1, 13, 14, 20, 22, 32, 40], among others). The importance of uncertainty has been widely recognized and documented (e.g., [2-4, 8, 17, 18, 33, 34, 36, 41, 43]). ...
... Issues of indeterminacy, heterogeneity, extremes, and fractal processes represent some of the most important and challenging environmental research topics [Anderson et al., 2000; Kirchner et al., 2000; Medina et al., 2002; Kirchner et al., 2004; Neal and Heathwaite, 2005]. The importance of uncertainty in the water quality area is widely recognized and documented [e.g., Beck, 1987; Van der Perk, 1997; Beven and Freer, 2001; Beven, 2002; Vrugt et al., 2002; Harris and Heathwaite, 2005; Zheng and Keller, 2006]. Several common problems, such as the incompatibility of observations and the need for implicit value judgments, are hard to solve. ...
Article
Water quality evaluation entails both randomness and fuzziness. Two hybrid models are developed, based on the principle of maximum entropy (POME) and engineering fuzzy set theory (EFST). Generalized weighted distances are defined for considering both randomness and fuzziness. The models are applied to 12 lakes and reservoirs in China, and their eutrophic level is determined. The results show that the proposed models are effective tools for generating a set of realistic and flexible optimal solutions for complicated water quality evaluation issues. In addition, the proposed models are flexible and adaptable for diagnosing the eutrophic status.
... Overparameterisation leads to underdetermination as is well known (e.g. van der Perk, 1997). Oversimplified models may beg the question, and are sometimes harder to apply because much needs to be specified. ...
Article
Full-text available
Phosphorus (P) pollution of surface waters remains a challenge for protecting and improving water quality. Central to the challenge is understanding what regulates P concentrations in streams. This quantitative review synthesizes the literature on a major control of P concentrations in streams at baseflow—the sediment P buffer—to better understand streamwater–sediment P interactions. We conducted a global meta‐analysis of sediment equilibrium phosphate concentrations at net zero sorption (EPC0), which is the dissolved reactive P (DRP) concentration toward which sediments buffer solution DRP. Our analysis of 45 studies and >900 paired observations of DRP and EPC0 showed that sediments often have potential to remove or release P to the streamwater (83% of observations), meaning that “equilibrium” between sediment and streamwater is rare. This potential for P exchange is moderated by sediment and stream characteristics, including sorption affinity, stream pH, exchangeable P concentration, and particle sizes. The potential for sediments to modify streamwater DRP concentrations is often not realized owing to other factors (e.g., hydrologic interactions). Sediment surface chemistry, hyporheic exchange, and biota can also influence the potential exchange of P between sediments and the streamwater. Methodological choices significantly influenced EPC0 determination and thus the estimated potential for P exchange; we therefore discuss how to measure and report EPC0 to best suit research objectives and aid in interstudy comparison. Our results enhance understanding of the sediment P buffer and inform how EPC0 can be effectively applied to improve management of aquatic P pollution and eutrophication.
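The review's central quantity, EPC0, is typically estimated from a batch sorption experiment as the x-intercept of net P sorbed versus equilibrium DRP. The data points below are invented for illustration, not drawn from the meta-analysis:

```python
# Sketch: EPC0 as the DRP concentration at which net sorption is zero,
# from a linear fit to batch sorption data.
import numpy as np

drp = np.array([0.01, 0.05, 0.10, 0.25, 0.50])     # equilibrium DRP (mg/L)
sorbed = np.array([-8.0, -3.5, 2.0, 18.0, 45.0])   # net P sorbed (mg/kg)

slope, intercept = np.polyfit(drp, sorbed, 1)
epc0 = -intercept / slope                          # x-intercept of the fit
print("EPC0 ~", round(epc0, 3), "mg P/L")
```

Sediments in contact with streamwater DRP below EPC0 tend to release P, and above it they tend to remove P — the "buffer" behaviour the meta-analysis quantifies.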
Article
By utilizing functional relationships based on observations at plot or field scales, water quality models first compute surface runoff and then use it as the primary governing variable to estimate sediment and nutrient transport. When these models are applied at watershed scales, this serial model structure, coupling a surface runoff sub-model with a water quality sub-model, may be inappropriate because dominant hydrological processes differ among scales. A parallel modeling approach is proposed to evaluate how best to combine dominant hydrological processes for predicting water quality at watershed scales. In the parallel scheme, dominant variables of water quality models are identified based entirely on their statistical significance using time series analysis. Four surface runoff models of different model complexity were assessed using both the serial and parallel approaches to quantify the uncertainty on forcing variables used to predict water quality. The eight alternative model structures were tested against a 25-year high-resolution data set of streamflow, suspended sediment discharge, and phosphorus discharge at weekly time steps. Models using the parallel approach consistently performed better than serial-based models, by having less error in predictions of watershed scale streamflow, sediment and phosphorus, which suggests model structures of water quantity and quality models at watershed scales should be reformulated by incorporating the dominant variables. The implication is that hydrological models should be constructed in a way that avoids stacking one sub-model with one set of scale assumptions onto the front end of another sub-model with a different set of scale assumptions.
Thesis
Full-text available
[ Reference available in linked data and at: http://sebina.iamb.it/opac/resource/continuous-land-useland-cover-changes-impacts-on-stream-flow-discharge-modeling-and-driving-factors-/CIH0023031?locale=eng ] Given the complexity of the land use/land cover change (LULCC) problem, in which different parameters are involved (environmental, social, economic, etc.), a multidisciplinary approach is mandatory to reach a state-of-the-art understanding. The DREAM model (Distributed model for Runoff, Evaporation and Antecedent soil Moisture simulation) is optimised and applied to model the effect of LULCC on the water budget and to estimate streamflow discharges, with particular emphasis on model parameter optimization and the spatio-temporal changes pertaining to LULCC. Processed LANDSAT images are used to analyse the changes in land use, and MODIS Leaf Area Index images are used as model input to simulate the effect of land cover dynamics. In addition, the driving factors leading to LULCC, both physical (climatic) and human (economic and governance), are thoroughly investigated. The study is conducted in the Celone at San Vincenzo sub-basin of the Candelaro watershed in the Apulia region, Southern Italy, an area that offers suitable conditions for such a study: indeed, the watershed's soil components demonstrate high dynamicity. Links and causal relations among the different components within the Celone were synthesised to draw recommendations for the future governance and management of the area.
Article
Full-text available
The one‐dimensional advection dispersion equation (1D ADE) is commonly used in practice to simulate pollutant transport processes for assessment and improvement of water quality conditions in rivers. Various studies have shown that the longitudinal dispersion coefficient used within the 1D ADE is influenced by a range of hydraulic and geomorphological conditions. This study aims to quantify the impact and importance of the parameter uncertainty associated with the longitudinal dispersion coefficient on modeled pollutant time‐concentration profiles and its implications for meeting compliance with water quality regulations. Six regression equations for estimating longitudinal dispersion coefficients are evaluated, and commonly used evaluation criteria were assessed for their suitability. A statistical evaluation of the regression equations based on their original calibration data sets resulted in percent bias (PBIAS) values between −47.01% and 20.78%. For a case study, uncertainty associated with the longitudinal dispersion coefficient was propagated to time‐concentration profiles using 1D ADE and Monte Carlo simulations, and 75% confidence interval bands of the pollutant concentration versus time profiles were derived. For two studied equations, the measured peak concentration values were above the simulated 87.5th percentile, and for the other four equations it was close to the 87.5th percentile. Subsequent uncertainty propagation analysis of four diverse rivers show the potential considerable impact on concentration‐duration‐frequency‐based water quality studies, with 1D ADE modeling producing predictions of quality standard compliance which varied over hundreds of kilometers.
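The study's uncertainty propagation can be mimicked with the closed-form 1D ADE solution for an instantaneous release, treating the longitudinal dispersion coefficient D as the uncertain input. The mass, geometry, velocity and the lognormal spread of D below are illustrative assumptions, not the study's rivers:

```python
# Monte Carlo on D through the 1D ADE solution for an instantaneous spill:
#   C(x, t) = M / (A * sqrt(4*pi*D*t)) * exp(-(x - u*t)**2 / (4*D*t))
import numpy as np

M, A, u, x = 50.0, 30.0, 0.6, 5000.0     # kg, m2, m/s, m downstream
t = np.linspace(600.0, 20000.0, 400)     # s

rng = np.random.default_rng(3)
D = rng.lognormal(np.log(30.0), 0.5, 1000)          # m2/s, uncertain

c = (M / (A * np.sqrt(4.0 * np.pi * D[:, None] * t))
     * np.exp(-((x - u * t) ** 2) / (4.0 * D[:, None] * t)))
peak = c.max(axis=1) * 1000.0                       # peak conc. (mg/L)
lo, hi = np.percentile(peak, [12.5, 87.5])          # 75% interval, as in study
print(f"75% interval for peak concentration: {lo:.2f}-{hi:.2f} mg/L")
```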
Thesis
Full-text available
This thesis focuses on computer modelling issues such as i) uncertainty, including uncertainty in parameters, data input and model structure, ii) model complexity and how it affects uncertainty, iii) scale, as it pertains to scaling calibrated and validated models up or down to different spatial and temporal resolutions, and iv) transferability of a model to a site of the same scale. The discussion of these issues is well established in the fields of hydrology and hydrogeology but has found less application in river water quality modelling. This thesis contributes to transferring these ideas to river modelling and to discussing their utilization when simulating river water quality. In order to provide a theoretical framework for the discussion of these topics, several hypotheses have been adapted and extended. The basic principle is that model error decreases and sensitivity increases as a model becomes more complex. This behaviour is modified depending on whether the model is being upscaled or downscaled or is being transferred to a different application site. A modelling exercise of the middle and lower Saale River in Germany provides a case study to test these hypotheses. The Saale is ideal since it has gained much attention as a test case for river basin management. It is heavily modified and regulated, has been overly polluted in the past and contains many contaminated sites. High demands are also placed on its water resources. To provide discussion of some important water management issues pertaining to the Saale River, modelling scenarios using the Saale models have been included to investigate the impact of a reduction in non-point nutrient loading and the removal and implementation of lock-and-weir systems on the river.
Article
Physically-based river water quality models are valuable tools for river basin management and planning. However, their long computational times pose many difficulties for applications that involve a large number of model iterations. This paper addresses this problem by developing a faster, surrogate conceptual model based on the detailed reference models. The hydrodynamic information and water quality process equations from different detailed models are considered as ensembles in the developed model. The model conceptualizes rivers using cascades of reservoirs and lumps the advection-diffusion and physico-biochemical processes. We tested the model by comparing its performance for the Molse Nete River, Belgium, with two popular reference models, namely MIKE 11 and InfoWorks RS. Results show that the conceptual model performs as well as the reference models while running 10⁴ times faster. The successful testing of this model opens a development avenue towards problem solving in the context of water quality control and management.
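The paper's conceptualisation — rivers as cascades of reservoirs that lump advection-diffusion — can be sketched with linear reservoirs. The residence time, cascade length and inflow pulse below are arbitrary illustrative choices, not the paper's calibrated values:

```python
# A cascade of linear reservoirs routing a concentration pulse: each
# reservoir obeys dC/dt = (C_in - C)/k (explicit Euler; dt < k for stability).
import numpy as np

def cascade(inflow, n_res=5, k=3.0, dt=1.0):
    c = np.zeros(n_res)
    out = []
    for cin in inflow:
        upstream = cin
        for i in range(n_res):
            c[i] += dt / k * (upstream - c[i])
            upstream = c[i]
        out.append(upstream)
    return np.array(out)

pulse = np.r_[np.ones(10), np.zeros(90)]    # 10-step inflow pulse
routed = cascade(pulse)
print("peak attenuated to", round(routed.max(), 3),
      "arriving at step", int(routed.argmax()))
```

The attenuation and delay of the pulse mimic what the full advection-diffusion solution produces, which is why such cascades can serve as fast surrogates.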
Article
This paper presents a review of computational uncertainties in scientific computing, as well as quantification of these uncertainties in the context of numerical simulations for thermo-fluid problems. The need for defining a measure of the numerical error that takes into account errors arising from different numerical building blocks of the simulation methods is discussed. In the above context, the effects of grid resolution, initial and boundary conditions, numerical discretization, and physical modeling constraints are presented.
Article
In this paper, sensitivity analysis (SA) has been used to assess model sensitivities to input parameter values in a water quality model. The water quality model incorporates a rainfall-runoff sub-model and a sediment load estimation sub-model, and is calibrated against hydrologic and water quality data from the Moruya River catchment in southeast Australia. The tested methods, One-at-A-Time (OAT), Morris Method (MM) and Regional SA (RSA) are found to be complementary, and help to characterise the behaviour of the water quality model. The most important parameters are plant stress threshold (f), coefficient of evapotranspiration (e), catchment moisture threshold (d), in decreasing order, indicating that sediment and nutrient loads are more sensitive to parameters that affect the magnitude of flows than those (vs, τq, τs) that control the timing and shape of the peak in a time series. But this application shows a need to be flexible in the use of different SA techniques. RSA is more appropriate for complex models where system nonlinearities and parameter interactions are more likely to be important. The RSA suggests that f and vs have strong interactions in the influence on nitrogen estimation. This study is also valuable for future uncertainty analysis, by separating the source of uncertainty of model parameters from the uncertainty in the model inputs.
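Of the three complementary techniques this paper tests, the simplest — one-at-a-time perturbation — fits in a few lines. The toy sediment-load function below stands in for the actual rainfall-runoff/sediment model, and the parameter symbols merely echo the abstract's:

```python
# One-at-a-time (OAT) sensitivity: perturb each parameter by +10% around a
# base point and record the relative change in the output.
import numpy as np

def sediment_load(f, e, d):            # toy stand-in for the catchment model
    return 100.0 * f ** 1.5 * e / (1.0 + d)

base = {"f": 0.6, "e": 0.15, "d": 2.0}
y0 = sediment_load(**base)

for name in base:
    pert = dict(base)
    pert[name] = base[name] * 1.1      # +10% perturbation
    dy = (sediment_load(**pert) - y0) / y0
    print(f"{name}: {100.0 * dy:+.1f}% output change per +10% input change")
```

OAT misses parameter interactions such as the f-vs interplay the paper's RSA detected, which is why the three methods are treated as complementary rather than interchangeable.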
Article
The identifiability of model parameters of a steady state water quality model of the Biebrza River and the resulting variation in model results was examined by applying the Monte Carlo method which combines calibration, identifiability analysis, uncertainty analysis, and sensitivity analysis. The water quality model simulates the steady state concentration profiles of chloride, phosphate, ammonium, and nitrate as a function of distance along a river. The water quality model with the best combination of parameter values simulates the observed concentrations very well. However, the range of possible modelled concentrations obtained for other more or less equally eligible combinations of parameter values is rather wide. This range in model outcomes reflects possible errors in the model parameters. Discrepancies between the range in model outcomes and the validation data set are only caused by errors in model structure, or (measurement) errors in boundary conditions or input variables. In this sense the validation procedure is a test of model capability, where the effects of calibration errors are filtered out. It is concluded that, despite some slight deviations between model outcome and observations, the model is successful in simulating the spatial pattern of nutrient concentrations in the Biebrza River.
Article
From an outsider's perspective, hydrology combines field work with modelling, but mostly ignores the potential for gaining understanding and conceiving new hypotheses from controlled laboratory experiments. Sivapalan (2009) pleaded for a question- and hypothesis-driven hydrology where data analysis and top-down modelling approaches lead to general explanations and understanding of general trends and patterns. We discuss why and how such understanding is gained very effectively from controlled experimentation in comparison to field work and modelling. We argue that many major issues in hydrology are open to experimental investigations. Though experiments may have scale problems, these are of similar gravity as the well-known problems of fieldwork and modelling and have not impeded spectacular progress through experimentation in other geosciences.
Article
Sensitivity and uncertainty analysis investigate the robustness of numerical model predictions and provide information about the factors that contribute most to the variability of model output, identifying the most important parameters for model calibration. This paper presents a sensitivity and uncertainty analysis of a 2D depth-averaged water quality model applied to a shallow estuary. The model solves the mass transport equation for Escherichia coli, including the effects of water temperature, salinity, solar radiation, turbulent diffusion and short wave dispersion. The sensitivity of the concentration of E. coli in the estuary to input parameters, and the different sources of uncertainty, are studied using Global Sensitivity Analysis based on Monte Carlo simulation methods and sensitivity measures based on linear and non-linear regression analysis, in order to aid modellers in the calibration process and in the interpretation of model output. The extinction coefficient of light in water and the depth of the vertical layer over which the E. coli spread were found to be the most relevant parameters of the model. In the shallowest regions of the estuary, errors in the bathymetry are also an important source of uncertainty in model output. Globally, the combination of these three parameters was found to be very effective for calibration purposes in the whole estuary.
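The regression-based global sensitivity measure mentioned above can be illustrated with standardized regression coefficients (SRCs) computed from a Monte Carlo sample. The linear stand-in for the E. coli model and the three input factors below are assumptions for illustration only.

# Sketch of regression-based global sensitivity: standardized regression
# coefficients (SRCs) from a Monte Carlo sample of a toy response.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical inputs: light extinction coefficient, mixing depth, depth error.
X = rng.uniform([0.5, 0.2, -0.3], [3.0, 2.0, 0.3], size=(n, 3))
y = 4.0 * X[:, 0] + 2.5 * X[:, 1] + 0.7 * X[:, 2] + rng.normal(0, 0.5, n)

# Least-squares fit, then scale coefficients by sd(input)/sd(output).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
src = coef[1:] * X.std(axis=0) / y.std()
for name, s in zip(["extinction coeff.", "mixing depth", "bathymetry error"], src):
    print(f"SRC({name}) = {s:+.2f}")
# sum(SRC^2) close to 1 indicates the linear regression explains the output well.
print("sum of squared SRCs:", round(float(np.sum(src ** 2)), 2))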
Article
Full-text available
The paper describes the results of a study based on the integration of remote sensing and geographical information system techniques to evaluate a distributed unit hydrograph model, linked to an excess rainfall model, for estimating the streamflow response at the outlet of a watershed. Travel time computation, based on the definition of a distributed unit hydrograph, has been performed with a procedure that uses (1) a cell-to-cell flow path through the landscape determined from a digital elevation model (DEM); and (2) roughness parameters obtained from remote sensing data. This procedure makes it possible to account for the differences in velocity between the hillslopes and the stream system (see the sketch below). The proposed procedure has been applied to two watersheds in Sicily in order to establish the level of agreement between the estimated and recorded hydrographs, using a simplified version of the probability distributed model as the tool to calculate the excess rainfall.
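The travel-time accumulation at the heart of this procedure can be sketched as follows: sum per-cell crossing times downstream along a known flow path, with different velocities for hillslope and channel cells. The flow path, velocities and cell size below are invented; in the study they derive from the DEM and remotely sensed roughness.

# Sketch: travel time to the outlet along a cell-to-cell flow path.
import numpy as np

cell_length = 30.0                       # flow length through a cell (m)
# Downstream sequence of cells from a hillslope cell to the outlet:
is_channel = np.array([False, False, False, True, True, True])
v_hillslope, v_channel = 0.05, 0.8       # assumed velocities (m/s)

velocity = np.where(is_channel, v_channel, v_hillslope)
cell_time = cell_length / velocity       # seconds spent crossing each cell

# Travel time from each cell to the outlet = sum of times downstream of it.
time_to_outlet = cell_time[::-1].cumsum()[::-1]
for i, t in enumerate(time_to_outlet):
    kind = "channel" if is_channel[i] else "hillslope"
    print(f"cell {i} ({kind}): {t / 60:.1f} min to outlet")
# Binning these travel times over all catchment cells yields the
# distributed unit hydrograph described in the abstract.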
Article
A method that combines calibration and identifiability analysis of a dynamic water quality model to evaluate the relative importance of various processes affecting the dynamic aspects of water composition is illustrated by a study of the response of suspended sediment and dissolved nutrients to a flood hydrograph in a rural catchment area in the Netherlands. Since the water quality model simulates the observed concentrations of suspended sediment and dissolved nutrients reasonably well, the most important processes during the observed flood hydrograph could be determined. These were erosion, exchange between dissolved phase and bed sediments and denitrification. It is concluded that the method is very useful for identifying the most significant model parameters and processes that are essential for water quality modelling. © 1998 John Wiley & Sons, Ltd.
Article
In order to model complex environmental systems, one needs to find a balance between the model complexity and the quality of the data available to run and validate the model. This paper describes a method to find this balance. Four models of different complexity were applied to describe the transfer of nitrogen and phosphorus from pollution sources to river outlets in two large European river basins (Rhine and Elbe). A comparison of the predictive capability of these four models indicates the added value of the extra model complexity. We also quantified the errors in the data that were used to run and validate the models and analysed to what extent the model validation errors could be attributed to data errors, and to what extent to shortcomings of the model. We conclude that although the addition of more process description is interesting from a theoretical point of view, it does not necessarily improve the predictive capability. Although our analysis is based on an extensive pollution-sources–river-load database, it appeared that the information content of this database was sufficient only to support models of a limited complexity. Our analysis also illustrates that for a proper justification of a model's degree of complexity one should compare the model to simplified versions of itself. Copyright © 2001 John Wiley & Sons, Ltd.
Article
There are many sources of uncertainty in modelling systems, including uncertainty due to parameter estimates, input data and the structure of the system. In this paper, modelling system structure is understood to be the algorithms and equations used to describe and calculate processes. Specific attention is given to the uncertainty in the equations that link models interactively in a modelling system and in the algorithms that pass information between the models. This paper places emphasis on structural uncertainty in modelling systems, since this topic has received very little attention in research compared to the wealth of literature on parameter and input data uncertainty. This imbalance is partly due to the difficulty of quantifying structural uncertainty, and an example is given showing how this may be done.
Article
Modelling is an indispensable tool in geochemical engineering for predicting the outcome of our intended interventions in geochemical systems. Because such systems are highly complex, investigation by means of designed experiments can only apply to subsystems and processes. Models, too, can only capture a partial, simplified image of the true system. Conceptual models play a vital role. Advantages of the use of models include better testing of our understanding of geochemical processes, better formulation of this understanding, and better prediction of the outcome of our conjectures. Examples from hydrogeochemistry are discussed to demonstrate this.
Article
Full-text available
A quantitative model comparison methodology based on deviance information criterion, a Bayesian measure of the trade-off between model complexity and goodness of fit, is developed and demonstrated by comparing semiempirical transpiration models. This methodology accounts for parameter and prediction uncertainties associated with such models and facilitates objective selection of the simplest model, out of available alternatives, which does not significantly compromise the ability to accurately model observations. We use this methodology to compare various Jarvis canopy conductance model configurations, embedded within a larger transpiration model, against canopy transpiration measured by sap flux. The results indicate that descriptions of the dependence of stomatal conductance on vapor pressure deficit, photosynthetic radiation, and temperature, as well as the gradual variation in canopy conductance through the season are essential in the transpiration model. Use of soil moisture was moderately significant, but only when used with a hyperbolic vapor pressure deficit relationship. Subtle differences in model quality could be clearly associated with small structural changes through the use of this methodology. The results also indicate that increments in model complexity are not always accompanied by improvements in model quality and that such improvements are conditional on model structure. Possible application of this methodology to compare complex semiempirical models of natural systems in general is also discussed.
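A hedged sketch of the deviance information criterion itself: DIC equals the posterior mean deviance plus the effective number of parameters pD, where pD is the mean deviance minus the deviance at the posterior-mean parameters. The Gaussian toy model and synthetic "posterior" draws below are illustrative, not the transpiration models of the study; a real application would use MCMC samples.

# Sketch: DIC = mean deviance + pD, with pD = mean deviance minus the
# deviance evaluated at the posterior-mean parameter value.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 0.5, 40)                     # observations
theta_samples = rng.normal(1.0, 0.1, 1000)       # synthetic posterior of the mean
sigma = 0.5                                      # known observation error

def deviance(theta):
    # -2 * Gaussian log-likelihood of the data given mean theta.
    return float(np.sum((y - theta) ** 2) / sigma ** 2
                 + y.size * np.log(2 * np.pi * sigma ** 2))

d_samples = np.array([deviance(t) for t in theta_samples])
d_bar = d_samples.mean()
p_d = d_bar - deviance(theta_samples.mean())
print(f"DIC = {d_bar + p_d:.1f}  (pD = {p_d:.2f})")
# Among candidate model structures the lowest DIC marks the preferred
# complexity/fit trade-off, as in the transpiration-model comparison above.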
Article
Full-text available
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: Monte Carlo simulation's main drawback is computational time, while first-order analysis raises questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
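The contrast the abstract draws can be reproduced in miniature: propagate parameter uncertainty through the Streeter-Phelps critical dissolved-oxygen deficit once by Monte Carlo and once by plain first-order linearization at the means (the basic method the paper improves upon). Parameter means and standard deviations are illustrative, and a zero initial deficit is assumed.

# Sketch: Monte Carlo vs. plain first-order uncertainty analysis for the
# Streeter-Phelps critical DO deficit (illustrative parameter values).
import numpy as np

rng = np.random.default_rng(3)

def critical_deficit(kd, ka, L0):
    """Critical DO deficit (mg/L), assuming zero initial deficit."""
    tc = np.log(ka / kd) / (ka - kd)             # time of maximum deficit
    return kd * L0 / (ka - kd) * (np.exp(-kd * tc) - np.exp(-ka * tc))

mean = np.array([0.3, 0.7, 20.0])                # kd, ka (1/day), L0 (mg/L)
sd = np.array([0.05, 0.10, 2.0])

# Monte Carlo: propagate full distributions (keep only physical draws).
samples = rng.normal(mean, sd, size=(20000, 3))
samples = samples[(samples[:, 0] > 0) & (samples[:, 1] > samples[:, 0])]
dc_mc = critical_deficit(*samples.T)

# First-order: linearize at the means with central finite differences.
grad = np.empty(3)
for i in range(3):
    h = 1e-4 * mean[i]
    up, dn = mean.copy(), mean.copy()
    up[i] += h
    dn[i] -= h
    grad[i] = (critical_deficit(*up) - critical_deficit(*dn)) / (2 * h)
sd_fo = np.sqrt(np.sum((grad * sd) ** 2))

print(f"Monte Carlo:  mean = {dc_mc.mean():.2f}, sd = {dc_mc.std():.2f} mg/L")
print(f"First-order:  mean = {critical_deficit(*mean):.2f}, sd = {sd_fo:.2f} mg/L")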
Article
Full-text available
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2-3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
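The core of the methodology, nonlinear weighted least squares with reliability statistics, can be sketched with a simple analytic transport profile standing in for the finite-difference simulator. The profile, "true" parameters and noise level below are assumptions for illustration; scipy's curve_fit supplies the weighted fit and the parameter covariance from which standard errors follow.

# Sketch: fit transport parameters by weighted nonlinear least squares and
# read parameter reliability off the estimated covariance matrix.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def profile(x, v, k, c0=10.0):
    # Steady-state concentration with first-order decay at seepage velocity v.
    return c0 * np.exp(-k * x / v)

x = np.linspace(0.0, 50.0, 15)                   # sampling points (m)
true_v, true_k = 0.5, 0.02                       # m/day, 1/day
c = profile(x, true_v, true_k) + rng.normal(0.0, 0.3, x.size)
sigma = np.full(x.size, 0.3)                     # measurement error (weights)

popt, pcov = curve_fit(profile, x, c, p0=[1.0, 0.05], sigma=sigma,
                       absolute_sigma=True)
se = np.sqrt(np.diag(pcov))
for name, est, err in zip(["velocity v", "decay k"], popt, se):
    print(f"{name}: {est:.3f} +/- {err:.3f}")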
Article
The papers published in the volume examine aspects of systems from the broader, long-term matters of planning to the more detailed, shorter-term considerations of the operational policies needed to satisfy and maintain the planned objectives. Special reference is made to acid rain. The potential of information technology is also considered, together with a reassessment of long-held conventional views that this technology may eventually provoke.
Article
This paper reviews the role of uncertainty in the identification of mathematical models of water quality and in the application of these models to problems of prediction. More specifically, four problem areas are examined in detail: uncertainty about model structure, uncertainty in the estimated model parameter values, the propagation of prediction errors, and the design of experiments in order to reduce the critical uncertainties associated with a model. The main body of the review deals in turn with (1) identifiability and experimental design, (2) the generation of preliminary model hypotheses under conditions of sparse, grossly uncertain field data, (3) the selection and evaluation of model structure, (4) parameter estimation (model calibration), (5) checks and balances on the identified model, i.e. model 'verification' and model discrimination, and (6) prediction error propagation.
Article
Estimating a uniquely best set of values for the parameters of conceptual hydrological models has long been a problem of considerable concern. More recent interest in using these models to determine the flow paths of water passing through catchments experiencing acidification has sharpened the focus of such concern on the role of tracer observations in model and parameter identifiability. The paper examines the question of a priori identifiability in both deterministic and stochastic frameworks. Working with relatively simple linear and nonlinear two-store models, the deterministic analysis involves merely algebraic manipulation of the model's state space description. It is apparent that, while this form of analysis is of limited applicability (even with the assistance of systems of computer algebra), the availability of tracer observations enhances model identifiability in all the cases examined. More complex model structures, and the effects of model and observation uncertainty, can be explored within a stochastic framework based on filtering theory. It is found that the availability of two tracer signals does not necessarily improve identifiability beyond what is possible with just a single tracer measurement. There is also evidence of a basis for the cross referencing of identifiability results between the deterministic and stochastic frameworks.
Article
A sophisticated one-dimensional model to simulate the concentration of PO4-P(x,t) in a shallow, polluted river is presented; besides convection and dispersion, the model incorporates various physico-chemical and biochemical reactions acting as phosphorus sinks and sources. With reference to field data on the Tamagawa, which flows through the metropolitan area of Tokyo, the model is confirmed to represent the PO4-P concentration fairly well in the mid-region, though not near its mountainous origin or its estuary. A material balance of PO4-P, which the model makes possible, reveals that about 15% of the daily phosphorus input into the region is fixed by algae on the river bottom, while about 54% of the input flows downstream without being fixed. The balance also shows that the phosphorus decrease due to adsorption onto suspended solids and the increase attributable to hydrolysis of condensed phosphates in the water can each be disregarded. The rest of the PO4-P balance, about 30%, is composed of adsorption onto the river bed, seepage into the groundwater, and release from decomposition of non-viable algae. The model is also used to simulate the PO4-P concentration in the region under various conditions. Though naive, the most effective means found from the model to decrease the concentration is to curtail as much as possible the phosphorus inflow from tributaries.
Article
Current practice for the verification of water-quality simulation models is to use a combination of modeler judgment and graphical analysis to assess the adequacy of a model. Statistical testing of goodness-of-fit is sometimes undertaken, but usually with a null hypothesis that does not allow distinction between acceptable fit and highly variable data. In this paper, statistical methods are proposed to augment, but not replace, this conventional approach with a quantitative expression of goodness-of-fit. Model verification is expressed as a problem in hypothesis testing that may be conducted using a variety of statistical methods. Guidance is provided on the appropriate structure of the null hypothesis so that good model fit is not confounded with highly variable predictions and observations. In addition, consequences and corrective measures associated with assumption violations are examined. The t-test, the Wilcoxon test, regression analysis, and the Kolmogorov-Smirnov test are extensively discussed, and applications of each are presented for the verification of a mechanistic water-quality model.
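A brief sketch of the proposed quantitative verification: apply the paired t-test, the Wilcoxon signed-rank test and the two-sample Kolmogorov-Smirnov test to observed versus simulated values. The data below are synthetic stand-ins; in practice the null hypothesis must be framed as the paper advises.

# Sketch: statistical goodness-of-fit tests for model verification.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
observed = rng.normal(8.0, 1.0, 30)                  # e.g. DO concentrations
simulated = observed + rng.normal(0.1, 0.5, 30)      # model output, small bias

t_stat, t_p = stats.ttest_rel(observed, simulated)   # paired t-test on means
w_stat, w_p = stats.wilcoxon(observed, simulated)    # nonparametric analogue
ks_stat, ks_p = stats.ks_2samp(observed, simulated)  # compares distributions

print(f"paired t-test:       p = {t_p:.3f}")
print(f"Wilcoxon test:       p = {w_p:.3f}")
print(f"Kolmogorov-Smirnov:  p = {ks_p:.3f}")
# Note the abstract's caveat: failing to reject H0 of 'no difference' may
# simply reflect noisy data, so the null hypothesis must be framed carefully.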
Article
Protection zones for boreholes are defined through the use of pathline tracing in groundwater flow models. Traditional approaches to groundwater flow modelling focus on obtaining a single best model, occasionally supplemented by the use of sensitivity analyses. In most situations this approach is inappropriate because the groundwater flow model is so poorly determined that a variety of different boundary conditions and parameter values would give similar predictions of head. However, the range of feasible models may well give radically different predictions of the variable for which the model has been built, namely the borehole catchment. This paper outlines a procedure to determine the range of predictions of catchments which would arise from alternative calibrations of a model. The range of catchments is used to identify zones of certainty and uncertainty, leading to alternative definitions of the protection zone for differing purposes. An example is presented, based on Bestwood Pumping Station, Nottinghamshire, UK.
Article
Environmental models are often highly nonlinear, and parameters have to be estimated from noisy data. The standard approach of locally linearizing the model, which leads to ellipsoid confidence regions, is inappropriate in this situation. A straightforward technique to characterize arbitrary-shaped confidence regions is to calculate model output on a grid of parameter values. Each parameter value P results in a goodness of fit G(P), which allows delineation of the set of parameters corresponding to G(P) < Gc, with Gc some threshold level (e.g., 5% probability). This approach is impractical and time-consuming for complex models, however. This article aims at finding an efficient alternative. It is first shown that the most general approach is to generate parameter values uniformly covering the set G(P) < Gc rather than finding the boundary G(P) = Gc. It is argued that the most efficient method of generating a uniform cover is by a (theoretical) algorithm known as pure adaptive search (PAS); the presently proposed method (uniform covering by probabilistic rejection; UCPR) is shown to be a good approximation to PAS. The UCPR is compared with alternative methods for a number of test problems. It is illustrated that for complex models (where model run time dominates total computer time) UCPR is considerably faster and its cover of Gc more uniform than existing alternatives. An intrinsic problem common to all methods is that the amount of work increases at least quadratically with the number of parameters considered, making them of limited use for high-dimensional problems.
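The target that UCPR approximates efficiently, a uniform cover of the set G(P) < Gc, can be shown with brute-force rejection sampling on a two-parameter toy misfit surface. The surface, bounds and threshold are invented; the low acceptance rate of this naive version is precisely the inefficiency UCPR addresses.

# Sketch: uniform cover of the confidence region G(P) < Gc by rejection.
import numpy as np

rng = np.random.default_rng(6)

def goodness_of_fit(a, b):
    # Toy banana-shaped misfit surface (stand-in for a model-vs-data SSE).
    return (a - 1.0) ** 2 + 4.0 * (b - a ** 2) ** 2

g_c = 0.5                                          # threshold Gc (e.g. 5% level)
n_draws = 200_000
draws = rng.uniform([-2.0, -1.0], [3.0, 9.0], size=(n_draws, 2))
g = goodness_of_fit(draws[:, 0], draws[:, 1])
accepted = draws[g < g_c]                          # uniform cover of G(P) < Gc

print(f"accepted {accepted.shape[0]} of {n_draws} draws "
      f"({100 * accepted.shape[0] / n_draws:.2f}% efficiency)")
print("a in", accepted[:, 0].min().round(2), "..", accepted[:, 0].max().round(2))
print("b in", accepted[:, 1].min().round(2), "..", accepted[:, 1].max().round(2))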
Article
A problem in the application of geostatistics to soil is to find satisfactory models for variograms of soil properties. It is usually solved by fitting plausible models to the sample variogram by weighted least squares approximation. The residual sum of squares can always be diminished, and the fit improved in that sense, by adding parameters to the model. A satisfactory compromise between goodness of fit and parsimony can be achieved by applying the Akaike Information Criterion (AIC). For a given set of data the variable part of the AIC is estimated by Â = n ln(R/n) + 2p, where n is the number of experimental points on the variogram, R is the residual sum of squares and p is the number of parameters in the model. The model to choose is the one for which Â is least. The AIC is closely related to Akaike's earlier final prediction error and the Schwarz criterion. It is also equivalent to an F test when adding parameters in nested models.
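A worked example of the criterion: with Â = n ln(R/n) + 2p, an extra parameter must reduce the residual sum of squares enough to offset its +2 penalty. The residual sums of squares below are invented for illustration.

# Worked example: variable part of the AIC for two variogram models.
import numpy as np

def aic_variable_part(n, R, p):
    """Variable part of the AIC: n*ln(R/n) + 2*p."""
    return n * np.log(R / n) + 2 * p

n = 20                       # experimental points on the variogram
R_two_param = 0.042          # residual sum of squares, 2-parameter model
R_three_param = 0.036        # residual SS, 3-parameter model (better fit)

a2 = aic_variable_part(n, R_two_param, p=2)
a3 = aic_variable_part(n, R_three_param, p=3)
print(f"2-parameter model: A_hat = {a2:.2f}")
print(f"3-parameter model: A_hat = {a3:.2f}")
# The extra parameter reduces R but costs +2; choose the least A_hat.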
Article
The development and testing of coupled hydrological and chemical models for describing the impact of acid deposition on soil water and surface water chemistry are reviewed. Two problems fundamental to the modelling of environmental systems are identified. First, calibration data generally do not contain enough information uniquely to determine model parameters; this leads to an apparently good fit between observations and predictions, but provides only a weak test of the hypothesized processes. Second, state variables contained within the model are often difficult to relate to field observations, because of spatial heterogeneity or a ‘conceptual’ model structure being imposed on the system. This difficulty can prevent application of the scientific method to model development. Within hydrochemistry, more testable and thus better posed models can be built by using chemical signals to constrain the hydrological structure. More generally for environmental systems, the use of synthetic data analysis is suggested as a means to determine the minimal field observations necessary to identify the model parameters and to test the model. Still, given the measurements that can be performed, there may be fundamental limitations to the modelling of environmental systems that cannot be overcome. Probing such questions is vital to the future of environmental modelling.
Article
The Nepean River receives effluent containing phosphorus from a sewage treatment plant at Camden, N.S.W. Phosphorus concentration, suspended solids, sediments and aquatic plants downstream of the outfall were examined to determine the rate and pathways of phosphorus loss from the waterway. The phosphorus added was found to follow first-order kinetics in its removal from the waterway. Two reaction pathways were discernible, with over 90% of the added phosphorus being removed from the water column in 11 days, and the remainder in a further 70 days. The process remained constant from summer to winter and could be modelled for dry weather flows representing 68% of river flows. Soluble phosphorus was first incorporated into particles before being removed from the water column. Evidence is presented to show that the particles were predominantly phytoplankton and that they were largely removed from the water column by littoral-zone filtration. The second pathway appears to be sedimentation of nutrient-laden particles.
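The two first-order pathways reported above can be encoded as a double-exponential decay: a fast pool comprising most of the added phosphorus, essentially gone by day 11, and a slow remainder removed over a further 70 days. The pool fraction and rate constants below are back-of-envelope assumptions consistent with that description, not fitted values from the Nepean data.

# Sketch: two-pathway first-order phosphorus removal as a double exponential.
import numpy as np

def phosphorus_remaining(t, fast_fraction=0.92, k_fast=0.42, k_slow=0.057):
    """Fraction of added P still in the water column after t days.

    k_fast ~ ln(100)/11, so the fast pool is ~99% depleted by day 11;
    k_slow ~ ln(100)/81, so the slow pool is gone by about day 81 (assumed).
    """
    return (fast_fraction * np.exp(-k_fast * t)
            + (1.0 - fast_fraction) * np.exp(-k_slow * t))

for t in (0, 11, 81):
    print(f"day {t:3d}: {100 * phosphorus_remaining(t):5.1f}% of added P remains")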
Article
This book is an outgrowth of research contributions and teaching experiences by all the authors in applying modern fluid mechanics to problems of pollutant transport and mixing in the water environment. It should be suitable for use in first-year graduate level courses for engineering and science students, although more material is contained than can reasonably be taught in a one-year course, and most instructors will probably wish to cover only selected portions. The book should also be useful as a reference for practicing hydraulic and environmental engineers, as well as anyone involved in engineering studies for disposal of wastes into the environment. The practicing consulting or design engineer will find a thorough explanation of the fundamental processes, as well as many references to the current technical literature; the student should gain a deep enough understanding of the basics to be able to read with understanding the future technical literature of this evolving field.
Article
A text on error propagation in quantitative spatial modelling and GIS. Major themes are: GIS data quality and error processes; a stochastic error model for spatial attributes; a theory of error propagation with local GIS operations, followed by applications in lead consumption, DEMs, soil moisture and soil suitability analysis; global GIS operations and error propagation using multidimensional simulation; combining soil maps with point interpolations; and implementation of error propagation techniques in GIS. Future research issues are identified. -after Author
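In the spirit of the book's Monte Carlo simulation theme, here is a minimal error-propagation sketch for a local GIS operation: perturb an input raster according to its error model, push each realization through the operation, and summarize the spread of the output. The DEM values, error standard deviation and slope operation are invented for illustration.

# Sketch: Monte Carlo error propagation through a local raster operation.
import numpy as np

rng = np.random.default_rng(7)

dem = np.array([[100.0, 101.5, 103.0, 104.0],
                [100.5, 102.0, 103.5, 105.0],
                [101.0, 102.5, 104.5, 106.0]])   # elevation (m), 10 m cells
sigma_dem = 0.5                                  # assumed elevation error (m)
cell = 10.0

def slope_x(grid):
    # Simple finite-difference slope in the x direction (m/m).
    return (grid[:, 1:] - grid[:, :-1]) / cell

realizations = np.stack([slope_x(dem + rng.normal(0.0, sigma_dem, dem.shape))
                         for _ in range(1000)])
print("mean slope (m/m):\n", realizations.mean(axis=0).round(3))
print("slope standard deviation (m/m):\n", realizations.std(axis=0).round(3))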
IJking van Grondwatermodellen met Monte Carlo en mathematische optimalisatie
  • T. N. Olsthoorn
Olsthoorn, T. N. 1995. 'IJking van Grondwatermodellen met Monte Carlo en mathematische optimalisatie' ['Calibration of groundwater models using Monte Carlo and mathematical optimisation'], H2O, 28, 310–315 [in Dutch with English summary].
Grondwaterstandsverlagingen ten gevolge van de Duitse Bruinkoolwinningen in de Roerdalslenk: Kwantitatieve analyse van de verschillen tussen de Duitse en de Nederlandse modelstudie
  • T. N. Olsthoorn
Olsthoorn, T. N. 1989. 'Grondwaterstandsverlagingen ten gevolge van de Duitse Bruinkoolwinningen in de Roerdalslenk, Kwantitatieve analyse van de verschillen tussen de Duitse en de Nederlandse modelstudie' ['Groundwater-level declines caused by German lignite mining in the Roer Valley Graben: quantitative analysis of the differences between the German and the Dutch model studies'], Technical Report 728610001, RIVM, The Netherlands [in Dutch].
Complex confining layers: a stochastic analysis of hydraulic properties at various scales
  • M. F. P. Bierkens
Bierkens, M. F. P. 1994. 'Complex confining layers, a stochastic analysis of hydraulic properties at various scales', PhD Thesis, Utrecht University, Utrecht. 263 pp.
Statistical evaluation of mechanistic water-quality models
  • K. H. Reckow
  • J. T. Clements
  • R. C. Dodd
Reckow, K. H., Clements, J. T. and Dodd, R. C. 1990. 'Statistical evaluation of mechanistic water-quality models', J. Environ. Engrg., 116, 250–268.
Rate and pathways of phosphorus assimilation in the Nepean River at Camden, New South Wales
  • B. L. Simmons
  • D. M. H. Cheng
Simmons, B. L. and Cheng, D. M. H. 1985. 'Rate and pathways of phosphorus assimilation in the Nepean River at Camden, New South Wales', Wat. Res., 19, 1089–1095.
ADAM, An Error Propagation Tool for Geographical Information Systems (User Manual)
  • C. G. Wesseling
  • G. B. M. Heuvelink
Wesseling, C. G. and Heuvelink, G. B. M. 1993. ADAM, An Error Propagation Tool for Geographical Information Systems (User Manual). Department of Physical Geography, Utrecht University, Utrecht. 52 pp.
Simulation of PO4-P balance in a shallow and polluted river
  • S. Aiba
  • H. Ohtake
Aiba, S. and Ohtake, H. 1977. 'Simulation of PO4-P balance in a shallow and polluted river', Wat. Res., 11, 159–164.
Information theory and an extension of the maximum likelihood principle
  • H. Akaike
Akaike, H. 1973. 'Information theory and an extension of the maximum likelihood principle', in Petrov, B. N. and Csáki, F. (Eds), Second International Symposium on Information Theory. Akadémiai Kiadó. pp. 267–281.
Surface water chemistry of the Biebrza River with special emphasis on nutrient flow and vegetation
  • A. Barendregt
  • M. J. Wassen
Barendregt, A. and Wassen, M. J. 1994. 'Surface water chemistry of the Biebrza River with special emphasis on nutrient flow and vegetation', in Wassen, M. J. and Okruszko, H. (Eds), Towards Protection and Sustainable Use of the Biebrza Wetlands: Exchange and Integration of Research Results for the Benefit of a Polish–Dutch Joint Research Plan, Report 2. Utrecht University, Utrecht. pp. 133–146.
Water quality modeling: a review of the analysis of uncertainty
  • M. B. Beck
Beck, M. B. 1987. 'Water quality modeling: a review of the analysis of uncertainty', Wat. Resour. Res., 23, 1393–1441.
Construction and evaluation of models of environmental systems
  • M. B. Beck
  • A. J. Jakeman
  • M. J. McAleer
Beck, M. B., Jakeman, A. J. and McAleer, M. J. 1993. 'Construction and evaluation of models of environmental systems', in Jakeman, A. J., Beck, M. B. and McAleer, M. J. (Eds), Modelling Change in Environmental Systems. Wiley, New York.