Integrated Assessment 1389-5176/03/0401-005$16.00
2003, Vol. 4, No. 1, pp. 5–17 © Swets & Zeitlinger
Defining Uncertainty
A Conceptual Basis for Uncertainty Management
in Model-Based Decision Support
W.E. WALKER¹, P. HARREMOËS², J. ROTMANS³, J.P. VAN DER SLUIJS⁵, M.B.A. VAN ASSELT⁴,
P. JANSSEN⁶ AND M.P. KRAYER VON KRAUSS²
¹Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands, ²Environment & Resources DTU,
Technical University of Denmark, Denmark, ³International Centre for Integrative Studies (ICIS), Maastricht University, The Netherlands,
⁴Faculty of Arts and Culture, Maastricht University, The Netherlands, ⁵Copernicus Institute for Sustainable Development and Innovations,
Utrecht University, The Netherlands, and ⁶Netherlands Environmental Assessment Agency, National Institute of Public Health
and the Environment (RIVM), The Netherlands
Address correspondence to: Prof. Warren Walker, Faculty of Technology, Policy and Management, Delft University of Technology, P.O. Box 5015, 2600 GA Delft, The Netherlands. Tel.: +31 15 2785122; Fax: +31 15 2786439; E-mail: warrenw@tbm.tudelft.nl
ABSTRACT
The aim of this paper is to provide a conceptual basis for the systematic treatment of uncertainty in model-based decision support
activities such as policy analysis, integrated assessment and risk assessment. It focuses on the uncertainty perceived from the point of
view of those providing information to support policy decisions (i.e., the modellers’ view on uncertainty) – uncertainty regarding the
analytical outcomes and conclusions of the decision support exercise. Within the regulatory and management sciences, there is
neither commonly shared terminology nor full agreement on a typology of uncertainties. Our aim is to synthesise a wide variety of
contributions on uncertainty in model-based decision support in order to provide an interdisciplinary theoretical framework for
systematic uncertainty analysis. To that end we adopt a general definition of uncertainty as being any deviation from the
unachievable ideal of completely deterministic knowledge of the relevant system. We further propose to discriminate among three
dimensions of uncertainty: location, level and nature of uncertainty, and we harmonise existing typologies to further detail the
concepts behind these three dimensions of uncertainty. We propose an uncertainty matrix as a heuristic tool to classify and report the
various dimensions of uncertainty, thereby providing a conceptual framework for better communication among analysts as well as
between them and policymakers and stakeholders. Understanding the various dimensions of uncertainty helps in identifying,
articulating, and prioritising critical uncertainties, which is a crucial step to more adequate acknowledgement and treatment of
uncertainty in decision support endeavours and more focused research on complex, inherently uncertain, policy issues.
Keywords: uncertainty, ignorance, model-based decision support, policy analysis, integrated assessment, risk assessment, uncertainty
management.
1. INTRODUCTION
The world is undergoing rapid changes. The future is
uncertain. Even with respect to understanding existing
natural, economic and social systems, many uncertainties
have to be dealt with. Furthermore, because of the
globalisation of issues and the interrelationships among
systems, the consequences of making wrong policy decisions
have become more serious and global – potentially even
catastrophic. Nevertheless, in spite of the profound and
partially irreducible uncertainties and serious potential
consequences, policy decisions have to be made. Scientific
decision support aims to provide assistance to policymakers
in developing and choosing a course of action, given all of the
uncertainties surrounding the choice.
That uncertainties exist in practically all policymaking
situations is generally understood by most policymakers, as
well as by the scientists providing decision support. But
there is little appreciation for the fact that there are many
different dimensions of uncertainty, and there is a lack of
understanding about their different characteristics, relative
magnitudes, and available means of dealing with them. Even
within the different fields of decision support (policy
analysis, integrated assessment, environmental and human
risk assessment, environmental impact assessment, engi-
neering risk analysis, cost-benefit analysis, etc.), there is
neither a commonly shared terminology nor agreement on a
generic typology of uncertainties.
The need for more constructive approaches to account-
ability about uncertainty and ignorance in regulatory
decisions has grown with the increasing attention to the
‘‘precautionary principle.’’ The principle has put uncertainty
even more firmly and explicitly on the political agenda,
because the principle deals with situations where uncertainty
prevails regarding decisions about activities potentially
generating harm. The key questions are: What level of
certainty is demanded to curtail or even ban an activity that
might be harmful? Who should bear the burden of proof?
Who should run the risks associated with making the
wrong decision? These and similar questions are high-
lighted in a recent publication from the European Environ-
ment Agency [1].
The aim of this paper is to provide a conceptual
framework for the systematic treatment of uncertainty in
decision support in order to improve the management of
uncertainty in decisionmaking processes.
There are many good reasons to develop a typology of
uncertainties for model-based decision support. First and
foremost, it will provide for better communication among
policy analysts. In the current situation, different analysts
use different terms for the same kinds of uncertainty, and
some use the same term to refer to different kinds. This
makes it extremely difficult for those who have not
participated in the actual work to understand what has been
done. Defining uncertainty through a typology will also
provide for better communication among policy analysts,
policymakers and stakeholders. It is widely held that
policymakers expect scientists to provide certainties and
hence dislike uncertainty in the scientific knowledge base.
But, uncertainty is a fact of life and a better understanding of
the different dimensions of uncertainty and their implica-
tions for policy choices would be likely to lead to more trust
in the scientists providing decision support, and ultimately to
better policies. Finally, a better understanding of the
different dimensions of uncertainty and their potential
impact on the relevant policy issues at hand would help in
identifying and prioritising effective and efficient research
and development activities for decision support. For
example, it would help at the beginning of a project to
decide on the allocation of project resources. Knowing about
the relative differences in outcomes from better parameter
estimates for an assumed model, a more appropriate model
or better information on inputs might reveal the most
resource-effective strategy for carrying out the analysis.
2. MODEL-BASED POLICY ANALYSIS
2.1. The Policymaking Process
Policymaking processes involve policymakers, stakeholders
and scientists. The stakeholders communicate their goals,
objectives, and preferences to the policymakers who must
then decide on the policies to be adopted. Policies are the set
of forces within the control of the policymakers that affect
the structure and performance of the system of interest.
Loosely speaking, a policy is a set of actions taken by an
administration to control the system, to help solve problems
within it or caused by it, or to obtain benefits from it. In
public policy, the problems and benefits generally relate to
broad international, national or regional goals, for example,
tradeoffs among national environmental, social, and eco-
nomic goals. A goal is a generalized policy objective
(frequently non-quantitative, e.g., ‘‘reduce air pollution’’ or
‘‘ensure traffic safety,’’ and more rarely quantified, e.g., 80%
reduction of nutrient discharge). Policies are intended to
help achieve the goals.
To aid in the decisionmaking process, applied scientists
are frequently called upon to assess the outcomes of
alternative policies. In this paper, the scientists acting in
this capacity will be referred to as policy analysts and the
task they perform will be referred to as decision support. A
common approach to decision support is to create a model of
the system of interest that defines the boundaries of the
system and its structure, i.e., the elements, and the links,
flows, and relationships among these elements [2]. In this
case the analysis is referred to as being model based. The
system model is usually, but not necessarily, a computer-
based model. This paper will focus on model-based decision
support.
Each policy goes through its own unique process of
development and implementation. In practice, the involve-
ment of policy analysts, stakeholders and policymakers in
the process can take different forms. However, the simplified
and idealized multi-stage iterative process shown in Figure 1
captures many of the important elements of the interactions
between the policy analysis process and the policymaking
process and is a sufficient conceptual basis for the purposes
of this paper.
The rst stage of the process is the problem identification
and framing stage. This stage is ideally conducted in the
form of a dialogue among policymakers, stakeholders, and
Fig. 1. The policymaking process viewed as a multi-stage iterative process.
scientists. It may be that it is not possible to agree on a single
definition of the problem and that rival problem framings
must be explored in the analysis. In model-based policy
analysis, criteria are used to measure the degree to which
alternative policy actions can help to reach the goals. These
criteria are used to determine the outputs that should be
produced by the model. Those model outcomes that are
related to the policy goals and objectives are termed
outcomes of interest. The problem identification and framing
stage is instrumental in determining the structure of the
system model and identifying the outcomes of interest.
In the second stage, policy analysts assess the information
available to produce the knowledge required to support a
policy decision according to a range of plausible circum-
stances and developments, and according to the uncertainty
involved. In principle, both expert and lay knowledge should
be included in the analysis and the assessment process. This
process should ideally be accompanied by a quality control
stage, e.g., in terms of a peer review or a critical self-
reflection, which makes explicit the underlying assumptions,
underpinnings, and quality of the performed analysis,
thereby increasing confidence in the obtained results.
In the next stage the results of the analysis are discussed
by the policymakers and stakeholders. If the information
provided does not adequately match the information needs
agreed upon in the problem identification and framing stage,
or if the review by peers or stakeholders indicates that the
assessment is inadequately framed, too uncertain, too un-
reliable, too biased, or excessively value laden, the process
can be returned to the problem framing stage. If the model
structure and the results of the peer review are acceptable,
the policymakers and stakeholders can develop their
perspective on the results based on their values and interests.
Although a policy action may be designed with a single goal
in mind, it will seldom have an effect on only one outcome of
interest. Policy choices, therefore, depend not only on
estimating the outcomes of interest relative to the policy
goals and objectives, but also on identifying the preferences of the
various stakeholders, and identifying tradeoffs among the
outcomes of interest given these various sets of preferences.
In the final stages of the policy process, a policy is chosen,
implemented, and communicated to the public. The impacts
of the policy can then be monitored in order to see whether
the objectives are being achieved or not, to identify new
problems, and to assess whether identified uncertainties have
been resolved or new ones are emerging.
2.2. The System Model
Decision support activities must often explore the effect of
alternative policies on the full range of outcomes of interest
under a variety of scenarios, and examine the tradeoffs
among different policies. This exploration requires a struc-
tured analytical process. Because of the complexity of the
system being studied and the wide range of scenarios to be
considered, a system model is a useful and often indis-
pensable tool in this process.
A system model is an abstraction of the system of
interest: either the system as it currently exists, or as it is
envisioned to exist for purposes of evaluating policies in a
different (e.g., future) context. Here it is important to note
that we employ a broad interpretation of the term ‘‘model,’’
including both a conceptual formulation and/or a mathe-
matical model (algorithm), frequently found in the form of a
computer programme. A conceptual model may be as simple
as a line and box diagram of the structure of the system, with
lines representing more or less well known relationships,
varying from facts to beliefs. A widespread conceptual
model is that of risk as a function of
probability and consequence. This conceptual model has
been adapted to fit the various fields of risk assessment, for
example the expression of risk as a function of exposure and
effect in human and environmental risk assessment.
The system model represents the cause-effect relation-
ships characteristic of the system. In a mathematical model,
the relationships among the various components of the
system are expressed as functions. Although formulated in
mathematical terms, these models usually contain inherent
components of subjectivity. Subjectivity manifests itself
already in the conceptual phase when decisions are made
concerning which elements will be included in the analysis
and which will be left out. Subjectivity affects the manner in
which modellers translate the conceptual model into
mathematical equations.
A computer program is a translation of the mathematical
model into computer code. Typically the resulting system
model represents a compromise between desired function-
ality, plausibility, and tractability, given the resources at
hand (data, time, money, expertise, etc.).
In decision support activities, the focus of a modelling
exercise is typically on the response of a system to outside
forces (external changes or policy changes) and the system's
performance (i.e., the resulting values of the outcomes of
interest) in these future contexts. A much-used analytical
tool to deal with the deep uncertainties of the unknown (and
unknowable) future is to use scenarios as plausible
descriptions of how the system and its driving forces may
develop. A scenario is based on a coherent and internally
consistent set of assumptions about key relationships and
driving forces (e.g., technology changes, prices, etc.).
Different scenarios reflect the variety of alternative
economic, environmental, social, and technological condi-
tions that may be present in reality, including variations in
the behaviour of people. These conditions act on the system,
which leads to changes in the system and, ultimately,
changes in the outcomes of interest. Within the decision
support exercise, alternative scenarios may manifest them-
selves as alternative model formulations, as alternative sets
of input data, or as both. Policies represent the alternative
mechanisms for affecting the system that are under the
control of the policymakers (e.g., changes in prices,
regulations, infrastructure, etc.). Although the policies
themselves may be well defined and not uncertain, the ways
the system actually changes in response to the policy
changes are often highly uncertain.
The role of the system model within the policymaking
process is illustrated in Figure 2.
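For illustration only, the following sketch shows one way the relationships just described might be expressed in code: a system model that maps a policy and a scenario onto outcomes of interest. The function, variable names, and numerical relationships are hypothetical and are not taken from the paper.

```python
# Minimal sketch (hypothetical): a system model maps a policy and a
# scenario onto outcomes of interest, as in Figure 2.

def system_model(policy, scenario):
    """Toy model: outcomes of interest as functions of policy and scenario."""
    # Illustrative cause-effect relations; the functional forms are invented.
    emissions = scenario["baseline_emissions"] * (1.0 - policy["abatement_rate"])
    cost = policy["abatement_rate"] ** 2 * scenario["unit_abatement_cost"]
    return {"emissions": emissions, "abatement_cost": cost}

# One policy evaluated in one plausible future context:
policy = {"abatement_rate": 0.3}
scenario = {"baseline_emissions": 100.0, "unit_abatement_cost": 50.0}
print(system_model(policy, scenario))
```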
3. UNCERTAINTY
Uncertainty is not simply the absence of knowledge.
Funtowicz and Ravetz [3] describe uncertainty as a situation
of inadequate information, which can be of three sorts:
inexactness, unreliability, and border with ignorance.
However, uncertainty can prevail in situations where a lot
of information is available [4]. Furthermore, new informa-
tion can either decrease or increase uncertainty. New
knowledge on complex processes may reveal the presence
of uncertainties that were previously unknown or were
understated. In this way, more knowledge may show that
our understanding is more limited or that the processes are
more complex than thought before [5].
As will be elaborated further on in the paper, we distinguish
between uncertainty due to lack of knowledge and uncertainty
due to variability inherent to the system under consideration.
In order to encompass all dimensions of uncertainty, we adopt
a general definition of uncertainty as being any departure from
the unachievable ideal of complete determinism.
There have been many uncertainty typologies developed
for many purposes. Few have claimed to be comprehensive,
and even fewer have had model-based decision support as
their point of departure.¹ Our framework for uncertainty in
model-based decision support is consistent with most of
them, but is comprehensive within its context. Others are
more general, not targeted specifically on model-based
decision support (such as [3,11]) or apply to a specific
context, such as water management (e.g., [8]). Classifica-
tions that are model-oriented either focus on a single
dimension of uncertainty (e.g., Alcamo and Bartnicki [7],
who focus on the location of uncertainty), reduce uncertainty
to error (e.g., [6]), or do not discriminate explicitly between
the level and the nature of uncertainty [10]. Within the
context of model-based decision support, therefore, it can
easily be concluded that there is neither a commonly shared
terminology nor agreement on a generic typology of
uncertainties. The aim of this paper is to highlight the
agreements in order to provide a conceptual basis for the
systematic treatment of uncertainty in policy analysis and
integrated assessment.
Our major challenge was to find a categorization such that
all of the different kinds of uncertainty found in the literature
can be mapped into the categories that we propose. In doing
that, the resulting synthesis should then be comprehensive
and complete. A second challenge was to be specific to
model-based decision support: removing categories
unrelated to this context and clustering the remaining
notions. Finally, labels had to be found for our categories
that were at least supported by our group of authors.
Those discussing uncertainty in scholarly fora (journals,
conferences), referred to in this paper as uncertainty experts,
agree that it is important to distinguish between what can be
called the modellers' view of uncertainty and the decision-
makers'/policymakers' view of uncertainty. The modellers'
view focuses on the accumulated uncertainties associated
with the outcomes of the model and the (robustness of)
conclusions of the decision support exercise; the policy-
makers' view includes uncertainty about how to value the
outcomes in view of his/her portfolio of goals and possibly
conflicting objectives, priorities, and interests. For example,
what are the current or future societal values related to
environmental impacts versus economic costs and benefits?
This paper focuses on the uncertainty perceived from the
point of view of those providing information to support
policy decisions (i.e., the modellers' view on uncertainty):
uncertainty regarding the analytical outcomes and conclu-
sions of the decision support exercise.
Uncertainty experts agree that there are different dimen-
sions of uncertainty related to model-based decision support
exercises. Through a process of consultation and discussion,
the authors of this paper have chosen to distinguish three
dimensions of uncertainty (see Fig. 3):
(i) the location of uncertainty: where the uncertainty
manifests itself within the model complex;
(ii) the level of uncertainty: where the uncertainty
manifests itself along the spectrum between determin-
istic knowledge and total ignorance;
(iii) the nature of uncertainty: whether the uncertainty is due
to the imperfection of our knowledge or is due to the
inherent variability of the phenomena being described.
Fig. 2. The role of the system model within the policymaking process.
¹ Among recent papers and books directly or indirectly addressing the issue
of characterizing uncertainty in model-based decision support are: [3–15].
In the following sections, we present the three dimensions of
uncertainty in more detail.
4. THE LOCATION OF UNCERTAINTY:
IDENTIFIED BY THE LOGIC OF THE MODEL
FORMULATION
Location of uncertainty is an identification of where
uncertainty manifests itself within the whole model com-
plex. This dimension refers to the logical structure of a
generic system model within which it is possible to pinpoint
the various sources of uncertainty in the estimation of the
outcomes of interest.
The description of the model locations will vary
according to the system model in question. Ideally, the
location should be characterised in a way that is operation-
ally beneficial to understanding where in the model the
uncertainty associated with the outcome is generated. To this
end, we identify the following generic locations with respect
to the model (see Fig. 4):
Context is an identification of the boundaries of the
system to be modelled, and thus the portions of the real
world that are inside the system, the portions that are
outside, and the completeness of its representation. The
model context is typically determined in the problem
framing stage and is crucial to the decision support
exercise as it clarifies the issues to be addressed and the
selection of the outcomes of interest to be estimated by the
model.
Model uncertainty is associated with both the conceptual
model (i.e., the variables and their relationships that are
chosen to describe the system located within the
boundaries and thus constituting the model complex)
and the computer model. Model uncertainty can, there-
fore, be further divided into two parts: model structure
uncertainty, which is uncertainty about the form of the
model itself, and model technical uncertainty, which is
uncertainty arising from the computer implementation of
the model.
Inputs to the model are associated with the description of
the reference system, which is often the current system,
and the external forces that are driving changes in
the reference system. It is sometimes useful to divide
the inputs into controllable and uncontrollable inputs,
depending on whether the decisionmaker has the
capability to influence the values of the specific input
variables.
Parameter uncertainty is associated with the data and
the methods used to calibrate the model parameters.
Model outcome uncertainty is the accumulated uncer-
tainty associated with the model outcomes of interest to
the decisionmaker.
The following paragraphs describe each of the locations in
more detail.
4.1. Context
The ‘‘context’’ refers to the conditions and circumstances
(and even the stakeholder values and interests) that underlie
the choice of the boundaries of the system, and the framing
of the issues and formulation of the problems to be addressed
within the confines of those boundaries.
Context uncertainty includes uncertainty about the external
economic, environmental, political, social, and technological
situation that forms the context for the problem being
examined. The context could fall within the past, the present,
or the future. Uncertainties are often introduced in framing a
decision situation because the context of the decision support
is unclear. Actors in a decision situation often have different
perceptions of reality, which are related to their different
frames of reference or views of the world (see [16, 17]). That
is why it is important to involve all stakeholders from the very
beginning of the process of defining what the issue is. In
recent years, expert groups have been accused increasingly of
framing problems such that the context fits the tacit values of
the experts and/or fits the tools that the experts can use to
provide a ‘‘solution’’ to the problem. The public is better
educated today and may identify such ‘‘decision support’’ as
biased and manipulative. Deciding on a proper framing of
context is a significant part of the problem and should be given
attention to such an extent that reasonable alternative
framings are incorporated in the analysis. The concept and
methodology of context validation proposed by Dunn [18]
can help to avoid problems arising from incorrect problem
framing.
4.2. Model
There are two major categories of uncertainty within this
location of uncertainty: (1) model structure uncertainty, and
(2) model technical uncertainty.
Model structure uncertainty arises from a lack of suffi-
cient understanding of the system (past, present, or future)
that is the subject of the policy analysis, including the
behaviour of the system and the interrelationships among its
elements. Uncertainty about the structure of the system that
we are trying to model implies that any one of many model
formulations might be a plausible representation of the
Fig. 3. Uncertainty: a three-dimensional concept.
system, or that none of the proposed system models is an
adequate representation of the real system. We may be
uncertain about the current behaviour of a system, the future
evolution of the system, or both. Model structure uncertainty
involves uncertainty associated with the relationships
between inputs and variables, among variables, and between
variables and output, and pertains to the system boundary,
functional forms, definitions of variables and parameters,
equations, assumptions and mathematical algorithms.
Model technical uncertainty is the uncertainty generated
by software or hardware errors, i.e., hidden flaws in the
technical equipment. Software errors arise from bugs in
software, design errors in algorithms and typing errors in
model source code. Hardware errors arise from bugs, such as
the bug in the early version of the Pentium processor, which
gave rise to numerical error in a broad range of floating-point
calculations performed on the processor [5].
4.3. Input
Input is associated primarily with data that describe the
reference (base case) system and the external driving forces
that have an influence on the system and its performance. The
‘‘input’’ location, therefore, includes two sub-categories:
1. Uncertainty about the external driving forces that
produce changes within the system (the relevant scenario
variables and policy variables) and the magnitude of the
forces (the values of the scenario and policy variables).
The external forces driving system change (FDSCs) that
are not under the control of the policymakers are of
particular importance to policy analyses, especially if
they affect the outcomes of interest. Not only is there
great uncertainty in the FDSCs and their magnitudes,
there is also great uncertainty in the system response to
these forces. This is one of the factors that may lead to
significant model structure uncertainty (see above).
2. Uncertainty about the system data that drive the model and
typically quantify relevant features of the reference system
and its behaviour (e.g., land-use maps, data on infrastruc-
ture (roads, houses)). Uncertainty about system data is
generated by a lack of knowledge of the properties
(including both the deterministic and the stochastic
properties) of the underlying system and deficiencies in
the description of the variability that can be an inherent
feature of some of the phenomena under observation. These
uncertainties are discussed in the naturedimension below.
Fig. 4. The Location of Uncertainty. Figures 4a and 4b illustrate the concept of context uncertainty, where ambiguity in the problem formulation leads to the
wrong question being answered. Figures 4c and 4d illustrate the concept of model structure uncertainty, where competing interpretations of the cause-
effect relationships exist, and it is probable that neither of them is entirely correct. Input is illustrated as that which crosses the boundaries of the
system.
4.4. Parameters
Parameters are constants in the model, supposedly invariant
within the chosen context and scenario. There are the
following types of parameters:
Exact parameters, which are universal constants, such as
the mathematical constants π and e.
Fixed parameters, which are parameters that are so well
determined by previous investigations that they can be
considered exact, such as the acceleration of gravity (g) at
a particular location on earth.
A priori chosen parameters, which are parameters that
may be difficult to identify by calibration and are chosen
to be fixed to a certain value that is considered invariant.
However, the values of such parameters are associated
with uncertainty that must be estimated on the basis of a
priori experience.
Calibrated parameters, which are parameters that are
essentially unknown from previous investigations or that
cannot be transferred from previous investigations due to
lack of similarity of circumstances. They must be
determined by calibration, which is performed by compar-
ison of model outcomes for historical data series regarding
both input and outcome. The parameters are generally
chosen to minimise the difference between model outcomes
and measured data on the same outcomes.
There is a relationship between model structure uncertainty
and calibrated parameter uncertainty. A simple model with
few parameters that does not simulate reality well may be
calibrated with data obtained for both input and output under
well-known conditions. In this case, model structure
uncertainty will most likely dominate the result. In the case
of a more complicated model with many parameters, the
parameters may be manipulated to fit the calibration data
beautifully, but the result may be dominated by parameter
uncertainty. This would happen if the calibration data did not
contain sufficient information to allow for the calibration of
some parameters with an adequate degree of certainty. This
could be revealed by attempting to validate the model using
a different set of data. There is in principle an optimum
combination of model complexity and number of parameters
as a function of the data available for calibration and the
information contained in the data set used for calibration.
Increased model complexity with an increased number of
parameters to be calibrated may in fact increase the
uncertainty of the model outcomes for a given set of
calibration data. This has been described in detail (see [19]).
The calibration data must contain variations of information
fit to deal with all parameters chosen for calibration.
Otherwise the parameter estimates become very uncertain
and the model outcomes become uncertain accordingly.
Finally, even when the parameters are well calibrated, a
residual uncertainty will often remain, and is usually treated
as a parameter in itself.
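For illustration, the sketch below shows the calibration step described above for a deliberately simple, hypothetical model: parameters are chosen to minimise the difference between model outcomes and (synthetic) measured data, and the estimated parameter covariance gives one conventional expression of the remaining parameter uncertainty. The model form, the data, and the use of scipy's curve_fit are assumptions of the example, not part of the paper.

```python
# Minimal calibration sketch (hypothetical model and synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def toy_model(x, a, b):
    # Invented functional form standing in for a system model relation.
    return a * x / (b + x)

rng = np.random.default_rng(1)
x_obs = np.linspace(1.0, 10.0, 20)
y_true = toy_model(x_obs, a=4.0, b=2.0)
y_obs = y_true + rng.normal(scale=0.2, size=x_obs.size)  # measurement noise

# Calibrated parameters: chosen to minimise the model-data discrepancy.
params, cov = curve_fit(toy_model, x_obs, y_obs, p0=[1.0, 1.0])
print("calibrated parameters:", params)
print("parameter standard errors:", np.sqrt(np.diag(cov)))
```

If the model had more free parameters than the calibration data could support, the estimated standard errors would grow, which is the interplay between model complexity and parameter uncertainty discussed above.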
4.5. Model Outcome Uncertainty
This is the accumulated uncertainty caused by the uncer-
tainties in all of the above locations (context, model, inputs,
and parameters) that are propagated through the model and
are reflected in the resulting estimates of the outcomes of
interest. It is sometimes called prediction error, since it is the
discrepancy between the true value of an outcome and the
model's predicted value. If the true values are known (which
is rare, even for scientific models), a formal validation
exercise can be carried out to compare the true and predicted
values in order to establish the prediction error. However,
practically all policy analysis models are used to extrapolate
beyond known situations to estimate outcomes for situations
that do not yet exist. For example, the model may be used to
explore how a policy would perform in the future or in
several different futures. In this case, in order for the model
to be useful in practice, it is necessary to (1) build the
credibility of the model with its users and with consumers of
its results (see, for example, [20]), and (2) describe the
uncertainty in the model outcomes using the typology of
uncertainties presented in this paper.
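One common way of making model outcome uncertainty explicit, consistent with the description above, is to propagate the uncertainties in inputs and parameters through the model by Monte Carlo simulation and to report the resulting spread in the outcomes of interest. The sketch below does this for an invented model; the distributions, names, and numbers are illustrative assumptions.

```python
# Minimal Monte Carlo propagation sketch (hypothetical model and distributions).
import numpy as np

def outcome_of_interest(driving_force, parameter):
    # Invented relation between an uncertain input, a parameter and the outcome.
    return parameter * driving_force ** 2

rng = np.random.default_rng(42)
n = 10_000
driving_force = rng.normal(loc=10.0, scale=2.0, size=n)  # uncertain input
parameter = rng.uniform(low=0.8, high=1.2, size=n)        # uncertain parameter

outcomes = outcome_of_interest(driving_force, parameter)
p5, p50, p95 = np.percentile(outcomes, [5, 50, 95])
print(f"outcome of interest: median {p50:.1f}, 90% interval [{p5:.1f}, {p95:.1f}]")
```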
5. LEVELS OF UNCERTAINTY: A PROGRESSION
FROM ‘‘KNOW’’ TO ‘‘NO-KNOW’’
Contrary to the common perception, an entire spectrum of
different levels of knowledge exists, ranging from the
unachievable ideal of complete deterministic understanding
at one end of the scale to total ignorance at the other. In many
cases, decisions must be taken when there is not only a lack
of certainty about the future situation or about the outcomes
from policy changes, but also when some of the possible
changes themselves remain unknown. Here, decisionmaking
is faced with the continual prospect of surprise. It is in this
grey area between the well known and what is not known
that the degree of uncertainty and ignorance ought to affect
the approach to decisionmaking. The ultimate goal of
decisionmaking in the face of uncertainty should be to
reduce the undesired impacts from surprises, rather than
hoping or expecting to eliminate them [21]. Many different
approaches are used in practice to cope with uncertainty. It
is useful to try to match the approach to the level of
uncertainty. For example, Schlesinger [22] distinguishes
between Captain Cook's tour planning for circumnavigating
the globe and Lewis and Clark's tour planning for exploring
the previously unexplored western United States. In Cook's
case, the future was sufficiently certain that one could chart a
straight course years in advance. By contrast, Lewis and
Clark's planning ‘‘acknowledges that many alternative
courses of action and forks in the road will appear, but their
precise character and timing cannot be anticipated.’’ Thus,
very uncertain situations call for robust plans (which will
succeed in a variety of situations) [23] or adaptive plans
(which can be easily modified to fit the situations
encountered) [24]. For example, in the case of applying
the precautionary principle, the level of uncertainty and
ignorance should be accounted for by deciding on an
appropriate level of proof as the basis for decisions to act or
not act, if there is potential for large-scale and/or
irreversible harm from an activity or a chemical [1].
To distinguish between the various levels of uncertainty,
we employ the following terminology: determinism, statis-
tical uncertainty, scenario uncertainty, recognised ignorance
and total ignorance. This is illustrated in Figure 5.
Determinism is the ideal situation in which we know
everything precisely. It is not attainable, but acts as a limiting
characteristic at one end of the spectrum.
Statistical uncertainty is any uncertainty that can be
described adequately in statistical terms. Statistical uncer-
tainty can apply to any location in the model, even to model
structure uncertainties, as long as the deviation from the true
value can be characterised statistically.
Statistical uncertainty is what is usually referred to as
‘‘uncertainty’’ in the natural sciences. An exclusive focus on
statistical uncertainty, however, implicitly assumes that the
functional relationships in the given model are reasonably
good descriptions of the phenomena being simulated, and
the data used to calibrate the model are representative of
circumstances to which the model will be applied. If this
is not the case, deeper forms of uncertainty supersede
statistical uncertainty, and statistical uncertainty should not
be accorded as much attention as other levels of uncertainty
in the uncertainty analysis.
The most obvious example of statistical uncertainty is the
measurement uncertainty associated with all data. Measure-
ment uncertainty stems from the fact that measurements can
practically never precisely represent the ‘‘true’’ value of that
which is being measured. Measurement uncertainty in data
can be due to sampling error, or inaccuracy or imprecision in
the measurements.
Sampling error is the error associated with the degree to
which the sample is representative. The location, the time
and the circumstances at which the sample has been taken
may not be completely representative of those of the ‘‘true’’
value. Inaccuracy is the deviation from the ‘‘true’’ value;
i.e., it refers to how close a measured value is to the value
considered ‘‘true.’’ Imprecision reflects variation of mea-
surements around a mean value, which may or may not be
the ‘‘true’’ value because of sampling error or inaccuracy.
This is in fact a measure of the reproducibility of the result.
These terms belong to a well-established vocabulary that
can be found in most textbooks on physical and chemical
experimentation. A good primer on measurement uncer-
tainty is [25].
‘‘Statistical uncertainty’’ may also relate to uncertainty in
measuring the probabilities in a stochastic model (see
section on variability below).
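The distinction between inaccuracy and imprecision can be made concrete with a small numerical sketch. In the example below, repeated measurements of a quantity whose reference value is here assumed known are summarised by their bias (inaccuracy), their spread (imprecision), and the standard error of the mean; the numbers are invented.

```python
# Minimal sketch of measurement uncertainty: inaccuracy vs. imprecision.
import numpy as np

true_value = 20.0                       # reference ("true") value, assumed known here
measurements = np.array([20.4, 20.1, 19.8, 20.6, 20.3, 20.2, 20.5, 20.0])

mean = measurements.mean()
inaccuracy = mean - true_value          # systematic deviation (bias)
imprecision = measurements.std(ddof=1)  # spread, i.e. reproducibility
standard_error = imprecision / np.sqrt(measurements.size)

print(f"mean {mean:.2f}, bias {inaccuracy:+.2f}, "
      f"spread {imprecision:.2f}, standard error of mean {standard_error:.2f}")
```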
5.1. Scenario Uncertainty
The use of scenarios is one approach used in policy analysis
to deal with uncertainty related to the external environment
of a system (usually its future environment) and its effects
on the system (see, for example, [26, 27]). A scenario is a
plausible description of how the system and/or its driving
forces may develop in the future. To be plausible, it should
be based on a coherent and internally consistent set of
assumptions about key relationships and driving forces (e.g.,
technology changes, prices). Scenarios do not forecast what
will happen in the future; rather they indicate what might
happen (i.e., they are plausible futures). Because the use of
scenarios implies making assumptions that in most cases are
not verifiable, the use of scenarios is associated with
uncertainty at a level beyond statistical uncertainty.
Contrary to statistical uncertainty, where the functional
relationships are well described and a statistical expression of
the uncertainty present can be formulated, scenario uncer-
tainty implies that there is a range of possible outcomes, but
the mechanisms leading to these outcomes are not well
understood and it is, therefore, not possible to formulate the
probability of any one particular outcome occurring. There is
a demarcation in the transition from statistical uncertainty to
scenario uncertainty at the point where a change occurs from a
consistent continuum of outcomes expressed stochastically to
a range of discrete possibilities, where choices must be made
with respect to the options to analyze without allocation of
likelihood.
Scenario uncertainty can manifest itself in various ways,
for example: (a) as a range in the outcomes of an analysis
due to different underlying assumptions, (b) as uncertainty
about which changes and developments (e.g., in driving
forces or in system characteristics) are relevant for the
outcomes of interest, or (c) as uncertainty about the levels of
these relevant changes.
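In a model-based analysis, scenario uncertainty of this kind typically shows up as a discrete set of model runs whose outcomes are reported as a range, without probabilities being attached to the individual scenarios. The sketch below illustrates this; the scenarios and the model are invented.

```python
# Minimal sketch: scenario uncertainty as a range of outcomes without likelihoods.

def toy_model(demand_growth, price):
    # Invented outcome-of-interest relation.
    return 100.0 * (1.0 + demand_growth) * price

scenarios = {
    "low growth":  {"demand_growth": 0.01, "price": 0.9},
    "reference":   {"demand_growth": 0.02, "price": 1.0},
    "high growth": {"demand_growth": 0.04, "price": 1.2},
}

outcomes = {name: toy_model(**s) for name, s in scenarios.items()}
for name, value in outcomes.items():
    print(f"{name:12s} -> outcome {value:.1f}")
print("range:", min(outcomes.values()), "to", max(outcomes.values()))
# No probabilities are assigned to the scenarios: only a range is reported.
```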
Recognised ignorance is fundamental uncertainty about
the mechanisms and functional relationships being studied.
We know neither the functional relationships nor the
statistical properties, and the scientific basis for developing
scenarios is weak.
Fig. 5. The progressive transition between determinism and total ignorance.
Uncertainty due to ignorance can further be divided into
reducible ignorance and irreducible ignorance. Reducible
ignorance may be resolved by conducting further research,
which implies that it might be possible to somehow achieve a
better understanding. Irreducible ignorance applies when
neither research nor development can provide sufcient
knowledge about the essential relationships. Irreducible
ignorance is also called indeterminacy.
Total ignorance is the other extreme from determinism
on the scale of uncertainty, which implies a deep level of
uncertainty, to the extent that we do not even know that we
do not know. In Figure 5, the continuing arrow at this end of
the scale is used to indicate that we have no way of knowing
the full extent of our ignorance.
The rationale for our categorisation of the levels of
uncertainty we have presented is to establish a scale of
graduation from determinism to total ignorance. We argue
that this characterisation scheme provides a complete logical
structure of the level of uncertainty for uncertainty analysis.
6. THE NATURE OF UNCERTAINTY: INHERENT
VARIABILITY OR LACK OF KNOWLEDGE?
In the above we have focused on locations within models
where uncertainty may manifest itself. We have also dis-
cussed the level of uncertainty as being an expression of
the scale of the uncertainty we are faced with. We would
now like to introduce the third dimension of the concept of
uncertainty: the nature of uncertainty. An important feature
of the nature of uncertainty is the distinction between two
extremes:
Epistemic uncertainty: The uncertainty due to the
imperfection of our knowledge, which may be reduced
by more research and empirical efforts.
Variability uncertainty: The uncertainty due to inherent
variability, which is especially applicable in human and
natural systems and concerning social, economic, and
technological developments.
Assessing the nature of uncertainty may help to understand
how specific uncertainties can be addressed. In the case of
epistemic uncertainty, additional research may improve the
quality of our knowledge and thereby improve the quality of
the output. However, in the case of variability uncertainty,
additional research may not yield an improvement in the
quality of the output.
Although the terminology used may differ, the above
distinction in the nature of uncertainty is well recognised in
the literature about uncertainty. For example, the terms
epistemic or epistemological uncertainty have been used to
refer to imperfection of knowledge, while the terms ontic or
ontological uncertainty, derived from philosophy, or alea-
tory uncertainty, derived from physical science, have been
used to describe uncertainty due to variability. An overview
of terms used to characterise the nature of uncertainty is
given in [28]. They stipulate that it is not always easy to
clearly distinguish between these categories of uncertainty;
it often remains a matter of convenience and judgement
linked up to features of the problem under study as well as to
the current state of knowledge or ignorance.
6.1. Epistemic Uncertainty
This form of uncertainty is related to many aspects of
modelling and policy analysis, e.g., limited and inaccurate
data, measurement error, incomplete knowledge, limited
understanding, imperfect models, subjective judgement,
ambiguities, etc. With their NUSAP method, Funtowicz
and Ravetz [3] have introduced the concept of pedigree to
systematically assess the imperfection in the knowledge
base, thereby providing an indication of the degree to which
uncertainty may be reducible. Pedigree conveys an evalua-
tive account of the production process of information, and
indicates different aspects of the underpinning of the
numbers and scientific status of the knowledge used.
Assessment of pedigree involves qualitative expert judge-
ment. It should be noted that pedigree and degree of
reducibility of uncertainty do not necessarily correspond to
each other in a one-to-one fashion: increasing the pedigree
by more research may either reduce or increase uncertainty.
The latter can be the case if, for instance, unforeseen
complexities are revealed by the research. Examples of
pedigree analysis can be found in [29] and on the website
www.nusap.net.
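As a rough, hypothetical illustration of how pedigree scores might be recorded and aggregated, the sketch below scores a few commonly used pedigree criteria on a 0-4 scale and normalises the result. The criteria names, the scores, and the aggregation rule are assumptions of the example and not a prescription of the NUSAP method.

```python
# Minimal pedigree-scoring sketch (hypothetical scores on a 0-4 scale).
pedigree_scores = {
    "proxy": 3,                  # how well the measured quantity represents the target
    "empirical_basis": 2,        # amount and quality of underlying data
    "methodological_rigour": 3,  # soundness of the methods used
    "validation": 1,             # degree of independent validation
}

max_score = 4
strength = sum(pedigree_scores.values()) / (max_score * len(pedigree_scores))
print(f"normalised pedigree strength: {strength:.2f}  (0 = weak, 1 = strong)")
```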
Related to the NUSAP method are methods being
developed to rate the strength of scientific evidence that
are grouped under the heading of ‘‘evidence-based practice’’
(see, for example, [30]). These methods, which are primarily
used in the health care field, are designed to protect against
the use of study results in individual and policy-level health
care decisions that contain selection, measurement, and
confounding biases.
6.2. Variability Uncertainty
Many empirical quantities (measurable properties of the
real-world systems being modelled) vary over space or time
in a manner that is beyond control, simply due to the nature
of the phenomena involved. Variability uncertainty is
defined here as the inherent uncertainty or randomness
induced by variation associated with external input data,
input functions, parameters, and certain model structures.
Different sources of variability uncertainty can be
distinguished (see Fig. 6; see also [4] or [15]):
Inherent randomness of nature: the chaotic and unpre-
dictable nature of natural processes (see also [16]);
Human behaviour (behavioural variability): non-rational
behaviour, discrepancies between what people say and
what they actually do (cognitive dissonance), or devia-
tions from ‘‘standard’’ behavioural patterns (micro-level
behaviour);
Social, economic, and cultural dynamics (societal vari-
ability): the chaotic and unpredictable nature of societal
processes (macro-level behaviour). The need to consider
societal and institutional processes as a major contributor
to uncertainty due to variability can be inferred from
various papers of Funtowicz, Ravetz, and de Marchi (see,
for example, [31, 32]).
Technological surprise: New developments or break-
throughs in technology or unexpected consequences
(side-effects) of technologies.
These sources may contribute to variability uncertainty, but it
may be difficult to identify precisely what is reducible through
investigations and research, and what is irreducible because it
is an inherent property of the phenomena of concern. However,
it is important to make an assessment, because the information
may be essential to the political process.
Models may use frequency distributions to represent
variability uncertainty if the property falls into the level
of statistical uncertainty. From [10]: ‘‘It is possible to have a
high degree of certainty about a frequency distribution. For
example, it is not hard to imagine obtaining the statistics on
the weights of all newborns in Washington, D.C. during 2000
and compiling a precise frequency distribution for the weight
of newborn infants in Washington, D.C. during 2000. On the
other hand, one may be quite uncertain about a frequency
distribution, for example, the frequency distribution for
newborn infants in Washington, D.C. during 2020.’’
Uncertainty about a frequency distribution may be repre-
sented by probability distributions about its various param-
eters, such as its mean, standard deviation, or median.
A common mistake is failure to distinguish between the
uncertainty inherent in sampling from a known frequency
distribution (variability uncertainty), and the uncertainty that
arises from incomplete scientic or technical knowledge
(epistemic uncertainty). For example, in throwing a fair coin,
one knows that the outcome will be heads 1/2 the time, but
one cannot predict what specific value the next throw will
have (variability uncertainty). If the coin is not fair,
there will also be epistemic uncertainty, concerning the
frequency of the heads.
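The coin example can be made explicit with a two-level Monte Carlo sketch: an outer layer samples the unknown heads-frequency (epistemic uncertainty about the coin), and an inner layer samples throws given that frequency (variability uncertainty). The Beta prior and the sample sizes are assumptions of the illustration.

```python
# Minimal sketch separating epistemic and variability uncertainty for a coin.
import numpy as np

rng = np.random.default_rng(0)
n_epistemic = 1000      # candidate values for the unknown heads-frequency
n_throws = 20           # throws per candidate coin

# Epistemic layer: we are unsure how biased the coin is (assumed Beta prior).
heads_freq = rng.beta(a=5.0, b=5.0, size=n_epistemic)

# Variability layer: even with a known frequency, individual throws vary.
heads_counts = rng.binomial(n=n_throws, p=heads_freq)

print("spread due to epistemic uncertainty (std of frequency):",
      round(heads_freq.std(), 3))
print("overall spread in heads out of 20 throws (std):",
      round(heads_counts.std(), 3))
```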
Similarly, input functions can exhibit variability that can
be described as a mathematical relationship with an asso-
ciated uncertainty. Such functions may be considered part of
the model structure or treated separately as an external input function.
An example is seasonal variation, which can be described
functionally [15]; another is the variation in time and space of
extreme rainfall, giving rise to flooding [33]. The location of
this form of variability is in either the model structure or in
input data. Input data can exhibit variability with an
associated uncertainty. As with all locations of uncertainty,
the uncertainty associated with variability of input data or
model structure can fall into all four levels: Statistical
uncertainty, scenario uncertainty, recognised ignorance, or
total ignorance. If the model is used for extrapolation (e.g.,
projection into the future), the uncertainty associated with
variability is also due to the application of the model to
circumstances different from those associated with the
experience upon which the model and data were developed.
7. THE UNCERTAINTY MATRIX
The purpose of an uncertainty matrix is to provide a tool by
which to get a systematic and graphical overview of the
essential features of uncertainty in relation to the use of
models in decision support activities. The idea is to identify
the location, level, and nature of the uncertainty associated
with models, so that model developers and users will become
aware of and address all of the important elements of
uncertainty. The location, level, and nature of uncertainty
can be combined to obtain an uncertainty matrix, as shown in
Figure 7.
The vertical axis identifies the location of uncertainty,
i.e., where the uncertainty is located in the framework shown
in Figure 2. The first three columns of the horizontal axis
cover the level of uncertainty in relation to all locations;
the next two columns indicate the nature of uncertainty for
each location. In both cases the columns can be interpreted
as ‘‘brackets’’ of characterisation:
Level: statistical uncertainty, scenario uncertainty, and
recognised ignorance.
Nature: Epistemic and variability uncertainty.
The first three columns may also be interpreted as a
continuum of uncertainty (based on the progressive transi-
tion from determinism to total ignorance depicted in Fig. 5).
Applying the matrix is a means to make a complete
inventory of where the uncertainties are located and how
they can be typied in terms of uncertainty level and nature.
In filling in the matrix, one should be aware that the level and
nature of the uncertainty that occurs at any location can
manifest itself in various forms simultaneously. For
example, in a specic model input or driving force, part of
the uncertainty can be due to statistical uncertainty, while
another part can only be described by scenario uncertainty or
Fig. 6. Detailed typology of sources of variability uncertainty.
recognized ignorance. Similar divisions and overlaps hold
with respect to the ‘‘nature’’ dimension: part of the
uncertainty can be of an epistemic character, and part due
to variability. Attribution of the parts will not always be clear
or unambiguous. In filling in the matrix, one can keep track
of the various forms in which uncertainty in a certain box
manifests itself by using indexes (e.g., index 1 might refer to
one specific model input, index 2 to a different input, etc.).
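For bookkeeping purposes, the entries of such a matrix can be recorded in a simple data structure in which each identified uncertainty is tagged with its location, level, and nature, as in the hedged sketch below. The enumerations, example entries, and reporting loop are illustrative assumptions, not part of the matrix itself.

```python
# Minimal sketch: recording entries of an uncertainty matrix.
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    CONTEXT = "context"
    MODEL_STRUCTURE = "model structure"
    MODEL_TECHNICAL = "model technical"
    INPUT = "input"
    PARAMETER = "parameter"
    OUTCOME = "model outcome"

class Level(Enum):
    STATISTICAL = "statistical uncertainty"
    SCENARIO = "scenario uncertainty"
    RECOGNISED_IGNORANCE = "recognised ignorance"

class Nature(Enum):
    EPISTEMIC = "epistemic"
    VARIABILITY = "variability"

@dataclass
class UncertaintyEntry:
    description: str
    location: Location
    level: Level
    nature: Nature

entries = [
    UncertaintyEntry("future fuel prices", Location.INPUT,
                     Level.SCENARIO, Nature.VARIABILITY),
    UncertaintyEntry("dose-response slope", Location.PARAMETER,
                     Level.STATISTICAL, Nature.EPISTEMIC),
]

for e in entries:
    print(f"{e.location.value:16s} | {e.level.value:22s} | "
          f"{e.nature.value:11s} | {e.description}")
```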
Note that it is not necessarily true that the uncertainties
located in a particular part of the matrix are more important
than uncertainties in other parts of the matrix. Ignorance may
be irrelevant if it pertains to minor components, while
in other cases ignorance may supersede statistical uncer-
tainty. Further analysis is therefore necessary to assess the
size of the various uncertainties and their influence on the
outcomes of interest. Either a quantitative [10, 34] or a
qualitative uncertainty or sensitivity analysis [5, 15] can be
used to identify the uncertainty in the outcomes of interest
induced by uncertainties in its inputs, as well as which
uncertainties have the greatest effects on the outcomes of
interest. The insights derived from the use of such techniques
can help determine how best to allocate project resources to
reduce uncertainty in the estimates of the outcomes of
interest, e.g., would it be more worthwhile to focus on the
structure of the model or to gather more information to
estimate the model's parameters?
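A simple way to obtain this kind of insight is a one-at-a-time sensitivity analysis: perturb each uncertain factor over its plausible range and compare the effect on the outcome of interest. The sketch below is illustrative only; the model and the ranges are invented.

```python
# Minimal one-at-a-time sensitivity sketch (hypothetical model and ranges).

def outcome(inputs):
    # Invented outcome-of-interest relation.
    return inputs["emission_factor"] * inputs["activity_level"] - inputs["abatement"]

nominal = {"emission_factor": 2.0, "activity_level": 50.0, "abatement": 10.0}
ranges = {"emission_factor": (1.5, 2.5),
          "activity_level": (40.0, 60.0),
          "abatement": (5.0, 15.0)}

base = outcome(nominal)
for name, (low, high) in ranges.items():
    perturbed_low = dict(nominal, **{name: low})
    perturbed_high = dict(nominal, **{name: high})
    swing = outcome(perturbed_high) - outcome(perturbed_low)
    print(f"{name:16s} swing in outcome: {swing:+.1f} (base {base:.1f})")
```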
The matrix looks conveniently small and handy as it is
depicted in Figure 7, but in reality the level and nature of
uncertainty have to be estimated for each location in the
model structure. That can be a considerable endeavour in
practice, if uncertainty has to be identified and estimated in
detail. However, the effort invested to achieve this insight
should vary according to the purpose of each particular
exercise. The amount of effort to invest should therefore be
chosen with care in order to provide an adequate combina-
tion of overview and detail. In addition, in filling in the matrix,
one should be aware of differences in the levels of quality
and underpinnings of the information about the various
uncertainties, as well as the presence of values and biases in
the choices involved (e.g., concerning the way the scientific
questions are framed, data are selected, interpreted, and
rejected, methodologies and models are devised and used,
and explanations and conclusions are formulated [29, 35]).
These aspects will have important influences on the resulting
uncertainties.
It should be noted that such a matrix may characterise
the uncertainty associated with a particular issue only at
a particular point in time. The matrix will change with
more information and with the development of new
circumstances.
The purpose of the matrix is to inspire model developers
and users of models to make an explicit effort to identify,
estimate, assess and prioritise all important contributions to
uncertainty associated with the outcomes of interest in a
systematic manner.
The uncertainty matrix can be applied at different stages
in the decision support endeavour:
as a heuristic during the preparatory pre-analysis phase
(i.e., problem-framing, determining system boundaries,
and model-building);
as a checklist during the analysis phase (i.e., model use,
assessment of the results, reporting and communication);
as a quality control checklist, used in peer review or for
self-evaluation.
The uncertainty matrix has to be re-applied during the peer
review because those performing the policy analysis may
have overlooked some relevant uncertainties [15, 29]. In this
way it can be tested whether personal and institutional lack
of knowledge or overconfidence are associated with the
uncertainty treatment. For example, a team of analysts may
not be aware of the incompleteness of their model structure,
which may surface in a peer review or a self-evaluation.
Finally, the matrix can be included in the reporting process,
in order to make the results of the assessment more
transparent to stakeholders and decisionmakers.
Fig. 7. Uncertainty matrix.
8. CONCLUSION
It is increasingly a requirement in model-based decision
support that uncertainty has to be communicated in the
science-engineering/policy-management interface. During
the past decade several significant contributions to concepts,
terminology, and typology have been proposed. However,
there is no generally accepted approach to communication
about uncertainty. The result is confusion and frequent lack
of mutual understanding. This paper has attempted to
condense and harmonise the terminology and typology as
well as propose a tool the uncertainty matrix for
identifying and characterising the potential uncertainty in
model-based decision support. It suggests that uncertainty is
a three dimensional concept dened by: the location in the
analysis, the level of uncertainty, and the nature of the
uncertainty. The uncertainty matrix can be combined with
other tools for example, sensitivity analysis and pedigree
analysis so that the most important locations of uncertainty
can be identied and their inuence on the results of the use
of models in decision support can be identied, estimated,
and assessed qualitatively or quantitatively. The intention is
that such an approach be applied on a routine basis when
communicating the results of decision support exercises to
decisionmakers. We argue that harmonised terminology and
a systematic use of the uncertainty matrix to identify,
prioritise, and communicate uncertainty can substantially
improve the quality of model-based decision support. Our
rst step has been to agree on dimensions of uncertainty and
how to refer to them. This was a theoretical endeavour,
which was, nonetheless, fed by our experiences with
modelling and model-based decision-support. A next step
would be to apply and test the uncertainty matrix in
examples and case studies. The various authors intend to do
that in their respective research, but a sensible application of
the matrix was beyond the scope of our current effort. Note
that we have focused ourselves on the modellers perspective
of uncertainty in model-based decision-support, being aware
that there is a decisionmakers perspective at the other end of
decision-support. With our typology, we think modellers are
better equipped to address and treat uncertainty in their part
of the job (although we are well aware that it does not
prevent uncertainty from being politicised in the decision-
making arena).
REFERENCES
1. European Environment Agency: In: Harremoës et al. (eds.): The Precautionary Principle – Late Lessons from Early Warnings, 2001. ISBN 92-9167-232-4.
2. Walker, W.E.: Policy Analysis: A Systematic Approach to Supporting Policymaking in the Public Sector. J. Multi-Criteria Decision Anal. 9 (2000), pp. 11–27.
3. Funtowicz, S.O. and Ravetz, J.R.: Uncertainty and Quality in Science for Policy. Kluwer Academic Publishers, Dordrecht, 1990.
4. Van Asselt, M.B.A. and Rotmans, J.: Uncertainty in Integrated Assessment Modelling: From Positivism to Pluralism. Clim. Change 54 (2002), pp. 75–105.
5. Van der Sluijs, J.P.: Anchoring Amid Uncertainty: On the Management of Uncertainties in Risk Assessment of Anthropogenic Climate Change. Ph.D. dissertation, University of Utrecht, The Netherlands, 1997.
6. Environmental Resources: Handling Uncertainty in Environmental Impact Assessment. Environmental Resources, Ltd., London, 1985.
7. Alcamo, J. and Bartnicki, J.: A Framework for Error Analysis of a Long-Range Transport Model with Emphasis on Parameter Uncertainty. Atmospheric Environment 21(10) (1987), pp. 2121–2131.
8. Beck, M.B.: Water Quality Modelling: A Review of the Analysis of Uncertainty. Water Resour. Res. 23(8) (1987), pp. 1393–1442.
9. Hodges, J.S.: Uncertainty, Policy Analysis and Statistics. Statist. Sci. 2(3) (1987), pp. 259–291.
10. Morgan, M.G. and Henrion, M.: Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, Cambridge, 1990.
11. Rowe, W.D.: Understanding Uncertainty. Risk Anal. 14(5) (1994), pp. 743–750.
12. National Research Council: Understanding Risk: Informing Decisions in a Democratic Society. National Academy of Sciences, Washington, DC, 1996.
13. Shrader-Frechette, K.: Methodological Rules for Four Classes of Uncertainty. In: J. Lemons (ed.): Scientific Uncertainty and Environmental Problem Solving, pp. 12–39. Blackwell Science, Cambridge, 1996.
14. Davis, P.K. and Hillestad, R.: Exploratory Analysis for Strategy Problems with Massive Uncertainty (draft). RAND, Santa Monica, California, 2000.
15. Van Asselt, M.B.A.: Perspectives on Uncertainty and Risk. Kluwer Academic Publishers, Dordrecht, 2000.
16. Schön, D. and Rein, M.: Frame Reflection: Toward the Resolution of Intractable Policy Controversies. Basic Books, New York, 1995.
17. Van de Riet, O.A.W.T.: Policy Analysis in Multi-Actor Policy Settings. Eburon Publishers, Delft, 2003.
18. Dunn, W.N.: Using the Method of Context Validation to Mitigate Type III Errors in Environmental Policy Analysis. In: M. Hisschemoller, R. Hoppe, W.N. Dunn and J. Ravetz (eds.): Knowledge, Power and Participation in Environmental Policy. Policy Studies Review Annual, Vol. 12, 2001.
19. Harremoës, P. and Madsen, H.: Fiction and Reality in the Modelling World – Balance between Simplicity and Complexity, Calibration and Identifiability, Verification and Falsification. Water Sci. Technol. 39(9) (1999), pp. 1–8.
20. Bankes, S.: Exploratory Modeling for Policy Analysis. Oper. Res. 43(3) (1993), pp. 435–449.
21. Dewar, J.A.: Assumption-Based Planning: A Tool for Reducing Avoidable Surprises. Cambridge University Press, Cambridge, 2002.
22. Schlesinger, J.R.: Organizational Structures and Planning, P-3316. RAND, Santa Monica, CA, 1996.
23. Lempert, R.J. and Schlesinger, M.E.: Robust Strategies for Abating Climate Change. Clim. Change 45 (2000), pp. 387–401.
24. Walker, W.E., Cave, J. and Rahman, S.A.: Adaptive Policies, Policy Analysis, and Policymaking. Eur. J. Oper. Res. 128(2) (2001), pp. 282–289.
25. Kimothi, S.K.: The Uncertainty of Measurements, Physical and Chemical Metrology: Impact and Analysis. ASQ Quality Press, Milwaukee, WI, 2002.
26. RAND Europe: Scenarios for Examining Civil Aviation Infrastructure Options in the Netherlands, DRU-1513-VW/VROM/EZ. RAND, Santa Monica, CA, 1997.
27. Van der Heijden, K.: Scenarios: The Art of Strategic Conversation. Wiley, Chichester, 1996.
28. Baecher, G.B. and Christian, J.T.: Natural Variation, Limited Knowledge, and the Nature of Uncertainty in Risk Analysis. Presented at Risk-Based Decisionmaking in Water Resources IX, Oct. 15–20, 2000, Santa Barbara. http://www.glue.umd.edu/gbaecher/papers.d/Baecher_&_Christian Sta Barbara 2000.pdf
29. Van der Sluijs, J.P., Potting, J., Risbey, J., van Vuuren, D., de Vries, B., Beusen, A., Heuberger, P., Corral Quintana, S., Funtowicz, S., Kloprogge, P., Nuijten, D., Petersen, A. and Ravetz, J.: Uncertainty Assessment of the IMAGE/TIMER B1 CO2 Emissions Scenario, Using the NUSAP Method. Dutch National Research Program on Climate Change, Report No. 410 200 104, Bilthoven, The Netherlands (available from www.nusap.net), 2002.
30. Research Triangle Institute: Systems to Rate the Strength of Scientific Evidence, AHRQ Publication No. 02-E-016. Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, Washington, DC, 2002.
31. De Marchi, B., Funtowicz, S.O. and Ravetz, J.: The Management of Uncertainty in the Communication of Major Hazards. Joint Research Centre, Ispra, Italy, 1993.
32. De Marchi, B.: Uncertainty in Environmental Emergencies: A Diagnostic Tool. J. Contingencies Crises Manage. 3(2) (1995), pp. 103–112.
33. Mikkelsen, P.S., Madsen, H., Arnbjerg-Nielsen, K., Rosbjerg, D. and Harremoës, P.: On the Selection of Historical Rainfall Series for Simulation of Urban Drainage. In: I.B. Joliffe and J.E. Ball (eds.): Proceedings of the 8th International Conference on Urban Storm Drainage, Sydney, Australia, August 30–September 3, Vol. 2, pp. 982–989. The Institution of Engineers Australia, Sydney, 1999.
34. Saltelli, A., Chan, K. and Scott, E.M. (eds.): Sensitivity Analysis. Wiley, Chichester, 2000.
35. Van der Sluijs, J., Risbey, J., Kloprogge, P., Ravetz, J., Funtowicz, S., Corral Quintana, S., Pereira, A., De Marchi, B., Petersen, A., Janssen, P., Hoppe, R. and Huijs, S.: RIVM/MNP Guidance for Uncertainty Assessment and Communication. RIVM & University of Utrecht, The Netherlands, 2003.