Belief Revision
Peter Gärdenfors and Hans Rott
Contents
1 Introduction
1.1 The problem of belief revision: An example
1.2 The methodological problems of belief revision
1.3 Belief revision in science
1.4 Different kinds of belief change
1.5 Two approaches to describing belief revisions
1.6 Related areas
2 Representing belief states
2.1 Preliminaries
2.2 Belief sets
2.3 Belief bases
2.4 Justifications vs. coherence models
2.5 Possible worlds models
3 Rationality postulates for belief revisions
3.1 The AGM postulates for revision
3.2 The AGM postulates for contraction
3.3 From contractions to revisions and vice versa
3.4 Postulates for contractions and revisions of belief bases
4 Constructive models and representation theorems
4.1 Partial meet contraction
4.2 Epistemic entrenchment
4.3 Safe contraction
4.4 Minimal changes of models
4.5 Ordinal conditional functions
5 Base contractions and revisions
5.1 Full meet contraction
5.2 Partial meet contraction
5.3 Maxichoice contraction
5.4 Safe contraction
5.5 Epistemic entrenchment for bases
5.6 Base revisions
5.7 Computational complexity
6 Connections with nonmonotonic logic
This is the penultimate version of an article that appeared in:
Handbook of Logic in Artificial Intelligence and Logic Programming,
Volume 4: Epistemic and Temporal Reasoning,
eds. Dov M. Gabbay, C.J. Hogger and J.A. Robinson,
Oxford University Press 1995, pp. 35-132.
6.1 The basic idea
6.2 Translating postulates for belief revision into nonmonotonic logic
6.3 Translating conditions on nonmonotonic inference
6.4 Comparing models of belief revision and models of nonmonotonic logic
7 Truth maintenance systems
7.1 Justification-based truth maintenance systems
7.2 Other kinds of truth maintenance systems
1 Introduction
1.1 The problem of belief revision: An example
Suppose you have a database that contains, among other things, the fol-
lowing pieces of information (in some form of code):
The bird caught in the trap is a swan.  A
The bird caught in the trap comes from Sweden.  B′
Sweden is part of Europe.  B′ → B
All European swans are white.  A ∧ B → C
If your database is coupled with a program that can compute logical
inferences in the given code, the following fact is derivable:
The bird caught in the trap is white. C
Now suppose that, as a matter of fact, the bird caught in the trap turns out to be black. This means that you want to add the fact ¬C, i.e. the negation of C, to the database. But then the database becomes inconsistent. If you want to keep the database consistent—which is normally a sound methodology—you need to revise it. This means that some of the beliefs in the original database must be retracted. You don’t want to give up all of the beliefs since this would be an unnecessary loss of valuable information. So you have to choose between retracting A, B′, B′ → B, or A ∧ B → C.
The problem of belief revision is that logical considerations alone do not tell you which beliefs to give up; this has to be decided by some other means. What makes things more complicated is that beliefs in a database have logical consequences. So when giving up a belief you have to decide which of the reasons for this belief to retain and which to retract (this is the backward direction), and also which of its consequences to retain and which to retract (this is the forward direction). For example, if you decide to retract A ∧ B → C in the situation described here, this sentence has as logical consequences, among others, the following two:
All European swans except the one caught in the trap are white
and
All European swans except for some of the Swedish are white.
Do you want to keep any of these weakened statements in the revised
database?
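The inconsistency in this example can be checked mechanically. The sketch below (our own illustration, not from the chapter) tests propositional entailment by brute force over the four atoms of the example; the atom names are our own encoding:

```python
from itertools import product

# Atoms (our encoding): A = "is a swan", Bp = "comes from Sweden"
# (B' in the text), B = "comes from Europe", C = "is white".
ATOMS = ["A", "Bp", "B", "C"]

def entails(premises, conclusion):
    """Brute-force propositional entailment over the four atoms."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

base = [
    lambda v: v["A"],                               # A
    lambda v: v["Bp"],                              # B'
    lambda v: (not v["Bp"]) or v["B"],              # B' -> B
    lambda v: (not (v["A"] and v["B"])) or v["C"],  # A & B -> C
]

# The database entails C ...
print(entails(base, lambda v: v["C"]))  # True
# ... so after adding ¬C no model remains: everything, even falsity, follows.
print(entails(base + [lambda v: not v["C"]], lambda v: False))  # True
```

Dropping any one of the four premises blocks the derivation of C, which is exactly the choice problem discussed above.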
1.2 The methodological problems of belief revision
When trying to handle belief revisions in a computational setting, there
are three main methodological questions to settle:
1. How are the beliefs in the database represented?
Most databases work with elements like facts and rules as primitive forms
of representing information. The code used to represent the beliefs may be
more or less closely related to standard logical formalism. A mechanism for
belief revision is sensitive to the formalism chosen to represent the beliefs.
Our static picture determines, to some extent at least, our dynamic picture
of the field. Whatever our representation of beliefs in a database (or simply: our representation of belief states) looks like, we should try to obey the following principle of categorial matching:
(PCM) The representation of a belief state after a belief change has taken
place should be of the same format as the representation of the belief
state before the change.1
The next question is
2. What is the relation between the elements explicitly represented in
the database and the beliefs that may be derived from these elements?
This relation is to a large extent dependent on the application area of the
database. In some cases the elements explicitly formulated in the database
have a special status in comparison to the logical consequences of these
beliefs, that may be derived by some inference mechanism. In other cases,
the formulation of the beliefs in the database is immaterial so that any
representation that has the same logical consequences, i.e. the same set
of implicit beliefs, is equivalent. As will be seen below, the nature of the
relation between explicit and implicit beliefs is of crucial importance for
how the belief revision process is attacked.
3. How are the choices concerning what to retract made?
Logic alone is not sufficient to decide which beliefs to give up and
which to retain when performing a belief revision. What are the extralogical
factors that determine the choices? Again, the methodological rules chosen
here are dependent on the application area.
1 This principle is too obvious for us to claim to have invented it. It is formulated, for instance, in (Dalal 1988a) and called there the principle of “adequacy of representation”.
In answering the methodological questions, a small number of basic ra-
tionality postulates, or as we might equally call them, integrity constraints
can be considered as operative.
(i) The beliefs in a database should be kept consistent whenever possible.
This constraint seems to be the dominating motive for the research done
under the title ‘belief revision’. In fact, (i) can be seen as distinguishing
the enterprise of belief revision from that of paraconsistent logic,2 which
shares the aim of explicating rational deliberation in the face of contradic-
tory information. However, there is no sharp separating line. Some authors
(e.g. Rescher, de Kleer, Dubois and Prade, Brewka) argue against an out-
right elimination of inconsistencies from the database, but the disagreement
with the advocates of consistency appears to vanish if one properly distinguishes potential beliefs represented in the database and beliefs actually
entertained on the basis of the database. Other accounts amalgamate ideas
from paraconsistent logics with ideas from belief revision (Belnap 1977a,
Belnap 1977b, Cross and Thomason 1992).
The next integrity constraint concerns deductive closure:
(ii) If the beliefs in a database logically entail a sentence A, then A should be included in the database.
Databases satisfying (ii) clearly involve a great deal of idealization.
They will be called belief sets in the following.
While the first two constraints are pertinent to the static picture, the
remaining postulates are specific to database dynamics. They require the changes to be minimal in a certain sense.
(iii) The amount of information lost in a belief change should be kept minimal.
(iv) In so far as some beliefs are considered more important or entrenched than others, one should retract the least important ones.
Although constraints (iii) and (iv) are rather different at first sight we shall
find that there are close connections between them.
1.3 Belief revision in science
A scientific theory may be considered as the system of beliefs of a scientist
who holds it. It is then natural to ask how belief revision as conceived in
this chapter relates to theory change as encountered in the development of
the real sciences. In this section we give a very short and simplified account
of this relationship, relying mainly on the views expressed by Quine from
2 A logic or inference operation is paraconsistent iff it does not satisfy the classical
ex falso quodlibet, i.e. if not everything is derivable from a classically inconsistent set of
premises. A comprehensive survey of paraconsistent logics is furnished in (Priest et al.
1989).
his famous paper ‘Two Dogmas of Empiricism’ (1951) through The Web of
Belief (1978, written jointly with Ullian).
A scientific theory is tested against observations. If the observational
data are in conflict with the predictions of the theory, then they falsify the
theory. Some part or other of the all-embracing scientific theory must be
given up. But it is not clear, and in fact there usually is a great variety of
choice, which part of the theory has to give way. In Quine’s (1951, p. 40)
(1953, p. 43) view, ‘[a]ny statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system ... Conversely, by the same token, no statement is immune to revision.’ The only beliefs which have a standing independent from other beliefs are observation
sentences and self-evident truths. Every other item in the body of scien-
tific ‘knowledge’ has to be assessed holistically, i.e., in conjunction with
the totality of surrounding beliefs (Duhem-Quine thesis). The links which
establish coherence between beliefs are provided by logical implications.
In order to explain recalcitrant observations scientists have to frame
new hypotheses. The choice of a hypothesis is not arbitrary. (Quine and
Ullian 1978) give a list of six virtues which constrain the selection of a new
hypothesis: conservativity, modesty, simplicity, generality, refutability, and
precision. Conservativity, which is sometimes called the maxim of mini-
mum mutilation, says that one should, in the light of wayward evidence,
retain as much as possible of the old theory. According to Quine, conserva-
tivity is—and should be—the dominant factor in scientific theory change.
From time to time, however, one can gain an enormous increase of simplic-
ity and generality by a radical departure from the scientific tradition. Then
the strategy of conservativity is given up. This is the turn from normal
science to scientific revolution as discussed in the writings of philosophers
of science like Kuhn and Feyerabend, Hanson and Toulmin.
The formal theory of belief revision we are going to expound in this
chapter is concerned solely with small changes like those occurring in normal
science. An essential feature of the present concept of belief revision is
conservativity. Conceptual discontinuities or paradigm shifts are beyond
the scope of our analyses. However, even in the case of scientific revolutions
a sort of conservativity can serve as a heuristic maxim. Quite frequently
new theories are required to include old, dislodged ones as ‘limiting cases’.
This can be construed, at least in some cases, as the idea that a minimal
revision of the new theory by certain counterfactual, idealizing assumptions
(e.g. by letting some parameter assume the value 0 or ∞, or by neglecting
some ‘disturbing’ factor) will yield the old theory or at least an adequate
translation of it (Rott 1991b). Another suggestion to account for paradigm
shifts in the present framework is to think of them in terms of changes in
the degrees of epistemic entrenchment of certain pieces of science (Rescher
1976, p. 116) (Gärdenfors 1988, Ch. 4).
1.4 Different kinds of belief change
Depending on how beliefs are represented and what kinds of inputs are
accepted, different typologies of belief change are possible. In the most
common case beliefs are represented by sentences in some code, and a be-
lief A can only be accepted or not accepted (which is not to say that the belief is rejected, i.e. that ¬A is accepted; cf. (Quine and Ullian 1978, p. 12) on the distinction between disbelief and nonbelief) in a belief system.
Prima facie, there are two basic types of belief change: Pieces of informa-
tion can be inserted into a belief system or deleted from a belief system.
In our terminology, a revision occurs when a new piece of information that
is inconsistent with the present belief system (or database) is inserted into
that system in such a way that the result is a new consistent belief sys-
tem. Warning: We follow the literature in using the term ‘revision’ with
three different senses in this chapter. In its wide meaning, ‘belief revision’
refers to the general problem of changing belief states; a narrower meaning
refers to inserting an arbitrary piece of information into a belief system;
still narrower is the sense just introduced, i.e., where that new piece of
information is inconsistent with the original belief system. We hope that
in the following the context is always sufficiently clear for disambiguation.
We shall see that there are two fundamentally different modes of per-
forming belief changes. The direct or immediate one just inserts and deletes
units of belief from the current database D, without bothering about any
integrity constraint. The two trivial change operations are then accompa-
nied by a sophisticated and in general paraconsistent and non-monotonic
inference operation3 which tells us which beliefs are actually supported by D, i.e. which sentences count as conclusions from D. In a sense, the immediate mode reduces the
dynamics of belief to the statics of inference relations. It is instantiated,
for example, in the writings of Poole and Brewka and in truth maintenance
systems, and is treated—implicitly or explicitly—at considerable length in
Volume 3 of this Handbook.
The second mode which we call logic-constrained takes the integrity
constraints as constraints for the very process of belief change, so that the
change operations themselves will become definitely non-trivial (as we shall
shortly see). In return for these pains the method may use standard, in most cases even classical, propositional logic as the underlying inference operation. Whereas immediate belief revision operates on
databases which are not logically closed, the logic-constrained belief revi-
sion may operate on either such databases (and thus at a certain level ignore
integrity constraint (ii), see Section 5) or on databases which take the form
of logically closed theories. The latter method of theory change is most em-
3 An inference operation or logic Cn is monotonic if H ⊆ H′ implies Cn(H) ⊆ Cn(H′); otherwise it is nonmonotonic. On nonmonotonic logics, see Volume 3 of this Handbook and Section 6 below.
Fig. 1. Direct belief revision (foundations theory, NMR approach)
inently instantiated by the Alchourrón-Gärdenfors-Makinson (AGM) tradition and will be brought into the focus of the present Handbook Chapter.
Fig. 2. Logic-constrained belief revision (coherence theory, AGM ap-
proach)
In a catchy phrase, one could say that immediate belief revision allows
the reasoner to forget the old belief state while logic-constrained belief revi-
sion allows him to forget the old input. This, however, is to be taken with a
grain of salt. In the immediate mode, computing the belief state generated
by the new input may be more efficient when the prior state is taken as a
point of departure. In the logic-constrained mode, on the other hand, we
may—and as a rule will—find traces of the earlier inputs in the selection
mechanisms employed in the transition from belief state 1 to belief state 2.
In the AGM approach, where no degrees of belief are considered and
consistency and closure are regulative ideals, one can distinguish three main
kinds of belief changes:
1. Expansion: A new sentence is added to a belief system K regardless of the consequences of the larger set so formed. The belief system that results from expanding K by a sentence A together with the logical consequences will be denoted K + A.
2. Revision: A new sentence that is (typically) inconsistent with a belief system K is added, but in order that the resulting belief system be consistent—satisfies integrity constraint (i)—some of the old sentences in K are deleted. The result of revising K by a sentence A will be denoted K ∗ A.
3. Contraction: Some sentence in K is retracted without adding any new facts. In order that the resulting system satisfies integrity constraint (ii) some other sentences from K must be given up. The result of contracting K with respect to the sentence A will be denoted K ∸ A.
Expansions of belief systems can be handled comparatively easily. If we aim at complying with integrity constraint (ii), the intuitive process of expansion K + A can simply be defined as the logical closure of K together with A:

(Def +)  K + A = {B : K ∪ {A} ⊢ B}

Clearly, K + A defined in this way will be closed under logical consequences and will be consistent when A is consistent with K.
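Over a finite propositional language, (Def +) can be sketched semantically: represent a logically closed belief set by the set of possible worlds compatible with it, so that expansion is simply intersection with the models of the new sentence. This anticipates the possible-worlds representation of Section 2.5; the encoding below is our own illustration, not the chapter's:

```python
from itertools import product

# Two atoms suffice for the illustration; a world is a truth assignment.
ATOMS = ["A", "B"]
WORLDS = [dict(zip(ATOMS, vs)) for vs in product([False, True], repeat=len(ATOMS))]

def models(sentence):
    """Indices of the worlds in which the sentence holds."""
    return {i for i, w in enumerate(WORLDS) if sentence(w)}

def accepts(K_worlds, sentence):
    """A sentence belongs to the closed belief set iff it holds in all K-worlds."""
    return K_worlds <= models(sentence)

def expand(K_worlds, sentence):
    """(Def +): K + A keeps exactly the K-worlds that satisfy A."""
    return K_worlds & models(sentence)

K = models(lambda w: w["A"])        # believe A, agnostic about B
K2 = expand(K, lambda w: w["B"])    # expansion by B
print(accepts(K2, lambda w: w["A"] and w["B"]))  # True: closed under consequence
print(K2 != set())  # True: consistent, since B is consistent with K
```

Expanding by a sentence inconsistent with K leaves no worlds at all, i.e. the absurd belief set; this is why revision, unlike expansion, cannot be defined by intersection alone.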
It is not possible to give a similar explicit definition of revisions and contractions in logical and set-theoretical notions only. The problems for revisions were presented in the introductory example. In trying to accommodate ¬C, there is no purely logical reason for making one choice rather than another among the sentences to be retracted; we have to rely on additional information about these sentences. Thus, from a logical point of view, there are several ways of specifying the revision K ∗ ¬C. What is needed here is a well-defined, and preferably computationally tractable, method of determining the revision while taking into account integrity constraints (i)–(iv). This will be handled technically by using the notion of a revision function ∗ which has two arguments, a belief system K and a sentence A, and which has as value the revised belief system K ∗ A.
The contraction process faces parallel problems. To give a simple illustration, we return to our initial example. Consider a belief system K which contains the sentences A, B′, B′ → B, A ∧ B → C and their logical consequences (among which is C). Suppose that we want to contract K by deleting C. Of course, C must be deleted from K when forming K ∸ C, but also at least one of the sentences A, B′, B′ → B, or A ∧ B → C must be given up as well in order to be able to maintain deductive closure (integrity constraint (ii)). Again there is no purely logical reason for making one choice rather than the other. We see that adding ¬C poses quite the same problem as deleting C from our database. Another concrete example is provided by (Fagin et al. 1983, p. 353).
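The indeterminacy of contraction can be made concrete by enumerating the maximal subsets of the four explicit beliefs that fail to entail C, the "remainders" that reappear in partial meet contraction (Section 4.1). The following sketch, with our own encoding of the example, finds the candidates by brute force:

```python
from itertools import combinations, product

ATOMS = ["A", "Bp", "B", "C"]

def entails(premises, conclusion):
    """Brute-force propositional entailment over the four atoms."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in premises) and not conclusion(v):
            return False
    return True

base = {
    "A":      lambda v: v["A"],
    "B'":     lambda v: v["Bp"],
    "B'->B":  lambda v: (not v["Bp"]) or v["B"],
    "A&B->C": lambda v: (not (v["A"] and v["B"])) or v["C"],
}
C = lambda v: v["C"]

# Subsets of the base that do NOT entail C ...
names = list(base)
candidates = [set(s) for r in range(len(names) + 1)
              for s in combinations(names, r)
              if not entails([base[n] for n in s], C)]
# ... and among those, the maximal ones ("remainders").
remainders = [s for s in candidates if not any(s < t for t in candidates)]
print(len(remainders))  # 4: each remainder drops exactly one of the four sentences
```

Logic alone ranks these four candidates equally; choosing among them requires the extralogical selection mechanisms discussed in the rest of the chapter.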
In parallel with revision we can introduce the concept of a contraction function ∸ which has the same two arguments as before, i.e. a belief system K and a sentence A, and which produces as value the belief system K ∸ A. Later in this chapter we shall show that the problems of revision and contraction are closely related—being two sides of the same coin.
The common denominator to the questions raised by revision and con-
traction is that the database is not viewed merely as a collection of atomic
facts, but rather as a collection of facts from which other facts can be
derived. It is the interaction between the updated facts and the derived
facts that is the source of the problem. Integrity constraint (i) creates in-
determinacies and problems in revisions, integrity constraint (ii) does so in
contractions.
1.5 Two approaches to describing belief revisions
When tackling the problem of belief revision there are two general strategies
to follow, namely to present explicit constructions of the revision process
and to formulate postulates for such constructions. For a computer scientist
the ultimate solution to the problem about belief revision is to develop
algorithms for computing appropriate revision and contraction functions for
an arbitrary belief system. In the sequel, several proposals for constructions
of revision methods will be presented. These methods are not presented as
pure algorithms, but on a slightly more abstract level.
However, in order to know whether an algorithm is successful or not it
is necessary to determine what an ‘appropriate’ revision function is. Our
standards for revision and contraction functions will be various rationality
postulates. The formulations of these postulates are given in a more or
less equational form. One guiding idea is that the revision K ∗ A of K with respect to A should represent the minimal change of K needed to accommodate A consistently (integrity constraint (iii)). The consequences
and interrelations of the postulates will also be investigated.
After presenting postulates and proposals for revision methods, the two
approaches will be connected. This is done via a number of representation
theorems, which show that the revision methods that satisfy a particular
set of rationality postulates are exactly those that fall within some computationally well-defined class of methods.4
1.6 Related areas
One related area concerns changes of legal systems. A normative system, for
instance a legal code, does not only consist of the rules in the code, but also
4 For further discussion of the two strategies cf. (Makinson 1985, pp. 350-351).
of the logical consequences of this code. Normative systems are, in general,
not static but change over time. Most commonly new rules are added so
that the system is expanded. But sometimes norms are retracted—a kind of
process that corresponds to contraction. Among legal theorists this process
is known as a derogation of a code. Finally, a new rule may be added that
contradicts the earlier code. This process is called an amendment of the
code—corresponding to revision.
One encounters the same logical problems for both derogations and
amendments of a normative system as for contractions and revisions of be-
lief sets. These problems are presented in a precise way by (Bulygin and Alchourrón 1977, Lewis 1979) under the name of ‘a problem about permission’, and as a translation of the belief revision process in (Gärdenfors 1989). To give a simple illustration of the derogation problem, we borrow
an example from (Hilpinen 1981). Suppose that a father has commanded
that
1. The children may watch TV only if they eat their dinner.
2. The children may eat their dinner only if they do their homework.
A consequence of these norms is that
3. The children may watch TV only if they do their homework.
This norm is thus also included in the normative system. Now suppose
that the system is changed by the addition of the following permission by
the father:
4. Today, the children may watch TV without doing their homework.
In the updated normative system, (3) is of course no longer a valid com-
mand, but (1) and (2) cannot both be valid either. Which of these com-
mands is derogated by the father’s permission (4)? There is no purely
logical reason for selecting one or the other, but the decision has to be
grounded on other principles.
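Under a crude propositional reading of Hilpinen's example (taking "may do X only if Y" simply as the implication X → Y, and the father's permission (4) as asserting an admissible state with TV but no homework), the clash and the two possible derogations can be checked mechanically. The encoding is our own, not the chapter's:

```python
from itertools import product

ATOMS = ["TV", "Dinner", "HW"]

def consistent(sentences):
    """True iff some truth assignment satisfies all sentences."""
    return any(all(s(dict(zip(ATOMS, vs))) for s in sentences)
               for vs in product([False, True], repeat=len(ATOMS)))

n1 = lambda v: (not v["TV"]) or v["Dinner"]   # (1) TV only if dinner
n2 = lambda v: (not v["Dinner"]) or v["HW"]   # (2) dinner only if homework
perm = lambda v: v["TV"] and not v["HW"]      # (4) today: TV without homework

print(consistent([n1, n2, perm]))  # False: (1) and (2) cannot both survive
print(consistent([n1, perm]))      # True: derogating (2) restores consistency
print(consistent([n2, perm]))      # True: derogating (1) works just as well
```

As in the swan example, the two derogations are logically on a par; an ordering of the norms is needed to choose between them.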
One thing to note is that a legal code constitutes a finite set of proposi-
tions. Thus derogations (contractions) and amendments (revisions) of legal
codes are most naturally analysed in terms of base revisions (in the sense of
Section 2.3) rather than revisions of belief sets. One idea for handling such
revisions is to introduce an ordering of the norms of a system. In support
of this idea, it can be noted that norms have different status, since some
are regarded as more basic than others and less prone to amendments and
derogations. A rudimentary form of such an ordering can be determined in
a legal code by its hierarchical systems of sections, including, for example,
constitutional laws, criminal laws, and local regulations.
Because the full body of a legal code often indicates conflicting verdicts,
an important task for jurists is to specify in this terminology the ordering
of normative importance in greater detail. Bulygin and Alchourrón (1977)
discuss the application of a Normenordnung and the consequences of gaps
in this ordering. In fact, the creation of precedents is an established way of
determining such an ordering. Closely related formal problems are investi-
gated by Alchourrón and Makinson (1981) who also assume an ordering of
the underlying code, which they call a ‘hierarchy of regulations’. Section 5
will discuss a number of constructive ideas of how to apply such orderings.
In this chapter there is no room to discuss the connection between be-
lief revision and the following interesting themes: nonclassical logics for
AI and logic programming like epistemic and relevant logic (with the no-
table exception of nonmonotonic logics in Section 6), probabilistic belief
change (see (Jeffrey 1965, Pearl 1988, Lindström and Rabinowicz 1989), (Gärdenfors 1988, Ch. 5) and H. Kyburg’s Chapter in Volume 3 of this Handbook), belief revisions in languages admitting quantifiers, conditionals and
the Ramsey test (see (Gärdenfors 1986, Gärdenfors 1987, Makinson 1990,
Grahne 1991)), reasoning about (changes of) knowledge and belief (see
Y. Moses’ Chapter in this volume and the TARK-volumes (Halpern et al.
1986 1988 1990)), reasoning about action (see Winslett’s Chapter in this
volume), the frame problem, dynamic logic, machine learning, and expert
systems. We take it that the very wealth of areas to which methods and
results of belief revision theory are relevant underscores its importance. A
comprehensive annotated bibliography to the earlier literature on belief re-
vision and its various applications and related fields is provided by (Doyle
and London 1980).
2 Representing belief states
2.1 Preliminaries
Before we can start discussing representations of belief revision, we must
have a way of representing belief states since a revision function takes
one belief state into another. This section will be devoted to a discussion
of some possible representations for belief states and the methodological
problems connected with the choice of a representation.
The most common representations of belief states in computational con-
texts are sentential or propositional in the sense that the elements consti-
tuting the belief systems are coded as formulas representing sentences. This
kind of representation will be the focus of the present chapter, but at the
end of this section we shall briefly discuss some alternative types of repre-
sentations.
However, even if we stick to propositional representations of belief sys-
tems, there are many options open. First of all, we must choose an appro-
priate language to formulate the belief sentences. For example, databases
include some form of rules, and there are many ways of formalizing these: as
quantified sentences in first order logic, as PROLOG rules (corresponding
to Horn-clauses), as default statements (e.g. Reiter style), as probability
statements, etc.
For the main part of the chapter we shall work with a language L which is based on propositional logic. The details of L will be left open for the time being. It will be assumed that L is closed under applications of the boolean operators ¬ (negation), ∧ (conjunction), ∨ (disjunction), and → (implication). We will use A, B, C etc. as variables over sentences in L. It is also convenient to introduce the symbols ⊤ and ⊥ for the two sentential constants ‘truth’ and ‘falsity’. These symbols will be treated as denoting ‘ideal points’ and they do not necessarily represent truth values.
is accepted in a formal representation of a belief state are not only the
sentences that are explicitly put into the database, but also the logical con-
sequences of these beliefs. Hence, the second factor which has to be decided
upon when representing a belief state is what logic governs the beliefs. In
practice this depends on which theorem-proving mechanism is used in com-
bination with the database. However, when doing a theoretical analysis,
one wants to abstract from the idiosyncrasies of a particular algorithm for
theorem proving and start from a more abstract description of the logic. If
the logic is undecidable, further complications will arise, but we will ignore
these for the time being.
In general, we will assume that the underlying logic or consequence
operation Cn includes classical propositional logic, that it is monotonic and
compact and that it validates the cut rule and the deduction theorem.5If
His a set of sentences, we use the notation Cn(H) for the set of all logical
consequences of H. If Hlogically entails A, i.e. if ACn(H), we will write
this as H`A. So Cn(H) = {A:H`A}.Cn(A) abbreviates Cn({A}).
We note for further reference that it follows from our suppositions about
Cn that it satisfies ‘disjunction in the premises’, i.e. that H∪ {BC} ` A
whenever both H∪ {B} ` Aand H∪ {C} ` A.
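As an illustration, the consequence relation of classical propositional logic over a finite stock of atoms can be decided by checking all valuations. The following sketch is our own illustration (the encoding of sentences as nested tuples is an arbitrary choice, not anything from the literature); it spot-checks monotony and the 'disjunction in the premises' property just mentioned.

```python
from itertools import product

ATOMS = ["p", "q", "r"]  # a small finite stock of propositional letters

def sat(s, v):
    """Evaluate sentence s (a nested tuple) in valuation v (dict atom -> bool)."""
    op = s[0]
    if op == "atom": return v[s[1]]
    if op == "not":  return not sat(s[1], v)
    if op == "and":  return sat(s[1], v) and sat(s[2], v)
    if op == "or":   return sat(s[1], v) or sat(s[2], v)
    if op == "imp":  return (not sat(s[1], v)) or sat(s[2], v)
    raise ValueError(op)

def valuations():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def entails(H, A):
    """H |- A: every valuation satisfying all of H also satisfies A."""
    return all(sat(A, v) for v in valuations()
               if all(sat(B, v) for B in H))

p, q, r = ("atom", "p"), ("atom", "q"), ("atom", "r")

# Monotony: anything entailed by H is entailed by any superset of H.
assert entails([p], p) and entails([p, q], p)

# 'Disjunction in the premises': from H u {B} |- A and H u {C} |- A
# it follows that H u {B v C} |- A.
H = [("imp", p, r), ("imp", q, r)]
assert entails(H + [p], r) and entails(H + [q], r)
assert entails(H + [("or", p, q)], r)
```

Such a truth-table check is exponential in the number of atoms, which is one reason why the chapter abstracts from concrete theorem-proving mechanisms.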
2.2 Belief sets
The simplest way of representing a belief state is to represent it by a set of sentences from L. Accordingly, we define a belief set as a set K of sentences in L which is closed under logical consequences, i.e. which satisfies the integrity constraint (ii) requiring that A ∈ K whenever A ∈ Cn(K).⁶ The interpretation of such a set is that it contains all the sentences that are accepted in the represented belief state. Consequently, when A ∈ K we say that A is accepted in K and when ¬A ∈ K we say that A is rejected in K. It should be noted that a sentence being accepted does not imply that it has any form of justification or support.⁷ A belief set can also be seen as a theory which is a partial description of the world. 'Partial' because in general there are sentences A such that neither A nor ¬A are in K. A belief set is called logically finite if the background consequence operation Cn partitions K into a finite number of equivalence classes. For weaker conditions of finitude, see (Hansson 1994).

⁵A logic Cn is compact iff whenever A ∈ Cn(H), then there is a finite subset H′ of H such that A ∈ Cn(H′). It validates the (unit version of the) cut rule iff A ∈ Cn(H) and B ∈ Cn(H ∪ {A}) imply B ∈ Cn(H). It validates the deduction theorem iff B ∈ Cn(H ∪ {A}) implies A → B ∈ Cn(H).

⁶Belief sets were called knowledge sets in (Gärdenfors and Makinson 1988).
By classical logic, whenever K is inconsistent, then K ⊢ A for every sentence A of the language L. This means that there is exactly one inconsistent belief set under our definition, namely the set of all sentences of L. We introduce the notation K⊥ for this belief set. Clearly, K⊥ is useless for information handling purposes and it blatantly violates integrity constraint (i), but for technical reasons we will include it as a limiting case in the category of belief sets. When we want to dismiss it from consideration, we may simply speak of consistent belief sets.
The approach to modelling belief states based on belief sets that has been presented here is propounded in (Fagin et al. 1983, Reiter 1984, Gärdenfors 1984, Gärdenfors 1988) among others. It has the advantages of handling facts, logical integrity constraints and derivation rules in a uniform way and it is a convenient way of representing partial information.
The integrity constraint (ii) that a belief set is supposed to be closed under logical consequence is an idealization which will cause problems when it comes to implementing a system, since there are in general infinitely many logical consequences to take care of. We will return to implementation problems in connection with belief base revision in Section 5.
2.3 Belief bases
Against representing belief states as belief sets it has been argued (Alchourrón and Makinson 1982, Makinson 1985, Hansson 1989, Hansson 1991a, Fuhrmann 1991)⁸ that some of our beliefs have no independent standing but arise only as inferences from our more basic beliefs. This distinction cannot be expressed in a belief set since there are no markers for which beliefs are basic and which are derived. Furthermore, when we perform revisions or contractions, we never seem to do it to the belief set itself, which contains an infinite number of elements, but rather to some typically finite base for the belief set.

⁷For further discussion of the interpretation of belief sets cf. (Gärdenfors 1988). Pearce and Rautenberg (1991) introduce a finer structure on belief sets by not identifying the acceptance of ¬A with the rejection of A. Instead, they represent a belief state by a pair of sets of sentences: a 'positive' set containing those sentences that are accepted and a 'negative' set containing the rejected sentences. It is possible to have a sentence ¬A in the positive set without having A in the negative set.

⁸Curiously enough, in AI the situation is just the reverse of that in the logico-philosophical tradition. There people started with base revisions and were then criticized for this. See e.g. Nebel's (1989) insistence on a 'knowledge level analysis' of belief revision. Also cf. (Fagin et al. 1983, Ginsberg 1986).
Formally, this idea can be modelled by calling any arbitrary set H of sentences in L a belief base. H is a base for a belief set K iff H is a subset of K and Cn(H) = K. In most cases of application, H will be finite. Then, instead of introducing revision and contraction functions that are abstractly defined on belief sets, it is assumed that belief sets are associated with belief bases and that these functions are defined on the bases, or at least with essential reference to the bases. Such functions will be called base revisions and base contractions respectively. This approach introduces a more fine-grained structure since we can have two bases H and H′ such that Cn(H) = Cn(H′) but H ≠ H′, and so presumably H ∗ A ≠ H′ ∗ A. We will present rationality postulates for base revisions and contractions in Section 3.4 and explicit constructions in Section 5.
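The syntax-sensitivity of bases can be made concrete in a small sketch. Assuming the finite propositional setting of Section 2.1 (the tuple encoding below is our own illustration), the bases H1 = {p, q} and H2 = {p, p → q} generate the same belief set, yet the naive operation of simply deleting a sentence from a base treats them differently.

```python
from itertools import product

ATOMS = ["p", "q"]

def sat(s, v):
    """Evaluate sentence s (a nested tuple) in valuation v."""
    op = s[0]
    if op == "atom": return v[s[1]]
    if op == "not":  return not sat(s[1], v)
    if op == "and":  return sat(s[1], v) and sat(s[2], v)
    if op == "or":   return sat(s[1], v) or sat(s[2], v)
    if op == "imp":  return (not sat(s[1], v)) or sat(s[2], v)
    raise ValueError(op)

WORLDS = [dict(zip(ATOMS, bits))
          for bits in product([False, True], repeat=len(ATOMS))]

def mods(H):
    """Indices of the worlds satisfying every member of H. Two bases have
    the same closure, Cn(H) = Cn(H'), iff mods(H) = mods(H')."""
    return frozenset(i for i, w in enumerate(WORLDS)
                     if all(sat(s, w) for s in H))

p, q = ("atom", "p"), ("atom", "q")
H1 = frozenset([p, q])
H2 = frozenset([p, ("imp", p, q)])

assert mods(H1) == mods(H2)   # same belief set Cn(H1) = Cn(H2) ...
assert H1 != H2               # ... but distinct bases

# Deleting the sentence q changes H1 but leaves H2 untouched (q is not an
# element of H2), so the resulting belief sets differ: the outcome depends
# on the base, not just on its closure.
assert mods(H1 - {q}) != mods(H2 - {q})
```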
There is no general answer to the question of which representation is the better, full belief sets or bases; this depends on the particular application area. Within computer science applications, bases seem easier to handle since they are mostly finite structures. On the other hand, it has been argued in (Gärdenfors 1990) that many of the conceptual advantages of bases for belief sets can be modelled by belief sets together with the notion of epistemic entrenchment of beliefs. Changes of belief sets rather than bases represent what an ideal reasoner would or should do when forced to reorganize his beliefs. Belief set dynamics offers a competence model which helps us to understand what people, and AI systems, for that matter, should do if they were not bounded by limited logical reasoning capabilities.
2.4 Justifications vs. coherence models
Another question that has to be answered when representing a state of belief is whether the justifications for the beliefs should be part of the representation or not. With respect to this question there are two main approaches. One is the foundations theory, which holds that one should keep track of the justifications for one's beliefs: propositions that have no justification should not be accepted as beliefs. The belief bases of Section 2.3 are usually taken to consist of elements which all have an independent justification and thus give rise to a moderate form of the foundations theory (see Section 5). In a more principled fashion, the so-called truth maintenance systems dealt with in Section 7 contain explicit records of justifications. The foundations approach is compatible with both the immediate mode (truth maintenance systems) and the logic-constrained mode (belief base revisions) in the sense of Section 1.4. The other approach is the coherence theory, which holds that one need not consider the pedigree of one's beliefs. The focus is instead on the logical structure of the beliefs: what matters is how a belief coheres with the other beliefs that are accepted in the present state. The belief sets presented above clearly fall into the latter category. The coherence approach makes sense only if based on the logic-constrained mode of belief revision.
It should be obvious that the foundations and the coherence theories have very different implications for what should count as rational changes of belief systems. According to the coherence theory the objectives are to maintain consistency in the revised epistemic state and to make minimal changes of the old state that guarantee sufficient overall coherence (that is, the main objectives are to satisfy the chief methodological integrity constraints). The following general principle formulated by Harman (1986, p. 39) is relevant.

Principle of Positive Undermining: One should stop believing P whenever one positively believes one's reasons for believing P are no good.

A truly coherentist reading is obtained if the latter part of the principle is interpreted as "having positive reasons for not believing P". Interpreted in this way, the principle expresses a certain kind of inertia of a reasoner's beliefs. Sections 3 and 4 of this chapter are devoted to coherentist models of belief revision.
On the other hand, according to the foundations theory, belief revision should consist, first, in giving up all beliefs that no longer have a satisfactory justification and, second, in adding new beliefs that have become justified. Here a different principle of undermining is operative:

Principle of Negative Undermining: One should stop believing P whenever one does not associate one's belief in P with an adequate justification (either intrinsic or extrinsic). (Harman 1986, p. 39)

A consequence of this principle is that if a state of belief containing A is revised so that the negation of A becomes accepted, not only should A be given up in the revised state, but also all beliefs that depend on A for their justification. If one believes that B, and all the justifications for believing B are given up, continued belief in B is no longer justified, so it should be rejected. This process has been termed 'disbelief propagation' in (Martins and Shapiro 1988).⁹ A drawback of this principle, from an implementational point of view, is that it may lead to chain reactions and thus to severe bookkeeping problems.
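A minimal sketch of disbelief propagation might look as follows. This is our own simplified model of a justification structure, not the Martins–Shapiro system: each belief carries a list of justification sets, a belief survives if at least one of its justification sets consists entirely of surviving beliefs, and retraction is iterated to a fixed point, which is exactly the chain reaction mentioned above.

```python
def propagate_disbelief(justifications, retracted):
    """
    justifications: dict mapping each belief to a list of its justification
    sets (each a set of beliefs it depends on); an empty set means the
    belief is foundational (self-justified).
    retracted: the beliefs given up initially.
    Returns the beliefs that survive: a belief survives iff at least one of
    its justification sets consists entirely of surviving beliefs.
    """
    surviving = set(justifications) - set(retracted)
    changed = True
    while changed:  # chain reactions: keep deleting until a fixed point
        changed = False
        for belief in list(surviving):
            if not any(js <= surviving for js in justifications[belief]):
                surviving.discard(belief)
                changed = True
    return surviving

# A is foundational; B is justified by A; C by B; D has two independent
# justifications, one via A and one foundational.
J = {"A": [set()], "B": [{"A"}], "C": [{"B"}], "D": [{"A"}, set()]}

# Retracting A drags B and C down with it, while D survives on its
# independent foundational justification.
assert propagate_disbelief(J, {"A"}) == {"D"}
```

The example also illustrates the point made below that a belief with several independent justifications may be retained even when some of them are removed.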
Justifications come in two kinds. The standard kind of justification is that a belief is justified by one or several other beliefs, but not justified by itself. However, since all beliefs should be justified and since infinite regresses are disallowed, some beliefs must be justified by themselves. Harman (1986, p. 31) calls such beliefs foundational. One finds ideas like this, for example, in the epistemology of the positivists: observational statements need no further justification; they are self-evident.

⁹Or 'knowledge base garbage collection' in (Charniak et al. 1987). Also cf. Rao and Foo's (1989) 'foundational equation', according to which foundational belief revision is coherentist belief revision plus disbelief propagation.
Another requirement on the set of justifications is that it be grounded or well-founded. In particular, justifications are supposed to be non-circular, so that we do not find a situation where a belief in A justifies B, a belief in B justifies C, while C in turn justifies A. Sosa (1980, pp. 23-24) describes the justificational structure of an epistemic state in the following way:

For the foundationalist, every piece of knowledge stands at the apex of a pyramid that rests on stable and secure foundations whose stability and security does not derive from the upper stories or sections.

Another feature of the justification relation is that a belief A may be justified by several independent beliefs, so that even if some of the justifications for A are removed, the belief may be retained because it is supported by other beliefs.
Probably the most common representations of epistemic states used in the cognitive sciences are called semantic networks. A semantic network typically consists of a set of nodes representing some objects of belief and, connecting the nodes, a set of links representing relations between the nodes. The networks are then complemented by some implicit or explicit interpretation rules which make it possible to extract beliefs and epistemic attitudes (for detailed information, see Horty's Chapter in Volume 3). Changing a semantic network consists in adding or deleting nodes or links.
If nodes represent beliefs and links represent justifications, semantic networks seem to be ideal tools for representing epistemic states according to the foundational theory. However, not all nodes in semantic networks represent beliefs and not all links represent justifications. Different networks have different types of objects as nodes and different kinds of relations as links. In fact, the diversity is so large that it is difficult to see what the various networks have in common. It seems that any kind of object can serve as a node in the networks and that any type of relation or connection between nodes can be used as a link between nodes. This diversity is liable to undermine claims that semantic networks represent epistemic states. In his classical methodological article, Woods (1975, p. 36) admitted that 'we must begin with the realization that there is currently no "theory" of semantic networks'. However, in recent years this claim has been rebutted by a formalism for truth maintenance systems initiated by Doyle (1979) and developed by Goodwin (1982) and others. These procedural systems and their associated formalism will be the topic of Section 7.
We will keep in mind that the two theories of belief revision are based on conflicting ideas of what constitutes rational changes of belief. The choice of underlying theory is, of course, also crucial for how a computer scientist will attack the problem of implementing a belief revision system. For a more detailed account of the distinction between justifications and coherence models, see (Harman 1986).
2.5 Possible worlds models
An obvious objection to using sets of sentences as representations of belief states is that the objects of belief are normally not sentences but rather the contents of sentences, that is, propositions. The characterization of propositions that has been most popular among philosophers during recent years is to identify them with sets of possible worlds. The basic semantic idea connecting sentences with propositions is then that a sentence expresses a given proposition if and only if it is true in exactly those possible worlds that constitute the set of worlds representing the proposition.
By taking beliefs to be beliefs in propositions, we can then represent a belief state by a set W_K of possible worlds. The epistemic interpretation of W_K is that it is the narrowest set of possible worlds in which the individual in the represented belief state believes that the actual world is located. This kind of representation of a belief state, which is familiar from epistemic logic, has been used by Harper (1977), Grove (1988), Morreau (1992) among others and in a generalized form by Spohn (1988, 1990) (also cf. the comparisons in (Gärdenfors 1978)).
However, there is a very close correspondence between belief sets and possible worlds models. For any set W of possible worlds, we can define a corresponding belief set K_W as the set of those sentences that are true in all worlds in W. It is easy to verify that K_W defined in this way satisfies the integrity constraint (ii), so that it is indeed a belief set. If W is non-empty, integrity constraint (i) is satisfied as well. Conversely, for any belief set K, we can define a corresponding possible worlds model W_K by identifying the possible worlds in W_K with the maximal consistent extensions of K. Then we say that a sentence A is true in such an extension w iff A ∈ w. Again it is easy to verify that this will generate an appropriate possible worlds model (for details cf. (Grove 1988)).
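For a finite propositional language, this correspondence can be spelled out directly: worlds are simply the valuations of the language, and the maximal consistent extensions reduce to single valuations. The following sketch is our own encoding (sentences are drawn from a fixed finite pool so that K_W stays finite); it checks the round trip from W to K_W and back.

```python
from itertools import product

ATOMS = ["p", "q"]

def sat(s, v):
    """Evaluate sentence s (a nested tuple) in valuation v."""
    op = s[0]
    if op == "atom": return v[s[1]]
    if op == "not":  return not sat(s[1], v)
    if op == "and":  return sat(s[1], v) and sat(s[2], v)
    if op == "or":   return sat(s[1], v) or sat(s[2], v)
    if op == "imp":  return (not sat(s[1], v)) or sat(s[2], v)
    raise ValueError(op)

ALL_WORLDS = [dict(zip(ATOMS, bits))
              for bits in product([False, True], repeat=len(ATOMS))]

def K_of(W, pool):
    """K_W: the sentences of the pool that are true in every world of W."""
    return frozenset(s for s in pool if all(sat(s, w) for w in W))

def W_of(K):
    """W_K: the worlds satisfying every sentence of K (a finite stand-in
    for the maximal consistent extensions of K)."""
    return [w for w in ALL_WORLDS if all(sat(s, w) for s in K)]

p, q = ("atom", "p"), ("atom", "q")
POOL = [p, q, ("not", p), ("not", q), ("or", p, q), ("and", p, q)]

W = [w for w in ALL_WORLDS if w["p"]]   # the agent believes exactly that p
K = K_of(W, POOL)
assert p in K and q not in K and ("not", q) not in K   # q stays undetermined
assert W_of(K) == W                                    # the round trip restores W
```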
From a computational point of view, belief sets are much more tractable than possible worlds models. So even though possible worlds models are popular among philosophical logicians, the considerations here show that the two kinds of representations are basically equivalent.¹⁰ And if we want to implement belief revision systems, sentential representations like belief sets, and in particular bases for belief sets, are much easier to handle.
3 Rationality postulates for belief revisions
In this section we present a paradigmatic set of postulates for belief changes.
They are supposed to be rationality criteria for revisions and contractions
of belief sets.
¹⁰We neglect technical problems created by the fact that not every proposition is expressible by some sentence (in case our language has infinitely many propositional letters), and by the fact that distinct worlds may satisfy exactly the same sentences.
3.1 The AGM postulates for revision
In this section, it will be assumed that belief sets, that is, sets of sentences closed under logical consequences, are used as representations of belief states. The goal is now to formulate postulates for rational revision and expansion functions defined over such belief sets. The set of postulates to be presented here has been developed jointly by the first author and Alchourrón and Makinson (Alchourrón and Makinson 1982, Gärdenfors 1982, Alchourrón et al. 1985).
The underlying motivation for these postulates is that when we change
our beliefs, we want to retain as much as possible of our old beliefs—we
want to make a minimal change. Information is in general not gratuitous,
and unnecessary losses of information are therefore to be avoided (compare
(Hilpinen 1981)). This heuristic criterion which we already met in the
form of integrity constraints (iii) and (iv) may be called the criterion of
informational economy.
However, it turns out to be difficult to give a precise quantitative definition of the loss of information (see e.g. the results in Section 4.2 and the discussion of minimality in (Gärdenfors 1988, pp. 66-68)). Instead we shall follow other lines of specifying 'minimal change'. For instance, we may assume that the sentences in a belief set have different degrees of epistemic entrenchment, and when we give up sentences when forming a revision or a contraction, we give up those with the lowest degree of entrenchment. The idea of epistemic entrenchment will be presented in greater detail in Section 4.2.
It is assumed that for every belief set K and every sentence A in L, there is a unique belief set K ∗ A representing the revision of K with respect to A. In other words, ∗ is a function taking a belief set and a sentence as arguments and giving a belief set as a result. This is admittedly a strong assumption, since in many cases the information available is not sufficient to determine a unique revision. However, the agent must reach a single new belief state, and from a computational point of view this is a gratifying assumption anyway. For a relaxation of this assumption and an investigation of revision relations producing several belief sets as potential revisions of a given belief set and a given sentence, see (Lindström and Rabinowicz 1989, Lindström and Rabinowicz 1991, Doyle 1991, Rott 1992b).
The first postulate is a special case of the principle of categorial matching and requires that the outputs of the revision function indeed be belief sets:

(K∗1) For any sentence A and any belief set K, K ∗ A is a belief set. ('Closure')

The second postulate guarantees that the input sentence A is accepted in K ∗ A:

(K∗2) A ∈ K ∗ A. ('Success')
This 'success' postulate expresses the fact that the incoming information is given absolute priority over the beliefs originally entertained. Taken together with the postulate of consistency preservation (K∗5) below, it makes clear that two things are not desiderata of AGM belief revision: symmetry between old and new information, (Cn(A)) ∗ B = (Cn(B)) ∗ A, and commutativity of revisions, (K ∗ A) ∗ B = (K ∗ B) ∗ A. The violation of these conditions is obvious when we substitute ¬A for B.
The normal application area of a revision process is when the input A contradicts what is already in K, that is, ¬A ∈ K. However, in order to have the revision function defined for all arguments, we can easily extend it to cover the case when ¬A ∉ K. In this case, revision is identified with expansion. For technical reasons, this identification is divided into two parts:

(K∗3) K ∗ A ⊆ K + A. ('Expansion 1')

(K∗4) If ¬A ∉ K, then K + A ⊆ K ∗ A. ('Expansion 2')

The proviso that A is consistent with K, that is ¬A ∉ K, is not needed in (K∗3) because if ¬A ∈ K, then K + A = K⊥ and (K∗3) is trivially fulfilled. When (K∗1) and (K∗2) are present and Cn is a monotonic consequence operation, (K∗4) is equivalent to a preservation principle stating that if A is consistent with K, then all elements of K are preserved in K ∗ A, i.e., K ⊆ K ∗ A. (K∗3) and (K∗4) taken together say that in the consistent case, logic is sufficient to guide belief revision.
The purpose of a revision is to produce a new consistent belief set. Thus K ∗ A should be consistent, unless A is logically impossible:

(K∗5) K ∗ A = K⊥ only if ⊢ ¬A. ('Consistency preservation')

It should be the content of the input sentence A rather than its particular linguistic formulation that determines the revision. This means that logically equivalent sentences should lead to identical revisions:

(K∗6) If ⊢ A ↔ B, then K ∗ A = K ∗ B. ('Extensionality')

The postulates (K∗1) – (K∗6) are elementary requirements that connect K, A and K ∗ A. This set will be called the basic set of postulates.
The final two conditions concern composite belief revisions of the form K ∗ (A ∧ B). The idea is that, if K is to be changed minimally so as to include two sentences A and B, such a change should be possible by first revising K with respect to A and then expanding K ∗ A by B, provided that B does not contradict the beliefs in K ∗ A. For technical reasons the precise formulation is split into two postulates:

(K∗7) K ∗ (A ∧ B) ⊆ (K ∗ A) + B. ('Conjunction 1')

(K∗8) If ¬B ∉ K ∗ A, then (K ∗ A) + B ⊆ K ∗ (A ∧ B). ('Conjunction 2, rational monotony')

When ¬B ∈ K ∗ A, then (K ∗ A) + B is K⊥, which is why the proviso is needed in (K∗8) but not in (K∗7).
We next turn to some consequences of the postulates. For this purpose we assume the basic set of postulates as given. Then it can be shown (Gärdenfors 1988, p. 57) that (K∗7) is equivalent to

1. K ∗ A ∩ K ∗ B ⊆ K ∗ (A ∧ B).

Similarly, (K∗8) is equivalent to

2. If ¬A ∉ K ∗ (A ∧ B), then K ∗ (A ∧ B) ⊆ K ∗ A.

which in turn implies

(K∗8r) K ∗ (A ∨ B) ⊆ Cn(K ∗ A ∪ K ∗ B). ('Disjunction')

Another principle that is useful is the following 'factoring' condition:

3. K ∗ (A ∧ B) = K ∗ A or K ∗ (A ∧ B) = K ∗ B or K ∗ (A ∧ B) = K ∗ A ∩ K ∗ B.

It can be shown that, given the basic postulates, (3) is in fact equivalent to the conjunction of (K∗7) and (K∗8).
Furthermore, (K∗7) and (K∗8) together entail the following identity criterion:

4. K ∗ A = K ∗ B if and only if B ∈ K ∗ A and A ∈ K ∗ B. ('Reciprocity')

In the presence of (K∗1) and (K∗2), (4) is equivalent to the conjunction of a weakened form of (K∗7) and another weakened form of (K∗8), viz.

(K∗7c) If B ∈ K ∗ A then K ∗ (A ∧ B) ⊆ K ∗ A. ('Cut')

(K∗8c) If B ∈ K ∗ A then K ∗ A ⊆ K ∗ (A ∧ B). ('Cautious monotony')

That (4) entails (K∗7c) and (K∗8c) follows from the facts that A ∈ K ∗ (A ∧ B) and that B ∈ K ∗ A implies A ∧ B ∈ K ∗ A, by (K∗1) and (K∗2). The converse direction is clear after noting that A ∈ K ∗ B and B ∈ K ∗ A entail K ∗ B = K ∗ (A ∧ B) = K ∗ A, by (K∗7c) and (K∗8c).
The postulates (K∗1) – (K∗8) do not uniquely characterize the revision K ∗ A in terms of only K and A. This is, however, as it should be. We believe it would be a mistake to expect that only logical properties are sufficient to characterize the revision process. This would be like requiring that the standard Kolmogorov axioms for probability functions should pick out a unique rational probability function.
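A concrete revision operation satisfying several of the postulates above can be exhibited for a finite language by representing each belief set by its set of worlds (a smaller world set corresponds to a logically stronger belief set, so inclusions between belief sets reverse at the world level). The sketch below implements a 'full meet' style revision (keep the A-worlds of K if there are any, otherwise fall back on all A-worlds) and spot-checks some of the basic postulates; the encoding is our own illustration, not the AGM definition itself.

```python
from itertools import product

ATOMS = ["p", "q"]

def sat(s, v):
    """Evaluate sentence s (a nested tuple) in valuation v."""
    op = s[0]
    if op == "atom": return v[s[1]]
    if op == "not":  return not sat(s[1], v)
    if op == "and":  return sat(s[1], v) and sat(s[2], v)
    if op == "or":   return sat(s[1], v) or sat(s[2], v)
    raise ValueError(op)

WORLDS = [dict(zip(ATOMS, bits))
          for bits in product([False, True], repeat=len(ATOMS))]

def modset(A):
    """The set of worlds (indices) at which A is true."""
    return frozenset(i for i, w in enumerate(WORLDS) if sat(A, w))

def expand(K, A):     # K + A at the world level
    return K & modset(A)

def revise(K, A):     # full meet style revision
    return K & modset(A) if K & modset(A) else modset(A)

p, q = ("atom", "p"), ("atom", "q")
K = modset(("and", p, q))             # the agent believes p and q
KA = revise(K, ("not", q))

# (K*2) success: the input holds in every world of the revised state
assert all(sat(("not", q), WORLDS[i]) for i in KA)
# (K*3)/(K*4): for an input consistent with K, revision equals expansion
assert revise(K, p) == expand(K, p)
# (K*5) consistency preservation: a satisfiable input gives a non-empty world set
assert KA
```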
3.2 The AGM postulates for contraction
The postulates for the contraction function − will to an even larger extent than for revisions be motivated by the principle of informational economy. As for revisions, the rationality postulates are not sufficient to determine uniquely a contraction function.
The first postulate is of a familiar kind:

(K−1) For any sentence A and any belief set K, K − A is a belief set. ('Closure')

Because K − A is formed from K by giving up some beliefs, it should be required that no new beliefs occur in K − A:

(K−2) K − A ⊆ K. ('Inclusion')

When A ∉ K, the criterion of informational economy requires that nothing be retracted from K:
(K−3) If A ∉ K, then K − A = K. ('Vacuity')

We also postulate that the sentence to be contracted not be a logical consequence of the beliefs retained in K − A (unless A is logically valid, in which case it can never be retracted because of the integrity constraint (ii)):

(K−4) If not ⊢ A, then A ∉ K − A. ('Success')

From (K−1) to (K−4) it follows that

5. If A ∈ K, then (K − A) + A ⊆ K.

In other words, if we first retract A and then add A again to the resulting belief set K − A, no beliefs are accepted that were not accepted in the original belief set. The criterion of informational economy demands that as many beliefs as possible should be kept in K − A. One way of guaranteeing this is to require that if we expand K − A by A we should be back in exactly the same state as before the contraction, that is, K:

(K−5) K ⊆ (K − A) + A. ('Recovery')

This recovery postulate enables us to 'undo' belief contractions, because the converse inclusion (K − A) + A ⊆ K follows from the remaining postulates, provided that A ∈ K. (K−5) has turned out to be the most controversial among the AGM postulates for contraction. It is closely related to what Doyle (1979, p. 235) calls the 'reasoned retraction of assumptions'. We will return to a discussion of this postulate in the next section.
The sixth postulate is analogous to (K∗6):

(K−6) If ⊢ A ↔ B, then K − A = K − B. ('Extensionality')

Postulates (K−1) – (K−6) are called the basic set of postulates for contractions. Again, two further postulates for contractions with respect to conjunctions will be added. The motivations for these postulates are much the same as for (K∗7) and (K∗8).

(K−7) K − A ∩ K − B ⊆ K − (A ∧ B). ('Conjunction 1')

(K−8) If A ∉ K − (A ∧ B), then K − (A ∧ B) ⊆ K − A. ('Conjunction 2')

It is interesting to note that (K−7) is in fact equivalent, given the basic postulates, to the seemingly weaker

6. K − A ∩ Cn({A}) ⊆ K − (A ∧ B).

In parallel with (3) it can be shown that (K−7) and (K−8) are jointly equivalent to the following condition:

7. K − (A ∧ B) = K − A or K − (A ∧ B) = K − B or K − (A ∧ B) = K − A ∩ K − B.

A useful consequence of (K−4) and (K−8) is the following condition which says that K − (A ∧ B) is 'covered' either by K − A or by K − B:

8. Either K − (A ∧ B) ⊆ K − A or K − (A ∧ B) ⊆ K − B.

These postulates for revision and contraction and their consequences are discussed in (Gärdenfors 1988, Ch. 3). Further postulates of interest are the following weakenings of (K−7) and (K−8):

(K−7c) If B ∈ K − (A ∧ B) then K − A ⊆ K − (A ∧ B),

(K−8c) If B ∈ K − (A ∧ B) then K − (A ∧ B) ⊆ K − A,

(K−8r) K − (A ∧ B) ⊆ Cn(K − A ∪ K − B).

Both (K−8c) and (K−8r) are weaker than (K−8), but they are logically independent even in the case of a belief set K which is finite modulo Cn and in the presence of (K−1) – (K−7) (Rott 1993).
3.3 From contractions to revisions and vice versa
We next turn to a study of the connections between revision and contraction
functions. In the previous two sections they have been charac-terized by
two sets of postulates. These postulates are independent in the sense that
the postulates for revisions do not refer to contractions and vice versa.
A natural question is now whether either contraction or revision can be
defined in terms of the other. Here we shall present two positive answers
to this question.
A revision of a knowledge set can be seen as a composition of a contrac-
tion and an expansion. More precisely: In order to construct the revision
KA, one first contracts Kwith respect to ¬Aand then expands K.
−¬A
by A. Formally, we have the following definition which is called the Levi
identity:
(Def*) KA= (K.
−¬A) + A
Let us call the revision function obtained from a contraction function .
with the help of the Levi identity R(.
). That this definition is appropriate
is shown by the following result:
Theorem 3.3.1. If a contraction function .
satisfies (K.
1) to (K.
4)
and (K.
6), then R(.
)satisfies (K1) (K6). Furthermore, if .
also
satisfies (K.
7),(K.
7c),(K.
8),(K.
8c)or (K.
8r), then R(.
)satisfies
(K7),(K7c),(K8),(K8c)or (K8r), respectively.
This result supports (Def )as an appropriate definition of a revision
function. Note that the controversial recovery postulate (K.
5) is not used
in the theorem.
Conversely, contractions can be defined in terms of revisions. The idea is that a sentence B is accepted in the contraction K − A if and only if B is accepted both in K and in K ∗ ¬A. Formally this amounts to the following definition, which has been called the Harper identity:

(Def −) K − A = K ∩ (K ∗ ¬A).

Let us call the contraction function obtained from a revision function ∗ with the help of the Harper identity C(∗). Again, this definition is supported by the following result:

Theorem 3.3.2. If a revision function ∗ satisfies (K∗1) to (K∗6), then C(∗) satisfies (K−1) – (K−6). Furthermore, if ∗ also satisfies (K∗7), (K∗7c), (K∗8), (K∗8c) or (K∗8r), then C(∗) satisfies (K−7), (K−7c), (K−8), (K−8c) or (K−8r), respectively.
Again this result supports the appropriateness of (Def −). The two theorems show that the defined revision and contraction functions have the right properties. But we also want the two definitions to be interchangeable in the sense that if we start with one definition to construct a new contraction (or revision) function and after that use the other definition to obtain a revision (or contraction) function, then we ought to get the original function back. Briefly, we would like to have that R(C(∗)) = ∗ and C(R(−)) = −, for arbitrary revision and contraction functions ∗ and −. But this is immediate from the fact that the identity

9. K ∗ A = (K ∩ K ∗ A) + A

can be proved to be a consequence of the basic postulates for revisions, and conversely that the equation

10. K − A = K ∩ ((K − A) + ¬A)

can be proved from the basic set of postulates for contractions (including the recovery postulate).
In summary, Theorems 3.3.1 and 3.3.2 together with (9) and (10) show that the two sets of postulates for revision and contraction functions are interchangeable, and a method for constructing one of the functions would automatically, via (Def ∗) or (Def −), yield a construction of the other function satisfying the desired set of postulates. Moreover, we have the desired identities C(R(−)) = − and R(C(∗)) = ∗.
As was mentioned earlier, the contraction postulate (K−5) may not be valid in all contexts. It is also this postulate that has been most severely criticized of the AGM postulates. It is therefore interesting to ask what happens if this postulate is dropped. Makinson (1987) has shown that we lose something; that is, (K−5) is not derivable from the postulates (K−1) – (K−4) and (K−6), but we don't lose much.
To make this more precise, let us, following Makinson, call an operation − that satisfies (K−1) – (K−4) and (K−6) a withdrawal function. Because (K−5) was not used in Theorem 3.3.1, we know that each withdrawal function, by means of the Levi identity, generates a revision function that satisfies (K∗1) – (K∗6). If −1 and −2 are two withdrawal functions such that R(−1) = R(−2) we say that −1 and −2 are revision equivalent. We write [−] for the class of all withdrawal functions that are revision equivalent to −. Let us say that the withdrawal function −1 is greater than −2 on K if K −2 A ⊆ K −1 A for all A. Makinson proves the following:

Theorem 3.3.3. Let K be a belief set. Then for each withdrawal function − on K, there is a unique contraction function −′ on K that is revision equivalent to −, and this −′ is the greatest element of [−].
The upshot is that if one's main interest is in revision, then, although a given revision operation on a belief set K may be generated by several withdrawal functions, there is a unique one among them also satisfying the recovery postulate (K−5). This function is also the unique withdrawal function that eliminates from K as little as possible. Since R(−) is a revision function by Theorem 3.3.1, we get from the above observations that C(R(−)) satisfies the recovery postulate and R(C(R(−))) = R(−), so we find that C(R(−)) is the contraction function mentioned in Theorem 3.3.3. This makes clearer the specific role of recovery and also lends some support to its intuitive acceptability for the idealized belief sets used in the AGM approach.
3.4 Postulates for contractions and revisions of belief
bases
The above-mentioned postulates for changes of belief sets are not the only
ones discussed in the literature. Depending on special extensions, modifi-
cations and reinterpretations of the model of belief change, different sets
of postulates have been introduced and investigated. In the literature we
find special postulates for
- belief revision by sets of sentences, so-called ‘multiple’ contractions
  and revisions (Fuhrmann 1988, Fuhrmann and Hansson 1994, Hansson
  1989, Hansson 1991a, Niederée 1991, Lindström 1991)11;
- relational belief revision (Lindström and Rabinowicz 1989, Lindström
  and Rabinowicz 1991);
- revisions of varying theories (Alchourrón and Makinson 1985, Hansson
  1991a);
- revisions of belief states of a more complicated nature, e.g. of proba-
  bility distributions ((Gärdenfors 1988, Ch. 5) and (Lindström and
  Rabinowicz 1989)) or possibility distributions (Dubois and Prade
  1992);
- change-recording updates, as contrasted with knowledge-adding re-
  visions (see (Katsuno and Mendelzon 1992), and Winslett’s Chapter
  in this Volume).
At the time of writing, there seems to be no discussion of postulates
for ‘selective fact-finding’ (vs. ‘suppositional’) belief revision (Cross and
Thomason 1992), for revisions of ordinal conditional functions (Spohn
1988) or for revisions by ‘weighted’ inputs (Spohn 1988, Dubois and Prade
1992).
In this section we shall focus on changes of belief bases. First, we can
reformulate the AGM postulates as postulates for contractions performed
on bases rather than on full belief sets:
11 A constructive model is supplied in (Fagin et al. 1986).
(H∸1) For any sentence A and any belief base H, H ∸ A is a belief base.
(H∸2) H ∸ A ⊆ H.
(H∸3) If A ∉ Cn(H), then H ∸ A = H.
(H∸4) If not ⊢ A, then A ∉ Cn(H ∸ A).
(H∸5) H ⊆ (H ∸ A) + A.
(H∸6) If ⊢ A ↔ B, then H ∸ A = H ∸ B.
(H∸7) (H ∸ A) ∩ (H ∸ B) ⊆ H ∸ (A ∧ B).
(H∸8) If A ∉ H ∸ (A ∧ B), then H ∸ (A ∧ B) ⊆ H ∸ A.

(H∸1) is trivially satisfied by belief bases; Hansson (1991b) has sug-
gested replacing it by the stronger, but equally plausible, postulate of
relative closure

(H∸1′) Cn(H ∸ A) ∩ H ⊆ H ∸ A.

The antecedent of (K∸3) has been changed from A ∉ H to A ∉
Cn(H), and the consequent of (K∸4) has been changed from A ∉ H ∸ A to
A ∉ Cn(H ∸ A). By the same motivating considerations as for belief sets,
all axioms with the exception of (H∸5) look reasonable for belief bases as
well, although (Alchourrón et al. 1985), for instance, do not strengthen the
supplementary postulates (K∸7) and (K∸8) to (H∸7) and (H∸8). To
see that (H∸5) is at least dubitable, consider the belief base H = {A, B}.
In order to retract A ∨ B, we have to give up both A and B, so that we
are left with the empty base H ∸ (A ∨ B) = ∅. After adding A ∨ B again (and
possibly closing under Cn), however, we do not get back A and B, but
only A ∨ B (and perhaps its consequences). For a discussion of the recovery
postulate in the context of base revision, see (Hansson 1991b).
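The failure of recovery for bases can be checked mechanically. In the sketch below (ours), base elements are kept as labelled sentences whose semantics is a set of models over two atoms, and we retract A ∨ B by dropping every base element that entails it, which is the intuition in the example above:

```python
from itertools import product

W = frozenset(product([True, False], repeat=2))

def models(f):
    return frozenset(w for w in W if f(*w))

# Base elements are kept syntactically, as (label, model set) pairs.
A = ("A", models(lambda a, b: a))
B = ("B", models(lambda a, b: b))
A_or_B = ("A or B", models(lambda a, b: a or b))

def entails(base, sentence_models):
    """Cn-membership test: the conjunction of the base implies the sentence."""
    ms = W
    for _, m in base:
        ms &= m
    return ms <= sentence_models

H = {A, B}

# To retract A v B, every element entailing it must go; here, both A and B.
H_contracted = {s for s in H if not entails([s], A_or_B[1])}

# Re-adding A v B (even with closure under Cn) does not bring A or B back.
H_recovered = H_contracted | {A_or_B}
```

Although H entails A and B, Cn of the recovered base contains neither, so H ⊄ (H ∸ (A ∨ B)) + (A ∨ B) even after closing under Cn.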
Hansson (1989, 1993a) presents a different set of postulates tailored
especially to the problem of contracting belief bases:12
(HH∸1) H ∸ A ⊆ H. (‘Inclusion’)
(HH∸2) If not ⊢ A, then A ∉ Cn(H ∸ A). (‘Success’)
(HH∸3) If B ∈ H, but B ∉ H ∸ A, then there is some H′
  such that H ∸ A ⊆ H′ ⊆ H and A ∉ Cn(H′),
  but A ∈ Cn(H′ ∪ {B}). (‘Relevance’)
(HH∸4) If it holds for all subsets H′ of H
  that A ∈ Cn(H′) iff B ∈ Cn(H′),
  then H ∸ A = H ∸ B. (‘Uniformity’)
(HH∸5) If not ⊢ A, and each sentence in G logically entails A,
  then H ∸ A = (H ∪ G) ∸ A. (‘Redundancy’)
(HH∸1) and (HH∸2) are the AGM postulates (H∸2) and (H∸4),
and (HH∸4) is a stronger counterpart to (H∸6). The intuition behind
(HH∸3) is that if a sentence B is retracted from H when A is rejected,
then B plays some role for the fact that H, but not H ∸ A, logically entails
A. The motivation for (HH∸5) is that the prior addition to H of one
or more sentences that imply A should not change the outcome of the
contraction of H by A. Relative closure (H∸1′) and vacuity (H∸3) follow
from relevance, extensionality (H∸6) follows from uniformity, but (H∸5)
does of course not follow from Hansson’s postulates. (H∸7) holds for
‘maximizing’ transitively relational partial meet contractions. As Hansson
shows, the postulate

(HH∸6) For all G, if for all subsets H′ of H, A ∈ Cn(H′) iff
  A ∈ Cn(H′ ∪ G), then (H ∪ G) ∸ A = (H ∸ A) ∪ G. (‘Expansion’)

holds for all ‘reducing’ relational partial meet base contractions. The max-
imizing and reducing properties are explained in Section 5.2. Note that
(HH∸5) and (HH∸6) are postulates concerning contractions of different
bases, and as such they have no counterparts in the AGM framework, which
does not deal with changes of varying belief sets.

12 His original formulation is for ‘multiple’ contractions, i.e., contractions by sets of
sentences. We would like to thank Sven Ove Hansson for checking that they are suitable
for singleton operations as well.
Hansson (1993a) also presents postulates for base revisions.
(HH∗0) H ∗ A is consistent if not ⊢ ¬A. (‘Consistency’)
(HH∗1) H ∗ A ⊆ H ∪ {A}. (‘Inclusion’)
(HH∗2) If B ∈ H, but B ∉ H ∗ A, then there is some H′
  such that H ∗ A ⊆ H′ ⊆ H ∪ {A} and ⊥ ∉ Cn(H′),
  but ⊥ ∈ Cn(H′ ∪ {B}). (‘Relevance’)
(HH∗3) A ∈ H ∗ A. (‘Success’)
(HH∗4) If it holds for all subsets H′ of H that ¬A ∈ Cn(H′) iff
  ¬B ∈ Cn(H′), then H ∗ A = H ∗ B. (‘Uniformity’)
(HH∗5) If not ⊢ ¬A, and each sentence in G is logically
  inconsistent with A, then
  H ∗ A = (H ∪ G) ∗ A. (‘Redundancy’)

The motivation behind these postulates is similar to the motivation of
the contraction postulates (HH∸1) – (HH∸5). In addition, the following
two postulates will turn out to be relevant.

(HH∗4w) If A and B are in H and it holds for all subsets H′ of H
  that ¬A ∈ Cn(H′) iff ¬B ∈ Cn(H′), then H ∗ A = H ∗ B.
  (‘Weak uniformity’)
(HH∗6) H ∗ A = (H ∪ {A}) ∗ A. (‘Pre-expansion’)

We shall report on Hansson’s explicit constructions and representation
theorems in Sections 5.2 and 5.6.
4 Constructive models and representation theorems
As emphasized in the introductory sections, logic alone cannot tell us how
to revise a given database or belief set. We need some additional informa-
tion, a selection mechanism, in order to be able to decide rationally which
sentences to give up and which to keep. In this section we investigate five
possibilities of supplying the missing information and using it in a concrete
construction process for changes of a belief set K. In partial meet con-
tractions (Section 4.1) we utilize a selection function or a relation on the
powerset of K. In epistemic entrenchment contractions (Section 4.2), we
have a relation on K which respects logical structure in a certain manner.
In safe contraction (Section 4.3), we have relations on K which need not
satisfy the rather demanding conditions for epistemic entrenchment. In the
model-theoretic approach (Section 4.4) and ordinal conditional functions
(Section 4.5) we have orderings or rankings of interpretations or possible
worlds for L. In Section 5, we shall recognize basically the same methods,
but then the representation of belief states will include the specification of
a base H for the belief state K, with the understanding that the structure
of H is a substantial part of the selection mechanism that governs changes
of belief.
The selection mechanism can be interpreted either as something extra-
neously given (e.g. determined by logical and/or pragmatic factors, spec-
ified by the user of a database) or as an integral part of the belief state
(see (Hansson 1992)). In the former case one can (but need not) think
of the selection mechanism as independent from the current belief state
(cf. Alchourrón and Makinson’s ‘hierarchies’, Schlechta’s ‘preference rela-
tions’, Hansson’s ‘superselectors’, Rott’s ‘generalized epistemic entrench-
ment’). In the latter case the principle of categorial matching dictates that
a change operation should give us both a revised belief set and a revised
selection mechanism. With few exceptions (cf. Spohn’s ‘ordinal condi-
tional functions’, Dubois and Prade’s ‘possibility measures’, full meet base
contraction) the principle is violated by the existing approaches to belief
revision.
4.1 Partial meet contraction
This section will introduce a first kind of explicit modelling of a contraction
function for belief sets. It is designed to conform to the minimal change
constraint (iii) of Section 1.2, measuring minimality of change by the subset
relation ⊆. Via the Levi identity (Def ∗) and 3.3.1, such a model can be
used to define a revision function as well.

The problem in focus is how to define the contraction K ∸ A with respect
to a belief set K and a proposition A. A general idea is to start from K
and then give some recipe for choosing which propositions to delete from
K so that K ∸ A does not contain A as a logical consequence. According to
the criterion of informational economy, we should look at as large a subset
of K as possible.
The following notion is useful: A set M is a maximal subset of K that
fails to imply A if and only if (i) M ⊆ K, (ii) A ∉ Cn(M), and (iii) for
any M′ such that M ⊂ M′ ⊆ K, A ∈ Cn(M′). As is easily shown, when
K is a belief set then M is a belief set, and for any sentence B that is in
K but not in M, B → A is in M. Clause (iii) means that if M were to be
expanded by some sentence B from K − M, then it would entail A. The
set of all maximal subsets of K that fail to imply A will be denoted K⊥A.
Using the assumption that ⊢ is compact, it is easy to show that this set is
nonempty, unless A is logically valid.
A first tentative solution to the problem of constructing a contraction
function is to identify K ∸ A with one of the maximal subsets in K⊥A.
Technically, this can be done with the aid of a selection function S that
picks out an element S(K⊥A) of K⊥A for any K and any A whenever
K⊥A is nonempty. We then define K ∸ A by the following rule:

(Maxichoice) K ∸ A = S(K⊥A) when not ⊢ A, and K ∸ A = K otherwise.

Contraction functions determined by some such selection function were
called choice contraction functions in (Alchourrón and Makinson 1982) and
maxichoice contraction functions in (Alchourrón et al. 1985).
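For a logically finite language, the remainder set K⊥A can be enumerated semantically: identifying a theory with its set of models, each maximal subset of K failing to imply A extends the models of K by exactly one world refuting A. The sketch below (ours; function names are illustrative, not from the AGM literature) computes K⊥A and a maxichoice contraction over a two-atom language:

```python
from itertools import product

# Sentences and belief sets represented by their model sets over two atoms p, q.
W = frozenset(product([True, False], repeat=2))

def models(f):
    return frozenset(w for w in W if f(*w))

K = models(lambda p, q: p and q)   # the belief set Cn({p and q})
A = models(lambda p, q: p)         # the sentence p, believed in K

def remainders(K, A):
    """K-perp-A: each maximal subset of K failing to imply A corresponds to
    adding exactly one world refuting A to the models of K."""
    if not (K <= A) or A == W:     # A not believed, or a tautology
        return []
    return [K | {w} for w in sorted(W - A)]

def maxichoice(K, A):              # S picks one remainder; here simply the first
    rs = remainders(K, A)
    return rs[0] if rs else K

def levi(contract, K, A):          # K * A = (K - ¬A) + A; expansion intersects
    return contract(K, W - A) & A
```

Running the Levi identity over this maxichoice contraction yields a revision whose model set is a singleton, i.e. a maximal belief set, illustrating Theorem 4.1.2 below.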
A first test for this construction is whether it has the desirable proper-
ties. It is easy to show that any maxichoice contraction function satisfies
(K∸1) – (K∸6). But they also satisfy the following fullness condition:

(K∸F) If B ∈ K and B ∉ K ∸ A, then B → A ∈ K ∸ A, for any belief set
K.

We can now show that (K∸1) – (K∸6) and (K∸F) characterize maxi-
choice contraction functions in the sense of the following representation
theorem. Let us say that a contraction function ∸ can be generated by a
maxichoice contraction function iff there is some selection function S such
that ∸ is identical with the function obtained from S by the maxichoice
rule above.

Theorem 4.1.1. Any contraction function that satisfies (K∸1) – (K∸6)
and (K∸F) can be generated by a maxichoice contraction function.
However, maxichoice contraction functions in general produce contrac-
tions that are too large. A result from (Alchourrón and Makinson 1982)
is applicable here: Let us say that a belief set K is maximal iff for every
sentence B, either B ∈ K or ¬B ∈ K. One can now show the following
discomforting result:

Theorem 4.1.2. If a revision function ∗ is defined from a maxichoice
contraction function ∸ by means of the Levi identity, then, for any A such
that ¬A ∈ K, K ∗ A will be maximal.

In a sense, maxichoice contraction functions create maximal belief sets.
So a second tentative idea is to assume that K ∸ A contains only the propo-
sitions that are common to all of the maximal subsets in K⊥A:

(Meet) K ∸ A = ⋂(K⊥A) whenever K⊥A is nonempty, and K ∸ A = K
otherwise.
Thus a sentence B is in K ∸ A iff it is contained in all maximal subsets of
K that fail to imply A. This kind of function was called meet contraction
function in (Alchourrón and Makinson 1982), and full meet contraction
function in (Alchourrón et al. 1985).

Again, it is easy to show that any full meet contraction function satisfies
(K∸1) – (K∸6). They also satisfy the following intersection condition:

(K∸I) For all A and B, K ∸ (A ∧ B) = (K ∸ A) ∩ (K ∸ B).

We have the following representation theorem:

Theorem 4.1.3. A contraction function satisfies (K∸1) – (K∸6) and
(K∸I) iff it can be generated as a full meet contraction function.
The drawback of full meet contraction is the opposite of that of maxichoice
contraction: in general it results in contracted belief sets that are far too
small. The following result is proved in (Alchourrón and Makinson 1982):

Theorem 4.1.4. If a revision function ∗ is defined from a full meet con-
traction function ∸ by means of the Levi identity, then, for any A such
that ¬A ∈ K, K ∗ A = Cn({A}).

In other words, the revision will contain only A and its logical conse-
quences.
So using only one of the maximal subsets in K⊥A when defining the
contraction K ∸ A yields a contraction set that is too large, while using
all, in the sense of full meet contraction, yields a contraction set that is too
small. Given this, it is natural to investigate the consequences of using only
some of the maximal subsets in K⊥A when defining K ∸ A. Technically, a
selection function S can be used to pick out a nonempty13 subset S(K⊥A)
of K⊥A, if the latter is nonempty; in the limiting case when K⊥A is
empty, we put S(K⊥A) = {K}. The contraction function can then be
defined as follows:

(Partial meet) K ∸ A = ⋂S(K⊥A).

Such a contraction function was called a partial meet contraction func-
tion in (Alchourrón et al. 1985). The intuitive idea is that S picks out
the ‘best’ maximal subsets in K⊥A. The concept of a partial meet con-
traction function includes maxichoice contraction as the special case when
S(K⊥A) is a singleton, and full meet contraction as the special case when
S(K⊥A) = K⊥A.
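The three constructions can be run side by side. In the model-set representation used above (ours), intersecting the theories selected by S corresponds to taking the union of their model sets. The preference ranking below is a hypothetical example, chosen only to show how a selection function picks the ‘best’ remainders:

```python
from itertools import product

W = frozenset(product([True, False], repeat=2))

def models(f):
    return frozenset(w for w in W if f(*w))

K = models(lambda p, q: p and q)   # Cn({p and q})
A = models(lambda p, q: p)         # retract p

def remainders(K, A):
    if not (K <= A) or A == W:
        return []
    return [K | {w} for w in sorted(W - A)]

# Hypothetical preference: rank each remainder by the single world it adds.
rank = {(False, True): 2, (False, False): 1}   # higher = better

def S_best(rs):
    score = lambda r: rank[next(iter(r - K))]
    top = max(score(r) for r in rs)
    return [r for r in rs if score(r) == top]

def partial_meet(K, A, S):
    """K contracted by A: intersect the theories selected by S; intersecting
    theories corresponds to taking the union of their model sets."""
    rs = remainders(K, A)
    if not rs:
        return K
    out = frozenset()
    for r in S(rs):
        out |= r
    return out
```

With S selecting everything we get full meet contraction, with S selecting a singleton we get maxichoice, and with the preference above the contraction of Cn({p ∧ q}) by p keeps exactly Cn({q}).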
The following representation theorem shows that (K∸1) – (K∸6) indeed
characterizes the class of partial meet contraction functions:

Theorem 4.1.5. For every belief set K, ∸ satisfies postulates (K∸1) –
(K∸6) iff ∸ is a partial meet contraction function.

13 As has been pointed out by (Lindström 1991), the requirement that S(K⊥A) be
non-empty whenever K⊥A is, corresponds to postulate (K∗5), or, via the Levi and
Harper identities, essentially to (K∸4).
So far we have put no constraints on the selection function S. The idea
of S picking out the ‘best’ elements of K⊥A can be made more precise by
assuming that there is a preference relation over the maximal subsets in
K⊥A that can be used to pick out the top elements. This relation should
be independent of which sentence we are retracting. Technically, we do
this by introducing the notation M(K) for the union of the family of all
the sets K⊥A, where A is any proposition in K that is not logically valid,
or equivalently, the set of all maximal proper subtheories of K. Then it
is assumed that there exists a non-strict (reflexive) relation ≤ over M(K).
When K⊥A is nonempty, this relation can be used to define a selection
function that picks out the top elements in the ordering:

(Def S) S(K⊥A) = {M ∈ K⊥A : M′ ≤ M for all M′ ∈ K⊥A}.

A similar, and in fact equivalent, idea is to define S with the help of a
strict (irreflexive) relation < over M(K) in the following way:

(Def′S) S(K⊥A) = {M ∈ K⊥A : M < M′ for no M′ ∈ K⊥A}.

While (Def S) requires greatest elements of K⊥A, (Def′S) is content
with maximal ones. For future reference, we call the method employed in
the first definition strong maximization and that in the second definition weak
maximization. The connection between strong and weak maximization is
obvious: Simply define M < M′ iff not M′ ≤ M, and (Def S) and (Def′S)
are the same. Notice, however, that the properties of ≤ get a different
intuitive status when turned into properties of <. For example, connectivity
of ≤ is asymmetry of <, and transitivity of ≤ is virtual connectivity of <
(if M < M′ then either M < M″ or M″ < M′) (for more on this, cf. (Rott
1992b)).
A contraction function ∸ that is determined from ≤ via the selection
function S given by (Def S) will be called a relational partial meet contrac-
tion function. This way of defining the selection function constrains the
class of partial meet contraction functions that can be generated, as the
following representation theorem due to (Rott 1993)14 shows:

Theorem 4.1.6. For any belief set K, if ∸ satisfies (K∸1) – (K∸7)
and (K∸8r), then it is a relational partial meet contraction function. Con-
versely, if ∸ is a relational partial meet contraction, then it satisfies (K∸1)
– (K∸7), and if K is logically finite, then ∸ also satisfies (K∸8r).
If ∸ is determined from a transitive relation ≤ via the selection function
S given by (Def S), it will be called a transitively relational partial meet
contraction function. Alchourrón, Gärdenfors, and Makinson (1985) prove
the following representation theorem:

Theorem 4.1.7. For any belief set K, ∸ satisfies (K∸1) – (K∸8) iff ∸
is a transitively relational partial meet contraction function.

14 Similar results for nonmonotonic inference relations and revision functions have been
obtained independently by (Katsuno and Mendelzon 1991) and (Lindström 1991).
Surprisingly, 4.1.7 is still valid if we replace transitive relationality by
relationality with respect to a transitive and connected relation ≤.

Thus we have found a way of connecting the rationality postulates with
a general way of modelling contraction functions. Special cases of relational
partial meet contractions, with relations over M(K) derived from some
primitive relation over (some subset of) K, may be found in (Alchourrón
and Makinson 1986, Nebel 1989, Nebel 1992, Rott 1991c). Partial meet
contractions offer an attractive model for the belief change of an ideal
reasoner. The drawback of the partial meet construction, however, is that
the computational costs involved in determining the content of the relevant
maximal subsets of a belief set K are so overwhelming that we had better
look at some other possible solutions to the problem of constructing belief
revisions and contractions.
4.2 Epistemic entrenchment
Our second kind of explicit modelling of a contraction function for belief
sets. It is designed to conform to integrity constraint (iv) of Section 1.2. Even if
all sentences in a belief set are accepted or considered as facts (so that they
are assigned maximal probability), this does not mean that all sentences
are of equal value for reasoning in planning and problem-solving. Certain
pieces of our knowledge and beliefs about the world are more valuable than
others when planning future actions, conducting scientific investigations, or
reasoning in general. We will say that some sentences in a belief system
have a higher degree of epistemic entrenchment than others. This degree of
entrenchment will, intuitively, have a bearing on what is abandoned from a
belief set, and what is retained, when a contraction or a revision is carried
out.
From an epistemological point of view, some may see the notion of epis-
temic entrenchment as more fundamental than that of contraction. Some
may, conversely, see contraction as being more fundamental, and some, fi-
nally, may remain sceptical of any such prioritization. From a computational
point of view, the most promising direction is that which takes the relation
of epistemic entrenchment as basic. Accordingly, we begin this section by
presenting a set of postulates for epistemic entrenchment which will serve as
a basis for a constructive definition of appropriate revision and contraction
functions.

The guiding idea for the construction is that when a belief set K is re-
vised or contracted, the sentences in K that are given up are those having
the lowest degrees of epistemic entrenchment. Fagin, Ullman and Vardi
(1983, p. 358) introduce the notion of ‘database priorities’, which is some-
what similar to the idea of epistemic entrenchment. However, database
priorities need not respect the logical relationship between the items in a
database, and they are used in a rather different way to update belief sets
(cf. Section 5.2).
We will not assume that one can quantitatively measure degrees of
epistemic entrenchment, but only work with qualitative properties of this
notion. One reason for this is that we want to emphasize that the problem
of uniquely specifying a revision function (or a contraction function) can be
solved, assuming only very little structure on the belief sets apart from their
logical properties. For a comparison between epistemic entrenchment and
the closely related approach of possibility theory, see (Dubois and Prade
1991a, Gärdenfors and Makinson 1994).
If A and B are sentences in L, the notation A < B will be used as a
shorthand for ‘B is epistemically more entrenched than A’. The following
presentation is based on (Rott 1992b). Gärdenfors and Makinson’s account
in terms of a non-strict relation ≤ can be recovered by taking the converse
of the complement of <, i.e. by defining A ≤ B as B ≮ A.
A relation of epistemic entrenchment is a relation < over the set L of all
sentences satisfying the following basic set of postulates:

(EE1) Not A < A (irreflexivity)
(EE2↑) If A < B and B ⊢ C, then A < C (continuing up)
(EE2↓) If A < B and C ⊢ A, then C < B (continuing down)
(EE3↑) If A < B and A < C, then A < B ∧ C (conjunction up)
(EE3↓) If A ∧ B < B, then A < B (conjunction down)

The justification for (EE2↑) is that if it is a smaller change to give up A
than to give up B, and B logically entails C, then it will be a smaller change
to give up A than to give up C, because B must in any case be retracted
in order to give up C, provided that the revised belief set is to satisfy
the integrity constraint (ii). The justification for (EE2↓) is similar. The
rationale for (EE3↑) and (EE3↓) is as follows: If one wants to retract the
conjunction B ∧ C of two sentences B and C from K, this can be achieved,
and can only be achieved, by giving up either B or C. Consequently, the
informational loss incurred by giving up B ∧ C will be the same as the
loss incurred by giving up B or that incurred by giving up C. So if it is a
smaller change to give up A than to give up B, and also a smaller change
than to give up C, it will be a smaller change to give up A than to give up
B ∧ C. On the other hand, if it is a smaller change to give up A ∧ B than
to give up B, then A ∧ B will be discarded by discarding A, so this means
that it is a smaller change to give up A than to give up B.

Note that all conditions are in Horn form. If we wish to avoid any
reference to particular connectives in the object language, we can employ
the following versions of the postulates:

(EE↑) If A < B for every B in a non-empty set H, and H ⊢ C,
  then A < C (closure up)
(EE↓) If A < B and {B, C} ⊢ A, then C < B (closure down)

It is easy to check that closure up (closure down) is equivalent to the
combination of continuing up and conjunction up (continuing down and
conjunction down).
We note the following simple consequences of these postulates:

Lemma 4.2.1. Suppose the relation < over L satisfies (EE1) – (EE3).
Then it also has the following properties:
(i) If A < B, Cn(A) = Cn(A′) and Cn(B) = Cn(B′),
  then A′ < B′ (extensionality)
(ii) If A ∧ C < B ∧ C, then A < B
(iii) If A < B and C < D, then A ∧ C < B ∧ D
(iv) If A < B, then not B < A (asymmetry)
(v) If A < B and B < C, then A < C (transitivity)
(vi) A < A ∨ B iff A ∨ ¬B < A ∨ B.

The epistemic entrenchment relations of (Gärdenfors and Makinson
1988) are required to further satisfy the following supplementary postu-
lates:

(EE4) If A < B, then either A < C
  or C < B (virtual connectivity)
(EE5) If K ≠ K⊥, then A ∈ K
  iff B < A for some B (minimality)
(EE6) If not ⊢ A, then A < B for some B (maximality)

Given (EE4), the relation ≈, defined by A ≈ B iff neither A < B nor
B < A, is an equivalence relation. Then A ≈ B expresses the fact that A
and B are tied rather than incomparable. Thus virtual connectivity guar-
antees that any two sentences of L can be compared with respect to their
epistemic entrenchment. The postulates (EE5) and (EE6) take care of the
limiting cases: (EE5) requires that the sentences not in K are just the ones
which are minimally entrenched; and (EE6) says that only logically valid
sentences can be maximal in <. The converse of (EE6) follows from (EE1)
and (EE2↑).
Virtual connectivity, (EE4), is a very powerful condition.

Lemma 4.2.2. Suppose the relation < over L satisfies (EE4). Then
(i) if < satisfies (EE1) and (EE↑), then it also satisfies (EE↓);
(ii) if < is asymmetric and satisfies (EE↓), then it also satisfies (EE↑).

Since (EE1) is weaker than asymmetry and (EE↑) is more transparent
than (EE↓), we suggest characterizing relations of epistemic entrenchment
with comparability by postulates (EE1), (EE↑) and (EE4).

More supplementary conditions are given by

(EE5′) If A ∈ K and B ∉ K, then B < A (K-representation)
(EE6′) If A < ⊤ and not B < ⊤, then A < B (top equivalence)
entrenchment assuming a contraction function and a belief set as given, and
the other of which determines a contraction function assuming as given an
ordering of epistemic entrenchment and a belief set. The first condition is:

(C<) A < B if and only if A ∉ K ∸ (A ∧ B) and B ∈ K ∸ (A ∧ B).

The idea underlying this definition is that when we contract K with
respect to A ∧ B we are forced to give up at least one of A and B, and A
should be retracted while B should be kept just in case B is epistemically
more entrenched than A.
The second, and from a constructive point of view most central, con-
dition gives an explicit definition of a contraction function in terms of the
relation of epistemic entrenchment:

(C∸) B ∈ K ∸ A if and only if B ∈ K, and either A < A ∨ B or A ∉ K or
not A < ⊤.

Perhaps the best way of motivating this condition (apart from the fact that
it ‘works’ in the sense of Theorems 4.2.3 to 4.2.6 below) is the following.
If A is not in K, then we do not need to change K, and accordingly (C∸)
rules K ∸ A = K. So let A be in K. According to (C<), A < B is
essentially the same as B ∈ K ∸ (A ∧ B). If we replace B by A ∨ B, we
get A ∨ B ∈ K ∸ (A ∧ (A ∨ B)) iff A < A ∨ B. But K ∸ (A ∧ (A ∨ B)) is
the same as K ∸ A. And, given the understanding that the contraction
operation should satisfy the recovery postulate (K∸5), we have also that
¬A ∨ B ∈ K ∸ A, assuming that B is in K. Hence, for any B ∈ K, we
have A ∨ B ∈ K ∸ A iff B ∈ K ∸ A. Putting this together gives: If A ∈ K,
then for any B ∈ K, B ∈ K ∸ A iff A < A ∨ B. (Note that this argument
does not stand completely on its own feet, since it presumes (C<) and the
validity of several of the basic postulates for contraction, including most
conspicuously (K∸5).)
As mentioned above, one might take the ordering of epistemic entrench-
ment to be more fundamental than a contraction or a revision function.
Condition (C∸) now provides us with a tool for explicitly defining a con-
traction function in terms of the ordering <. An encouraging test of the
appropriateness of such a definition is the following theorem, proved in Rott
(1992b), which generalizes an earlier result of Gärdenfors and Makinson
(1988):
Theorem 4.2.3. If a relation < over L satisfies (EE1) – (EE3), then
the contraction function ∸ which is uniquely determined by (C∸) satisfies
(K∸1) – (K∸3), (K∸5) – (K∸6), (K∸8c) as well as (K∸7c),

(K∸7p) If A ∈ K ∸ (A ∧ B) and K ∸ (A ∧ B) ≠ K, then A ∈ K ∸ (A ∧ B ∧ C),15

and the condition (C<) restricted to sentences A and B in K. If in
addition < satisfies (EE5) and (EE5′), then ∸ satisfies the full condition
(C<); if < satisfies (EE6), then ∸ satisfies (K∸4); if < satisfies (EE6′),
then ∸ satisfies (K∸7); if < satisfies (EE4), then ∸ satisfies (K∸8).

15 (K∸7p) is a slightly restricted form of an equivalent to (K∸7); cf. (Alchourrón et
al. 1985, Observation 3.3).
Indirectly, 4.2.3 provides us with a consistency proof for the set of pos-
tulates for contractions (and thereby also for the postulates for revisions, via
3.3.1), since it is easy to show, using finite models, that the set of basic and
supplementary postulates for epistemic entrenchment is consistent. Man-
ageable finite representations (‘bases’) of epistemic entrenchment relations
satisfying (EE1) – (EE6) are supplied in (Rott 1991a, Section 3).
Conversely, we can show that if we start from a given contraction func-
tion and determine an ordering of epistemic entrenchment with the aid of
condition (C<), the ordering will have the desired properties:

Theorem 4.2.4. If a contraction function ∸ over K satisfies (K∸1) –
(K∸3) and (K∸5) – (K∸6), then the ordering < that is uniquely deter-
mined by (C<) satisfies the condition (C∸). If in addition ∸ satisfies
(K∸7p), (K∸7c) and (K∸8c), then < satisfies (EE1) – (EE3) and (EE5)
and (EE5′); if ∸ satisfies (K∸4), then < satisfies (EE6); if ∸ satisfies
(K∸7), then < satisfies (EE6′); if ∸ satisfies (K∸7) and (K∸8), then <
satisfies (EE4).
Theorems 4.2.3 and 4.2.4 imply that conditions (C∸) and (C<) are
interchangeable in the following sense. Restricting our attention to the
case considered by (Gärdenfors and Makinson 1988), we denote by C the
class of contraction functions satisfying (K∸1) – (K∸8) and by E the
class of orderings satisfying (EE1) – (EE6). Let C be a map from E to C
such that C(<) = ∸ is the contraction function determined by (C∸) for a
given ordering <; and let E be a map from C to E such that E(∸) = < is
the ordering determined by (C<) for a given contraction function ∸. We
have as an immediate consequence of Theorems 4.2.3 and 4.2.4 that:

Corollary 4.2.5. For all ∸ in C, C(E(∸)) = ∸, and for all < in E,
E(C(<)) = <.
These results suggest that the problem of constructing appropriate
contraction and revision functions can be reduced to the problem of pro-
viding an appropriate ordering of epistemic entrenchment. Furthermore,
condition (C∸) gives an explicit answer to which sentences are included
in the contracted belief set, given the initial belief set and an ordering of
epistemic entrenchment. From a computational point of view, applying
(C∸) is trivial, once the ordering < of the elements of K is given.
Rott’s liberalization of the concept of epistemic entrenchment meets
some objections against the strength of the postulates of the Gärdenfors-
Makinson account. An earlier rejoinder to one of these objections was
offered by Lindström and Rabinowicz (1991), who replace Gärdenfors and
Makinson’s conjunctiveness condition, according to which A ≤ A ∧ B or
B ≤ A ∧ B for every A and B, by a weaker condition stating that if A ≤ B
and A ≤ C, then A ≤ B ∧ C. However, Lindström and Rabinowicz cannot
apply a contraction recipe as simple as (C∸) in their framework. On the
other hand, the relations defined above, which are not required to satisfy
(EE4) – (EE6), are also appropriate to model (a sceptical interpretation
of) relational belief changes in the style of Lindström and Rabinowicz (see
(Rott 1992b)).
We now turn to some computational aspects of adding an ordering of
epistemic entrenchment to the representation of a belief state. One impor-
tant question concerns the amount of information that needs to be specified
in order to determine an ordering of epistemic entrenchment. In applica-
tions, belief sets will in general be logically finite, i.e., they partition into
finitely many classes of logically equivalent sentences. In algebraic terms,
the set of these equivalence classes will be isomorphic to a finite Boolean
algebra. This isomorphism is helpful when it comes to implementing a rep-
resentation of a belief set. A logically finite belief set can, for example, be
described via its set of atoms or via its set of dual atoms (which correspond
to maximal disjunctions of literals).
For example, consider a propositional language with two elementary
propositions A and B, and the belief set K = Cn({A ∧ B}). The logical
relations between the elements of this set can be divided into equivalence
classes which can be described as an 8-element Boolean algebra. This
algebra can be depicted by a Hasse diagram as in Figure 3 (lines upwards
mean logical implication):

Fig. 3. A Hasse diagram for the eight-element Boolean algebra

The dual atoms are the elements A ∨ ¬B, A ∨ B and ¬A ∨ B, or more
exactly, the corresponding equivalence classes in K under Cn. A Boolean
algebra with n atoms has 2ⁿ elements and, in general, an ordering over
a Boolean algebra must be specified for all these elements, so that there
exist (2ⁿ)! different total orderings of such an algebra (the number of pre-
orderings is even larger). However, the postulates (EE1) – (EE6) introduce
constraints on the ordering < so that the number of orderings satisfying
these postulates will be much smaller. The following result shows that
the number of total orderings of epistemic entrenchment over a Boolean
algebra with 2ⁿ elements is only n!:
Theorem 4.2.6. Let K be a logically finite belief set, and let T be the set of
all top elements of K, i.e. all dual atoms of K. Then any two entrenchment
relations ≤ and ≤′, each satisfying (EE1)–(EE6), that agree on all pairs
of elements in T are identical.
The theorem implies that if the ordering of entrenchment of A∨¬B, A∨B
and ¬A∨B in the Hasse diagram above is specified, then the ordering
of the remaining elements is fully determined. For example, if we assume
that A∨¬B < A∨B < ¬A∨B, it follows from (EE1)–(EE6) that the
ordering of the eight elements is A∧B ≤ A ≤ A∨¬B ≤ A↔B < B ≤
A∨B < ¬A∨B < ⊤.
The computational interpretation of this result is that in order to specify
the ordering of epistemic entrenchment over a belief set K containing 2^n
elements, and thus a contraction function over K according to 4.2.3, one
needs only feed in the (transitive and connected) ordering of n elements
from K. This means that the information required is linear in the number
of atomic facts in K.16 Theorem 4.2.6, however, does not hold for the generalized
notion of epistemic entrenchment characterized by the reduced postulate
set (EE1)–(EE3).
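The way a ranking of the dual atoms determines the entrenchment of every other element can be illustrated by a small sketch (our own illustration, not part of the original text). Propositions are represented extensionally as sets of valuations, so logically equivalent sentences collapse into one object, and we stipulate the ranks A∨¬B < A∨B < ¬A∨B from the example above:

```python
from itertools import product

ATOMS = ["A", "B"]

def worlds():
    return list(product([True, False], repeat=len(ATOMS)))

# A proposition is identified with its set of models, i.e. with its
# equivalence class under Cn.
def models(f):
    return frozenset(w for w in worlds() if f(dict(zip(ATOMS, w))))

# Dual atoms of K = Cn({A & B}), ranked as in the text: A∨¬B < A∨B < ¬A∨B.
rank_of_dual_atom = {
    models(lambda v: v["A"] or not v["B"]): 1,   # A ∨ ¬B
    models(lambda v: v["A"] or v["B"]): 2,       # A ∨ B
    models(lambda v: not v["A"] or v["B"]): 3,   # ¬A ∨ B
}

TOP = models(lambda v: True)

def entrenchment(p):
    """Rank of an element of K: the minimum rank of the dual atoms it implies.
    (p implies a dual atom iff p's models are a subset of the dual atom's.)"""
    if p == TOP:
        return float("inf")  # tautologies are maximally entrenched
    return min(r for d, r in rank_of_dual_atom.items() if p <= d)

A     = models(lambda v: v["A"])
B     = models(lambda v: v["B"])
AandB = models(lambda v: v["A"] and v["B"])
AiffB = models(lambda v: v["A"] == v["B"])

# Reproduces the ordering derived in the text:
assert entrenchment(AandB) == entrenchment(A) == entrenchment(AiffB) == 1
assert entrenchment(B) == 2
```

Only the three dual-atom ranks were supplied; the ranks of the remaining elements fall out of the minimum rule, which is exactly the content of Theorem 4.2.6 for this toy algebra.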
Let us assume that the belief set K of some agent is represented as a
logical database in some computer system and that the system contains an
'efficient' theorem prover (based on something like the resolution method)
that is used to determine the logical consequences of the elements in the
database (barring all problems of NP-completeness or undecidability for
the moment). What is requested from the user of such a system, in order
for the system to determine a revision of the database representing K,
is that she can provide an ordering of the top elements of the Boolean
algebra that are in K. Epistemologically, this is not appealing, since the
top elements represent maximal disjunctions of literals. Interpreting and
comparing the content of these formulas may be an arduous task. To take
a simple example, if K = Cn({A, B, C}) in the propositional language
with the elementary propositions A, B and C, then the dual atoms are the
equivalence classes of A∨B∨C, A∨B∨¬C, A∨¬B∨C, A∨¬B∨¬C,
¬A∨B∨C, ¬A∨B∨¬C, and ¬A∨¬B∨C. For each pair of these
disjunctions, the user has to decide which, if any, is the epistemically more
entrenched one.
However, from a computational point of view, listing the disjunctions
adds nothing to the computational costs, since these (or equivalent) formulas
will appear as resolution clauses when the logical consequences of the
database are calculated by the theorem prover. In other words, the user is
asked to order the resolution clauses of the current belief set. When the
system determines the revision of K with respect to A, it cuts off those
clauses at the end of the ordering that would make the set inconsistent
with A. The resolution clauses that remain, together with the input A,
can then be used as a basis for K∗A.
16But cf. (Doyle 1992, p. 43).
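The clause-cutting procedure just described can be sketched as follows. This is a minimal illustration of ours, not the original implementation: clauses are frozensets of literals, a brute-force satisfiability test stands in for the resolution prover, and the entrenchment ordering is given as a list from least to most entrenched:

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force satisfiability test for a set of clauses.
    A clause is a frozenset of literals such as 'A' or '~A'."""
    letters = sorted({lit.lstrip("~") for c in clauses for lit in c})
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(any(v[l.lstrip("~")] != l.startswith("~") for l in c) for c in clauses):
            return True
    return False

def revise(ordered_clauses, input_clauses):
    """Cut off the least entrenched clauses until consistency with the
    input is restored; `ordered_clauses` runs from least to most entrenched."""
    for i in range(len(ordered_clauses) + 1):
        kept = ordered_clauses[i:]
        if satisfiable(kept + input_clauses):
            return kept + input_clauses
    return input_clauses

# K contains A -> C (as the clause ~A ∨ C), A and B; we revise by ~C.
K = [frozenset({"~A", "C"}), frozenset({"A"}), frozenset({"B"})]
result = revise(K, [frozenset({"~C"})])
# The least entrenched clause A -> C is cut off; A, B and ~C survive.
```

Note that cutting a whole prefix of the ordering is cruder than the recipe (C−̇) itself; the sketch only shows the mechanics of discarding clauses in entrenchment order.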
The comparison A < A∨B in (C−̇) is slightly counterintuitive. In
(Rott 1991c), a more straightforward version of the condition is ventilated:
(C−̈) B ∈ K−̈A if and only if B ∈ K, and either A < B or A ∉ K or not
A < ⊤.
The contraction function −̈ defined in this way has the following properties:
Theorem 4.2.7. If a relation < over L satisfies (EE1)–(EE3), then
the contraction function −̈ which is uniquely determined by (C−̈) satisfies
(K−̇1)–(K−̇3), (K−̇6), (K−̇7p), (K−̇7c), and (K−̇8c). If in addition
< satisfies (EE6), then −̈ satisfies (K−̇4); if < satisfies (EE6′), then −̈
satisfies (K−̇7); if < satisfies (EE4), then −̈ satisfies (K−̇8). However, −̈
does not in general satisfy (K−̇5).
Since −̈ does not satisfy the controversial 'recovery' postulate (K−̇5),
it follows that −̈ defined by (C−̈) is in general not identical to −̇ defined by
(C−̇). The function −̈ is a 'withdrawal function' in the sense of (Makinson
1987, Section 3.3). Rott proves that −̇ and −̈ are revision equivalent:
Theorem 4.2.8. R(−̈) and R(−̇) are identical revision functions.
A consequence of this theorem is that if we are interested only in the
modelling of revisions obtained by the Levi identity, we can use the extremely
simple test (C−̈) when computing the revision functions, without
having to bother about the disjunctions in (C−̇).
Unlike the relations of Gärdenfors and Makinson (1988), which have
to satisfy (EE5), epistemic entrenchment relations required to satisfy only
(EE1)–(EE3) are not dependent on a specific belief set K. The dependence
on K is transferred to the maps (C≤) and (C−̇). Consequently,
they can be used for iterated contractions and revisions (Rott 1992b).
There are alternative ways to tackle the problem of iterated belief
change in this context. First, one can try to obtain a direct revision <∗A of
< by A. We define B <∗A C if and only if A→B < A→C (Rott 1991b).
However, this idea is not satisfying, because <∗A attributes to A an unrevisable
maximal a posteriori degree of epistemic entrenchment. Secondly, it
is possible to construct relations of epistemic entrenchment from a less demanding
kind of relation ≺ over the set L of all sentences, as in (Schlechta 1991a,
Schlechta 1991b, Rott 1992a) and (Rott 1992b, Section 6). These suggestions
are based on relations which need not respect the logical structure
of the sentences involved in the way epistemic entrenchment relations do.
The most intuitive method is described in (Rott 1992a). Given an acyclical
relation ≺ over L, we can define an ordering < of epistemic entrenchment
for a given belief set K as follows:
(Def <) A < B iff there is an H ⊆ K such that H ⊢ B and H is, in terms
of ≺, 'safer than' every G ⊆ K such that G ⊢ A; i.e., G ≠ ∅ and
for every sentence C in H there is a sentence D in G with D ≺ C.
It is shown in Rott (1992a) that < has the desired properties whenever
we start from a well-behaved relation ≺:
Theorem 4.2.9. If ≺ is acyclic over L, then the relation < given by
(Def <) is acyclic and satisfies (EE1), (EE2↑), (EE2↓), (EE3↓), (EE5) and
(EE6). If in addition ≺ is transitive, then < satisfies (EE3↑); if ≺ is
virtually connected, then < satisfies (EE4).
(Def <) is further supported by a connecting result with safe contractions
(see Section 4.3). Note, however, that the computational costs involved
in the construction of a relation of epistemic entrenchment via (Def
<) are enormous. The procedure proposed by Schlechta (1991a) seems to
be somewhat more efficient.17
A third way to define iterated belief changes is to resort to more complex
models of belief change, like Spohn's (1988, 1990) 'ordinal conditional
functions', which will be presented in Section 4.5.
Although partial meet contractions are made to suit integrity constraint
(iii) and epistemic entrenchment contractions account for integrity constraint
(iv) (except for the nasty disjunction in (C−̇)), they are equivalent
in a rather strong sense, provided the former are transitively and connectively
relational. This is shown in (Rott 1991c) for epistemic entrenchment
relations with comparability:
Theorem 4.2.10.
1. Let ≤_P be a transitive and connected relation over M(K) that determines
a partial meet contraction function −̇ via the selection function
given by (Def S). Then it is possible directly to define from ≤_P a relation
of epistemic entrenchment ≤_E over L that determines −̇ by
(C−̇), viz. by putting
A <_E B iff for every M ∈ M(K) such that B ∉ M there is an
M′ ∈ M(K) such that A ∉ M′ and M <_P M′.
Here <_P denotes the strict part of ≤_P.
2. Let ≤_E be an epistemic entrenchment relation over L that determines
a contraction function −̇ by (C−̇). Then it is possible directly to
define from ≤_E a transitive relation ≤_P over M(K) that determines
−̇ by partial meet contraction (via the selection function given by (Def
S)), viz. by putting
M ≤_P M′ iff for every A ∉ M′ there is a B ∉ M such that A ≤_E B.
17However, it is not possible to use Schlechta's preference relation in combination
with the simple construction recipe (C−̇). Proposition 2.4 (a) in (Schlechta 1991b) is
incorrect, as is shown by (Rott 1992a, Example 1).
4.3 Safe contraction
We next turn to a presentation of yet another approach to the problem
of constructing contraction functions. The approach was introduced by
Alchourrón and Makinson (1981, 1985) and is called safe contraction. Their
contraction procedure can be described as follows: Let K be a belief set,
and suppose that we want to contract K with respect to A. We say that
a subset K′ of K is an entailment set for A iff K′ ⊢ A but K″ ⊬ A for
every proper subset K″ of K′ (this term is Fuhrmann's). Now Alchourrón and
Makinson postulate a 'hierarchy' < over K that is assumed to be acyclical
(that is, for no A1, A2, . . . , An in K is it the case that A1 < A2 < . . . <
An < A1). Given such a hierarchy, we say that an element B of K is safe
with respect to A iff B is not a minimal element (under <) of any entailment
set for A. Equivalently, every entailment set for A either does not contain
B or else contains some C such that C < B.
Intuitively the idea is that B is safe with respect to A if it can never be
'blamed' for the implication of A. Observe that, in contrast to the earlier
constructions, this definition uses minimal subsets of K that entail A rather
than maximal subsets of K that do not entail A. It should be noted that the
hierarchy < over K can be seen as something like an ordering of epistemic
entrenchment. In fact, Alchourrón and Makinson (1985, p. 411) give an
epistemological interpretation of the hierarchy when they say that A < B
is to reflect the idea that A is less 'secure or reliable or plausible', or more
'exposed' or 'vulnerable', than B.
The set of all elements of K that are safe with respect to A is written
K/A. The safe contraction of a belief set K (modulo a hierarchy <) can
then be defined as the set of all logical consequences of K/A, that is,
K−̇A = Cn(K/A). The basic result in (Alchourrón and Makinson 1985)
is their observation 3.2:
Theorem 4.3.1. Any safe contraction function satisfies (K−̇1)–(K−̇6).
An immediate consequence of this theorem and 4.1.5 is the following:
Corollary 4.3.2. Every safe contraction function over a belief set K is a
partial meet contraction function over K.
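The safe contraction procedure can be sketched in a few lines (our illustration, under simplifying assumptions: sentences are truth functions over two letters, the hierarchy is encoded by numeric ranks, which automatically makes it acyclic, and the final Cn-closure is left implicit):

```python
from itertools import combinations, product

LETTERS = ["A", "B"]

def entails(premises, goal):
    """Brute-force propositional entailment over valuations of LETTERS."""
    for vals in product([True, False], repeat=len(LETTERS)):
        v = dict(zip(LETTERS, vals))
        if all(p(v) for p in premises) and not goal(v):
            return False
    return True

def entailment_sets(K, A):
    """All entailment sets for A: minimal subsets of K that entail A."""
    sets = [S for r in range(1, len(K) + 1)
              for S in combinations(K, r) if entails(S, A)]
    return [S for S in sets if not any(set(T) < set(S) for T in sets)]

def safe_contraction(K, A, rank):
    """Elements of K that are not minimal (under the hierarchy `rank`)
    in any entailment set for A; the closure Cn(K/A) is left implicit."""
    return [B for B in K
            if all(B not in S or any(rank[C] < rank[B] for C in S)
                   for S in entailment_sets(K, A))]

a = lambda v: v["A"]
b = lambda v: v["B"]
a_imp_b = lambda v: (not v["A"]) or v["B"]

K = [a, a_imp_b, b]
rank = {a: 2, a_imp_b: 1, b: 1}   # a is the most secure element
# Contracting by B: the entailment sets for B are {b} and {a, a_imp_b}.
result = safe_contraction(K, b, rank)
# Only a is safe: b and a_imp_b are each minimal in some entailment set.
```

Note how the sketch mirrors the remark above: it enumerates minimal entailing subsets, not maximal non-entailing ones.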
Alchourrón and Makinson then investigate the consequences of imposing
further restrictions on the hierarchy <. The first notions say that
hierarchies should respect logical relationships. A hierarchy < continues
up ⊢ over K iff for all A, B, C in K, if A < B and B ⊢ C, then A < C.
Similarly, < continues down ⊢ over K iff for all A, B, C in K, if A ⊢ B
and B < C, then A < C. These are the conditions (EE2↑) and (EE2↓)
restricted to a given belief set K. The following result is of interest.
Theorem 4.3.3. Let K be any belief set. Any safe contraction function
generated by a hierarchy < that continues either up or down ⊢ over K
satisfies (K−̇7).
Although the properties of continuing up and down are similar, the
respective proof methods for 4.3.3 are remarkably different. Another restriction
on < investigated by Alchourrón and Makinson is (EE4) (there
are no counterparts to (EE3↑) and (EE3↓)). A hierarchy < is virtually connected
over K iff for all A, B, C in K, if A < B, then either A < C or
C < B. With the aid of this notion the following theorem can now be
proved.
Theorem 4.3.4. Let K be any belief set. Any safe contraction function
generated by a hierarchy < that is virtually connected and continues up or
down ⊢ over K satisfies (K−̇8).
In another article, Alchourrón and Makinson (1986) prove some representation
results for the case when K is a logically finite belief set. The
following result combines their Theorem 1 with 4.1.6 above:
Theorem 4.3.5. Let K be a logically finite belief set and −̇ a contraction
function over K. Then the following conditions are equivalent:
1. −̇ satisfies (K−̇1)–(K−̇7) and (K−̇8r);
2. −̇ is a safe contraction function generated by a hierarchy < that continues
up and down ⊢ over K;
3. −̇ is a relational partial meet contraction function over K.
The next theorem combines a result proven for the finite case as (Alchourrón
and Makinson 1986, Theorem 2) and for the general case in (Rott
1992a) with observations already mentioned.
Theorem 4.3.6. Let K be any belief set and −̇ a contraction function
over K. Then the following conditions are equivalent:
1. −̇ satisfies (K−̇1)–(K−̇8);
2. −̇ is a safe contraction function generated by a hierarchy < that continues
up or down ⊢ over K and is virtually connected;
3. −̇ is a transitively relational partial meet contraction function over
K;
4. −̇ is an epistemic entrenchment contraction function.
It is shown in Rott (1992a) that an epistemic entrenchment relation < can
be used as a hierarchy for the construction of safe contractions, with the
same result as < used in (C−̇). And conversely, given a hierarchy < over
K, one can apply (Def <) to define an epistemic entrenchment relation <′
which via (C−̇) yields the same result as a safe contraction determined by
the original hierarchy <.
Even though 4.3.6 shows that the three kinds of contraction functions
are mutually equivalent as regards their logical content, they may be of
different quality when it comes to implementing a contraction or revision
method. From a programmer's point of view, epistemic entrenchment contraction
functions are easy to handle, given the recipe (C−̇). Partial meet
contraction functions, on the other hand, require that all maximal subsets
of K that do not entail A be computed before K−̇A can be determined,
while safe contraction functions require the computation of all minimal
subsets of K that entail A.
4.4 Minimal changes of models
Several proposals for belief revision operations that have been presented in
the AI literature are based on using some form of 'minimal changes' of the
models of a belief set rather than of the belief set itself. The discussions
of minimal model theorists have focussed on revisions rather than contractions,
but we may easily use the connections established in Section 3.3 to
compare the approaches. In this section, which uses some material from the
surveys (Katsuno and Mendelzon 1989, Katsuno and Mendelzon 1991),
some such belief revision methods will be presented.
Let L be the language of standard propositional logic and let P be the
set of propositional letters in L. An interpretation of L is a function I
from P to the set {T, F} of truth values. This function is extended to L
recursively, in the standard way, so that I(A∧B) = T iff I(A) = T and
I(B) = T, etc. The set of all interpretations is denoted I, and the set of
all sentences A such that I(A) = T is denoted |I|. A model of a sentence
A is an interpretation I such that I(A) = T. A model of a set of sentences
H is an interpretation I such that I(A) = T for all A ∈ H. Mod(A) and
Mod(H) denote the set of all models of A and H, respectively.
Let K be a belief set which is to be contracted with respect to A. Instead
of applying relational partial meet revisions and using an ordering
of M(K) = ⋃{K⊥B : B ∈ K and ⊬ B} (recall that K⊥B is the set of
all maximal subsets of K that do not entail B) when determining K∗A,
some researchers have proposed to look at an ordering of the set of all
interpretations and then use this ordering to decide which interpretations
should constitute the models of K∗A, and thus indirectly determine K∗A
in this way. The intended meaning of such an ordering is that some interpretations
that are models of A (but not of K) are closer to models of K
than other interpretations. Such an ordering of interpretations should, of
course, be dependent on the belief set K.
It was first pointed out by Grove (1988) that there is a one-to-one correspondence
between the elements of K⊥¬A and the elements of Mod(A):
M is in K⊥¬A iff Mod(M) = Mod(K) ∪ {I} for some I in Mod(A), for
every A such that ¬A is in K∖Cn(∅). By the same token, there is a one-to-one
correspondence between the elements of M(K) and I∖Mod(K).
More precisely, we put for any M ∈ M(K)
I(M) = the interpretation I with |I| = Cn(M ∪ {¬A}) for some A ∈ K∖M,18
and for any I ∈ I∖Mod(K)
M(I) = K ∩ |I|.
It can be verified that I is a well-defined bijective mapping from M(K)
onto I∖Mod(K) and that M is its converse, that is, M(I(M)) = M and
I(M(I)) = I for any M and I.
Thus the ordering of the elements of M(K) used in a relational partial
meet contraction can also be interpreted as an ordering of the elements of
I∖Mod(K). Extending this to an ordering of the whole of I, we notice
that in order to respect (K∗3) and (K∗4), the models of K must be
the smallest elements in I. Technically, we assign to each belief set K a
relation ≤_K over the set I of interpretations of L, and a corresponding
strict relation <_K. Following Katsuno and Mendelzon we say that ≤_K is
persistent if these conditions hold:
1. If I ∈ Mod(K), then I ≤_K J for all interpretations J.
2. If I ∈ Mod(K) and J ∉ Mod(K), then I <_K J.19
If Int is a set of interpretations of L, we let Min(Int, ≤_K) denote the set
of interpretations I which are minimal in Int with respect to ≤_K. Katsuno
and Mendelzon (1991) now determine K∗A from K as the belief set
which has exactly Min(Mod(A), ≤_K) as its set of models.20 This is indeed
similar to relational partial meet revision, as is seen from the following
chain of reasoning, which uses (Def′S) instead of the original (Def S):
K∗A = (K−̇¬A) + A (by the Levi identity)
= (⋂{M ∈ K⊥¬A : M < M′ for no M′ ∈ K⊥¬A}) + A
(by partial meet)
= ⋂{M + A : M ∈ K⊥¬A and M < M′ for no M′ ∈ K⊥¬A}
= ⋂{|I(M)| : M ∈ K⊥¬A and M < M′ for no M′ ∈ K⊥¬A}
= ⋂{|I| : I ∈ Mod(A) and I′ <_K I for no I′ ∈ Mod(A)}
= ⋂{|I| : I ∈ Min(Mod(A), ≤_K)}.
Here we exploit the one-to-one correspondence of K⊥¬A and Mod(A)
and invert the direction of < over M(K) (where we wish to maximize) when
interpreting it as a relation <_K over I (where we wish to minimize). Having
observed this, we may regard the following general result of (Katsuno and
Mendelzon 1991, Theorem 3.3) as a variation on the earlier theorem of
Alchourrón, Gärdenfors and Makinson:
Theorem 4.4.1. A revision function ∗ satisfies (K∗1)–(K∗8) if and
only if there exists a persistent total pre-ordering ≤_K such that Mod(K∗A) =
Min(Mod(A), ≤_K).
18Or equivalently(!), with |I| = Cn(M ∪ {¬A}) for every A ∈ K∖M.
19Since Katsuno and Mendelzon consider finitely axiomatizable sets of sentences K
which are not closed under Cn, they include a further condition which we may translate
thus: if Cn(K) = Cn(K′) then ≤_K = ≤_{K′}.
20That there is such a K∗A in the framework of Katsuno and Mendelzon follows from
the fact that they work with a finitary propositional language. We neglect questions of
definability of sets of interpretations. The construction used here is based on the same
kind of idea as Shoham's (1988) 'preferential models' for nonmonotonic logics. For a
comparison between belief revision and nonmonotonic logic, see Section 6.
Still another closely related representation is used by Grove (1988), who
connects partial meet contraction with a 'sphere modelling' in the style
of Lewis (1973). Instead of interpretations, Grove uses possible worlds described
as maximal consistent subsets of L.21 Each such possible world
w identifies an interpretation I_w for which I_w(A) = T iff A is true in w;
and conversely each interpretation I determines a possible world w_I = |I|.
So we may identify such possible worlds with interpretations. And instead
of an ordering ≤_K, Grove uses a system of 'spheres' centered on the set
of possible worlds where K holds, that is Mod(K). The system of spheres
is a collection S of subsets of interpretations which satisfies the following
conditions:
(S1) S is totally ordered by ⊆; that is, if S and S′ are in S, then either
S ⊆ S′ or S′ ⊆ S.
(S2) Mod(K) is the ⊆-minimum of S.
(S3) The set of all interpretations is in S.
(S4) If A is a sentence and there is a sphere S in S intersecting Mod(A),
then there is a smallest sphere in S intersecting Mod(A).22
It is easy to show that a system S of spheres can be used to generate
a persistent ordering ≤_K via the following definition: I ≤_K J iff every
sphere that contains J also contains I. There is also a very natural way
of defining a system of spheres from an ordering ≤_K: for each I, let S_I be
the set of all interpretations J such that J ≤_K I (several interpretations
may determine the same S_I). It is easy to show that the set S of all such
sets will satisfy (S1)–(S3). However, (S4) will not be satisfied in general,
unless ≤_K is a well-ordering.23
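The passage from spheres to a persistent ordering can be sketched directly from the definition above (our illustration; interpretations are represented, hypothetically, as frozensets of the letters they make true):

```python
# Interpretations are frozensets of true letters; spheres are nested sets of them.
def leq_from_spheres(spheres):
    """The ordering  I <=_K J  iff every sphere containing J also contains I."""
    def leq(i, j):
        return all(i in s for s in spheres if j in s)
    return leq

# Two letters p, q; K believes p and q, so the innermost sphere is the pq-world.
w = {name: frozenset(t) for name, t in
     {"pq": {"p", "q"}, "p": {"p"}, "q": {"q"}, "none": set()}.items()}
spheres = [{w["pq"]},
           {w["pq"], w["p"]},
           {w["pq"], w["p"], w["q"], w["none"]}]  # (S3): outermost sphere is all of I

leq = leq_from_spheres(spheres)
assert leq(w["pq"], w["p"])     # models of K are minimal (persistence, condition 1)
assert not leq(w["q"], w["p"])  # w_q first appears in a strictly larger sphere
```

Since the spheres are nested by (S1), an interpretation is the smaller in the ordering the earlier the sphere in which it first appears, which is exactly the intuition of 'closeness' to Mod(K).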
Grove then uses a system of spheres to define a revision function in
the following way: Let S_A denote the smallest sphere in S intersecting
Mod(A), which exists according to (S4). Then let K∗A be defined as the
set of sentences true in all interpretations in the set Mod(A) ∩ S_A. This set
can be seen as the set of A-worlds that are 'closest' to the worlds which are
models of K. Grove proves the following representation result:
Theorem 4.4.2. A revision function ∗ satisfies (K∗1)–(K∗8) if and only
if there exists a system S of spheres such that Mod(K∗A) = Mod(A) ∩ S_A.
21As David Makinson (personal communication) has pointed out, this is more restrictive
than taking possible worlds as arbitrary points, as is customary. Grove's method
implies injectivity: no two distinct possible worlds satisfy exactly the same formulae of
L. See (Freund 1993) and (Makinson 1993, Section 4.1).
22Cf. the 'limit assumption' in (Lewis 1973) and the 'smoothness' assumption in
(Kraus et al. 1990).
23For all this, also cf. (Lewis 1973, Section 2.3).
Theorems 4.4.1 and 4.4.2 can be seen as variations of Theorem 4.1.7
based on slightly different types of 'semantics' for the belief sets. These
results are useful when describing and analysing some of the more concrete
proposals for belief revision methods that can be found in the literature,
to which we now turn.
Dalal (1988a, 1988b) uses the number of propositional letters on which
two interpretations differ as a measure of the 'distance' between them.
Using this distance measure we can, following Katsuno and Mendelzon
(1991), construct a 'persistent' ordering of interpretations in the following
way: Define the distance between two interpretations I and J, dist(I, J),
as the number of propositional letters whose interpretation is different in
I and J. Next define the distance between Mod(K) and an interpretation
I as dist(Mod(K), I) = min{dist(J, I) : J ∈ Mod(K)}. Then the ordering ≤_K is
defined as I ≤_K J if and only if dist(Mod(K), I) ≤ dist(Mod(K), J). It is
easy to show that ≤_K defined in this way is a persistent ordering. It then
follows from Theorem 4.4.1 that the Dalal revision function determined by
this ordering satisfies conditions (K∗1)–(K∗8).
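Dalal's construction is easy to run on the model level. The following is a small sketch of ours (the letter names and the example belief set are our own choices): compute the distance of every A-model from Mod(K) and keep the closest ones as the models of K∗A:

```python
from itertools import product

LETTERS = ["p", "q", "r"]

def interpretations():
    return [dict(zip(LETTERS, vals))
            for vals in product([True, False], repeat=len(LETTERS))]

def dist(i, j):
    """Dalal's distance: number of propositional letters on which i and j differ."""
    return sum(i[l] != j[l] for l in LETTERS)

def revise_models(mod_K, mod_A):
    """Models of K * A: the models of A closest to Mod(K) under <=_K."""
    d = {id(i): min(dist(j, i) for j in mod_K) for i in mod_A}
    best = min(d.values())
    return [i for i in mod_A if d[id(i)] == best]

# K believes p, q and r; the input A says 'not p or not q'.
mod_K = [i for i in interpretations() if i["p"] and i["q"] and i["r"]]
mod_A = [i for i in interpretations() if not i["p"] or not i["q"]]
result = revise_models(mod_K, mod_A)
# The closest A-worlds flip exactly one letter; in particular r is preserved.
assert len(result) == 2 and all(i["r"] for i in result)
```

Because the ordering induced by dist is persistent and total, Theorem 4.4.1 guarantees that a revision computed this way satisfies (K∗1)–(K∗8).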
Another related method is proposed by Borgida (1985). When determining
a revision K∗A, he concentrates on the sets of propositional letters on
which a model of K and a model of A differ. Let us denote by Diff(I, A)
the collection of all the sets of propositional letters on which I and some
model of A differ. Borgida's revision method can be defined as follows:
If A is inconsistent with K, then an interpretation J ∈ Mod(K∗A) iff
J ∈ Mod(A) and there is some I ∈ Mod(K) such that the set of propositional
letters on which I and J differ is a minimal element (with respect to set inclusion)
of Diff(I, A).24 If A is consistent with K, then K∗A = K + A as usual.
Katsuno and Mendelzon show that Borgida's revision method cannot be
defined in terms of a persistent ordering ≤_K. Thus the revision method
cannot satisfy all of the postulates (K∗1)–(K∗8), but it can be shown
that it satisfies (K∗1)–(K∗7). A revision method similar to Borgida's
is suggested by Jackson and Pais (1991).
Winslett (1988) proposes a revision method for the context of reasoning
about action. Her revision operator is defined for the first-order calculus
and not for propositional logic. However, if the operator is restricted to the
propositional case, it turns out that it is identical with Borgida's revision
method for the case when A is inconsistent with K. But even if A is
consistent with K, Winslett defines ∗ in the same way as in the inconsistent
case. This means that her revision operator will violate (K∗4), that is, the
postulate that if K is consistent with A, then K + A ⊆ K∗A. In fact, it even
violates the preservation principle that K ⊆ K∗A whenever K is consistent
with A. On the other hand, it satisfies a monotonicity principle, according
to which K1 ⊆ K2 implies K1∗A ⊆ K2∗A. Intuitively, the validity of this
principle, combined with the failure of (K∗4), reflects a crucial feature of
change-recording updates, as contrasted with knowledge-adding revisions
(see (Katsuno and Mendelzon 1992) and Marianne Winslett's Chapter on
Epistemic Aspects of Databases of this Handbook). There is a deep tension
between the preservation principle and the monotonicity principle which
ultimately leads to the incompatibility theorem of Gärdenfors (1986, 1987).
24This measure of closeness of models is also used in (Doyle 1983).
Satoh (1988) proposes a revision method for first-order databases using
the notion of circumscription. As Katsuno and Mendelzon (1991) point out,
if his method is restricted to the propositional case, one obtains a global
version of Borgida's revision method. As a generalisation of Borgida's
Diff(I, A), Satoh defines a measure Diff(K, A) = ⋃{Diff(I, A) : I ∈ Mod(K)}.
He defines an interpretation J to be a model of K∗A iff J ∈ Mod(A) and
there is some I ∈ Mod(K) such that the set of propositional letters on
which I and J differ is a minimal element of Diff(K, A). This definition
thus minimizes over a set Diff(K, A) that depends on both K and A.
Again, Katsuno and Mendelzon show that Satoh's revision method cannot
be defined in terms of a persistent ordering ≤_K. As for Borgida's method,
the proposed revision method cannot satisfy all of the postulates (K∗1)–
(K∗8), but once again it can be shown that it satisfies (K∗1)–(K∗7).
4.5 Ordinal conditional functions
A possible worlds model, just like a belief set, gives a crude representation
of a subject's beliefs, because we can express only the most elementary
epistemic attitudes. This is a consequence of the fact that the set of possible
worlds representing the epistemic state divides the possible worlds into only
two classes. These models do not have a built-in way of expressing any
degree of plausibility of different possible worlds or propositions. The best-known
way of modeling degrees of belief is to introduce probabilities defined
over a language or a class of propositions. Such models will not be treated
in this chapter (cf. (Jeffrey 1965, Pearl 1988, Lindström and Rabinowicz
1989), (Gärdenfors 1988, Ch. 5), and Kyburg's Chapter on Probabilistic
Logic in Volume 3 of this Handbook). In this section we present Spohn's
(1988, 1990) theory of ordinal conditional functions, which is a different
way of introducing degrees of belief.25
25Related modellings include Hamblin's (1959) 'plausibility measures', Shackle's
(1961) 'potential surprise', Rescher's (1964, 1973, 1976) 'modal categories' and 'plausibility
indices', Cohen's (1977) 'inductive probability', Shafer's (1976) 'consonant belief
An ordinal conditional function is a function κ from a given set W of
possible worlds into the class of ordinals, such that some possible worlds
are assigned the smallest ordinal 0.26 Intuitively, κ represents a plausibility
grading of the possible worlds. The worlds that are assigned the smallest
ordinals are the most plausible, according to the beliefs of the individual.
The plausibility ranking of possible worlds can be extended to a ranking
of propositions (sets of possible worlds) by requiring that the ordinal
assigned to a proposition A be the smallest ordinal assigned to the worlds
included in A; that is, κ(A) = min{κ(w) : w ∈ A}. This definition
entails that the plausibility ranking of propositions has the following two
properties:
(O1) For all propositions A, either κ(A) = 0 or κ(¬A) = 0.
(O2) For all nonempty propositions A and B, κ(A∪B) = min{κ(A), κ(B)}.
For a given belief set K, let Mod(K) again denote the set of possible
worlds where all sentences in K are true. We can now identify the set
Mod(K) with the set of the most plausible possible worlds, that is, the
set of worlds w such that κ(w) = 0. Following this, we can introduce the
basic acceptability criterion: A sentence A is accepted in the epistemic state
represented by the ordinal conditional function κ iff κ(¬A) > 0. That this
definition is the natural one follows from the fact that κ(¬A) = 0 means
that Mod(¬A) and Mod(K) have some world in common, that is, that A is not
believed true in K. So κ(¬A) > 0 means exactly that all worlds in Mod(K)
belong to Mod(A).
An important feature of ordinal conditional functions is that they make
it sensible to talk of greater or lesser plausibility or firmness of belief, relative
to some function κ. We can distinguish several cases: If both A and B are
accepted, we can say that A is believed more firmly than B iff κ(¬A) >
κ(¬B), that is, if the most plausible worlds outside A are less plausible
than the most plausible worlds outside B. There are other cases where A
is more plausible than B. First, there is the case where A is accepted and B is
not, that is, where κ(¬A) > κ(¬B) = 0. Second, there is the case where A
is not believed false but B is, that is, κ(A) = 0 < κ(B). Finally, we have
the case where both A and B are believed false but A less firmly so, that
is, 0 < κ(A) < κ(B). This leads us to the following definition: A is more
plausible than B relative to κ iff κ(A) < κ(B) or κ(¬B) < κ(¬A).
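The definitions of κ, acceptance, and comparative plausibility can be rendered in a few lines of Python (our illustration; finite ordinals are modelled as natural numbers, as licensed by footnote 26, and the four worlds and their ranks are our own toy example):

```python
# A toy ordinal conditional function over four worlds.
# Worlds are pairs (A-true?, B-true?); kappa_w assigns each a degree of implausibility.
kappa_w = {(True, True): 0, (True, False): 1, (False, True): 2, (False, False): 3}
W = set(kappa_w)

def kappa(prop):
    """Rank of a proposition (a set of worlds): the smallest rank of its worlds."""
    return min(kappa_w[w] for w in prop)

A = {w for w in W if w[0]}          # the proposition A
not_A = W - A

def accepted(prop):
    """Basic acceptability criterion: prop is accepted iff kappa(complement) > 0."""
    return kappa(W - prop) > 0

def more_plausible(p, q):
    """p is more plausible than q relative to kappa."""
    return kappa(p) < kappa(q) or kappa(W - q) < kappa(W - p)

assert accepted(A)                  # kappa(not A) = 2 > 0
assert kappa(A) == 0 or kappa(not_A) == 0   # property (O1)
```

Note that accepted propositions all get rank 0; it is the rank of their complements that differentiates firmness of belief, exactly as in the case analysis above.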
We thus see that representing epistemic states by ordinal conditional
functions makes it possible to introduce more interesting epistemic attitudes,
to wit, 'believed more firmly than' and 'more plausible than', besides
the standard 'accepted', 'rejected', and 'kept in suspense'. For the close
connections between ordinal conditional functions and possibility theory,
see (Dubois and Prade 1991a, Dubois and Prade 1992).
functions', and Zadeh's (1978) and Dubois and Prade's (1988) 'possibility distributions'.
26Readers not familiar with ordinal number theory may replace 'ordinal number' by
'natural number' throughout this section. This will restrict the generality somewhat but
not distort the construction, as the finite ordinals coincide with the natural numbers.
We now turn to the question of how Spohn models revisions and contractions, and show
how these processes relate to contractions and revisions of belief sets. A key
feature is that the epistemic inputs are not only propositions but propositions
together with a degree of plausibility. The typical form of an input is
thus a pair (A, α) of a proposition A and an ordinal α. The interpretation
of this input is that A is the information that the agent wants to accept
in the new state of belief and α is the degree of firmness with which this
information is incorporated into the new state. (Recall that the degree of
firmness of an accepted proposition A is measured by the value of κ(¬A);
the higher the value, the more firmly A is believed.)
Suppose that the present state of belief is described by the ordinal conditional
function κ and that the state changes as a result of an epistemic input
(A, α). How is the new state of belief, call it κ∗(A, α), to be described?
Interestingly enough, Spohn presents an explicit construction rather than
postulates. Spohn calls κ∗(A, α) the (A, α)-conditionalization of κ.
As an auxiliary concept, Spohn introduces the notion of the A-part of
κ, which we denote κ(·|A). This function, which is defined only for the
worlds in A, is determined by the following rule:
(O3) For all w ∈ A, κ(w|A) = −κ(A) + κ(w).
For technical reasons, Spohn here makes use of the notion of left-sided
subtraction of ordinals, which is defined as follows: If a and b are two
ordinals with a ≤ b, then −a + b is the uniquely determined ordinal c
such that a + c = b. In the case when all ordinals are finite, this notion
coincides with standard subtraction, and rule (O3) can be written more
perspicuously as κ(w|A) = κ(w) − κ(A). Thus one might say that the
A-part of κ is the restriction of κ to A shifted down to 0, that is, in such a way
that κ(A|A) = 0.
With the aid of this concept, we can now define the ordinal conditional
function κ ∗ (A, α) representing the new state of belief:

(O4) Let A be a proposition such that A ≠ ∅. Then for all w ∈ A,
κ ∗ (A, α)(w) = κ(w|A), and for all w ∈ ¬A, κ ∗ (A, α)(w) = α + κ(w|¬A).

Thus the (A, α)-conditionalization of κ is the union of the A-part of
κ and the ¬A-part of κ shifted up by α degrees of plausibility. It follows
from the definition that κ ∗ (A, α)(A) = 0 and κ ∗ (A, α)(¬A) = α. This
means that A is accepted in κ ∗ (A, α) with firmness α. Furthermore,
definition (O4) is constructed so that getting informed about A does not
change the epistemic state restricted to A or to ¬A. In other words, the
(A, α)-conditionalization of κ leaves the A-part as well as the ¬A-part of
κ unchanged; they are only shifted against each other.27
27 The so-called Jeffrey conditionalization, defined by P ∗ (A, α)(B) = α·P(B|A) + (1 −
If α > 0 and ¬A is accepted in κ, that is, if κ(¬A) = 0 < κ(A), then
the process of forming the (A, α)-conditionalization is a generalization of
a revision of belief sets. To make this more precise, let us define the belief
set K associated with the ordinal conditional function κ as the set of all
propositions that are accepted in κ. If we let K ∗ A denote the belief
set associated with κ ∗ (A, α), where α > 0, then it can be shown that the
revision function defined in this way satisfies postulates (K∗1) – (K∗8). In
the case when ¬A is not accepted in relation to κ, that is, when κ(A) = 0,
the (A, α)-conditionalization of κ corresponds to an expansion.
However, it is interesting to note that Spohn's notion covers other cases
as well. In particular, the (¬A, κ(A))-conditionalization of κ corresponds
to a contraction with respect to A! In the principal case when A is accepted
relative to κ, that is, when κ(¬A) > 0, this is identical with the (A, 0)-
conditionalization. For this kind of change we have both κ ∗ (A, 0)(A) = 0
and κ ∗ (A, 0)(¬A) = 0; that is, A is not accepted in relation to κ ∗ (A, 0), and
thus we have a proper contraction. Again, if we let K ∸ A denote the belief
set associated with κ ∗ (¬A, κ(A)), it can be shown that the contraction
function defined in this way satisfies (K∸1) – (K∸8).
Next, there is a case that has no correspondence in terms of belief sets.
Suppose that κ(A) = 0 and κ(¬A) = β; that is, A is already accepted in
relation to κ with degree β of firmness. Now if α > β, then A is still believed
in κ ∗ (A, α), but with a higher degree of firmness. This corresponds to a
situation when one obtains additional evidence for A. And if α < β but
α > 0, then κ ∗ (A, α) represents a situation when one has obtained some
evidence against A whereby the belief in A is weakened but not retracted.
It is an advantage of Spohn's model of the dynamics of belief that it can
handle this case. This kind of change cannot be modeled in terms of belief
sets only.
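In the finite case (cf. footnote 26), these conditionalizations are straightforward to compute. The following sketch is our own illustration, not Spohn's: the worlds and rank values are invented for the example, and κ is represented as a map from worlds to natural-number ranks.

```python
def rank(kappa, prop):
    """kappa(A): the minimal rank of the worlds in the proposition A."""
    return min(kappa[w] for w in prop)

def conditionalize(kappa, A, alpha):
    """The (A, alpha)-conditionalization of kappa, rule (O4), finite case:
    the A-part is shifted down to 0, the complement part is shifted to alpha."""
    not_A = set(kappa) - set(A)
    return {w: (kappa[w] - rank(kappa, A)) if w in A
               else alpha + (kappa[w] - rank(kappa, not_A))
            for w in kappa}

def core(kappa):
    """The worlds of rank 0; a proposition is accepted iff it includes this
    set, so the core determines the associated belief set K."""
    return {w for w in kappa if kappa[w] == 0}
```

For instance, starting from κ = {w1: 0, w2: 2}, where the proposition {w2} is rejected with firmness 2, the ({w2}, 1)-conditionalization makes {w2} accepted with firmness 1, and a subsequent (¬A, κ(A))-conditionalization retracts it again, leaving both worlds at rank 0.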
Finally, Spohn’s model can process a more complex type of input which
can be viewed as representing uncertain observations. Rather than accept-
ing a sharp proposition with a unique degree of plausibility, a function κ
can be updated by another ordinal conditional function λ. Let {Ai:iI}
be a partition of W, the set of ‘possible observations’, and λbe such that
it is constant over each Ai. Then the following λconditionalization of κ
is a natural generalization of the rule (O4):
(O5) For all Aiand all wAi, κ λ(w) = λ(Ai) + κ(w|Ai).(= λ(Ai)
κ(Ai) + κ(w))
In the λconditionalization of κ, the Aiparts are all left intact and shifted
only towards each other. Clearly, κ(A, α) = κλ, if λis defined by
λ(w) = 0 for wAand λ(w) = αfor w6∈ A. Alternative updating
α)·P(B|¬A), 0 ≤ α ≤ 1, is a probabilistic version of the same kind of belief change.
See (Jeffrey 1965).
and combination rules for ordinal conditional functions and similar
representations of belief states are discussed in (Hunter 1990, Shenoy 1991,
Dubois and Prade 1992).
Spohn’s main argument against probability distributions as representa-
tions of belief states is that they cannot capture plain belief; Dubois and
Prade argue to a similar effect that probability distributions cannot capture
plain ignorance. Now one could object against Spohnian belief dynamics
that it does not provide a model for plain revision by a sentence A, with-
out any accompanying certainty factor. This drawback, which could be
remedied by decreeing a default value α= 1 or α=(for a new inaccessi-
ble ordinal ), is outweighed by the fact that ordinal conditional functions
provide an elegant way of modelling iterated belief revision: Given an initial
function κ, the belief states (κ(A, α)) (B, β) and (κλ)µare perfectly
well-defined, without any trouble of finding new selection mechanisms.
5 Base contractions and revisions

Up to now we have been dealing exclusively with constructive models
within the coherentist approach to belief revision. One paradigm of foundational
belief revision is the revision of belief bases. (The other is truth
maintenance; see Section 7.) Recall from Section 2.3 that a belief base H
is an arbitrary set of sentences from our object language L; H is a base
for the belief set K iff Cn(H) = K. The basic intuitive idea is that a
belief base records the explicit beliefs of an agent (which he has independent
reasons to accept) whereas a belief set contains also the implicit beliefs
(which are derived from, and thus justified exclusively by, the explicit beliefs).
Some writers require that bases are finite (Nebel 1989, Brewka 1991,
Dubois and Prade 1991a), and some don't (Fuhrmann 1991, Nebel 1992,
Hansson 1991a). For the sake of simplicity, we join the first party in this
section.
The principal idea of base revision is that syntax matters. It thus
involves a direct rejection of Dalal's (1988a) 'principle of the irrelevance of
syntax'. Given two different bases H and H′, it does of course not follow
from Cn(H) = Cn(H′) that H ∸ A = H′ ∸ A or H ∗ A = H′ ∗ A, nor does it
follow that Cn(H ∸ A) = Cn(H′ ∸ A) or Cn(H ∗ A) = Cn(H′ ∗ A). However,
the terms 'base contraction' and 'base revision' are ambiguous. They may
mean that we identify epistemic states with belief bases, and that belief
changes operate on belief bases only. Base contraction and revision in this
sense transform H into H ∸ A and H ∗ A, respectively. On the second
reading, the primary epistemological entities are still belief sets, but they
are supposed to be generated by belief bases. Each belief set K is then
associated with a base H such that K = Cn(H). Belief changes operate on
the level of belief sets, but how the changes are to be performed is guided,
to some extent at least, by the structure of the base. The "axiomatization"
of a belief set by a base plays the same role in base contraction in the second
sense as, for instance, a selection function in partial meet contraction.
Fig. 4. Two senses of 'base revision': base revision, where the belief base
represents the belief state (upper part of diagram), and theory revision
through base revision, where the theory represents the belief state (full
diagram).
As base revision in both ways is based on non-trivial change operations
combined with standard inference operations, it is an instance of the logic-
constrained mode of belief revision in the sense explained in Section 1.4.
Note that this raises the problem of categorial matching (see Section
1.2). We start from a pair ⟨K, H⟩ and should ideally end up with an updated
pair ⟨K ∸ A, H ∸ A⟩ where K ∸ A is obtained by taking the logical
consequences of H ∸ A ('theory change through base change', as Fuhrmann
puts it). We shall see, however, that many approaches give us only K ∸ A
without a base for it, so that iterated belief change is impossible (an exception
being (Hansson 1993b)). Clearly, on this account one and the same
belief set may be revised differently. For instance, the two bases H = {A, B}
and H′ = {A ∧ B} generate the same belief set, but we intuitively expect
that B should survive a withdrawal of A from the belief state characterized
by H, while A and B stand and fall together in H′ (they are 'lumped', as
Kratzer puts it), so that B is not to be preserved after a contraction with
respect to A in this case.
By and large, the methods of changing belief sets presented in Section
4 have all been investigated as models for the change of belief bases as well.
This holds in particular for partial meet contraction and revision ((Hansson
1989, Hansson 1991a), special case (Nebel 1989, Nebel 1992)) and for
safe contraction (Alchourrón and Makinson 1985, Fuhrmann 1991, Nayak
1991). In addition, both special cases of partial meet contraction which
were trivialized for belief set operations (see Section 4.1) yield sensible results
when applied to belief bases: maxichoice contraction ((Alchourrón and
Makinson 1981, Alchourrón and Makinson 1982), special cases (Nebel 1992,
Nayak 1991)) and full meet contraction (Nebel 1989). Since changes of belief
bases make explicit reference to the syntactic structure of our beliefs, a
model-theoretic characterization seems to be out of the question. However,
some kinds of base revision are representable model-theoretically, either in a
direct way (Lewis 1981) or indirectly via a connection with partial meet operations
on theories (Nebel, Nayak). Epistemic entrenchment contractions
have not yet been applied to belief bases, but some writers use related relations
(Rescher 1964, Rescher 1973, Rescher 1976, Dubois and Prade 1991a,
Dubois and Prade 1992) and there are some connections with other methods
of base revision.
Since from a logical point of view belief bases are not as shapely as belief
sets, it is hardly surprising that base revisions lose some of the distinguished
properties of belief set revisions. In particular, base contractions notoriously
violate the recovery postulate (H∸5). The supplementary Gärdenfors
postulates (H∸8) and (H∗8) are also violated as a rule. From a computational
point of view, the changes performed on belief bases are obviously
the more important ones. Still, as will be clear from the following, base
contractions and revisions are less well investigated than belief set revisions
and there are many more open questions at the time of writing.
5.1 Full meet contraction

This method draws only on the syntactic information encoded in the belief
base ('axiomatization') H of K and does not need any additional selection
mechanism. It was studied in the context of the evaluation of conditionals
by Veltman (1976) and Kratzer (1981), in the context of default reasoning
by Poole (1988),28 and in the context of belief revision proper by Fagin,
Ullman and Vardi (1983) and Nebel (1989).
Fagin, Ullman and Vardi (1983) present a theory of updates in databases
that shows clear similarities to the AGM constructions. They start with
the idea that a base29 H′ accomplishes the revision (or the contraction) of
H by a sentence A iff H′ is consistent and A ∈ H′ (or A ∉ Cn(H′), respectively).
Then they say that H′ accomplishes the revision (or the contraction) of H by a
sentence A with a smaller change than H″ iff both H′ and H″ accomplish
the revision by A and H′ has either fewer deletions, or the same deletions
and fewer insertions, than H″. Then it is said that H′ accomplishes the
revision (or contraction) of H by A minimally if there is no belief set H″
that accomplishes it with a smaller change than H′. Fagin, Ullman and
Vardi then prove (their Theorem 1):

28 On the subtle difference between Poole inference systems and full meet revisions,
see (Makinson and Gärdenfors 1991).
29 A consistent belief base in our terminology is called 'theory' in (Fagin et al. 1983,
Fagin et al. 1986).
Theorem 5.1.1.
1. H′ accomplishes the contraction of A from H minimally iff H′ is a
maximal subset of H that is consistent with ¬A;
2. Cn(H′ ∪ {A}) accomplishes the revision of H by A minimally iff H′
is a maximal subset of H that is consistent with A.
They then address the problem of what should be done in the case that
several belief sets accomplish the update (i.e. revision or contraction) minimally.
If K is a belief set (i.e. closed under Cn), their proposal is that
in this case we take the update to be the intersection of all these belief
sets. This corresponds to adopting the full meet contraction (or revision)
function of Section 4.1. And in parallel to 4.1.4, Fagin, Ullman and Vardi
are able to prove (their Theorem 3) that in the principal case that A is
inconsistent with K, the revision of K by A will be only Cn(A). Because
it is not satisfactory to throw away all the old knowledge each time such a
revision is attempted, they suggest that the problem may be circumvented
if bases for belief sets are used instead of logically closed sets.
Let us develop this line further. Let H ⊥ A denote the set of all subsets
H′ of H such that (i) A ∉ Cn(H′) and (ii) there is no set H″ such that
H′ ⊊ H″ ⊆ H and A ∉ Cn(H″). The principal idea of full meet contraction is
to take only those sentences which are contained in every H′ from H ⊥ A,
i.e., H ∸ A = ⋂(H ⊥ A). But as Alchourrón and Makinson (1982) remarked,
this in general destroys too much information. For instance, if we have
H = {B, C} and want to withdraw B ∧ C, then full meet contraction
leaves us with no information at all, i.e. H ∸ (B ∧ C) = ∅. Obviously, it would
not help to put H ∸ A = H ∩ ⋂{Cn(H′) : H′ ∈ H ⊥ A}. But the situation
is somewhat more pleasing if we take H ∸ A = ⋂{Cn(H′) : H′ ∈ H ⊥ A}.
In the present example this would give us H ∸ (B ∧ C) = Cn(B ∨ C), so we
preserve an appropriate amount of information. Unfortunately, we then
arrive at a theory instead of a belief base and thus violate the principle
of categorial matching. A last idea (in fact the one advocated by Fagin,
Ullman and Vardi) is to take disjunctions. Supposing that H ⊥ A has m
elements H′1, . . . , H′m, we can put H ∸ A = {B1 ∨ . . . ∨ Bm : Bi ∈ H′i}, and in
the example we get H ∸ (B ∧ C) = {B ∨ C}, which is quite reasonable. In general,
however, the reasoner's belief base will be swamped with disjunctions, and
we cannot hope to keep the initial idea that belief bases always represent
explicit beliefs.
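For a finite base over finitely many atoms, the remainder set H ⊥ A and the contraction candidates above can be computed by brute force. The following is our own toy illustration, not part of the original exposition: sentences are represented as truth functions over valuations, and entailment is checked by enumerating all assignments.

```python
from itertools import combinations, product

def entails(premises, conclusion, atoms):
    """premises |- conclusion, by enumerating all truth-value assignments."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def remainders(H, A, atoms):
    """H ⊥ A: the inclusion-maximal subsets of H that do not entail A."""
    candidates = [frozenset(s) for r in range(len(H) + 1)
                  for s in combinations(H, r)
                  if not entails(s, A, atoms)]
    return [s for s in candidates if not any(s < t for t in candidates)]
```

With H = {B, C} and input B ∧ C, the two remainders are {B} and {C}: their intersection is empty, so full meet contraction on the base erases everything, while each remainder still entails B ∨ C, which is exactly what the theory-level variant ⋂{Cn(H′)} preserves.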
Thus the method of full meet base revision faces a trilemma. Either we
lose too much information after performing a contraction, or we violate the
principle of categorial matching, or we obtain bases without intuitive underpinnings.
(This trilemma also holds, to a lesser extent, for partial meet
contractions, but it vanishes for maxichoice contractions where a maximal
amount of information is preserved. It is also absent in the evaluation of
flat conditionals and in default reasoning where the principle of categorial
matching need not be obeyed.) Fagin, Kuper, Ullman and Vardi (1986) are
aware of this trilemma and as a way out they propose to represent epistemic
states by sets of belief bases ('flocks of theories' in their terminology).
These philosophical considerations set aside, let us consider what we
take to be the most plausible interpretation of full meet base contraction,
which construes the full meet base transition as one from ⟨Cn(H), H⟩ to
⟨⋂{Cn(H′) : H′ ∈ H ⊥ A}, ??⟩. What we have here is a theory contraction
of Cn(H) governed by the syntactic information of base H, and by nothing
else. The principle of categorial matching is violated because no new base
is specified. For the same reason, the pictorial description given by Figure
4 does not apply any longer. (But in that respect full meet base contraction
does not fare worse than most of the current accounts of belief change.)
Full meet base revision is obtained by applying the Levi identity.
The properties of this contraction operator are summarized in the next
theorem (Rott 1993):

Theorem 5.1.2. Let H be a base for the belief set K. Then the theory
contraction function ∸ defined by the full meet base contraction K ∸ A =
⋂{Cn(H′) : H′ ∈ H ⊥ A} in the principal case that ⊬ A, and defined by
putting K ∸ A = K when ⊢ A, satisfies postulates (K∸1) – (K∸4), (K∸6)
– (K∸7) as well as (K∸8c). However, it does not satisfy (K∸5), nor does
it satisfy (K∸8), and in fact not even the weaker (K∸8r).
In order to restore the recovery postulate, Nebel (1989) proposes to
identify the contraction of H with respect to A not with ⋂{Cn(H′) :
H′ ∈ H ⊥ A}, but basically with ⋂{Cn(H′ ∪ (A → H)) : H′ ∈ H ⊥ A},
where A → H is an abbreviation of {A → B : B ∈ H}.30 The suggestion
is also adopted by Nayak (1991). This modified concept of contraction of
course yields recovery, and it is revision-equivalent with the simpler idea of
full meet base contraction. Its logic is investigated in (Rott 1993).
Theorem 5.1.3. The following two claims are equivalent, for any contraction
function ∸ over a finite belief set K:
1. there is a base H for K such that for all A with ⊬ A, K ∸ A =
⋂{Cn(H′ ∪ (A → H)) : H′ ∈ H ⊥ A};31
2. ∸ satisfies postulates (K∸1) – (K∸7), (K∸8r), (K∸8c).

30 In fact, according to Nebel's definition, H ∸ A contains only a single sentence in the
object language L. This definition severely violates the principle of categorial matching.
31 Or equivalently, for all A with ⊬ A, K ∸ A = Cn((⋂{Cn(H′) : H′ ∈ H ⊥ A}) ∪ (A → H)).
As a consequence of Theorem 5.1.3, we get that the full meet base
revisions of logically finite theories satisfy and indeed determine the logic
comprising the postulate set (K∗1) – (K∗7), (K∗8r) and (K∗8c) (see Section
3.1). The modification of the concept of contraction, however, seems
somewhat ad hoc and clearly involves a departure from the fundamental
idea that belief bases contain explicit beliefs.
Nebel (1989) presents a reconstruction of the modified form of full meet
base contraction in terms of partial meet contraction of theories. Let K =
Cn(H) and define relations ⊑ and ⊏ over K ⊥ A as follows:
K′ ⊑ K″ iff K′ ∩ H ⊆ K″ ∩ H
and
K′ ⊏ K″ iff not K″ ⊑ K′.
Notice that ⊑ but not ⊏ is transitive. Nebel then proves:

Theorem 5.1.4. Let H be a base for the belief set K. Then the theory
contraction function ∸ defined by the modified full meet base contraction
K ∸ A = ⋂{Cn(H′ ∪ (A → H)) : H′ ∈ H ⊥ A} is identical with the
relational partial meet theory contraction ⋂{K′ ∈ K ⊥ A : K″ ⊑ K′ for
every K″ in K ⊥ A}.
5.2 Partial meet contraction

Fagin, Ullman and Vardi (1983) also note that not all propositions in a
database, alias belief base, are equally liable to deletion. In addition to
the syntactic information of H, 'domain-dependent information' (Ginsberg
1986) will enter into the selection mechanism. In particular, so-called integrity
constraints for logical databases,32 i.e. propositions that are always
to be fulfilled by the database, should be retained in the database as far as
possible (cf. Winslett's Chapter in this Volume). To handle this problem,
Fagin, Ullman and Vardi introduce the notion of a tagged sentence ⟨i, A⟩
where i is a natural number and A a sentence. A logical database is then
defined as a finite set of tagged sentences. The intention is that the lower
the tag of a sentence, the higher its priority. Most integrity constraints
would be tagged by 0.
At first sight this conception seems similar in spirit to the use of a
relation of epistemic entrenchment. There are two minor technical differences:
firstly, Fagin, Ullman, and Vardi assign numerical values to the
sentences, rather than just ordering them. Secondly, the lower the tag is
for a sentence A, the more difficult it is to delete; while for epistemic entrenchment,
the higher the degree of entrenchment, the more difficult it is
to delete. However, a crucial difference between their method of tagging
and the notion of epistemic entrenchment is that they ignore the relations
between the tags of logically compound sentences and the tags of their
constituents. This implies that their updating method only functions in
relation to statements serving as logically simple elements of a belief base
in our sense, and leaves open how logical consequences of the base are to
be treated. Consequently, they cannot use the simple construction recipe
(C∸). Notions similar to Fagin, Ullman and Vardi's logical databases are
Rescher's (1964, 1973, 1976) 'modal categories' or 'plausibility indexings',
Dubois and Prade's (1991a, 1991b) 'uncertain' or 'necessity-valued' knowledge
bases, Nebel's (1990, 1992) 'prioritized belief bases' and Brewka's
(1991) 'default theories'. Rescher's suggestions, however, are quite close
to epistemic entrenchment in that his 'tagging' of the statements respects
their logical relations (see Section 5.5).

32 These should not be confused with our metatheoretical integrity constraints stated
in Section 1.2.
When comparing two databases to see which of them accomplishes an
update (contraction or revision) with a smaller change, they are compared
according to the priorities given to the sentences. The formal method of
comparison defined by Fagin, Ullman and Vardi is not like the construction
in terms of the definition (C∸) (cf. Section 4.2) or the construction of safe
contractions (cf. Section 4.3), but rather a special case of partial meet base
contraction.
Let us first present the theory of partial meet base contraction in full
generality. Using a selection function S in the same way as in Section
4.1, but putting S(H ⊥ A) = {H} when H ⊥ A is empty, we now consider the
following contraction function:

(PMBC) H ∸ A = ⋂ S(H ⊥ A).

Notice that this construction operates directly on the level of belief bases,
without any logical closures being taken. A contraction function determined
in this way by some selection function S will be called a partial
meet base contraction function. Partial meet contractions and revisions are
investigated in the context of the evaluation of counterfactuals by Ginsberg
(1986)33 and in the context of belief revision proper by Hansson
(1989, 1991a). Hansson shows the following representation theorem:
Theorem 5.2.1. For any base H and any contraction operation ∸ over
H, ∸ is a partial meet base contraction function iff it satisfies (H∸1) –
(H∸4).
Here, as in the AGM framework, the selection function is specific for
a particular belief base. Hansson (1991a) introduces superselectors, which
are functions that assign a selection function S_H to each belief base H. A
superselector is unified iff for all bases H and G it holds that if H ⊥ A =
G ⊥ B ≠ ∅, then ⋂ S_H(H ⊥ A) = ⋂ S_G(G ⊥ B). A unified partial meet base
contraction function is a contraction function that is based on a unified
superselector. Hansson then proves the following:

Theorem 5.2.2. For any base H and any contraction operation ∸ over H,
∸ is a unified partial meet base contraction function iff it satisfies (H∸1)
– (H∸5).

33 In this case logical closures do come into play.
Again as before we say that a selection function S, and also the contraction
operation determined by it, is relational iff there is a relation ⪯
over M(H) = ⋃{H ⊥ A : A ∈ Cn(H) \ Cn(∅)} such that

(Def S) S(H ⊥ A) = {H′ ∈ H ⊥ A : H″ ⪯ H′ for all H″ ∈ H ⊥ A}.
A selection function S and the contraction operation it determines are
called maximizing or reducing iff S is relational with respect to a relation
⪯ over M(H) which satisfies respectively

(MAX) If H′ ⊊ H″ then H′ ⪯ H″ but not H″ ⪯ H′, and
(RED) H′ ⪯ H″ iff H′ ⊆ H″ or H″ ⊆ H′.
Hansson shows:

Theorem 5.2.3. Maximizing partial meet base contractions satisfy (H∸7),34
reducing partial meet base contractions satisfy (H∸6).
If we put K = Cn(H) and K ∸ A = Cn(H ∸ A), then the base contraction
(PMBC) generates a contraction function over K. The logic of
belief set contractions that can be represented in this way has recently
been investigated by Hansson (1993b). His basic representation theorem is
as follows:
Theorem 5.2.4. An operation ∸ on a belief set K is generated by partial
meet contraction of a finite base for K if and only if ∸ satisfies (K∸1) –
(K∸5) and the following three conditions:
('Finitude') There is a finite set H such that for every A there is
some H′ ⊆ H such that K ∸ A = Cn(H′).
('Symmetry') If K ∸ D ⊢ A if and only if K ∸ D ⊢ B for all D, then
K ∸ A = K ∸ B.
('Conservativity') If K ∸ B ⊈ K ∸ A, then there is some D such that
K ∸ A ⊆ K ∸ D ⊬ A and (K ∸ D) ∪ (K ∸ B) ⊢ A.

Representation theorems have also been obtained for contractions that are
generated by maxichoice, full meet, and transitively relational partial meet
contractions on finite bases (Hansson 1993b).
Let us now return to the special case of partial meet contraction which
is endorsed by Rescher, Fagin et al., Dubois and Prade, Nebel and Brewka.
Following Nebel, we shall call this the method of prioritized base contraction.

34 Essentially this is also shown as part of (Ginsberg 1986, Lemma 4.1).
A prioritized belief base is a pair ⟨H, ≤⟩ such that H is a belief base
and ≤ is a weak ordering of H, i.e., a transitive and connected relation
over H. A prioritized belief base is linear iff ≤ is antisymmetric. Since
belief bases are assumed finite, we can represent a prioritized belief base
by a sequence ⟨H1, H2, . . . , Hn⟩ where the Hi's are the equivalence classes
of H under ≈ (A ≈ B iff A ≤ B and B ≤ A), and A ≤ B holds for
A ∈ Hi and B ∈ Hj iff i ≤ j. If the prioritized base is linear, then the
Hi's are singletons. The relation ≤ is called 'epistemic relevance' in (Nebel
1989, Nebel 1992). The intention here is that A ≤ B means that B is
epistemically more relevant, or has higher priority, than A.35 More general
prioritizations that do not presuppose connectedness and well-foundedness
are investigated in (Weydert 1992).
Let ⟨H, ≤⟩ be a prioritized belief base; then H′ ⊆ H is a preferred
element of H ⊥ A iff for every H″ ⊆ H and every i such that H′ ∩ (Hi ∪
Hi+1 ∪ . . . ∪ Hn) ⊊ H″ ∩ (Hi ∪ Hi+1 ∪ . . . ∪ Hn) it holds that H″ ⊢ A.
That is, H′ maximizes the set of sentences taken from H at each priority
level, subject to the constraint that A is not implied. When A is to be
withdrawn from the theory K with base H, we are advised to pass into
⋂{Cn(H′) : H′ is a preferred element of H ⊥ A}.
The choice function S picking preferred elements in this way can be
construed as relational. It chooses those H′ in H ⊥ A which are maximal
in H ⊥ A under the following preference relation ≪ between arbitrary sets of
sentences:

G ≪ G′ iff there is an i such that G ∩ Hi ⊊ G′ ∩ Hi and for every j > i,
G ∩ Hj = G′ ∩ Hj,

or equivalently, which are greatest under ⊑, defined by G ⊑ G′ iff not G′
≪ G (see (Nebel 1992)). Notice that ≪ but not ⊑ is transitive.
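On finite sets, the relation ≪ and the resulting choice of preferred elements can be spelled out directly. The sketch below is our own illustration: the priority classes H1, . . . , Hn are passed as a list in ascending order of priority, and the remainder sets are given as plain Python sets.

```python
def below(G, G2, classes):
    """G ≪ G2: at some priority level i, G ∩ Hi is a proper subset of
    G2 ∩ Hi, while G and G2 agree on every higher level j > i."""
    for i, Hi in enumerate(classes):
        if (G & Hi) < (G2 & Hi) and \
           all(G & Hj == G2 & Hj for Hj in classes[i + 1:]):
            return True
    return False

def preferred(remainder_sets, classes):
    """The ≪-maximal elements among the remainder sets, i.e. the
    choice set S(H ⊥ A) of prioritized partial meet base contraction."""
    return [G for G in remainder_sets
            if not any(below(G, G2, classes) for G2 in remainder_sets)]
```

For the base {p, q} with q strictly more relevant than p (classes [{p}, {q}]) and remainders {p} and {q}, only {q} is preferred, since {p} ≪ {q} at the top priority level.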
As in the case of full meet base contraction, (Nebel 1992) shows that
the special kind of partial meet base change just sketched is representable
'on the knowledge level' as a partial meet theory change. Because of the
complications with the recovery postulate, he presents his results for revisions
rather than for contractions. As a generalization of Theorem 5.1.4,
Nebel proves:

Theorem 5.2.5. Let ⟨H, ≤⟩ be a prioritized base for the belief set K.
Then the theory revision function ∗ defined by the prioritized partial
meet base revision
K ∗ A = (⋂{Cn(H′) : H′ is a preferred element of H ⊥ ¬A}) + A
is identical with the relational partial meet theory revision
K ∗ A = (⋂{K′ ∈ K ⊥ ¬A : K″ ⊑ K′ for every K″ in K ⊥ ¬A}) + A.

35 Special cases of 'prioritizations' include: Pollock's (1976) evaluation of counterfactuals
with the help of strong subjunctive generalizations, n = 3, weak subjunctive
generalizations, n = 2, and simple propositions, n = 1; Fagin, Ullman and Vardi's (1983)
'integrity constraints', n = 2, and 'facts', n = 1; and Ginsberg's (1986) 'protected' facts,
n = 2, and 'unprotected' ones, n = 1.
Nebel notes that his epistemic relevance revisions in general satisfy
(K∗1) – (K∗7) but not (K∗8). More generally, it is shown in (Rott 1993) that
the logic of prioritized partial meet contractions and revisions is identical
with that of full meet contractions and revisions of belief bases. This is
essentially due to the fact that ≪ is transitive and conversely well-founded.
5.3 Maxichoice contraction

A maxichoice base contraction function takes H ∸ A to be a single element
of H ⊥ A. Such a strategy is recommended in (Alchourrón and Makinson
1982).
In answering the question which element of H ⊥ A to choose, one may
think of a linear prioritized belief base ⟨H, ≤⟩ (Nebel). Then there is exactly
one preferred element of H ⊥ A, for every A. Hence the resulting operation
∸ is a maxichoice base contraction. The induced preference relation is
transitive, so ∸ satisfies all Gärdenfors postulates, including (K∸8). Nayak
(1991) suggests an amendment of safe base contractions which ends up with
the same maxichoice contraction function as Nebel. A similar procedure is
recommended in (Dubois and Prade 1991a, p. 235).
Maxichoice base contraction completely eschews the methodological
trilemma facing full and also partial meet base contraction. The assumption
of a linear ordering of the base elements, however, is very strong and
does not conform well to the intuitive interpretation of a prioritized belief
base as a collection of explicit beliefs with various, but possibly equal,
degrees of certainty.
5.4 Safe contraction

Consider the set H/A of all elements of H which are 'safe' with respect to
A (see Section 4.3 for the definition). Alchourrón and Makinson (1985)
suggest putting H ∸ A = H ∩ Cn(H/A), and (Fuhrmann 1991) recommends
the simpler H ∸ A = H/A. Fuhrmann calls < a relation of comparative
retractibility. For theory contractions, this makes no difference,
since Cn(H ∩ Cn(H/A)) = Cn(H/A). For the contraction of proper bases,
this may make a difference which is hard to appreciate intuitively, however.
We have the following observations:

Theorem 5.4.1. The safe base contractions H ∸ A = H/A and H ∸ A =
H ∩ Cn(H/A) satisfy postulates (H∸1) – (H∸4) and (H∸6) but they violate
recovery (H∸5).
Theorem 5.4.2. Let H be a base for the belief set K. Then the theory contraction
function ∸ defined by the safe base contraction K ∸ A = Cn(H/A)
satisfies the postulates (K∸1) – (K∸4) and (K∸6), but violates recovery
(K∸5) as well as (K∸7) and (K∸8).
Violations of (K∸5) and (K∸7) are discovered very easily. Consider
the base H = {A, B} of K together with the empty hierarchy < over H
(neither A < B nor B < A). For (K∸5), contraction with respect to A ∧ B
will give us a counterexample. For (K∸7), observe that K ∸ A = Cn(B)
and K ∸ B = Cn(A) but K ∸ (A ∧ B) = Cn(∅), so A ∨ B ∈ K ∸ A ∩ K ∸ B but
A ∨ B ∉ K ∸ (A ∧ B). On the other hand, a counterexample against (K∸8) is
rather difficult to come by and has been supplied only recently in (Nayak
1991).
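These violations can be checked mechanically. In the following sketch (our own illustration, covering only the empty hierarchy) a sentence of H is unsafe with respect to A iff it occurs in some minimal A-entailing subset of H, since with an empty < every member of such a subset is <-minimal; sentences are again represented as truth functions over valuations.

```python
from itertools import combinations, product

def entails(premises, conclusion, atoms):
    """premises |- conclusion, by enumerating all truth-value assignments."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def safe_part(H, A, atoms):
    """H/A for the empty hierarchy: the elements of H that occur in no
    minimal subset of H entailing A."""
    entailing = [set(s) for r in range(1, len(H) + 1)
                 for s in combinations(H, r) if entails(s, A, atoms)]
    minimal = [s for s in entailing if not any(t < s for t in entailing)]
    unsafe = set().union(*minimal) if minimal else set()
    return set(H) - unsafe
```

For H = {A, B} this yields H/(A ∧ B) = ∅, so K ∸ (A ∧ B) = Cn(∅), while H/A = {B} and H/B = {A}, reproducing the counterexample to (K∸7) above.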
Nayak restores (K∸5), (K∸7) and (K∸8). He uses choice functions
over subsets of H which may be thought of as determining a linear epistemic
relevance ordering. Then he enlarges the original H/A in such a way
that he gets the unique preferred element of H ⊥ A. In this way his proposal
to modify safe contractions of belief bases results in maxichoice contractions.
In a similar vein, (Dubois and Prade 1992, p. 167) advocate safe
contractions based on a linear ordering of H. (Their reference to (Dubois
and Prade 1991a), however, is misleading, since in that paper they suggest
prioritized base contractions based on a linear ordering of H.)
As pointed out in Section 4.3, the central idea of safe contraction is
to focus on entailment sets for A, i.e. minimal subsets of H entailing A.
It is interesting that this very concept, applied to sets of statements which
are not logically closed, is also crucial in an entirely different approach to
belief revision, viz. in de Kleer's (1986a, 1986b, 1986c) 'assumption-based
truth maintenance systems' (see Section 7.2).
5.5 Epistemic entrenchment for bases

Up to now there is no application of relations of epistemic entrenchment
to belief bases. Rescher's (1964, 1973, 1976) 'conjunction-closed
modal categories' and 'plausibility indexings' and Dubois and Prade's (1991a,
1991b, 1992) 'necessity measures' are instruments similar to epistemic entrenchment.
They assign to every proposition in a base H a numerical
value between 0 and 1. The relational projection < of Rescher's numerical
degree of plausibility obeys (EE4) and the entailment condition over H:

(ENT) For every A ∈ H and every non-empty subset H′ of H, if H′ ⊢ A,
then there is a B ∈ H′ such that A ≮ B.36
Comparing this with what was said in Section 4.2, we first note that (ENT) follows from (EE1) – (EE3). Conversely, if we remove the restriction of (ENT) to elements and subsets of H and if we take (EE4) for granted, then (EE1) – (EE3) follow from (ENT). Dubois and Prade interpret the numbers appearing in an 'uncertain knowledge base' as lower bounds on the degree of necessity, from which the actual degrees of necessity of the sentences in Cn(H) are computed with the help of a possibilistic method of resolution and refutation. The result then satisfies (EE1) – (EE4).

[36] Actually, Rescher (1973, p. 115) and (1976, p. 15) restricts (ENT) to consistent premise sets H′.
However, plausibility and necessity values are not used in the characteristic way epistemic entrenchment is. This is no surprise, because the construction recipe (C .−) (see Section 4.2) for epistemic entrenchment contractions makes essential reference to disjunctions, which need not be, and in general are not, included in a belief base. Rescher advocates essentially prioritized partial meet contraction, while Dubois and Prade opt for safe contraction.
The following question is interesting. Given a prioritized belief base ⟨H, ≤⟩, in which cases can it serve as the basis of a belief set Cn(H) ordered by an epistemic entrenchment relation <_E, in the sense that the restriction of <_E to H is identical with the asymmetric part < of ≤? It turns out that < is compatible with such a full-scale relation of epistemic entrenchment just in case it satisfies (ENT) (Rott 1991a). The canonical construction is given by

(Def <_E) A <_E B iff there is an i such that H_i ∪ H_{i+1} ∪ ... ∪ H_n ⊢ B, but H_{i+1} ∪ ... ∪ H_n ⊬ A.

The relation <_E thus generated is indeed a relation of epistemic entrenchment satisfying (EE1) – (EE6), and if < satisfies (ENT), then <_E restricted to H is identical with < (Rott 1991a).[37] Since the epistemic relevance relation ≤ is supposed to be transitive and connected over H, (ENT) is equivalent to the condition of entrenchment consistency:

(EC) There are no sentences A_1, ..., A_n, B_1, ..., B_n in H such that A_i < B_i and {B_1, ..., B_n} ⊢ A_i for every i = 1, ..., n.

This condition is necessary and sufficient for the extensibility of an arbitrary binary relation < over L to a relation of epistemic entrenchment satisfying (EE1) – (EE5) (see (Rott 1992b)).
[37] For similar extension procedures of plausibility and necessity valuations of bases to corresponding valuations of full theories, see (Rescher 1964, pp. 49-50); (Rescher 1973, pp. 118-119, 353-355); (Rescher 1976, pp. 18-19); (Dubois and Prade 1991a, Section 4) and (Dubois and Prade 1992, Section 3).

5.6 Base revisions
Contractions and revisions can again be connected by the Levi identity, according to which H is transformed to H ∗ A = (H .− ¬A) ∪ {A}, and ⟨K, H⟩ is transformed to ⟨Cn(H ∗ A), H ∗ A⟩. But there is also another idea which is applicable only when belief bases are used instead of belief sets. Hansson (1993a) has suggested reversing the Levi identity and performing external instead of internal revisions (the ones defined by the Levi identity). The procedure is as follows. In order to revise the belief base H by the sentence A, first add A to H and then remove ¬A from H ∪ {A} by whatever base contraction mechanism you want to use. (Hansson himself again investigates the case of partial meet contractions.) The reason why this does not work for belief sets is that there is only one inconsistent theory K⊥, and there is no account in the theory of belief set revision of how the method of contracting K⊥ might depend in any perspicuous way on (the method of contracting) the original belief state K. In contrast, H ∪ {A} of course inherits the characteristic syntactic structure of H. Hansson (1993a) shows that external revisions behave differently from internal revisions, and provides the following axiomatic characterization of internal and external partial meet base revisions. The postulates mentioned are listed in Section 3.4.
Theorem 5.6.1. Let H be a belief base and ∗ be a revision operation over H. Then
1. ∗ is an internal partial meet base revision function iff it satisfies (HH∗0) – (HH∗4).
2. ∗ is an internal unified partial meet base revision function iff it satisfies (HH∗0) – (HH∗5).
3. ∗ is an external partial meet base revision function iff it satisfies (HH∗0) – (HH∗3), (HH∗4w) and (HH∗6).
4. ∗ is an external unified partial meet base revision function iff it satisfies (HH∗0) – (HH∗3), (HH∗4w) and (HH∗5) – (HH∗6).
A related idea is that of consolidation (Hansson 1991a). Instead of removing ¬A from H ∪ {A} one can just restore the consistency of H ∪ {A}, i.e. eliminate ⊥ from it (which Hansson again does by partial meet contraction). In the attempted revision of H with respect to A, then, A has no privileged status in comparison with the elements of the old belief base H. The operation of set-theoretical addition followed by consolidation (removal of ⊥) does not generally satisfy the success postulate (K∗2). Thus it can serve as a model for the 'selective fact-finding' view of belief revision, not for the 'suppositional' view.
Methods of belief revision very close to consolidation are studied in (Brewka 1991) in a framework for default reasoning (also cf. (Nebel 1992)). Brewka considers prioritized belief bases in a first-order language (which he calls 'default theories') with a finite number of 'levels of reliability'. In general, Brewka's bases H are to be conceived of as inconsistent. Unlike all approaches considered so far, Brewka's approach instantiates the direct or immediate mode of belief revision (see Section 1.4).

Let ⟨H_1, H_2, ..., H_n⟩ be the relevant partition of H. Brewka considers three kinds of revision operations: The revision of H w.r.t. A yields the prioritized base ⟨H_1, H_2, ..., H_n, {A}⟩; the (insert-)revision of H w.r.t. A at degree j yields the prioritized base ⟨H_1, ..., H_j ∪ {A}, ..., H_n⟩; the (minus-)revision of H w.r.t. A at degree j yields the prioritized base ⟨H_1, ..., H_{j−1}, {A}, H_j, ..., H_n⟩.
It is important that the belief state associated with a prioritized belief base is not just H or Cn(H) (which would as a rule be K⊥), but (the logical consequences of) an element of H⊥⊥ which is preferred in the sense explained in Section 5.2. Insert- and minus-revisions have an additional parameter, j, and they do not satisfy (K∗2). They easily allow for iterated belief changes. In these respects they are strikingly similar to the belief change operations of Spohn (see Section 4.5). If the resulting prioritized base is inconsistent, insertions of new sentences obviously model belief-contravening revisions. Brewka shows that they can also model the intuitive process of belief contractions, if the default theories are equipped with constraints in the sense of (Poole 1988).
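A schematic rendering of the three operations (our illustration, not from the original): a prioritized base is a list of levels, here assumed to be listed in order of increasing reliability as in (Def <_E) above, and a preferred consistent subbase is computed greedily from the most reliable level down. The propositional setting, the atoms and the greedy tie-breaking by sorted order are all assumptions of the sketch.

```python
from itertools import product

ATOMS = ["p", "q"]  # assumed propositional alphabet

def consistent(formulas):
    # satisfiability, checked by brute-force truth tables
    return any(all(eval(f, {}, dict(zip(ATOMS, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(ATOMS)))

def revision(H, A):
    # revision of H w.r.t. A: append {A} as a new level of its own
    return H + [{A}]

def insert_revision(H, A, j):
    # (insert-)revision at degree j (1-indexed): add A to level j
    return H[:j - 1] + [H[j - 1] | {A}] + H[j:]

def minus_revision(H, A, j):
    # (minus-)revision at degree j: squeeze {A} in as a new level before level j
    return H[:j - 1] + [{A}] + H[j - 1:]

def preferred_subbase(H):
    # one preferred consistent element of the base, built greedily,
    # assuming later levels are the more reliable ones
    kept = []
    for level in reversed(H):
        for f in sorted(level):
            if consistent(kept + [f]):
                kept.append(f)
    return kept
```

For H = ⟨{p}, {q}⟩, a minus-revision by ¬p at degree 1 yields ⟨{¬p}, {p}, {q}⟩, and the preferred subbase retains q and p while rejecting the unreliable ¬p; this illustrates why such revisions need not satisfy the success postulate.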
5.7 Computational complexity
From a computational point of view, it seems impossible to deal directly with belief sets. Belief sets are simply too large.[38] For this reason, base revisions and contractions seem much more attractive when it comes to computation. Nevertheless, operating on bases can also incur non-negligible computational costs. In order to get a precise idea of how expensive base revision and contraction can become, we analyze the complexity of deciding the problem

H ∗ A ⊢ B

where ∗ is one of the base revision operations described above.

Assuming that our background logic is classical propositional logic, it is possible to derive a straightforward lower bound for base revision. From (HH∗0), (HH∗2) and (HH∗3) it follows that ∅ ∗ A ⊢ ⊥ iff ⊢ ¬A, and {A} ∗ ⊤ ⊢ A iff ⊬ ¬A. Hence base revision is at least as hard as deciding propositional satisfiability and unsatisfiability. In terms of computational complexity theory, this means that base revision is NP-hard and co-NP-hard (Garey and Johnson 1979, Johnson 1990), i.e., at least as hard as all problems (resp. complements of problems) that can be decided in polynomial time on non-deterministic machines. Since it is strongly believed that the class P of problems that can be solved in polynomial time (on deterministic machines) differs from the classes NP and co-NP, NP-hard (and co-NP-hard) problems are believed to be solvable only by algorithms that require at least exponential time in the worst case (Garey and Johnson 1979, Johnson 1990). Furthermore, since it is believed that the complexity classes NP and co-NP are not equivalent (Johnson 1990), belief revision seems to be strictly harder than either satisfiability or unsatisfiability.

[38] Recall that a belief set over a finite alphabet of atoms can have a size that is double exponential in the size of the alphabet.
It should be noted, however, that this lower bound mainly depends on the complexity of the reasoning problems in the background logic. The question is whether revision adds another, independent source of complexity that would make base revision more difficult than propositional reasoning. It turns out that for some of the base revision operations described above this is indeed the case.

In order to decide the above problem for prioritized base revision, it has to be checked whether B follows from all preferred elements X of (H ∗ A). Formally, this means that the problem is in the class co-NP^NP = Π_2^p, i.e., it can be solved by iterating over all such X's (of which there may be exponentially many) and deciding the propositional derivation problem. In addition, one can show that there is no easier way to do that, i.e., the problem is complete for this complexity class. Nebel (1992) proves the following theorem.

Theorem 5.7.1. The problem of deciding whether H ∗ A ⊢ B, where ∗ is a prioritized base revision, is Π_2^p-complete.
Completeness for the class Π_2^p has the same consequences concerning worst-case runtimes as NP-completeness. The problem is solvable in polynomial time if and only if NP = P, and the best known algorithms are exponential in the worst case. Hence, in some sense Π_2^p-completeness is not very different from NP-completeness.

However, there are some important differences. First of all, Π_2^p-completeness implies that there are two sources of complexity, and it is necessary to avoid both in order to obtain a provably polynomial problem. In our case, restricting the background logic to propositional Horn logic, for instance, is not enough to get a problem that can be solved in polynomial time (Nebel 1992, Eiter and Gottlob 1992). Further restrictions are necessary. One way is to restrict the cardinality of priority classes to a fixed value (e.g. to singletons). In this case the problem becomes NP-equivalent in the general case (Nebel 1992) and polynomial for Horn clauses. Other possible solutions are size restrictions of the update formula A (see (Eiter and Gottlob 1992)).
A further consequence of the Π_2^p-completeness result is that it is impossible to find a “dense” (i.e., polynomially sized) representation for the revised base in polynomial time, provided that Π_2^p ≠ Π_1^p, which is also believed to be true (Johnson 1990). Whether such a dense representation exists at all is an open problem.

Another interesting consequence of the Π_2^p-completeness result is that belief revision and propositional reasoning in almost all nonmonotonic logics can be translated into each other by functions that are efficiently computable, i.e., computable in polynomial time. The reason for this is that reasoning in those logics is also Π_2^p-complete (Gottlob 1992).
The investigation of the computational complexity of belief revision and related operations such as temporal update (Winslett 1990) has started only recently. While there are already a number of results for different revision schemes, a more general picture has not yet emerged. For instance, a nontrivial upper bound for all possible base revision schemes is not known. Similarly, it is not known how large a base can become after revision. Finally, another open problem is the characterization of all polynomial base revision schemata.[39]
6 Connections with nonmonotonic logic
6.1 The basic idea
Belief revision and nonmonotonic logic are motivated by quite different
ideas. The theory of belief revision deals with the dynamics of belief states,
that is, it aims at modelling how an agent or a computer system updates
its state of belief as a result of receiving new information. Of particular
interest, as we have seen above, is the case where the new information is
incompatible with the old state of belief.
Nonmonotonic logic, on the other hand, is concerned with a systematic
study of how we jump to conclusions from what we believe. By using
default assumptions, generalizations etc. we tend to believe in things that
do not follow from our knowledge by the classical rules of logic. A thorough
understanding of this process is desirable since we want AI-systems to be
able to perform the same kind of reasoning. As explained in Section 1.4,
nonmonotonic logic, in particular if it is of the paraconsistent kind, can also
be used for modelling belief revision in what we called the immediate mode.
A detailed overview of the variety of current approaches to nonmonotonic reasoning is given in Volume 3 of this Handbook.
Despite the differences in motivation for the theories of belief revision
and nonmonotonic logic, the formal structures of the two areas, as they have developed, are surprisingly similar. It is possible to translate
concepts, models, and results from one area to the other. Establishing such
a translation will hopefully lead to a cross-fertilization of the two research
areas. Due to the philosophical plausibility of Levi’s thesis according to
which contractions are more basic than revisions, we put our emphasis in
the previous sections on belief contractions. In the comparison with non-
monotonic reasoning, however, it is appropriate to focus on belief revisions.
[39] We thank Bernhard Nebel for writing this section on computational complexity.

There are two ways in which belief revision can be said to be nonmonotonic. It fails monotony with respect to its first argument K, and with respect to its second argument A. Firstly, if one sees the revision of a belief set K by a proposition A as an operation on K modulo A, it does indeed fail monotony: we may have K_1 ⊆ K_2 but K_1 ∗ A ⊄ K_2 ∗ A. At least we may have such failure if revision is taken to satisfy certain of the postulates (K∗1) – (K∗8).
On the other hand, if we change our Gestalt and see the revision of a belief set K by a proposition A as an operation on A modulo K, we get a quite different picture and a second sense in which belief revision is nonmonotonic: we may have Cn(A) ⊆ Cn(B) but K ∗ A ⊄ K ∗ B.
The second Gestalt also helps us to understand the link between nonmonotonic logic and the logic of belief revision. The key idea for a translation between the two areas is:
1. See the revision of a belief set K by a proposition A, forming a belief set K ∗ A, as a form of nonmonotonic inference from A;
2. Conversely, see a nonmonotonic inference of a proposition B from a proposition A as a discovery that B is contained in the result of revising a fixed background theory K so as to integrate A. In this way, the nonmonotonic relation A |∼ B serves as a shorthand for A |∼_K B, which indicates that the nonmonotonic inference is dependent on the background belief set K.
The central idea is thus to identify B ∈ K ∗ A and A |∼ B, it being understood that K is the 'background knowledge' on the basis of which the inferences are made. In other words, given such a K, we simply identify C(A) with K ∗ A. We write C for a (single-premised) nonmonotonic inference operation and |∼ for a nonmonotonic inference relation, where we think of C and |∼ as being related by the equation C({A}) = {B : A |∼ B}.
The revision of K by A always contains A according to (K∗2), so we have, after translation, A ∈ C(A) (we drop the curly brackets of C({A}) for simplicity). Thus C satisfies the condition of reflexivity, or inclusion, which is essential for it to deserve being called an inference operation. C will moreover be nonmonotonic, in that we may have A ⊢ B (where ⊢ is classical consequence), but not K ∗ A ⊇ K ∗ B. For example, suppose we put A = B ∧ C and take K = Cn(¬B ∨ ¬C). Then K ∗ A = K ∗ (B ∧ C) will be a consistent theory containing B ∧ C. On the other hand, since B is consistent with ¬B ∨ ¬C, we have K ∗ B = Cn({¬B ∨ ¬C} ∪ {B}) = Cn(B ∧ ¬C). Thus although A ⊢ B, we do not have K ∗ A ⊇ K ∗ B.
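The nonmonotonicity of C can be checked mechanically on this example. The sketch below (our illustration, not part of the original) represents K by the finite base {¬B ∨ ¬C}, computes revision as full meet base revision via the Levi identity, and decides membership by truth-table entailment; this is a stand-in for AGM theory revision, but it suffices for the example.

```python
from itertools import combinations, product

ATOMS = ["b", "c"]  # propositional atoms of the example

def entails(premises, conclusion):
    # classical entailment, checked by exhaustive truth tables
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(eval(f, {}, v) for f in premises) and not eval(conclusion, {}, v):
            return False
    return True

def remainders(base, a):
    # maximal subsets of the base that do not entail a
    subs = [set(s) for n in range(len(base), -1, -1)
            for s in combinations(sorted(base), n) if not entails(s, a)]
    return [s for s in subs if not any(s < t for t in subs)]

def in_revision(base, a, x):
    # x in K * a, with K * a computed by full meet base revision (Levi identity):
    # x must follow from every maximal subbase not entailing not-a, plus a itself
    return all(entails(d | {a}, x) for d in remainders(base, f"not ({a})"))

K_base = ["not b or not c"]   # base of K = Cn(¬b ∨ ¬c)

# c ∈ K * (b ∧ c): revising by the conjunction retracts the old belief
print(in_revision(K_base, "b and c", "c"))        # True
# ¬c ∈ K * b but ¬c ∉ K * (b ∧ c), although b ∧ c ⊢ b
print(in_revision(K_base, "b", "not c"))          # True
print(in_revision(K_base, "b and c", "not c"))    # False
```

So a logically stronger premise yields a set of conclusions that does not include all conclusions drawn from the weaker one, exactly as the example claims.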
6.2 Translating postulates for belief revision into nonmonotonic logic
Using this recipe of translating an expression of the form B ∈ K ∗ A about a belief revision process into an expression of the form A |∼_K B within nonmonotonic logic, we shall now present the translations of the postulates (K∗1) – (K∗8). These translations are taken from (Makinson and Gärdenfors 1991).

(K∗1) K ∗ A = Cn(K ∗ A)
Translation: C(A) = Cn(C(A))

The translational output is a familiar principle of nonmonotonic inference, holding of the inference operations generated by most of the constructions in the literature. See Makinson's Chapter in Volume 3 of this Handbook for details.
(K∗2) A ∈ K ∗ A
Translation: A |∼ A

(K∗3) K ∗ A ⊆ Cn(K ∪ {A})
Translation: Whenever A |∼ C then ⊤ |∼ (A → C)

Although postulate (K∗3) occupies a central place in the logic of theory change, its translation has not played an explicit role in nonmonotonic logic, and does not have much intuitive resonance. It is, however, a particular case of the general principle that whenever B ∧ A |∼ C then B |∼ (A → C) (put B = ⊤), known as the principle of Conditionalization, which holds in many systems of nonmonotonic logic, notably in all classical preferential entailment relations in the sense of (Makinson 1989) and (Kraus et al. 1990). And Conditionalization, as we shall shortly see, is itself the translation of postulate (K∗7), of which (K∗3) is also a particular case. For future reference, we call the translation of (K∗3) Weak Conditionalization.
(K∗4) If ¬A ∉ Cn(K), then Cn(K ∪ {A}) ⊆ K ∗ A
Translation: If ⊤ |̸∼ ¬A and ⊤ |∼ (A → C), then A |∼ C

Postulate (K∗4) is of course a conditional converse of (K∗3). The source postulate uses a negative hypothesis ¬A ∉ Cn(K), and its translation likewise involves the negative hypothesis that ⊤ |̸∼ ¬A. The translation is thus a non-Horn condition on |∼. It does not figure explicitly in discussions of nonmonotonic logic. It does, however, hold for some nonmonotonic relations considered in the literature, namely those determined by classical stoppered preferential model structures satisfying the condition that every minimal model is less than every non-minimal model. It also follows from the condition of 'Rational Monotony', to which we shall return when considering the translation of postulate (K∗8). The translation of (K∗4) will be termed Weak Rational Monotony.
(K∗5) If Cn(A) ≠ L, then K ∗ A ≠ L
Translation: If Cn(A) ≠ L, then C(A) ≠ L

An equivalent formulation of the translational output is: if A |∼ ⊥, then A ⊢ ⊥. This condition on nonmonotonic inference operations is sometimes called Consistency Preservation. When checking, for a given nonmonotonic inference operation C, whether it satisfies Consistency Preservation, it is vital to specify clearly which background logic Cn one is working with. Not many nonmonotonic inference operations C satisfy Consistency Preservation with respect to classical logic Cn_0, but many do satisfy the property with respect to some compact logic Cn with Cn_0 ⊆ Cn ⊆ C.[40]
(K∗6) If Cn(A) = Cn(B), then K ∗ A = K ∗ B
Translation: If ⊢ A ↔ B, then A |∼ C iff B |∼ C

This condition is called Left Logical Equivalence. It holds for all main nonmonotonic inference operations in the literature: Reiter default, preferential models, Poole systems with or without constraints, and epsilon entailments in the style of Adams and Pearl. See Makinson's Chapter in Volume 3 of this Handbook for details.
(K∗7) K ∗ (B ∧ A) ⊆ Cn((K ∗ B) ∪ {A})
Translation: If B ∧ A |∼ C then B |∼ (A → C)

As already observed when discussing postulate (K∗3), that postulate is essentially a special case of (K∗7) (putting B = ⊤), and the translation of (K∗3) is likewise the same special case of the translation of (K∗7). And as already mentioned, the translation of (K∗7) is known as the principle of Conditionalization, and holds of all classical preferential entailment relations. Remembering that (K∗7) is equivalent with condition (1) of Section 3.1, we note that Conditionalization is closely related to Disjunction in the Antecedent, which says that A |∼ C and B |∼ C implies A ∨ B |∼ C. Given the translation of (K∗1), it implies (a unit version of) the Cut rule for the consequence relation |∼. See (Makinson 1989, Kraus et al. 1990), or Makinson's Chapter 3.2 of this Handbook for details.
(K∗8) If ¬A ∉ K ∗ B then Cn((K ∗ B) ∪ {A}) ⊆ K ∗ (B ∧ A)
Translation: If B |̸∼ ¬A and B |∼ (A → C) then B ∧ A |∼ C

Postulate (K∗4) is a special case of postulate (K∗8), putting B = ⊤, and likewise for their respective translations. Note that the translation of (K∗8) is equivalent (given some background conditions) to the following: If B |̸∼ ¬A and B |∼ C then B ∧ A |∼ C, which is the condition known in nonmonotonic logic as Rational Monotony. This is of course a non-Horn condition, as was the translation of (K∗4). It is substantially stronger than (a unit version of) Cautious Monotony, which we discuss in the next section. Rational Monotony holds for the inference relation generated by any classical stoppered preferential model structure that is ranked in the sense that there is a function r from the set of all models of the model structure into a totally ordered set, such that m < m′ iff r(m) < r(m′) for all models m and m′ in the model structure. Rankedness is equivalent to the condition of virtual connectivity, which says that if m < m′ then either m″ < m′ or m < m″, for every model m″ (cf. Section 4.3). See (Lehmann and Magidor 1992) or Makinson's Chapter in Volume 3 of this Handbook for details. The restriction of the condition to the case B = ⊤ corresponds to the following special case of the rankedness condition: every minimal model in the model structure is less than every non-minimal one.

[40] We thank David Makinson for these observations on consistency preservation. For further details, see his Chapter 3.2 of this Handbook, especially Sections 2.1–2.2.
We put off the discussion of the postulates (K∗7c) and (K∗8c) for a moment, since they fit in more naturally with the next section.

In summary, every principal postulate on ∗ translates into a condition on |∼ that is valid in some kinds of nonmonotonic inference in the literature. In particular, all of the postulates (K∗1) to (K∗7) (resp. (K∗1) to (K∗8)), including Consistency Preservation for an appropriate choice of the background consequence operation Cn, hold, when translated, in all classical stoppered (resp. and ranked) preferential model structures.
6.3 Translating conditions on nonmonotonic inference
We may now look briefly at some of the more important conditions on |∼ to see how they translate back to conditions on belief revision. We consider only 'unit' forms of these conditions, i.e. with individual propositions (rather than sets of propositions) on the left. As the translation introduces a (fixed) parameter K (assumed closed under Cn) rather than eliminates one, the translation process is straightforward.

Conditions like Consistency Preservation and Rational Monotony for |∼ translate back into the conditions on ∗ from which they were obtained in Section 6.2, and there is no need to review them again. We review only some important conditions on |∼ that did not emerge explicitly as outputs of the ∗ to |∼ translation. These include: Supraclassicality, Right Weakening, Cut, Cautious Monotony, Cumulativity, Reciprocity, Distribution, And, and Loop (see Makinson's Chapter in Volume 3 for a presentation of these conditions). We shall see that their translations are all consequences of the postulates (K∗1) – (K∗8).
Supraclassicality: If A ⊢ B, then A |∼ B
Translation: If A ⊢ B, then B ∈ K ∗ A

The translational output is an immediate consequence of the closure and success postulates (K∗1) and (K∗2).

Right Weakening: If A |∼ B and B ⊢ C, then A |∼ C
Translation: If B ∈ K ∗ A and B ⊢ C, then C ∈ K ∗ A

This follows from the closure postulate (K∗1).
Unit Version of Cut: If A |∼ B and A ∧ B |∼ C, then A |∼ C.
Translation: If B ∈ K ∗ A and C ∈ K ∗ (A ∧ B), then C ∈ K ∗ A.

The translation is postulate (K∗7c), which is derivable from the basic postulates plus (K∗7) as follows. Suppose B ∈ K ∗ A and C ∈ K ∗ (A ∧ B). By (K∗7) the latter gives us C ∈ Cn((K ∗ A) ∪ {B}) = K ∗ A, by the supposition and (K∗1), as desired. The translational output is rather weaker than (K∗7), in the sense that (K∗7) is not itself derivable from (K∗7c) together with the basic postulates (K∗1) to (K∗6) on ∗. See the discussion of Distribution below.
Unit Version of Cautious Monotony: If A |∼ B and A |∼ C, then A ∧ B |∼ C.
Translation: If B ∈ K ∗ A and C ∈ K ∗ A, then C ∈ K ∗ (A ∧ B).

The translation is postulate (K∗8c), which can be derived using the basic postulates together with (K∗8). We do not need the full force of (K∗8). The following principle (K∗C), which can be shown to be weaker than (K∗8) (it follows from condition (3.15) in (Gärdenfors 1988)), suffices: either K ∗ A ⊆ K ∗ (A ∧ B) or K ∗ A ⊆ K ∗ (A ∧ ¬B). The derivation is as follows. Suppose B ∈ K ∗ A and C ∈ K ∗ A. Then using (K∗C) we have either B, C ∈ K ∗ (A ∧ B) or B, C ∈ K ∗ (A ∧ ¬B). The first case gives our desired conclusion. The second case implies, using (K∗5), that A ⊢ B, so Cn(A) = Cn(A ∧ B), so C ∈ K ∗ A = K ∗ (A ∧ B) using (K∗6).
Unit Cumulativity: If A |∼ B, then A |∼ C iff A ∧ B |∼ C.
Translation: If B ∈ K ∗ A, then K ∗ A = K ∗ (A ∧ B).

Unit Cumulativity is just Unit Cut and Unit Cautious Monotony taken together. The translation is derivable by simply combining the derivations of the two components.

Unit Reciprocity: If A |∼ B and B |∼ A, then A |∼ C iff B |∼ C.
Translation: If B ∈ K ∗ A and A ∈ K ∗ B, then K ∗ A = K ∗ B.

This is condition (4) of Section 3.1. It is a well-known principle of the logic of revision, discussed in (Alchourrón and Makinson 1982) in the context of maxichoice revision and in (Gärdenfors 1988) for any revision function satisfying the postulates (K∗1) – (K∗8). It also has a well-known analogue in conditional logic, discussed in (Stalnaker 1968). In nonmonotonic logic, Reciprocity is equivalent to Cumulativity (given Supraclassicality), and in the logic of theory change the Reciprocity condition is similarly equivalent to postulates (K∗7c) and (K∗8c) taken together (see Section 3.1).
Unit Distribution: If A |∼ C and B |∼ C, then A ∨ B |∼ C
Translation: If C ∈ K ∗ A and C ∈ K ∗ B, then C ∈ K ∗ (A ∨ B).

We mentioned in Section 3.1 that this condition is equivalent to (K∗7). As pointed out in (Kraus et al. 1990) and (Lehmann and Magidor 1992), Unit Cut can be equivalently replaced by the following condition, provided the other conditions presented so far are satisfied:

And: If A |∼ B and A |∼ C, then A |∼ B ∧ C
Translation: If B ∈ K ∗ A and C ∈ K ∗ A, then B ∧ C ∈ K ∗ A

This is of course an immediate consequence of the closure condition (K∗1).
Unit Loop: If A_1 |∼ A_2, A_2 |∼ A_3, ..., A_n |∼ A_1 (n ≥ 2), then C(A_i) = C(A_j) for all i, j ≤ n.
Translation: If A_2 ∈ K ∗ A_1, ..., A_n ∈ K ∗ A_{n−1}, A_1 ∈ K ∗ A_n (n ≥ 2), then K ∗ A_i = K ∗ A_j for all i, j ≤ n.

This principle is quite unknown in the literature on theory change. Its obvious analogue for conditional logic appears to be unknown to the literature on that subject too. The Loop principle first made its appearance in nonmonotonic logic in the work of Kraus, Lehmann and Magidor (1990), where it serves as a syntactic counterpart of a condition of transitivity in preferential model structures. The translational output can be derived from (K∗1) – (K∗8). The derivation is found in (Makinson and Gärdenfors 1991).

In summary, every condition on |∼ from the literature on nonmonotonic logic translates into a condition on ∗ that is a consequence of (K∗1) – (K∗8).
6.4 Comparing models of belief revision and models of nonmonotonic logic
We have now established that on the level of postulates there is a very close formal correspondence between belief revision and nonmonotonic logic. Turning finally to the connection between models for belief revision and models for nonmonotonic inferences, one may expect that on this level one can find semantic structures that can be applied to both areas, in accordance with Figure 5 below:

Fig. 5. Postulates and models for belief revision and nonmonotonic logics

Indeed, there are several constructions in the two areas that are closely related. For example, Poole's (1988) 'maxiconsistent' construction for nonmonotonic inference is very similar to the 'full meet (base) contraction' discussed in the AGM tradition (cf. (Makinson and Gärdenfors 1991)). Furthermore, the preferential entailment approach of Shoham (1988), Makinson (1989; see also his Chapter in Volume 3 of this Handbook) and Kraus, Lehmann and Magidor (1990), using an ordering among models to determine those that are minimal, is reminiscent of the orderings used in belief revision. Connections with the AGM approach can be revealed indirectly, as in Section 4.4 above. They can also be developed directly, as is demonstrated in technical detail by Katsuno and Mendelzon (1991), Katsuno and Satoh (1991) and Lindström (1991), and discussed from a more philosophical point of view in Makinson (1993).
Using the strategy of taking models from belief revision theory and applying the translation presented above, Gärdenfors and Makinson (1994), previewed in (Gärdenfors 1991), have been able to prove some new representation theorems connecting postulates for nonmonotonic logic with models that are taken, more or less directly, from the area of belief revision. In particular, the ordering of epistemic entrenchment can be given a slight reinterpretation in terms of 'expectations' in the context of nonmonotonic reasoning. In this subsection, the precise results of this reinterpretation will be summarized. For details the reader is referred to (Gärdenfors and Makinson 1994).
The guiding idea for the interpretation of nonmonotonic reasoning is that when we try to find out whether B follows from A, the background information that we use for the inference does not only contain what we firmly believe, but also information about what we expect in the given situation. Such expectations can be expressed in different ways: by default assumptions, statements about what is normal or typical, etc. These expectations are not full beliefs but are defeasible in the sense that if the premise A is in conflict with some of the expectations, we do not use them when determining whether B follows from A.

Expectations are arguably the same kind of information as 'full' beliefs in a belief set; the difference is that they are more defeasible than 'full' beliefs. The expectations will be described as a set ∆ of sentences from the language L. Technically, ∆ will be treated in the same way as a belief set K. We also assume that the full beliefs are included in ∆, since what we believe we expect to be true.

The key idea can be put informally as follows:

A nonmonotonically entails B iff B follows logically from A together with as many as possible of the sentences in ∆ that are compatible with A.
This is reminiscent of Goodman's (1947) concept of 'cotenability', which was formalized by Lewis (1973). In order to make the idea more precise, we must, of course, specify what is meant by 'as many as possible'. But before we turn to technicalities, let us illustrate the gist of the analysis by an example. 'A nonmonotonically entails B' will be denoted A |∼ B as usual.

Let the language L contain the following predicates:

Sx: x is a Swedish citizen
Ix: x has Italian parents
Px: x is a Protestant

Assume that ∆ contains the expectations Sa → Pa and Sa ∧ Ia → ¬Pa. If we now learn that a is a Swedish citizen, that is Sa, this piece of information is consistent with ∆, and thus we can conclude that Sa |∼ Pa according to the recipe above.

On the other hand, if we learn both that a is a Swedish citizen and has Italian parents, that is Sa ∧ Ia, then this information is inconsistent with ∆, and so we cannot use all expectations in ∆ when determining which inferences can be drawn from Sa ∧ Ia. The most natural expedient is to give up the expectation Sa → Pa and the consequence Sa → ¬Ia. The contracted set of expectations, which contains Sa ∧ Ia → ¬Pa and its logical consequences, contains, in a sense to be made precise below, as many sentences as possible in ∆ that are compatible with Sa ∧ Ia. So, by the general rule above, we have Sa ∧ Ia |∼ ¬Pa. This shows that |∼ is indeed a nonmonotonic inference operation.
The problem in focus is how to define which elements of the set ∆ to give up when adding a new piece of information A that is inconsistent with ∆. According to the general integrity constraint (iii) we should look out for as large a subset of ∆ as possible. In analogy with the earlier notation, the set of all maximal subsets of ∆ that fail to imply A will be denoted ∆⊥A. We use the sets in ∆⊥¬A for the construction with the aid of a selection function S over ∆, i.e. a function such that ∅ ≠ S(∆⊥¬A) ⊆ ∆⊥¬A when ∆⊥¬A is non-empty, and S(∆⊥¬A) = {∆} otherwise.
Given an expectation set ∆ and a selection function S, the expectation inference operation C∆,S is defined, for all A ∈ L, by the equation C∆,S(A) = ∩{Cn({A} ∪ D) : D ∈ S(∆⊥¬A)}, where ∆ is a non-empty default set and S is a selection function. C∆,S is closed when ∆ = Cn(∆).
It should be noted that expectation inference operations are here only defined for finite sets of premises, which can always be replaced by their conjunction; this conjunction is the single formula A in the definition above.
Technically speaking, expectation inference operations proceed by partial
meet base revisions. The definition proffers a wide class of inference oper-
ations because no particular constraints are put on the selection function.
We shall now give a characterization of the class of expectation inference
operations by examining which general conditions on inference operations
they satisfy.
Theorem 6.4.1. A nonmonotonic inference operation |∼ satisfies Reflexivity, Left Logical Equivalence, Right Weakening, And, Weak Conditionalization, Weak Rational Monotony, and Consistency Preservation if and only if there exists a closed and consistently generated expectation inference operation |∼∆,S such that A |∼ B iff A |∼∆,S B, for all A and B.
The nonmonotonic conditions used in this theorem are equivalent to the set of translations of the basic postulates (K∗1)–(K∗6) in Section 6.2. In the theorem, no restrictions are put on the selection function. It shows that for nonmonotonic inference operations based on such selection functions we can, in general, only expect the basic postulates to be satisfied, but, for example, not Unit Cumulativity. A very natural condition on a selection function is the following:
(SC) If S(∆⊥¬A) ⊆ ∆⊥¬B ⊆ ∆⊥¬A, then S(∆⊥¬B) = S(∆⊥¬A).
The interpretation is that if the maximal subsets in ∆⊥¬B are included in the set ∆⊥¬A and the ‘preferred’ maximal subsets in ∆⊥¬A are all members of ∆⊥¬B, then these are also the best in ∆⊥¬B. (The condition (SC) is an application to the context of nonmonotonic inference of a condition, sometimes called ‘Aizerman’, developed and studied in the context of the general theory of rational choice. Compare (Lindström 1991) and (Rott 1993) for details.)
Theorem 6.4.2. A nonmonotonic inference operation |∼ satisfies the set of postulates mentioned in Theorem 6.4.1 and Unit Cumulativity if and only if there exists a closed and consistently generated expectation inference operation |∼∆,S where S satisfies (SC) such that A |∼ B iff A |∼∆,S B, for all A and B.
A further strengthening of the requirements for a selection function
would be to demand that it is generated by some underlying ‘preference’
relation in the following sense:
Definition 6.4.3. A selection function S is relational over ∆ iff there is a relation ≼ over the subsets of ∆ such that for all A with ¬A ∉ Cn(∅), it holds that S(∆⊥¬A) = {D ∈ ∆⊥¬A : D ≼ D′ for all D′ ∈ ∆⊥¬A}. S is transitively relational iff S is relational under some transitive relation ≼.
This definition is modelled on the definition of a relational partial meet contraction function in Section 4.1. It is perhaps not surprising that a corresponding representation result can be proved.
Theorem 6.4.4. An inference relation |∼ satisfies the set of postulates mentioned in Theorem 6.4.1, together with Unit Distribution and Rational Monotony, if and only if there is a closed, consistently generated and transitively relational expectation inference relation |∼∆,S with |∼ = |∼∆,S.
Weak Conditionalization, Weak Rational Monotony, and Unit Cumulativity need not be mentioned in this theorem because they are already entailed by the remaining postulates. Instead of working with selection functions, we can use the correspondence of orderings of epistemic entrenchment to generate nonmonotonic logics. In order to mark the difference in interpretation, an ordering of the sentences in ∆ will be called an expectation ordering. From the epistemological perspective it seems intuitively plausible that our expectations about the world do not all have the same strength. In brief, the expectations in ∆ are all defeasible (unless logically valid), but they exhibit varying degrees of defeasibility.
When determining whether A nonmonotonically entails B, the different degrees of expectation among the sentences in ∆ can then be used to determine which sentences to give up from ∆ when A is inconsistent with ∆. In order to make this idea more precise, we assume that there is an ordering ≤ of the sentences in ∆. ‘A ≤ B’ should be interpreted as ‘A is not more expected than B’. ‘A < B’ will be written as an abbreviation for ‘not B ≤ A’, and ‘A ≡ B’ is an abbreviation for ‘A ≤ B and B ≤ A’. The relation ≤, which we take as primitive in this section, will be assumed to satisfy certain constraints. The constraints to be presented here are exact parallels of the postulates for epistemic entrenchment given in (Gärdenfors and Makinson 1988). As in (Rott 1992b), however, no correspondences of the minimality and maximality conditions (EE5) and (EE6) are needed.
(E1) If A ≤ B and B ≤ C, then A ≤ C (Transitivity)
(E2) If A ⊢ B, then A ≤ B (Dominance)
(E3) For any A and B, A ≤ A ∧ B or B ≤ A ∧ B (Conjunctiveness)
Let us now return to how the ordering ≤ can be used to determine when A nonmonotonically implies B. According to the general recipe of this paper, A |∼ B means that B follows from A together with as many of the expectations in ∆ as possible that are compatible with A. A first method is to use a direct translation of the corresponding condition for contraction functions in (Gärdenfors and Makinson 1988). This leads, more or less directly, to the following criterion:
(C|∼0) A |∼ B iff ¬A < A → B or ⊢ ¬A.
However, the following much more intuitive definition will give equivalent results:
A (nonmonotonic) inference relation |∼ is expectation-based iff there is a consistent and closed set ∆ and an ordering ≤ over ∆ satisfying (E1) – (E3) such that the following condition holds:
(C|∼) A |∼ B iff B ∈ Cn({A} ∪ {D ∈ ∆ : ¬A < D})
The idea behind the condition (C|∼) is that when determining whether A |∼ B, one looks at what follows from A together with the class of sentences in ∆ that are more expected than ¬A. (C|∼) is closely related to the contraction recipe (C>) mentioned in Section 4.2 (cf. Theorem 4.2.8).
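The criterion (C|∼) can likewise be sketched with hand-assigned numeric ranks standing in for the ordering ≤ (the ranks below are assumptions of this illustration; any such assignment must respect (E1) – (E3), and here the specific rule is deliberately ranked above the general one):

```python
from itertools import product

# Valuations over the three atoms of the running example.
def vals():
    for bits in product((True, False), repeat=3):
        yield dict(zip(("Sa", "Ia", "Pa"), bits))

def entails(premises, conclusion):
    return all(conclusion(v) for v in vals() if all(p(v) for p in premises))

Sa = lambda v: v["Sa"]
Ia = lambda v: v["Ia"]
Pa = lambda v: v["Pa"]

# Expectations with hand-assigned ranks (higher = more firmly expected).
delta = [(lambda v: not Sa(v) or Pa(v), 1),                    # Sa -> Pa
         (lambda v: not (Sa(v) and Ia(v)) or not Pa(v), 2)]    # Sa & Ia -> ~Pa

def nm_infer(a, rank_not_a, b):
    """(C|~): A |~ B iff B in Cn({A} u {D in delta : ~A < D});
    the rank of ~A is supplied by the caller."""
    kept = [d for d, r in delta if r > rank_not_a]
    return entails(kept + [a], b)

# ~Sa is ranked 0, below both expectations, so both survive:
print(nm_infer(Sa, 0, Pa))                                          # True
# ~(Sa & Ia) is ranked 1, at the level of Sa -> Pa, so only the
# rank-2 rule survives:
print(nm_infer(lambda v: Sa(v) and Ia(v), 1, lambda v: not Pa(v)))  # True
```

Note that Dominance (E2) is respected by this assignment: ¬(Sa ∧ Ia) logically implies Sa ∧ Ia → ¬Pa, and its rank (1) does not exceed that rule’s rank (2).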
It can now be shown that an expectation-based nonmonotonic inference relation defined by (C|∼) can be characterized in a strong way:
Theorem 6.4.5. |∼ is an expectation-based nonmonotonic inference relation if and only if |∼ satisfies the set of postulates mentioned in Theorem 6.4.4.
The interpretation of the strong part of this theorem is that if you believe that the postulates of Reflexivity, Left Logical Equivalence, Right Weakening, And, Consistency Preservation, Unit Distribution and Rational Monotony are reasonable for a nonmonotonic inference relation |∼, then there will exist an underlying expectation relation ≤ on the sentences of L, satisfying (E1) – (E3), that will generate exactly the inference relation |∼.
In (Gärdenfors and Makinson 1994) it is also shown that there is a close correspondence between expectation-ordering semantics and a class of preferential models in the style of Shoham (1988) and Kraus, Lehmann, and Magidor (1990). This connection corresponds to the connection between models based on epistemic entrenchment (Section 4.2) and possible worlds models for belief revision (Section 4.4).
7 Truth maintenance systems
In the preceding sections we have dealt with belief revision models in the
logic-constrained mode of belief revision (cf. Section 1.4), except for a brief
discussion of Brewka’s approach at the end of Section 5.6. Section 4 dealt
with models of the coherence type while Section 5 concerned models of the
foundations type (cf. Section 2.4). We now turn to models that deliberately
focus on the justifications for the beliefs in a belief system.
Under the name ‘truth maintenance systems’ or ‘reason maintenance
systems’, we find loosely grouped a great number of systems for manag-
ing beliefs on the basis of dependency information. Today there is a great
variety of truth maintenance systems (henceforth, TMSs) with widely dif-
fering features. In the following we shall mainly focus on the paradigm
provided in (Doyle 1979) justification-based TMS and further developed
e.g. by (Goodwin 1982, Goodwin 1987). In Section 7.2, we compare it
with its most prominent competitors, de Kleer’s (1986a, 1986b, 1986c)
assumption-based TMS and McAllester’s (1980) logic-based TMS. We feel
that TMSs would have merited a Handbook chapter of their own, but we
will confine ourselves to a rough and elementary overview from a belief
revision perspective. A good survey of TMSs is given by (Reinfrank 1989),
and a comprehensive annotated bibliography to the literature on TMSs is
compiled in (Martins 1990).
In so far as TMSs take explicit care of the justifications and the groundedness of beliefs, they follow a philosophy markedly distinct from all approaches we have been discussing up to now. But TMSs are also quite different for a number of further reasons.
First, TMSs are procedural in nature: they were developed as actual systems running on actual computers. A TMS is plugged together with a general reasoning system, a so-called ‘problem solver’ or ‘application program’, which communicates pieces of new information and results of its own inferences to the TMS in the form of justifications. It is to be understood that reasoning in a TMS proceeds in a stepwise and time-consuming fashion. At least in their beginnings, TMSs were judged from an engineer’s rather than from a logician’s point of view. In particular, there was no declarative semantics for them. Implementation issues like the efficiency of algorithms and the proper design of interfaces to users and application programs are still a main concern, but the logical understanding has sharpened a great deal. However, there is still no set of widely accepted rationality postulates which a TMS should satisfy.
Secondly, in the TMS approach one does not remove sentences from
the knowledge base, but instead modifies the labels assigned to them by a
labelling algorithm specific to the system. Once represented in the TMS as
a ‘node’, a datum will never be erased physically but only marked as out
(not believed) in the current belief state. Belief revision on this account is
the relabelling of nodes.
Thirdly, TMS representations are based on a severely restricted lan-
guage and thus cannot be required to respect the logical relationships
between beliefs in the same way as the previous approaches. Nodes are
unstructured objects, and justifications between nodes are one-way rules
which are not to be applied in contraposition (although we shall see that
this latter point is to be taken with a grain of salt). In general, only the
problem solver which ‘uses’ the TMS is capable of taking into account the
logical relations between the contents of beliefs. When they are discovered,
they can be communicated to the TMS in the form of justifications. We
shall see that there is a closure condition for TMSs, too, but this is defined
with respect to the particular set of justifications currently ‘known’ and
not with respect to some universal logical consequence operation.
Fourthly, TMSs handle belief revision mainly according to the ‘direct’ or ‘immediate’ mode in the sense of Section 1.4. In the transition from one belief state to another, i.e., when the nodes in the belief network get relabelled as believed or unbelieved, the task of drawing the right conclusions from the new body of input (premises and justifications) is given priority over the task of respecting the contents of the original belief state. In the inner workings of the computer program some conservative strategy may be followed, but this is just for the sake of efficiency and not a major goal of the TMS. The important issue is how to define and determine an admissible state on the basis of certain input information, or in other words, how to implement a sound (non-standard) inference operation. The concrete realizations of the change operations employed by TMSs, like dependency-directed backtracking, nogood elimination or Boolean constraint propagation, are details to be filled out by clever system designers.
7.1 Justification-based truth maintenance systems
Doyle’s (1979) justification-based truth maintenance system (JTMS) is an
attempt to model changes of belief in the setting of a foundational theory.
As Doyle remarks (p. 232), the name ‘truth maintenance system’ not
only sounds like Orwellian Newspeak, but is also a misnomer, because
what is maintained is the consistency of beliefs and justifications for belief.
Doyle (1983) later changed the name to ‘reason maintenance systems’. In
a broad sense TMSs can be said to be semantic network models, but their
belief structure and their techniques for handling changes of belief are more
sophisticated than in other semantic network models.
There are two basic types of entities in a truth maintenance system:
nodes representing propositional beliefs and justifications representing reasons for beliefs. A node stands for a potential individual belief. However,
there is nothing in a node that represents the internal (logical) structure
of a belief. Nodes can rather be seen as names of (sentences expressing)
beliefs. As we said, this means that logical relations between beliefs cannot
be captured in a direct way by a TMS.
A node may be in or out, which corresponds to accepting and not accepting the belief represented by the node. Suppose we somehow know that the node n1 represents the negation of the sentence represented by a node n2 (see below for the question of how to represent this in a TMS). As should be expected, if n2 is out in the system, this does not entail that n1 is in. On the other hand, as a rationality requirement, if both n1 and n2 are in, then the system will start a revision of the sets of nodes and their justifications in order to reestablish consistency. In some cases a node is treated as a premise in the sense that the only reason for the TMS to believe the node is that it has been told to do so without further consideration.
A justification supports belief in a node by relating it to belief and disbelief in other nodes. A justification is defined as a pair of lists, an inlist I and an outlist O, together with the node n it is a justification for. A justification of this form will be denoted ⟨I | O → n⟩. The node n is called the consequence of the justification. The principal idea is that a node is in if and only if it has some justification (there may be several for the same node), the inlist of which contains only nodes that are in and the outlist of which contains only nodes that are out. The inlist or the outlist may be empty. If both are empty, n is a premise.
A justification ⟨I | O → n⟩ can be read as: ‘If all the nodes in I are believed (that is, in) and all nodes in O are not believed (that is, out), then the node n is believed (that is, in)’. Borrowing some symbols from electronic circuitry, the information in a simple justification like ⟨{n1, n2} | {n3} → n4⟩ can be represented graphically as in Figure 6 (cf. (Goodwin 1987)).
Fig. 6. Graphic representation of truth maintenance systems
Here the nodes are represented by ovals and the justification by an AND-gate. The black dot means roughly negation (default negation, negation as failure), so that n3 has to be out and n1 and n2 in in order for n4 to be in.
The basic concepts of a TMS are best illustrated by an example:

Node                                          Inlist      Outlist   Status
(n1) Oscar is not convicted of defamation.    (n2)        (n3)      in
(n2) The accused should have the benefit      —           —         in
     of the doubt.
(n3) Oscar called the queen a harlot.         (n4),(n5)   —         out
(n4) The witness’s report is correct.         —           —         in
(n5) The witness says he heard Oscar          —           —         out
     call the queen a harlot.
Assume that (n2) and (n4) are given as premises. In this situation (n3) is out because not both of (n4) and (n5) are in. Node (n1) is in because (n2) is in and (n3) is out. If (n5) changes status to in (this may be assumed to be beyond the control of the system), (n3) will become in and consequently (n1) will be out. Graphically, this simple TMS can be depicted as in Figure 7.
Fig. 7. An example
This network illustrates the nonmonotonicity of TMSs. If we come to know that n5 is true so that its status changes to in, then n3 will change to in and, consequently, n1 will become out. So expanding the knowledge base by n5 leads to a retraction of n1.
More abstractly (following (Reinfrank 1989, p. 20)), a TMS network can be described as a triple T = ⟨N, P, J⟩, where N is a finite set of nodes, P ⊆ N is a (sometimes empty) set of premises, and J is a set of justifications of the form ⟨I | O → n⟩. For most purposes, premises p may be identified with justifications ⟨∅ | ∅ → p⟩, so we can essentially get along with two basic categories. Dropping the curly brackets for the I’s and O’s, our example is characterized by T = ⟨{n1, . . . , n5}, {n2, n4}, {⟨n4, n5 | ∅ → n3⟩, ⟨n2 | n3 → n1⟩}⟩.
Networks are the input for TMSs. A TMS belief state is a subset of N, viz., the set of nodes labelled in. It is the main task of the TMS to draw well-founded conclusions from a given network T = ⟨N, P, J⟩ in order to arrive at an appropriate belief state S ⊆ N.
Given a network T = ⟨N, P, J⟩, we say that a justification ⟨I | O → n⟩ is valid in a set S ⊆ N of nodes if and only if I ⊆ S and O ∩ S = ∅; we call it M-valid in S if and only if O ∩ S = ∅. For example, in Figure 7 the justification ⟨n2 | n3 → n1⟩ is valid, and ⟨n4, n5 | ∅ → n3⟩ is M-valid, in {n2, n4}. A set S ⊆ N of nodes is said to be locally grounded in T if and only if for every n ∈ S, either n ∈ P or n is the consequence of some justification that is valid in S. Intuitively, local groundedness is not enough: Given the TMS network T = ⟨{n1, n2}, ∅, {⟨n1 | ∅ → n2⟩, ⟨n2 | ∅ → n1⟩}⟩, the set {n1, n2} is locally grounded, but intuitively we would hardly call it supported by T. So we had better introduce a stronger notion of groundedness. Using a simple definition due to Elkan (1990), we say that a set S ⊆ N of nodes is (globally) grounded in T if and only if there is a total ordering n1 < . . . < nk of the elements of S such that for each ni, ni is either an element of P or there is a justification ⟨I | O → ni⟩ in J such that I ⊆ {n1, . . . , ni−1} and O ∩ S = ∅. In this case we can say that ni (or rather ni’s being in) depends on the premises and justifications used to establish {n1, . . . , ni}.
Groundedness makes sure that we do not have too many nodes in. On the other hand, we must not have too many nodes out. A set S ⊆ N is closed with respect to T = ⟨N, P, J⟩ if P ⊆ S and for every ⟨I | O → n⟩ in J which is valid in S we have n ∈ S. An admissible (belief) state with respect to T is a set S of nodes which is globally grounded in T and closed with respect to T. A network T may have one, more than one, or no admissible state, as is shown by the examples ⟨{a, b}, {a}, {⟨a | ∅ → b⟩}⟩, ⟨{a, b}, ∅, {⟨∅ | a → b⟩, ⟨∅ | b → a⟩}⟩, and ⟨{a}, ∅, {⟨∅ | a → a⟩}⟩, respectively. If T possesses at least one admissible state then it is called coherent, otherwise incoherent. If T possesses more than one admissible state, we face the well-known multiple-extension problem of nonmonotonic reasoning: Which of the admissible states should be ‘the current belief state’ sanctioned by the TMS? The sceptical idea is to take the intersection of all admissible states. However, this intersection need not be an admissible state. For this reason (and for the sake of efficiency) JTMSs follow a more credulous practice and remain in the first admissible state they happen to find. The multiple-extension problem does not arise for a monotonic network, in which each justification in J has an empty outlist. It is easy to show that such networks possess exactly one admissible state.
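These definitions can be checked mechanically. The following brute-force sketch (an illustration only; real TMSs compute labellings incrementally rather than by enumerating subsets) tests closure and global groundedness for the three networks just mentioned:

```python
from itertools import combinations

# A network is (nodes, premises, justifications); a justification
# <I | O -> n> is encoded as (inlist, outlist, consequence).
def valid(j, s):
    return set(j[0]) <= s and not (set(j[1]) & s)

def closed(net, s):
    nodes, prem, just = net
    return set(prem) <= s and all(j[2] in s for j in just if valid(j, s))

def grounded(net, s):
    """Globally grounded: S can be built up from premises and from
    justifications whose inlists use only earlier nodes (Elkan's form)."""
    nodes, prem, just = net
    built, changed = set(), True
    while changed:
        changed = False
        for n in set(s) - built:
            if n in prem or any(j[2] == n and set(j[0]) <= built
                                and not (set(j[1]) & s) for j in just):
                built.add(n); changed = True
    return built == set(s)

def admissible_states(net):
    return [set(c) for r in range(len(net[0]) + 1)
            for c in combinations(net[0], r)
            if closed(net, set(c)) and grounded(net, set(c))]

# The three networks from the text: one, two, and no admissible states.
t1 = (("a", "b"), ("a",), [(("a",), (), "b")])
t2 = (("a", "b"), (), [((), ("a",), "b"), ((), ("b",), "a")])
t3 = (("a",), (), [((), ("a",), "a")])
print(admissible_states(t1))   # one admissible state: {a, b}
print(admissible_states(t2))   # two: {a} and {b} (multiple extensions)
print(admissible_states(t3))   # none: the network is incoherent
```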
There is an interesting alternative way to capture groundedness. Consider a subset S of N and the set JS of justifications in J which are M-valid in S. For each j = ⟨I | O → n⟩ in JS, let jm = ⟨I | ∅ → n⟩ be the ‘monotonic part’ of j, and let JmS = {jm : j ∈ JS}. Following Elkan, we say that a set S is stable with respect to T if it is the unique inclusion-minimal set of nodes which is closed with respect to ⟨N, P, JmS⟩. (Elkan 1990, Theorem 3.8) shows that a set of nodes S which is closed with respect to J is stable with respect to T if and only if it is globally grounded in T. Thus TMSs are found to be related to the stable model semantics for logic programming developed in (Gelfond and Lifschitz 1988).
An important goal of TMSs is to maintain consistency. The concept of inconsistency is introduced into the TMS framework by distinguishing a subset N⊥ of N as contradiction nodes. Dependency networks are then of the form T = ⟨N, N⊥, P, J⟩. The definition of an admissible state S of T is enriched by the consistency requirement S ∩ N⊥ = ∅, i.e., no contradiction node is allowed to be in. With the help of contradiction nodes n⊥, logical relations can be encoded in a TMS. Assume that our sample network is extended by a new node n6 representing the statement
Oscar is convicted of defamation.
The fact that the sentence associated with n6 is the negation of the sentence associated with n1 can now be communicated to the TMS by adding the justification ⟨n1, n6 | ∅ → n⊥⟩. Justifications with a contradiction node as consequence can be viewed as constraints on admissible states. Adding a justification of the form ⟨n | ∅ → n⊥⟩ is a means for contracting n from the set of current beliefs (without buying the negation of n, of course): It eliminates all admissible states with n labelled in. (This is a contraction in the immediate mode of belief change; cf. Section 1.4.)
If the TMS has found an admissible state S for a given (coherent) network T, then S represents the stock of current beliefs of the system. But as time goes by, the input T will change. Conceivable changes of TMS input include additions or deletions of nodes, premises and justifications. In practice, however, the actions taken vis-à-vis a TMS usually are incremental, that is, nodes, premises and justifications are added, but never withdrawn. Recall, however, that due to the nonmonotonic character of some justifications, adding premises or justifications does not mean enlarging the set of nodes labelled in. The reader may easily verify that in our above example T has the unique admissible state S = {n1, n2, n4} while adding n5 to the premises of T gives us the unique admissible state S′ = {n2, n3, n4, n5}. So S ⊈ S′ even though P ⊆ P′ (and of course J ⊆ J′).
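The claim about the example can be verified by brute-force enumeration of admissible states (an illustrative sketch, not an actual TMS algorithm; node names follow the Oscar example):

```python
from itertools import combinations

# <I | O -> n> as (inlist, outlist, consequence); network = (N, P, J).
def valid(j, s):
    return set(j[0]) <= s and not (set(j[1]) & s)

def closed(net, s):
    return set(net[1]) <= s and all(j[2] in s for j in net[2] if valid(j, s))

def grounded(net, s):
    built, changed = set(), True
    while changed:
        changed = False
        for n in set(s) - built:
            if n in net[1] or any(j[2] == n and set(j[0]) <= built
                                  and not (set(j[1]) & s) for j in net[2]):
                built.add(n); changed = True
    return built == set(s)

def admissible_states(net):
    return [set(c) for r in range(len(net[0]) + 1)
            for c in combinations(net[0], r)
            if closed(net, set(c)) and grounded(net, set(c))]

nodes = ("n1", "n2", "n3", "n4", "n5")
just = [(("n4", "n5"), (), "n3"),    # <n4, n5 | 0 -> n3>
        (("n2",), ("n3",), "n1")]    # <n2 | n3 -> n1>
before = (nodes, ("n2", "n4"), just)
after = (nodes, ("n2", "n4", "n5"), just)   # n5 has become a premise

print(admissible_states(before))   # unique state: {n1, n2, n4}
print(admissible_states(after))    # unique state: {n2, n3, n4, n5}
```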
It is now clear that in principle TMSs perform belief revision in the immediate mode. But when computing the new admissible state after, say, a premise has been added, a TMS does draw on the prior state and thus incorporates elements of logic-constrained belief revision. Justification-based TMSs accomplish belief revisions by dependency-directed backtracking (Stallman and Sussman 1977). We illustrate this for the paradigmatic case of inconsistency elimination, which can be regarded as a contraction operation involved in every belief-contravening revision in TMSs (in a fashion similar to Hansson’s ‘consolidation’; cf. Section 5.6).
Neglecting many interesting details, the procedure of dependency-directed backtracking is as follows. If the TMS ‘detects’ a contradiction, i.e. if an inconsistency node n⊥ is labelled in, the TMS traces back and looks for the set J∗ of justifications on which this labelling is currently grounded. Then it picks an element n∗ of the outlist of one of these justifications and creates a new justification of the form ⟨I∗ | O∗ → n∗⟩ where I∗ is the union of the inlists in J∗ and O∗ is the union of the outlists of J∗ with the exception of n∗ itself. So if each element of I∗ is in and each element of O∗ and n∗ are out, then instead of triggering the contradiction node n⊥ this new justification causes n∗ to be in. The old nonmonotonic proof of n⊥ breaks down, because the old justification with n∗ in its outlist is no longer valid. Intuitively, nonbelief in n∗ has been identified as being responsible for the contradiction. As a simple example, consider the network T = ⟨{a, b, n⊥}, {a}, {⟨a | b → n⊥⟩}⟩ which would give us the inconsistent admissible state {a, n⊥}. The TMS avoids this embarrassing situation by creating the justification ⟨a | ∅ → b⟩ which then triggers b rather than n⊥. If T is part of a more complex dependency network, then the change of b’s label propagates through a reconsideration of each justification with b in either its inlist or its outlist. The justification ⟨a | ∅ → b⟩ created by the TMS can be regarded as something like the contraposition of the original justification ⟨a | b → n⊥⟩. For a systematic treatment of this perspective, see (Giordano and Martelli 1991).
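A single round of this procedure can be sketched as follows (a deliberately naive illustration of the idea, not Doyle’s actual algorithm; the choice of culprit is arbitrary, as noted above):

```python
# One round of dependency-directed backtracking on the toy network
# T = <{a, b, bottom}, {a}, {<a | b -> bottom>}> from the text.
def backtrack(just, contradiction, state):
    """If the contradiction node is in, pick a culprit n* from the outlist
    of a justification currently supporting it, and return the new
    justification <I* | O* -> n*> built from the remaining ins and outs."""
    support = [j for j in just if j[2] == contradiction
               and set(j[0]) <= state and not (set(j[1]) & state)]
    if not support:
        return None
    inlists = [n for j in support for n in j[0]]
    outlists = [n for j in support for n in j[1]]
    culprit = outlists[0]                      # arbitrary choice
    new_out = tuple(n for n in outlists if n != culprit)
    return (tuple(inlists), new_out, culprit)

just = [(("a",), ("b",), "bottom")]    # <a | b -> bottom>
state = {"a", "bottom"}                # the inconsistent state
print(backtrack(just, "bottom", state))   # (('a',), (), 'b'), i.e. <a | 0 -> b>
```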
The technique of removing contradiction nodes by adding new justifications is at the same time an illustration of how logic-constrained contractions of nodes (in this case, of n⊥) are accomplished by purely incremental means (in this case, by adding the justification ⟨a | ∅ → b⟩). If some undesired belief is in because a certain more plausible belief is out, then the TMS discards the former by making us believe the latter. Note, however, that this idea of contraction through nonmonotonicity does not conform to the inclusion postulate (K−2). In a sense, every contraction is a revision.
Belief change by dependency-directed backtracking has been criticized as being insufficiently understood and controlled. First, in some examples the procedure can lead to so-called ‘odd loops’, in which case the labelling procedure of (Doyle 1979) fails to terminate. This shortcoming was removed by (Goodwin 1982). Another problem is that the choice of n∗ above is arbitrary, so that the internally created justification with consequence n∗ may conflict with later extensions of the dependency net by the problem solver. The TMS then has to perform ‘spurious belief revision’ and backtracking for the only reason that it had leaped to an inappropriate admissible state. This difficulty, which afflicts many credulous approaches to the multiple extension problem, indicates that logic-constrained reasoning may be generally ill-fated in nonmonotonic contexts.
JTMSs have been endowed with a declarative semantics only recently. Two different ways have been opened. The first one is to devise formal systems and a semantics especially tailored to TMSs (Brown, Jr 1988). The second one connects TMS theory to independently developed types of reasoning in AI and logic programming. One can draw on fairly established semantical accounts (Gelfond and Lifschitz 1988, Etherington 1987) by taking a route either via logic programming with negation (Pimentel and Cuadrado 1989, Elkan 1990), or via autoepistemic logic (see Konolige’s Chapter in Volume 3), or via default logic (see Poole’s Chapter in Volume 3). In pursuit of the second route, different translations of justifications and, correspondingly, different reconstructions of admissible states have been used. With Bel denoting the autoepistemic belief operator, the simple justification ⟨a | b → c⟩ is translated as Bel(a) ∧ ¬Bel(b) → c in (Reinfrank et al. 1989) and (Fujiwara and Honiden 1989), as Bel(a) ∧ ¬Bel(Bel(b)) → c in (Marek and Truszczynski 1989), and as a ∧ ¬Bel(b) → c in (Elkan 1990). Another way of likening TMSs to well-known accounts of nonmonotonic reasoning is explored by Brown and Shoham (1989), who propose a semantics in the style of Shoham (1988). Meritorious as they are, all these semantic undertakings are quite complicated, and they underline the fact that groundedness is an essentially procedural notion. (But compare Makinson’s Chapter in Volume 3, especially Section 3.6.)
7.2 Other kinds of truth maintenance systems
Next to Doyle-style JTMSs, the most prominent approach to reason maintenance is de Kleer’s (1986a, 1986b, 1986c) assumption-based TMS, or ATMS for short. The basic ATMS categories are again nodes and justifications, and the input of an ATMS is again a dependency network of the form T = ⟨N, N⊥, P, J⟩.
However, ATMSs are very different from JTMSs. First, there is no nonmonotonic reasoning, hence no multiple extension problem, and no dependency-directed backtracking in (the original versions of) ATMSs. All justifications are monotonic. Second, while in a JTMS a node is simply labelled in or out, corresponding to its status in the current belief state, in an ATMS each node n is labelled with the set of consistent entailment sets for n. Recall from Sections 4.3 and 5.4 that an entailment set for a sentence A is a minimal set of sentences (from K) sufficient to derive A. If we substitute ‘nodes (from N)’ for ‘sentences (from K)’, think of nodes as atoms, and finally bear in mind that derivation is not meant as derivation with respect to some universal logical consequence operation but as derivation with the help of a given set of monotonic justifications, then we know what an ATMS label is. In the restricted language of TMSs, a set of nodes is consistent if it does not permit the derivation of a distinguished contradiction node n⊥.
An assumption in an ATMS is a special kind of node. It is ‘restricted to designate a decision to assume without any commitment as to what is assumed’ (de Kleer 1986a, p. 142). Intuitively, assumptions are potential premises or temporary hypotheses whose consequences are to be explored. Sets of assumptions are called ‘environments’ in ATMS terminology. The set of all assumptions in an ATMS is deliberately not required to be consistent, but derivations proceeding from inconsistent environments are excluded. Inconsistent environments are called nogoods. As they are central to the ATMS algorithms, they are stored in a separate database. Instead of a single current belief state, many (and possibly all) assumption sets are taken into account simultaneously. Thus ATMSs are particularly well-suited for abductive reasoning, which aims at finding explanations for observed facts. If the current set P of premises and assumptions is consistent and covers an entailment set for n, then, by the monotonicity of the system, n is believed in the unique belief state (‘context’ in ATMS terminology) generated by P. If P changes to P′, then again one only has to check whether P′ is consistent and covers an entailment set of n. The adaptation process is one of shifting precomputed contexts instead of backtracking. ATMSs adopt the immediate mode of belief revision more decidedly than JTMSs do. The behaviour of an ATMS assumes a nonmonotonic flavour when an environment needed for the derivation of a certain node is vitiated by additional information showing that this environment is in fact a nogood.
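A naive sketch of label computation may clarify the idea (the node names, justifications, and nogood below are hypothetical, and a real ATMS computes and updates labels incrementally rather than by enumerating environments):

```python
from itertools import combinations

def derives(env, just, node):
    """Forward chaining through monotonic justifications (inlist, consequence)."""
    known, changed = set(env), True
    while changed:
        changed = False
        for inlist, n in just:
            if n not in known and set(inlist) <= known:
                known.add(n); changed = True
    return node in known

def label(node, assumptions, just, nogoods):
    """The ATMS label of a node: its minimal consistent environments,
    i.e. minimal subsets of the assumptions, containing no nogood,
    from which the node is derivable."""
    envs = [frozenset(c) for r in range(len(assumptions) + 1)
            for c in combinations(assumptions, r)
            if not any(set(ng) <= set(c) for ng in nogoods)
            and derives(c, just, node)]
    return [e for e in envs if not any(f < e for f in envs)]

# Hypothetical example: assumptions A, B, C; {A, B} is a nogood.
just = [(("A",), "x"), (("B", "C"), "x"), (("x",), "y")]
print(label("y", ("A", "B", "C"), just, [("A", "B")]))
# -> two minimal environments for y: {A} and {B, C}
```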
Important generalizations and logical reconstructions of the ATMS ap-
proach are provided by Reiter and de Kleer (1987) in terms of clause man-
agement systems, by Ginsberg (1988) in terms of bilattices, and by Fujiwara
and Honiden (1991) in terms of propositional Horn logic. The lattice-based
TMS of Brown, Gaucas and Benanav (1987) subsumes JTMS and ATMS
and is given a formal semantics in the unifying abstract framework of ‘log-
ics for justified belief’ in (Brown, Jr 1988). Martins and Shapiro’s (1988)
semantic network processing system combines an inference engine using a
kind of relevance logic with an ATMS-style dependency management (in
fact, historically it was the first realization of the multiple context idea).
See (Kumar 1990) for some recent developments of this system. Exten-
sions of ATMS which allow for an explicit treatment of nonmonotonicity
are hinted at by de Kleer (1986b, 1988), and investigated in a more systematic
fashion by Dressler (1989), Junker (1989), Giordano and Martelli
(1990), and Rodi and Pimentel (1991).
We conclude by sketching another monotonic approach to truth mainte-
nance. The distinctive feature of McAllester’s (1980, 1990) logic-based TMS
(LTMS) is that its justifications are clauses in propositional language, i.e.
disjunctions of atoms and negations of atoms. This makes it possible for
an LTMS to use justifications in a multidirectional manner. Nodes are la-
belled true or false, or they are not labelled at all. For example, a clause
¬a ∨ ¬b ∨ c forces the node c to be labelled true if a and b are labelled true,
but it will just as well force the node a to be labelled false if b is labelled
true and c is labelled false. This process of propagating labels is called Boolean
constraint propagation and is a logically incomplete, but extremely efficient
way of drawing conclusions from clausal constraints. LTMSs have become
increasingly influential, as is witnessed e.g. by the switch of the textbook
Artificial Intelligence Programming (Charniak et al. 1987) from offering an
introduction to truth maintenance based on JTMS in the first edition to
one based on LTMS in the second. A recent paper of McDermott (1991)
generalizes the LTMS idea to a comprehensive TMS which can cope with
JTMS-like nonmonotonicity as well as with ATMS-like multiple contexts.
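As a rough illustration of Boolean constraint propagation, the following Python sketch repeatedly detects clauses in which every literal but one is falsified and labels the remaining atom so as to satisfy the clause. The representation of clauses as (atom, sign) pairs is our own illustrative choice, not McAllester's:

```python
# Minimal sketch of Boolean constraint propagation (unit propagation)
# over clausal justifications, as in McAllester's LTMS.  sign=True
# stands for the atom itself, sign=False for its negation.

def propagate(clauses, labels):
    """Extend the partial labelling until no clause forces a new label.
    labels maps atoms to True/False; unlabelled atoms are absent."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [(a, s) for (a, s) in clause if a not in labels]
            satisfied = any(labels.get(a) == s for (a, s) in clause
                            if a in labels)
            # If no literal is true and exactly one is unassigned, that
            # literal must be made true to satisfy the clause.
            if not satisfied and len(unassigned) == 1:
                atom, sign = unassigned[0]
                labels[atom] = sign
                changed = True
    return labels

# The clause ¬a ∨ ¬b ∨ c from the text, used in both directions:
clause = [("a", False), ("b", False), ("c", True)]

print(propagate([clause], {"a": True, "b": True}))   # forces c true
print(propagate([clause], {"b": True, "c": False}))  # forces a false
```

The same clause thus drives conclusions in several directions, which is precisely the multidirectional use of justifications noted above; completeness is sacrificed (no case-splitting is performed), but each propagation pass is cheap.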
In conclusion, we think that truth maintenance systems fare well in
comparison with the coherentist approaches dealt with in Section 2 so long
as aspects of practical application are the main criteria of assessment. As
regards understanding on an abstract level, on the other hand, the latter
seem to compare favourably with the former. However, it has recently
been suggested that the two kinds of approaches are not so different after
all. ardenfors (1990) indicates how reasons may be incorporated into the
coherentist approach, while Doyle (1992) proposes ways to impose strong
principles of conservatism on the foundationalist framework. Cooperation
between the two camps, which has hardly begun at the time of writing, may
well be expected to lead to many new and illuminating insights.
Acknowledgements
The authors wish to thank Sven Ove Hansson and David Makinson for very
detailed and valuable hints and criticisms of an earlier version of this chap-
ter. They are also grateful to David Makinson for his permission to include
numerous results he produced in collaboration with the first author, and
to Bernhard Nebel for composing the section on computational complexity.
Thanks are also due to Oskar Dressler, Bernhard Nebel, Maurice Pagnucco
and Emil Weydert for a number of helpful comments.
References
Alchourr´on and Makinson 1981. C. Alchourr´on and D. Makinson. Hierar-
chies of regulations and their logic. In R. Hilpinen, editor, New Studies
in Deontic Logic, pages 125–148. Reidel, Dordrecht, 1981.
Alchourr´on and Makinson 1982. C. Alchourr´on and D. Makinson. On the
logic of theory change: contraction functions and their associated revi-
sion functions. Theoria, 48:14–37, 1982.
Alchourr´on and Makinson 1985. C. Alchourr´on and D. Makinson. On the
logic of theory change: Safe contraction. Studia Logica, 44:405–422, 1985.
Alchourr´on and Makinson 1986. C. Alchourr´on and D. Makinson. Maps
between some different kinds of contraction function: The finite case.
Studia Logica, 45:187–198, 1986.
Alchourr´on et al. 1985. C. Alchourr´on, P. G¨ardenfors, and D. Makinson.
On the logic of theory change: Partial meet contraction and revision
functions. Journal of Symbolic Logic, 50:510–530, 1985.
Belnap 1977a. N. D. Belnap. How a computer should think. In G. Ryle,
editor, Contemporary Aspects of Philosophy, pages 30–56. Oriel Press,
New York, 1977.
Belnap 1977b. N. D. Belnap. A useful four-valued logic. In G. Epstein
and J.M. Dunn, editors, Modern Uses of Multiple-valued Logic, pages
8–37. Reidel, Boston MA, 1977.
Borgida 1985. A. Borgida. Language features for flexible handling of ex-
ceptions in information systems. ACM Transactions on Database Sys-
tems, 10:563–603, 1985.
Brewka 1991. G. Brewka. Belief revision in a framework for default rea-
soning. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory
Change, pages 206–222. LNAI 465, Springer, Berlin, 1991.
Brown Jr and Shoham 1989. A. L. Brown Jr and Y. Shoham. New results
on semantical non-monotonic reasoning. In M. Reinfrank, J. de Kleer,
M. L. Ginsberg, and E. Sandewall, editors, Non-Monotonic Reasoning.
Proceedings of the 2nd International Workshop 1988, pages 19–26. LNAI
346, Springer-Verlag, Berlin, 1989.
Brown Jr et al. 1987. A. L. Brown Jr, D. E. Gaucas, and D. Benanav. An
algebraic foundation for truth maintenance. In Proceedings IJCAI-87,
pages 973–980, 1987.
Brown, Jr 1988. A. L. Brown, Jr. Logics of justified belief. In Proceedings
ECAI-88, pages 507–512, 1988.
Bulygin and Alchourr´on 1977. E. Bulygin and C. Alchourr´on. Unvollst¨andigkeit,
Widerspr¨uchlichkeit und Unbestimmtheit der Normenordnungen.
In G. Conte, R. Hilpinen, and G.H. von Wright, editors, Deontic
Logic and Semantics, pages 20–32. Athenaion, Wiesbaden, 1977.
Charniak et al. 1987. E. Charniak, C. Riesbeck, D. McDermott, and
J. Meehan. Artificial Intelligence Programming. Erlbaum, Baltimore, second
edition, 1987. (Chapter 15: ‘Data dependencies and reason maintenance
systems’.) (First edition by Charniak, Riesbeck and McDermott 1980.)
Cohen 1977. L. J. Cohen. The Probable and the Provable. Clarendon
Press, Oxford, 1977.
Cross and Thomason 1992. C. B. Cross and R.H. Thomason. Condition-
als and knowledge-base update. In G¨ardenfors, editor, Belief Revision,
pages 247–275. Cambridge University Press, 1992.
Dalal 1988a. M. Dalal. Investigations into a theory of knowledge base
revisions: Preliminary report. In Proceedings of the 7th NCAI, pages
475–479, 1988.
Dalal 1988b. M. Dalal. Updates in propositional databases. Technical Re-
port DCS-TR222, Department of Computer Science, Rutgers University,
New Brunswick, NJ, 1988.
de Kleer 1986a. J. de Kleer. An assumption-based TMS. Artificial Intelligence,
28:127–162, 1986.
de Kleer 1986b. J. de Kleer. Extending the ATMS. Artificial Intelligence,
28:163–196, 1986.
de Kleer 1986c. J. de Kleer. Problem solving with the ATMS. Artificial
Intelligence, 28:197–224, 1986.
de Kleer 1988. J. de Kleer. A general labeling algorithm for assumption-based
truth maintenance. In Proceedings 7th NCAI, pages 188–192,
1988.
Doyle and London 1980. J. Doyle and P. London. A selected descriptor-
indexed bibliography to the literature on belief revision. SIGART, 71:7–
23, 1980.
Doyle 1979. J. Doyle. A truth maintenance system. Artificial Intelligence,
12:231–272, 1979.
Doyle 1983. J. Doyle. The ins and outs of reason maintenance. In Pro-
ceedings 8th IJCAI, pages 349–351, 1983.
Doyle 1991. J. Doyle. Rational belief revision: Preliminary report. In
J. Allen, R. Fikes, and E. Sandewall, editors, Principles of Knowl-
edge Representation and Reasoning: Proceedings of the 2nd International
Conference, pages 163–174, San Mateo, Ca, 1991. Morgan Kaufmann.
Doyle 1992. J. Doyle. Reason maintenance and belief revision: Foundations
vs. coherence theories. In P. G¨ardenfors, editor, Belief Revision,
pages 29–51. Cambridge University Press, 1992.
Dressler 1989. O. Dressler. An extended basic ATMS. In M. Reinfrank,
J. de Kleer, M. L. Ginsberg, and E. Sandewall, editors, Non-Monotonic
Reasoning. Proceedings of the 2nd International Workshop 1988, pages
143–163, Berlin, 1989. LNAI 346, Springer.
Dubois and Prade 1988. D. Dubois and H. Prade. Possibility theory: An
Approach to Computerized Processing of Uncertainty. Plenum Press,
New York, 1988.
Dubois and Prade 1991a. D. Dubois and H. Prade. Epistemic entrench-
ment and possibilistic logic. Artificial Intelligence, 50:223–239, 1991.
Dubois and Prade 1991b. D. Dubois and H. Prade. Possibilistic logic,
preferential models, non-monotonicity and related issues. In Proceedings
12th IJCAI, 1991.
Dubois and Prade 1992. D. Dubois and H. Prade. Belief change and pos-
sibility theory. In P. G¨ardenfors, editor, Belief Revision, pages 142–182.
Cambridge University Press, 1992.
Eiter and Gottlob 1992. T. Eiter and G. Gottlob. On the complexity of
propositional knowledge base revision. Artificial Intelligence, 57:227–
270, 1992.
Elkan 1990. C. Elkan. A rational reconstruction of nonmonotonic truth
maintenance systems. Artificial Intelligence, 43:219–234, 1990.
Etherington 1987. D. W. Etherington. A semantics for default logic. In
Proceedings 10th IJCAI, pages 495–500, 1987.
Fagin et al. 1983. R. Fagin, J. D. Ullman, and M. Y. Vardi. On the seman-
tics of updates in databases. In Proceedings of Second ACM SIGACT-
SIGMOD, Atlanta, pages 352–365, 1983.
Fagin et al. 1986. R. Fagin, J.D. Ullman, G.M. Kuper, and M.Y. Vardi.
Updating logical databases. Advances in Computing Research, 3:1–18,
1986.
Freund 1993. M. Freund. Injective models and disjunctive relations. Jour-
nal of Logic and Computation, 3:231–247, 1993.
Fuhrmann and Hansson 1994. A. Fuhrmann and S.O. Hansson. A survey
of multiple contractions. Journal of Logic, Language and Information,
3:39–76, 1994.
Fuhrmann 1988. A. Fuhrmann. Relevant Logics, Modal Logics, and The-
ory Change. PhD thesis, Australian National University, Canberra, 1988.
Fuhrmann 1991. A. Fuhrmann. Theory contraction through base contrac-
tion. Journal of Philosophical Logic, 20:175–203, 1991.
Fujiwara and Honiden 1989. Y. Fujiwara and S. Honiden. Relating the
TMS to autoepistemic logic. In Proceedings IJCAI-89, pages 1199–1205,
1989.
Fujiwara and Honiden 1991. Y. Fujiwara and S. Honiden. On logical
foundations of the ATMS. In J.P. Martins and M. Reinfrank, editors,
Truth Maintenance Systems, pages 125–135. LNAI 515, Springer-Verlag,
Berlin, 1991.
G¨ardenfors and Makinson 1988. P. G¨ardenfors and D. Makinson. Revisions
of knowledge systems using epistemic entrenchment. In M. Vardi,
editor, Proceedings of the Second Conference on Theoretical Aspects of
Reasoning about Knowledge, pages 83–95, Los Altos, CA, 1988. Morgan
Kaufmann.
G¨ardenfors and Makinson 1994. P. G¨ardenfors and D. Makinson. Nonmonotonic
inference based on expectations. Artificial Intelligence, 1994.
To appear.
G¨ardenfors 1978. P. G¨ardenfors. Conditionals and changes of belief. In
I. Niiniluoto and R. Tuomela, editors, The Logic and Epistemology of
Scientific Change. Acta Philosophica Fennica 30, pages 381–404. North-
Holland, Amsterdam, 1978.
G¨ardenfors 1982. P. G¨ardenfors. Rules for rational changes of belief. In
T. Pauli, editor, Philosophical Essays Dedicated to Lennart Åqvist on
His Fiftieth Birthday, pages 88–101. University of Uppsala Philosophical
Studies 34, 1982.
G¨ardenfors 1984. P. G¨ardenfors. The dynamics of belief as a basis for
logic. British Journal for the Philosophy of Science, 35:1–10, 1984.
G¨ardenfors 1986. P. G¨ardenfors. Belief revisions and the Ramsey test for
conditionals. Philosophical Review, 95:81–93, 1986.
G¨ardenfors 1987. P. G¨ardenfors. Variations on the Ramsey test: More
triviality results. Studia Logica, 46:321–332, 1987.
G¨ardenfors 1988. P. G¨ardenfors. Knowledge in Flux: Modeling the Dynamics
of Epistemic States. Bradford Books, MIT Press, Cambridge,
Mass, 1988.
G¨ardenfors 1989. P. G¨ardenfors. The dynamics of normative systems. In
A.A. Martino, editor, Proceedings of the 3rd International Congress on
Logica, Informatica, Diritto, Consiglio Nazionale delle Ricerche, Florence,
pages 293–299, 1989. (Translated into Italian as ‘La dinamica
dei sistemi normativi’, in A. A. Martino, ed., Sistemi esperti nel diritto,
Cedam, Padova, 1989, 283–291.)
G¨ardenfors 1990. P. G¨ardenfors. The dynamics of belief systems: Foundations
vs. coherence theories. Revue Internationale de Philosophie, 44:24–46,
1990.
G¨ardenfors 1991. P. G¨ardenfors. Nonmonotonic inference based on expectations:
A preliminary report. In J. Allen, R. Fikes, and E. Sandewall,
editors, Principles of Knowledge Representation and Reasoning: Pro-
ceedings of the 2nd International Conference, pages 585–590, San Mateo,
Ca, 1991. Morgan Kaufmann.
Garey and Johnson 1979. M.R. Garey and D.S. Johnson. Computers and
Intractability—A Guide to the Theory of NP-Completeness. Freeman,
San Francisco, CA, 1979.
Gelfond and Lifschitz 1988. M. Gelfond and V. Lifschitz. The stable
model semantics for logic programming. In K. Bowen and R. Kowal-
ski, editors, Logic Programming: Proceedings of the 5th International
Conference, pages 1070–1080. MIT Press, 1988.
Ginsberg 1986. M. L. Ginsberg. Counterfactuals. Artificial Intelligence,
30:35–79, 1986.
Ginsberg 1988. M. L. Ginsberg. Multivalued logics: A uniform approach
to reasoning in artificial intelligence. Computational Intelligence, 4:265–
316, 1988.
Giordano and Martelli 1990. L. Giordano and A. Martelli. An abductive
characterization of the TMS. In Proceedings 9th ECAI-90, pages 308–313,
London, 1990. Pitman.
Giordano and Martelli 1991. L. Giordano and A. Martelli. Truth main-
tenance systems and belief revision. In J.P. Martins and M. Reinfrank,
editors, Truth Maintenance Systems, pages 71–86. LNAI 515, Springer-
Verlag, Berlin, 1991.
Goodman 1947. N. Goodman. The problem of counterfactual conditionals.
1947. Reprinted in Fact, Fiction, and Forecast, pages 13–34.
Athlone Press, London, 1954. Second edition Bobbs-Merrill, Indianapolis,
New York, Kansas City, 1965.
Goodwin 1982. J. Goodwin. An improved algorithm for non-monotonic dependency
network update. Technical Report LITH-MAT-T-82-23, University
of Link¨oping, 1982.
Goodwin 1987. J. Goodwin. A Theory and System for Non-Monotonic
Reasoning. Link¨oping Studies in Science and Technology, Dissertations
No. 165, University of Link¨oping, 1987.
Gottlob 1992. G. Gottlob. Complexity results for nonmonotonic logics.
Journal of Logic and Computation, 2, 1992.
Grahne 1991. G. Grahne. Updates and counterfactuals. In J.A. Allen,
R. Fikes, and E. Sandewall, editors, Principles of Knowledge Represen-
tation and Reasoning: Proceedings of the 2nd International Conference,
pages 269–276, San Mateo, Ca., 1991. Morgan Kaufmann.
Grove 1988. A. Grove. Two modellings for theory change. Journal of
Philosophical Logic, 17:157–170, 1988.
Halpern et al. 1986 1988 1990. J.Y. Halpern, M.Y. Vardi, and R.J.
Parikh, editors. Proceedings of the First/Second/Third Conference on
Theoretical Aspects of Reasoning about Knowledge. Morgan Kaufmann,
1986, 1988, 1990.
Hamblin 1959. C. L. Hamblin. The modal ‘probably’. Mind, 68:234–240,
1959.
Hansson 1989. S. O. Hansson. New operators for theory change. Theoria,
55:114–132, 1989.
Hansson 1991a. S. O. Hansson. Belief base dynamics. PhD thesis, Uppsala
University, 1991.
Hansson 1991b. S. O. Hansson. Belief contraction without recovery. Stu-
dia Logica, 50:251–260, 1991.
Hansson 1992. S. O. Hansson. In defense of base contraction. Synthese,
91:239–245, 1992.
Hansson 1993a. S. O. Hansson. Reversing the Levi identity. Journal of
Philosophical Logic, 22:637–669, 1993.
Hansson 1993b. S. O. Hansson. Theory contraction and base contraction
unified. Journal of Symbolic Logic, 58:602–625, 1993.
Hansson 1994. S. O. Hansson. Hidden structures of belief. In A. Fuhrmann
and H. Rott, editors, Logic, Action and Information. de Gruyter, Berlin,
1994. To appear.
Harman 1986. G. Harman. Change in View. Bradford Books, MIT Press,
Cambrige, Mass., 1986.
Harper 1977. W. L. Harper. Rational conceptual change. Philosophy of
Science Association, 2:462–494, 1977.
Hilpinen 1981. R. Hilpinen. On normative change. In E. Morscher and
R. Stranzinger, editors, Ethics: Foundations, Problems, and Applica-
tions, pages 155–164. H¨older-Pichler-Tempsky, Wien, 1981.
Hunter 1990. D. Hunter. Parallel belief revision. In R.D. Schachter, T.S.
Levitt, L.N. Kanal, and J.F. Lemmer, editors, Uncertainty in Artificial
Intelligence 4, pages 241–251. North-Holland, Amsterdam, 1990.
Jackson and Pais 1991. P. Jackson and J. Pais. Semantic accounts of be-
lief revision. In J.P. Martins and M. Reinfrank, editors, Truth Mainte-
nance Systems, pages 155–177. LNAI 515, Springer-Verlag, Berlin, 1991.
Jeffrey 1965. R. C. Jeffrey. The Logic of Decision. Chicago University
Press, Chicago, 1965. Second Edition 1983.
Johnson 1990. D.S. Johnson. A catalog of complexity classes. In J. van
Leeuwen, editor, Handbook of Theoretical Computer Science, Vol. A,
pages 67–161. MIT Press, 1990.
Junker 1989. U. Junker. A correct non-monotonic ATMS. In Proceedings
IJCAI-89, pages 1049–1054, 1989.
Katsuno and Mendelzon 1989. H. Katsuno and A.O. Mendelzon. A uni-
fied view of propositional knowledge base updates. In Proceedings of
the 11th International Joint Conference on Artificial Intelligence, San
Mateo, CA, pages 269–276. Morgan Kaufmann, 1989.
Katsuno and Mendelzon 1991. H. Katsuno and A.O. Mendelzon. Propositional
knowledge base revision and minimal change. Artificial Intelligence,
52:263–294, 1991.
Katsuno and Mendelzon 1992. H. Katsuno and A.O. Mendelzon. On
the difference between updating a knowledge base and revising it. In
P. G¨ardenfors, editor, Belief Revision, pages 183–203. Cambridge University
Press, Cambridge, 1992. Earlier version in Proceedings 2nd KR,
1991, 387-394.
Katsuno and Satoh 1991. H. Katsuno and K. Satoh. A unified view of
consequence relation, belief revision, and conditional logic. In Proceed-
ings of the 12th IJCAI, pages 406–412, 1991.
Kratzer 1981. A. Kratzer. Partition and revision: The semantics of coun-
terfactuals. Journal of Philosophical Logic, 10:201–216, 1981.
Kraus et al. 1990. S. Kraus, D. Lehmann, and M. Magidor. Nonmono-
tonic reasoning, preferential models and cumulative logics. Artificial
Intelligence, 44:167–207, 1990.
Kumar 1990. D. Kumar, editor. Current Trends in SNePS—Semantic Network
Processing System. LNAI 437, Springer-Verlag, Berlin, 1990.
Lehmann and Magidor 1992. D. Lehmann and M. Magidor. What does a
conditional knowledge base entail? Artificial Intelligence, 55:1–60, 1992.
Lewis 1973. D. Lewis. Counterfactuals. Blackwell, Oxford, 1973. Second
edition 1986.
Lewis 1979. D. Lewis. A problem about permission. In E. Saarinen et al.,
editors, Essays in Honour of Jaakko Hintikka, pages 163–179. Reidel,
Dordrecht, 1979.
Lewis 1981. D. Lewis. Ordering semantics and premise semantics for coun-
terfactuals. Journal of Philosophical Logic, 10:217–234, 1981.
Lindstr¨om and Rabinowicz 1989. S. Lindstr¨om and W. Rabinowicz. On
probabilistic representation of non-probabilistic belief revision. Journal
of Philosophical Logic, 18:69–101, 1989.
Lindstr¨om and Rabinowicz 1991. S. Lindstr¨om and W. Rabinowicz. Epis-
temic entrenchment with incomparabilities and relational belief revision.
In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change,
pages 93–126. LNAI 465, Springer-Verlag, Berlin, 1991.
Lindstr¨om 1991. S. Lindstr¨om. A semantic approach to nonmonotonic reasoning:
Inference operations and choice. Technical Report Uppsala
Prints and Preprints in Philosophy, Number 6, 1991, Department of
Philosophy, University of Uppsala, 1991.
Makinson and G¨ardenfors 1991. D. Makinson and P. G¨ardenfors. Rela-
tions between the logic of theory change and nonmonotonic logic. In
A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change,
pages 185–205. LNAI 465, Springer-Verlag, Berlin, 1991.
Makinson 1985. D. Makinson. How to give it up: A survey of some formal
aspects of the logic of theory change. Synthese, 62:347–363, 1985.
Makinson 1987. D. Makinson. On the status of the postulate of recovery
in the logic of theory change. Journal of Philosophical Logic, 16:383–394,
1987.
Makinson 1989. D. Makinson. General theory of cumulative inference. In
M. Reinfrank, J. de Kleer, M. L. Ginsberg, and E.Sandewall, editors,
Non-Monotonic Reasoning. Proceedings of the 2nd International Work-
shop 1988, pages 1–18, Berlin, 1989. LNAI 346, Springer-Verlag.
Makinson 1990. D. Makinson. The G¨ardenfors impossibility theorem in
non-monotonic contexts. Studia Logica, 49:1–6, 1990.
Makinson 1993. D. Makinson. Five faces of minimality. Studia Logica,
52:339–379, 1993.
Marek and Truszczynski 1989. W. Marek and M. Truszczynski. Autoepis-
temic logic, defaults, and truth maintenance (first draft). Technical re-
port, University of Kentucky at Lexington, 1989.
Martins and Shapiro 1988. J. P. Martins and S.C. Shapiro. A model for
belief revision. Artificial Intelligence, 35:25–79, 1988.
Martins 1990. J. Martins. The truth, the whole truth, and nothing but the
truth: An indexed bibliography to the literature on truth maintenance
systems. AI Magazine, 11(5):7–25, 1990.
McAllester 1980. D. McAllester. An outlook on truth maintenance. Tech-
nical Report Memo 551, MIT AI-Lab, 1980.
McAllester 1990. D. McAllester. Truth maintenance. In Proceedings 8th
NCAI, pages 1109–1116, 1990.
McDermott 1991. D. McDermott. A general framework for reason main-
tenance. Artificial Intelligence, 50:289–329, 1991.
Morreau 1992. M. Morreau. Epistemic semantics for counterfactuals.
Journal of Philosophical Logic, 21:33–62, 1992.
Nayak 1991. A. C. Nayak. Foundational belief change. Technical report,
University of Rochester, 1991. To appear in the Journal of Philosophical
Logic.
Nebel 1989. B. Nebel. A knowledge level analysis of belief revision. In
R. Brachman, H. Levesque, and R. Reiter, editors, Principles of Knowl-
edge Representation and Reasoning: Proceedings of the 1st International
Conference, pages 301–311, San Mateo, Ca., 1989. Morgan Kaufmann.
Nebel 1990. B. Nebel. Reasoning and Revision in Hybrid Representation
Systems. LNAI 422, Springer-Verlag, Berlin, 1990.
Nebel 1992. B. Nebel. Syntax-based approaches to belief revision. In
P. G¨ardenfors, editor, Belief Revision, pages 52–88. Cambridge Univer-
sity Press, 1992.
Nieder´ee 1991. R. Nieder´ee. Multiple contraction: A further case against
G¨ardenfors’ principle of recovery. In A. Fuhrmann and M. Morreau, editors,
The Logic of Theory Change, pages 322–334. LNAI 465, Springer-Verlag,
Berlin, 1991.
Pearce and Rautenberg 1991. D. Pearce and W. Rautenberg. Proposi-
tional logic based on the dynamics of disbelief. In A. Fuhrmann and
M. Morreau, editors, The Logic of Theory Change, pages 243–258.
LNAI 465, Springer-Verlag, Berlin, 1991.
Pearl 1988. J. Pearl. Probabilistic Reasoning in Intelligent Systems. Mor-
gan Kaufmann, San Mateo, Ca., 1988.
Pimentel and Cuadrado 1989. S. G. Pimentel and J. L. Cuadrado. A truth
maintenance system based on stable models. In Proceedings of the North
American Conference on Logic Programming, pages 274–290. MIT Press,
1989.
Pollock 1976. J. Pollock. Subjunctive Reasoning. Reidel, Dordrecht, 1976.
Poole 1988. D. Poole. A logical framework for default reasoning. Artificial
Intelligence, 36:27–47, 1988.
Priest et al. 1989. G. Priest, R. Routley, and J. Norman. Paraconsis-
tent Logic. Essays on the Inconsistent. Philosophia, M¨unchen, Hamden,
Wien, 1989.
Quine and Ullian 1978. W. V. O. Quine and J. S. Ullian. The Web of
Belief. Random House, New York, second edition, 1978.
Quine 1951. W. V. O. Quine. Two dogmas of empiricism. Philosophical
Review, 60:20–43, 1951. Reprinted in Quine 1953, pp. 20-46.
Quine 1953. W. V. O. Quine. From a Logical Point of View. Harvard
University Press, Cambridge, Mass, 1953.
Rao and Foo 1989. A. S. Rao and N. Y. Foo. Formal theories of belief
revision. In R. Brachman, H. Levesque, and R. Reiter, editors, Princi-
ples of Knowledge Representation and Reasoning: Proceedings of the 1st
International Conference, pages 369–380, San Mateo, Ca., 1989. Morgan
Kaufmann.
Reinfrank et al. 1989. M. Reinfrank, O. Dressler, and G. Brewka. On the
relation between truth maintenance and autoepistemic logic. In Proceed-
ings IJCAI-89, pages 1206–1212, 1989.
Reinfrank 1989. M. Reinfrank. Fundamentals and logical foundations of
truth maintenance. Technical Report Studies in Science and Technology,
Dissertations No 221, University of Link¨oping, 1989.
Reiter and de Kleer 1987. R. Reiter and J. de Kleer. Foundations of
assumption-based truth maintenance systems: Preliminary report. In
Proceedings 6th NCAI, pages 183–188, 1987.
Reiter 1984. R. Reiter. Towards a logical reconstruction of relational
database theory. In M. L. Brodie, J. Mylopoulos, and J. Schmidt,
editors, On Conceptual Modelling: Perspectives from Artificial In-
telligence, Databases, and Programming Languages, pages 191–233.
Springer-Verlag, Berlin, 1984.
Rescher 1964. N. Rescher. Hypothetical Reasoning. North-Holland, Ams-
terdam, 1964.
Rescher 1973. N. Rescher. The Coherence Theory of Truth. Oxford Uni-
versity Press, Oxford, 1973.
Rescher 1976. N. Rescher. Plausible Reasoning. Van Gorcum, Assen, Am-
sterdam, 1976.
Rodi and Pimentel 1991. W. L. Rodi and S. G. Pimentel. A nonmonotonic
assumption-based TMS using stable bases. In Principles of Knowledge
Representation and Reasoning: Proceedings of the 2nd International
Conference, pages 485–495, San Mateo, Ca., 1991. Morgan Kaufmann.
Rott 1991a. H. Rott. A nonmonotonic conditional logic for belief revision
I. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change,
pages 135–183. LNAI 465, Springer-Verlag, Berlin, 1991.
Rott 1991b. H. Rott. Reduktion und Revision: Aspekte des nichtmonoto-
nen Theorienwandels. Verlag Peter Lang, Frankfurt a.M, 1991.
Rott 1991c. H. Rott. Two methods of constructing contractions and revi-
sions of knowledge systems. Journal of Philosophical Logic, 20:149–173,
1991.
Rott 1992a. H. Rott. On the logic of theory change: More maps between
different kinds of contraction function. In P. G¨ardenfors, editor, Belief
Revision, pages 122–141. Cambridge University Press, Cambridge, 1992.
Rott 1992b. H. Rott. Preferential belief change using generalized epis-
temic entrenchment. Journal of Logic, Language and Information, 1:45–
78, 1992.
Rott 1993. H. Rott. Belief contraction in the context of the general theory
of rational choice. Journal of Symbolic Logic, 58:1426–1450, 1993.
Satoh 1988. K. Satoh. Nonmonotonic reasoning by minimal belief revi-
sion. In Proceedings of the 2nd Conference on Theoretical Aspects of
Reasoning about Knowledge, pages 97–111, 1988. Proceedings of the In-
ternational Conference on Fifth Generation Computer Systems, ICOT,
December 1988, 455-462.
Schlechta 1991a. K. Schlechta. Some results on theory revision. In
A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change,
pages 72–92. LNAI 465, Springer-Verlag, Berlin, 1991.
Schlechta 1991b. K. Schlechta. Theory revision and probability. Notre
Dame Journal of Formal Logic, 32:307–319, 1991.
Shackle 1961. G.L.S. Shackle. Decision, Order and Time in Human Af-
fairs. Cambridge University Press, Cambridge, 1961.
Shafer 1976. G. Shafer. A Mathematical Theory of Evidence. Princeton
University Press, Princeton, 1976.
Shenoy 1991. P. P. Shenoy. On Spohn’s rule for revision of beliefs. Inter-
national Journal of Approximate Reasoning, 5:149–181, 1991.
Shoham 1988. Y. Shoham. Reasoning about Change: Time and Causation
from the Standpoint of Artificial Intelligence. MIT Press, Cambridge,
Mass, 1988.
Sosa 1980. E. Sosa. The raft and the pyramid: Coherence versus founda-
tions in the theory of knowledge. Midwest Studies in Philosophy, 5:3–25,
1980. Also in E. Sosa, Knowledge in Perspective, Cambridge University
Press, Cambridge 1991, pp. 165–191.
Spohn 1988. W. Spohn. Ordinal conditional functions: A dynamic theory
of epistemic states. In W.L. Harper and B. Skyrms, editors, Causation in
Decision, Belief Change, and Statistics, vol. 2, pages 105–134. Dordrecht,
Reidel, 1988.
Spohn 1990. W. Spohn. A general non-probabilistic theory of inductive
reasoning. In R.D. Schachter, T.S. Levitt, L.N. Kanal, and J.F. Lemmer,
editors, Uncertainty in Artificial Intelligence 4, pages 149–15. North-
Holland, Amsterdam, 1990.
Stallman and Sussman 1977. R. Stallman and G.J. Sussman. For-
ward reasoning and dependency-directed backtracking in a system for
computer-aided circuit analysis. Artificial Intelligence, 9:135–196, 1977.
Stalnaker 1968. R. Stalnaker. A theory of conditionals. In N. Rescher, ed-
itor, Studies in Logical Theory, American Philosophical Quarterly Mono-
graph Series 2, pages 98–112. Blackwell, Oxford, 1968.
Veltman 1976. F. Veltman. Prejudices, presuppositions and the theory of
counterfactuals. In J. Groenendijk and M. Stokhof, editors, Amsterdam
Papers of Formal Grammar, Vol. I, pages 248–281. Centrale Interfacul-
teit, Universiteit Amsterdam, 1976.
Weydert 1992. E. Weydert. Relevance and revision: About generalizing
syntax-based belief revision. In D. Pearce and G. Wagner, editors, Logics
in AI, European Workshop JELIA ’92, pages 126–138. Springer, 1992.
Winslett 1988. M. Winslett. Reasoning about action using a possible mod-
els approach. In Proceedings of the Seventh National Conference on Ar-
tificial Intelligence, pages 89–93, 1988.
Winslett 1990. M. Winslett. Updating Logical Databases. Cambridge Uni-
versity Press, Cambridge, 1990.
Woods 1975. W. Woods. What’s in a link: Foundations for semantic
networks. In D. Bobrow and A. Collins, editors, Representation and
Understanding: Studies in Cognitive Science. Academic Press, 1975.
Zadeh 1978. L. A. Zadeh. Fuzzy sets as a basis for a theory of possibility.
Fuzzy Sets and Systems, 1:3–28, 1978. See also ‘Fuzzy logic and
approximate reasoning’, Synthese, 30:407–428, 1975.
Article
Full-text available
Truth maintenance is a collection of techniques for doing belief revision. A truth maintenance system's task is to maintain a set of beliefs in such a way that they are not known to be contradictory and no belief is kept without a reason. Truth maintenance systems were introduced in the late seventies by Jon Doyle and in the last five years there has been an explosion of interest in this kind of systems. In this paper we present an annotated bibliography to the literature of truth maintenance systems, grouping the works referenced according to several classifications.
Article
Full-text available
Many of the philosophically most interesting notions are overtly or covertly epistemological. Overtly epistemological notions are, of course, the concept of belief itself, the concept of subjective probability, and, presumably the most important, the concept of a reason in the sense of a theoretical reason for believing something. Covertly epistemological notions are much more difficult to understand; maybe, they are not epistemological at all. However, a very promising strategy for understanding them is to try to conceive of them as covertly epistemological. One such notion is the concept of objective probability;1 the concept of explanation is another. A third, very important one is the notion of causation, which has been epistemologically problematic ever since Hume. Finally, there is the notion of truth. Many philosophers believe that there is much to be said for a coherence theory of truth or internal realism; they hold some version of the claim that something for which it is impossible to get a true reason cannot be true, and that truth is therefore covertly epistemological.
Conference Paper
Full-text available
Assumption-based truth maintenance systems have become a powerful and widely used tool in Artifi- cial Intelligence problem solvers. The basic ATMS is restricted to accepting only horn clause justifica- tions. Although various generalizations have been made and proposed to allow an ATMS to handle more general clauses, they have all involved the addition of complex and difficult to integrat~' hyperresolution rules. This paper presents an alternative approach based on negated assumptions which integrates sim- ply and cleanly into existing ATMS algorithms and which does not require the use of a hyperresolution rule to ensure label consistency.
Article
The paper surveys some recent work on formal aspects of the logic of theory change. It begins with a general discussion of the intuitive processes of contraction and revision of a theory, and of differing strategies for their formal study. Specific work is then described, notably Grdenfors'' postulates for contraction and revision, maxichoice contraction and revision functions and the condition of orderliness, partial meet contraction and revision functions and the condition of relationality, and finally the operations of safe contraction and revision. Verifications and proofs are omitted, with references given to the literature, but definitions and principal results are presented with rigour, along with discussion of their significance.
Article
In some recent papers, the authors and Peter Grdenfors have defined and studied two different kinds of formal operation, conceived as possible representations of the intuitive process of contracting a theory to eliminate a proposition. These are partial meet contraction (including as limiting cases full meet contraction and maxichoice contraction) and safe contraction. It is known, via the representation theorem for the former, that every safe contraction operation over a theory is a partial meet contraction over that theory. The purpose of the present paper is to study the relationship more finely, by seeking an explicit map between the component orderings involved in each of the two kinds of contraction. It is shown that at least in the finite case a suitable map exists, with the consequence that the relational, transitively relational, and antisymmetrically relational partial meet contraction functions form identifiable subclasses of the safe contraction functions, over any theory finite modulo logical equivalence. In the process of constructing the map, as the composition of four simple transformations, mediating notions of bottom and top contraction are introduced. The study of the infinite case remains open.
Article
An assumption-based truth maintenance system provides a very general facility for all types of default reasoning. However, the ATMS is only one component of an overall reasoning system. This paper presents a set of concerns for interfacing with the ATMS, an interface protocol, and an example of a constraint language based on the protocol. The paper concludes with a comparison of the ATMS and the view of problem solving it entails with other approaches.
Article
To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their beliefs when discoveries contradict these assumptions. The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of program actions and in guiding the course of action of a problem solver. This paper describes (1) the representations and structure of the tms, (2) the mechanisms used to revise the current set of beliefs, (3) how dependency-directed backtracking changes the current set of assumptions, (4) techniques for summarizing explanations of beliefs, (5) how to organize problem solvers into “dialectically arguing” modules, (6) how to revise models of the belief systems of others, and (7) methods for embedding control structures in patterns of assumptions. We stress the need of problem solvers to choose between alternative systems of beliefs, and outline a mechanism by which a problem solver can employ rules guiding choices of what to believe, what to want, and what to do.