Computing Belief Revision

Bettina Berendt, Alan Smaill

Abstract This paper introduces a belief revision system on a foundational basis which uses epistemic entrenchment to guide its decisions in the face of conflict. The formalism satisfies the AGM postulates for belief revision and thus combines foundational and coherence approaches. The belief revision system is based on a metatheoretic treatment of first-order logic and treats expansions and contractions as primitives. An algorithm is presented which describes a policy of accommodating new information. With minimal data requirements, this algorithm is guaranteed to lead to a consistent belief system while assembling and keeping as much information as possible and respecting a preference ordering over beliefs. A program based on this formal system has been implemented. A decision-theoretic interpretation of this formalism is discussed. Key words: handling inconsistency, truth maintenance, belief revision

1 Introduction

1.1 Foundational and coherence approaches in belief revision

In the theory of belief revision, two main approaches can be distinguished.1 The foundations approach holds that propositions are believed if and only if some justification can be exhibited. The coherence approach focuses on the global

Department of Artificial Intelligence, University of Edinburgh, 80 South Bridge, Edinburgh

EH1 1HN; Email: {bettina,[email protected]}
1 For a recent survey of the field, see [Gardenfors 92]. [Doyle 92a] in that volume discusses foundational and coherence approaches.



structure: propositions are believed if they cohere with other beliefs; justifications are not essential.2 `Coherence' usually means or includes logical consistency. Central to most coherence approaches is a notion of `conservativity': as many beliefs as possible should be retained if the system of beliefs is changed. Foundational systems distinguish two kinds of beliefs: `foundational beliefs', which are taken to be self-evident, i.e. do not require any justifications, and thus serve as the last steps in a justification of any belief, and `derived beliefs'. Coherence theory does not make any such distinction. The standard example of foundational systems are truth maintenance systems (e.g. [Doyle 79]). Coherence systems have typically been given in terms of specifications of their desired behaviour, rather than via an algorithmic implementation. The standard example of this is the system of AGM postulates ([Alchourron et al. 85]).

1.2 A combined approach

The system to be introduced here3 tries to combine the strengths of both approaches. We perceive these strengths to be as follows. Foundational systems have a clearer style of reasoning: at any point in time, all beliefs are either justified by other beliefs or explicitly marked as assumptions. In a coherence system, it is possible that there are beliefs which were derived at an earlier time, have lost all their justifications in the meantime, but have survived because they cohere with the other beliefs. Particularly in systems which are designed as decision aids, this can make explanations of why beliefs are held difficult to generate and/or understand by the user. Another advantage of foundationalism is that fewer data are required (cf. section 4.3). On the other hand, coherence approaches have the advantage of using a declarative logic in order to provide a clear semantics. Our formalism satisfies the AGM rationality postulates for arbitrary belief revision schemes. The second important element from coherence theories is the basis for choice in situations when logic and the structure of justifications alone allow more than one way of revising the system of beliefs, represented as a belief set. If, by the acquisition of new information, a statement f and its negation ¬f both become believed, there are two ways of reaching a new consistent belief set: either f and beliefs supporting it or ¬f and beliefs supporting it could be dropped. In the present system, a preference ordering over beliefs is used in order to decide such conflicts. This ordering is called epistemic entrenchment after [Gardenfors 88].

2 The distinction is due to [Harman 86].
3 For a full description, see [Berendt 92].


Epistemic entrenchment is thought of as representing the following common phenomenon in human reasoning: deciding a conflict (of believing both f and ¬f) requires going back in a chain of reasoning and deciding on the relative merits of beliefs that underlie the conflicting beliefs. These are very likely to impinge not only on the present problem, but on a larger part of the reasoner's system of belief. And it seems that among these, some beliefs prove to be more tenacious than others. The more tenacious beliefs will usually be those that are more central to the reasoner's whole system of beliefs, that are more useful for his general style of argumentation. This phenomenon is well documented for scientific (e.g. [Quine 70], p. 100) as well as common-sense reasoning (e.g. [Festinger 57], [Wicklund / Brehm 76]).

2 Representation of beliefs and inference

A logic-based formalism is used to represent belief. Beliefs are modeled as statements in (a restricted) first-order logic. The deductive process in this logic is modeled in an appropriate meta-theory. The reasoner starts from a set of assumptions, which are taken to be self-evident beliefs that need no justification. Only formulae derivable from these are believed. The assumptions are distinguished from other beliefs by the meta-theory. The agent's beliefs are assumed to be closed under the inference system used. If the agent's theory includes some formula f and its negation ¬f, this contradiction is resolved by choosing a theory that only includes one of them. This choice is made from a meta-level viewpoint, because properties outside the object-level logic of beliefs (the assumptions' degrees of epistemic entrenchment) serve as decision criteria. In order to allow this, reasoning about what is believed (on the object level) is effected on the meta level, via the predicate BEL(ieved). This meta level is consistent. If certain sentences are derivable in it (namely that there is an f such that both f and ¬f are BELieved), a decision rule is invoked to determine which subset of object-level beliefs to choose. The next two sections introduce the object and the meta logics.

2.1 The object level

The object level logic is a function-free first-order logic.

1. Terms: only constants are taken as terms, of which there are only finitely many.

2. Basic formulae: are of the form p(t1, t2, …, tn), where p is an n-ary predicate symbol and the ti are terms.


3. Well-formed formulae: are basic formulae, negations of basic formulae, or of the form A1 ∧ A2 ∧ … ∧ An ⊃ (¬)B, for basic formulae A1, …, An, B.

We assume the standard notion of logical consequence for the object theory. Only the assumptions (self-evident beliefs) from the object level are used directly in reasoning - they serve as starting points for deriving more beliefs. The system reflects on these assumptions and their consequences in terms of the meta language only. Belief sets are defined in terms of the meta language (cf. section 2.3). So the system does not use the object level logic when it reasons about beliefs and what they imply.
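As an illustration, the restricted object language can be encoded directly. The following Python sketch is our own (the paper does not specify the implementation language or data structures); it represents constants, basic formulae, negated basic formulae, and rule-shaped well-formed formulae:

```python
# Illustrative encoding of the restricted object language: constants as
# strings, basic formulae as predicate/argument pairs, and well-formed
# formulae as atoms, negated atoms, or rules A1 ∧ … ∧ An ⊃ (¬)B.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atom:
    pred: str                 # n-ary predicate symbol
    args: Tuple[str, ...]     # terms: constants only, finitely many

@dataclass(frozen=True)
class Not:
    atom: Atom                # negation of a basic formula

Literal = Union[Atom, Not]

@dataclass(frozen=True)
class Rule:
    body: Tuple[Atom, ...]    # conjunction of basic formulae
    head: Literal             # (possibly negated) basic formula

# Example: penguin(tweety) ⊃ ¬flies(tweety)
r = Rule(body=(Atom("penguin", ("tweety",)),),
         head=Not(Atom("flies", ("tweety",))))
```

The `frozen=True` dataclasses make formulae hashable, so they can be stored in the sets used to represent belief bases later on.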

2.2 The meta level

The meta language is a sorted first-order language with equality. The sorts are well-formed formulae of the object language and rational numbers between 0 and 1. The role of the sorts will become clear in the discussion of the predicates below.4

1. Logical constants and logical variables denote well-formed formulae of the object language and rational numbers between 0 and 1. Every propositional formula of the object language (say f) is assigned a logical constant (say f̂), and every rational number between 0 and 1 is assigned a logical constant (itself).

2. Terms are logical constants and logical variables and complex terms constructed from formulae of the object language in the following way:

- f ∧ g is denoted by and(f̂, ĝ)
- f ⊃ g is denoted by implies(f̂, ĝ)
- ¬f is denoted by not(f̂)

Where there are no ambiguities, the argument formulae will however be written in their original form (f ∧ g etc.) in order to simplify notation. So for example (IMP), the modus ponens for beliefs,

∀f̂ ∀ĝ [BEL(implies(f̂, ĝ)) ⊃ (BEL(f̂) ⊃ BEL(ĝ))]

will be written as

∀f ∀g [BEL(f ⊃ g) ⊃ (BEL(f) ⊃ BEL(g))].

4 The design of the meta level language is influenced by [Konolige 80]'s system.


3. Well-formed formulae are

- p(t1, t2, …, tn), where p is an n-ary predicate symbol and the ti are terms.
- ¬f, f ∧ g, f ∨ g, f ⊃ g, f ≡ g, ∀X: f, ∃X: f, where f and g are well-formed formulae, and X is a vector of variables free in f.

Syntax and semantics are defined in the usual way.5 The meta language has a number of special predicates.

- Assumptions: ASSUMPTION(f) is true if formula f is one of the assumptions of the agent, i.e. a belief that is justified by the empty set.

- Tautologies: TAUTOLOGY(f) is true if formula f is a constructive tautology.

- Belief: BEL(f) is true if formula f is in the agent's belief set. BEL derives belief in possibly non-atomic formulae of the object language from belief in other such formulae. Inferring new beliefs proceeds via the following rules:

All assumptions are believed (ASS):

∀f [ASSUMPTION(f) ⊃ BEL(f)]

Conjunction (CON):

∀f ∀g [(BEL(f) ∧ BEL(g)) ⊃ BEL(f ∧ g)]

Implication (IMP):

∀f ∀g [BEL(f ⊃ g) ⊃ (BEL(f) ⊃ BEL(g))]

Only beliefs that are justified by these axioms are believed. For any unary predicate Φ, the following holds (MINBEL):

∀f [ASSUMPTION(f) ⊃ Φ(f)] ∧ ∀f ∀g [(Φ(f) ∧ Φ(g)) ⊃ Φ(f ∧ g)] ∧ ∀f ∀g [Φ(f ⊃ g) ⊃ (Φ(f) ⊃ Φ(g))] ⊃ ∀f [BEL(f) ⊃ Φ(f)]
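A minimal operational reading of these rules is forward chaining from the assumptions: believe every assumption (ASS), and fire an implication (IMP) once all conjuncts of its antecedent are believed, which handles (CON) implicitly. The tuple encoding of literals below is our own illustration, not the paper's meta-language syntax:

```python
# Forward-chaining sketch of (ASS), (CON) and (IMP).  Literals are tuples:
# ('flies', 'tweety') is a basic formula, ('not', 'flies', 'tweety') its
# negation.  Rules pair an antecedent tuple of literals with a consequent.
def bel_closure(assumptions, rules):
    """Return the set of believed literals derivable from the assumptions."""
    believed = set(assumptions)          # (ASS): all assumptions are believed
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # (CON)/(IMP): once every conjunct of the antecedent is believed,
            # the consequent becomes believed.
            if head not in believed and all(b in believed for b in body):
                believed.add(head)
                changed = True
    return believed

def conflicts(believed):
    """Formulae f with both BEL(f) and BEL(¬f) in the belief set."""
    return {lit for lit in believed if ('not',) + lit in believed}
```

On the meta level, a non-empty result of `conflicts` corresponds to the situation in which the decision rule described above is invoked.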

5 cf. [Konolige 80]


- Epistemic entrenchment: Degrees of epistemic entrenchment are assigned to object language formulae in the meta language. EE(f, e) is true if formula f has the degree of epistemic entrenchment (a rational number between 0 and 1) of e. No believed assumption has a degree of epistemic entrenchment of 0, and only (constructive) tautologies (if they are believed) can have a degree of epistemic entrenchment of 1.

All assumptions and only they have degrees of epistemic entrenchment strictly between 0 and 1 (EE1):6

∀f [∃e [EE(f, e) ∧ (e > 0) ∧ (e < 1)] ≡ ASSUMPTION(f) ∧ ¬TAUTOLOGY(f)]

Only constructive tautologies (if they are believed) can have a degree of epistemic entrenchment of 1 (EE2):

∀f [TAUTOLOGY(f) ∧ ASSUMPTION(f) ⊃ EE(f, 1)]

All disbelieved statements have a degree of epistemic entrenchment of 0 (EE3):

∀f [¬BEL(f) ⊃ EE(f, 0)]

Epistemic entrenchment is functional (EE4):

∀f ∀e1 ∀e2 [EE(f, e1) ∧ EE(f, e2) ⊃ (e1 = e2)]

Also, the ordering must be strict (EE5):7

∀f ∀g ∀e [EE(f, e) ∧ EE(g, e) ∧ (e > 0) ∧ (e < 1) ⊃ (f = g)]

- Preferences between assumptions on the basis of epistemic entrenchment: A consistent meta-level theory can represent an inconsistent set of beliefs. This is the case if there is an f such that BEL(f) and BEL(¬f). In this case, either f or ¬f must be dropped from the set of beliefs in order to obtain a new, consistent set of beliefs. Moreover, no set of beliefs from which the `loser' in this conflict is derivable can be retained. Ultimately, this reduces to eliminating one or more assumptions from the set of beliefs. Which assumption(s) are chosen depends on the overall ordering of epistemic entrenchment. Because of the foundational design of the system, any evaluation of a derived belief in these terms must ultimately depend on the evaluation of its underlying assumptions (see also section 4.3). The choice of a new belief

6 The arithmetical predicates/operators =, >, < are used in their standard meaning and written in infix notation for clarity.
7 This restriction is discussed in section 3.1.


set is based on formulae derived on this meta-level. These formulae must have assumptions as arguments. This is expressed by the special predicate PREFER(a1, a2), where a1, a2 are assumptions, one of which supports f and the other of which supports ¬f. This expression is equivalent to a conjunction of various BEL(·), EE(·, ·) and possibly other relations between beliefs (equality and arithmetical relations). Its details depend on the decision criterion chosen and can become rather tedious. For this reason, an example decision criterion shall only be outlined informally.8

Consider the decision criterion maximin. This is formulated as follows: let the beliefs f and ¬f have the sets of assumption bases M_1 and M_2, respectively.9 Let the elements of each assumption base m_ij ∈ M_i, i = 1, 2, j = 1, …, J_i be a_ijk, k = 1, …, K_ij, with degrees of epistemic entrenchment given as EE(a_ijk, e_ijk). Determine â_ij such that EE(â_ij, ê_ij) and ê_ij ≤ e_ijk, k = 1, …, K_ij. The assumption â_ij thus has the minimum degree of epistemic entrenchment in assumption base m_ij and can be regarded as the `bottleneck' of this assumption base. Now determine â_i such that EE(â_i, ê_i) and ê_i ≥ ê_ij, j = 1, …, J_i. The assumption â_i thus has the maximum degree of epistemic entrenchment among all the bottlenecks in the assumption bases in M_i. The decision criterion maximin stipulates that the element of the conflicting pair f, ¬f with the higher maximal bottleneck degree of entrenchment remains believed. This is realised in the system in two steps:

1. The definition of PREFER ensures that ê_1 > ê_2 (ê_2 > ê_1) iff PREFER(a_1, a_2) (PREFER(a_2, a_1)).

2. The algorithm for belief changes UPDATE, presented in section 3.2, ensures that if PREFER(a_1, a_2) (PREFER(a_2, a_1)), a_2 (a_1) and thus ¬f (f) become disbelieved.

A default belief ("All birds can fly.") is expressed as a set of ground instances, which does not include any instance which has turned out to be an exception to the default (e.g. "Tweety can fly.").
All general statements are defaults and thus defeasible.
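The maximin criterion just described can be sketched in a few lines; entrenchment degrees are kept in a dictionary, and the strictness of the ordering (EE5) guarantees that no tie occurs. All names here are illustrative, not the paper's implementation:

```python
# Maximin sketch: each assumption base's weakest member is its 'bottleneck';
# the side of the conflict with the higher best bottleneck stays believed.
def best_bottleneck(bases, ee):
    """bases: iterable of assumption bases (sets of assumption names);
    ee: dict mapping assumption -> degree of entrenchment in (0, 1)."""
    return max(min(ee[a] for a in base) for base in bases)

def maximin_keeps_f(bases_f, bases_not_f, ee):
    """True if f survives the conflict with ¬f under maximin."""
    return best_bottleneck(bases_f, ee) > best_bottleneck(bases_not_f, ee)
```

With a strict ordering, `>` suffices; no tie-breaking rule is needed.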

2.3 The representation of belief sets

Our strategy is not to model consequence of beliefs for arbitrary sentences, but only closely enough to be able to detect inconsistency. The construction of the last section defines a notion of logical consequence:

8 The details can be found in [Berendt 92], pp. 47 ff.
9 An assumption base is a set of assumptions sufficient to derive a belief. See definition 2.4.


Definition 2.1 (⊢_BEL) For two sentences f and g:

f ⊢_BEL g iff {BEL(f)} ∪ BELAX ⊢ BEL(g)

and for a set of sentences S and a sentence f:

S ⊢_BEL f iff S′ ∪ BELAX ⊢ BEL(f),

where in the general case

S′ = {BEL(f) | f ∈ S},

or in the case of specifically treating S as a belief base consisting of assumptions10

S′ = {ASSUMPTION(f) | f ∈ S}.

BELAX is a set containing the axioms for BEL as given in section 2.2. The logical consequence operator can be defined accordingly as

Definition 2.2 (Cn_BEL)

Cn_BEL(S) = {f | S ⊢_BEL f}

The consequence relation satisfies some of the AGM postulates. Let K be a set of (object) sentences.

(⊢_BEL 2) MP: If K ⊢_BEL A ⊃ B and K ⊢_BEL A, then K ⊢_BEL B.

(⊢_BEL 3) Not ∅ ⊢_BEL A ∧ ¬A. That is, ⊢_BEL is consistent.

It does not satisfy

(⊢_BEL 1) If A is a truth-functional tautology, then ⊢_BEL A.

Since we are only modeling belief with a view to detecting inconsistencies, we do not expect to derive explicitly tautologies which are not essential for that purpose.

10 This second definition of S′ could always be rewritten in terms of the first, since by (ASS) ASSUMPTION(f) implies BEL(f). The difference is therefore only a conceptual distinction of whether one wants to treat S as a base for a belief set or as a belief set. It does not impinge on the logical properties, and will only be used where this conceptual distinction is thought to aid legibility.


⊢_BEL satisfies the deduction theorem only in one direction: if K ⊢_BEL A ⊃ B, then K ∪ {A} ⊢_BEL B. ⊢_BEL is compact, i.e. if A is a logical consequence of some set K, then A is a consequence of some finite subset of K. Cn_BEL is monotonic, i.e. if S ⊢_BEL A, then S ∪ {B} ⊢_BEL A.11

AGM define a (non-absurd) belief set as a set of sentences which is closed under the respective notion of logical consequence and consistent. The notion of logical consequence defined here leads to the following definition of a belief set. The representation of belief sets in the present system will usually start from a belief base BASE(K) for belief set K. This base is the set of assumptions of the object language:

Definition 2.3 (Belief set) A belief set K is the closure of its belief base BASE(K):

K = Cn_BEL(BASE(K))

The beliefs in BASE(K) are called assumptions, the beliefs in K \ BASE(K) are called derived beliefs. Belief bases and thus belief sets are finite. K⊥ denotes the absurd belief set, i.e. the degenerate case of an inconsistent belief set.

Definition 2.4 An assumption base for a belief f in a belief set K is a set of assumptions A ⊆ K such that A ⊢_BEL f and there is no set of assumptions A′ ⊂ A such that A′ ⊢_BEL f. In the logic, this set is associated with the conjunction of its members. An argument for a belief f is a tree with f as root, an assumption base A as the set of leaves, and nodes labelled with formulae corresponding to the BELAX rules. Any assumption in itself is consistent, and a given set of assumptions is non-redundant, i.e. the axioms are independent of one another, and consistent.

3 Response to new information: an algorithm for belief changes

3.1 Technical primitives: expansions and contractions

Technically, a belief set can be changed by adding or removing beliefs. Since the system is foundational and beliefs are closed under logical consequence, belief

11 The postulates and a motivation for them can be found in [Gardenfors 88], pp. 21 ff. Proofs for the present system can be found in [Berendt 92], pp. 130 ff.


sets can only be expanded or contracted by assumptions.

Definition 3.1 (Expansion) Expanding belief set K by assumption A results in the belief set

K+_A = Cn_BEL(BASE(K) ∪ {A}),

where BASE(K) is K's belief base.

Definition 3.2 (Contraction) Contracting belief set K by assumption A results in the belief set

K−_A = Cn_BEL(BASE(K) \ {A}),

where BASE(K) is K's belief base.

In order to satisfy all of AGM's postulates for contractions, a special definition is needed:

Definition 3.3 (Contraction by a conjunction) Contracting belief set K by the conjunction of assumptions A ∧ B results in the belief set

K−_{A∧B} = K−_A iff PREFER(B, A),
K−_{A∧B} = K−_B iff PREFER(A, B).

This is only defined for A ≠ B. (PREFER(A, B) and PREFER(B, A) cannot both occur in a strict ordering of epistemic entrenchment.)

These elementary belief changes satisfy the AGM postulates

(K+ 1) For any sentence A and any belief set K, K+_A is a belief set.
(K+ 2) A ∈ K+_A.
(K+ 3) K ⊆ K+_A.
(K+ 4) If A ∈ K, then K+_A = K.
(K+ 5) If K ⊆ H, then K+_A ⊆ H+_A.
(K+ 6) For all belief sets K and all sentences A, K+_A is the smallest belief set that satisfies (K+ 1) - (K+ 5).

and

(K− 1) For any sentence A and any belief set K, K−_A is a belief set.
(K− 2) K−_A ⊆ K.
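Because expansions and contractions are defined on the base and recovered by closure, they reduce to set operations. The sketch below mirrors Definitions 3.1-3.3 under our own naming, with `cn_bel` left as a trivial stand-in for the closure operator:

```python
# Expansion and contraction operate on BASE(K); the belief set itself is the
# closure of the base.  `cn_bel` stands in for Cn_BEL and is kept trivial here.
def cn_bel(base):
    """Placeholder closure: a real system would apply the BEL axioms."""
    return frozenset(base)

def expand(base, a):
    """K+_A: close BASE(K) ∪ {A}  (Definition 3.1)."""
    return cn_bel(set(base) | {a})

def contract(base, a):
    """K-_A: close BASE(K) \\ {A}  (Definition 3.2)."""
    return cn_bel(set(base) - {a})

def contract_conjunction(base, a, b, prefer):
    """K-_{A∧B}: drop the less preferred of A, B  (Definition 3.3);
    prefer(x, y) encodes PREFER(x, y), i.e. x is more entrenched than y."""
    return contract(base, b) if prefer(a, b) else contract(base, a)
```

Keeping the operations on the base is what makes the foundational stance cheap to implement: the closure is recomputed, never edited directly.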

(K− 3) If A ∉ K, then K−_A = K.
(K− 4) If not ⊢_BEL A, then A ∉ K−_A.
(K− 5) If A ∈ K, then K ⊆ (K−_A)+_A.
(K− 6) If ⊢_BEL A ≡ B, then K−_A = K−_B.
(K− 7) K−_A ∩ K−_B ⊆ K−_{A∧B}.
(K− 8) If A ∉ K−_{A∧B}, then K−_{A∧B} ⊆ K−_A.

Let ≤ correspond to the normal order on the rationals. Then the ordering of epistemic entrenchment satisfies the postulates

(C≤) B ≤ A iff B ∉ K−_{A∧B}.
(EE 1) For any A, B, C: if A ≤ B and B ≤ C, then A ≤ C.

explicitly, and

(EE 2) For any A and B: if A ⊢_BEL B, then A ≤ B.
(EE 3) For all A and B in K: A ≤ A ∧ B or B ≤ A ∧ B.
(EE 4) When K ≠ K⊥: A ∉ K iff A ≤ B for all B.
(EE 5) If B ≤ A for all B, then ⊢_BEL A.

implicitly (see section 4.2 for details). In order to be able to satisfy (K− 7) and (C≤) simultaneously given the restricted syntax of beliefs, the ordering of epistemic entrenchment must be strict. The proofs for the correspondence to the rationality postulates are straightforward and therefore omitted here.12

3.2 Belief changes

These technical changes of belief sets are however not sufficient to determine what happens when new information comes in, which is what a system using a belief revision component needs. It is assumed here that a system of beliefs can change if and only if new information comes in. What changes this triggers depends on the existing system of beliefs: if the new information is consistent with what is already believed, it can be added (expansion). If it is inconsistent with it, changes have to be made in order to arrive at a new consistent system

12 The postulates and a motivation for them can be found in [Gardenfors 88], pp. 48 ff. and 86 ff. The proofs can be found in [Berendt 92], pp. 134 ff. and 61 ff. The conflict between (K− 7) and (C≤) is explained and the choice motivated in [Berendt 92], pp. 65 ff.


of beliefs. If the new information is too weak to challenge existing beliefs, it is simply ignored. If it is strong enough to survive, other, weaker beliefs have to be dropped (contraction). This `strength' of beliefs is measured by the ordering of epistemic entrenchment over the assumptions.

New information arrives in the form of a new assumption A. The new belief set UPDATE(K, A) is defined recursively and non-deterministically as follows. Let BASE(K) be the belief base of K, and let K′ be the belief set defined as K′ = K+_A. There are two cases:

1. K′ ≠ K⊥. This is characterised by there being no a1, a2 such that PREFER(a1, a2) is provable. A does not lead to an inconsistency, so it can safely be added to the beliefs already present:

UPDATE(K, A) = K+_A

2. K′ = K⊥. This is characterised by PREFER(a1, a2) being provable for some a1, a2. Take some such a1, a2. A would lead to an inconsistency, so some belief has to be given up.

(a) If a2 = A: UPDATE(K, A) = K.13

(b) Else: UPDATE(K, A) = UPDATE(K−_{a2}, A)
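The recursion in case 2(b) removes one least-preferred assumption at a time and retries, so UPDATE can equivalently be written as a loop. In this sketch, `inconsistent` and `loser` stand in for the meta-level provability checks (whether some PREFER(a1, a2) is provable, and which a2 it yields); all names are illustrative:

```python
# Iterative sketch of UPDATE(K, A): tentatively expand the base by A, then
# contract by the 'loser' of each conflict until consistency is restored.
def update(base, a, inconsistent, loser):
    """base: set of assumptions; a: the new assumption;
    inconsistent(base) -> bool (is the closure the absurd belief set?);
    loser(base) -> the assumption a2 for which PREFER(a1, a2) is provable."""
    tentative = set(base) | {a}
    while inconsistent(tentative):
        a2 = loser(tentative)
        if a2 == a:                    # case 2(a): new information too weak
            return set(base)
        tentative.discard(a2)          # case 2(b): contract by a2 and retry
    return tentative
```

If the new assumption conflicts with a less entrenched old one, the old one is dropped; if the new assumption is itself the loser, the belief base is left unchanged.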

4 Properties of the algorithm for belief changes

4.1 Logical properties

Our algorithm proceeds by reasoning in the meta-theory to determine whether there is a contradiction, and by reasoning back from the contradiction to determine the assumption bases for the contradictory arguments. Epistemic entrenchment then determines the action to be taken. UPDATE will always terminate and lead to a new non-absurd belief state.14 Also, contractions will only be effected if they are necessary, i.e. if the tentative belief set is inconsistent.

13 A ∉ K, so K does not have to be contracted by A: K−_A = K. To see that A ∉ K, note that if A ∈ K, then K+_A = K by (K+ 4), so K′ is consistent, since K is.
14 For proofs of the results in this chapter, cf. appendix.


So this algorithm is guaranteed to lead to a consistent belief system while assembling as much information as possible. The construction of belief sets from assumptions via the axioms for BEL ensures that every belief is justified. At the same time, belief changes observe the principle of conservatism - retain as many beliefs as possible - and the principle of maximum epistemic entrenchment - retain the preferred consistent subset, if there is a choice. Belief changes satisfy the AGM postulates. The system thus combines foundational and coherence approaches.

4.2 Implicitly represented preferences and belief changes

It can be shown how the data used define an implicit preference ordering over derived beliefs and contractions by derived beliefs. The decision criterion maximin explained above (section 2.2) implicitly defines degrees of entrenchment for conjunctions and consequences:

- Conjunctions: Let B = A1 ∧ … ∧ An. Then assign to B the degree of epistemic entrenchment

ê = min_{j=1,…,n} {E | EE(Aj, E)}.

- Consequences: Let there be m ≥ 1 Ai such that Ai ⊢_BEL B, i = 1, …, m. Then assign to B the degree of epistemic entrenchment

ê = max_{i=1,…,m} {E | EE(Ai, E)}.

Similarly, UPDATE, in which one or more contractions are effected until a new consistent belief set is reached, is defined in terms of contractions by assumptions only. However, these can be shown to be similar to safe contractions as introduced by [Alchourron / Makinson 85]. Specifically, the (series of) contraction(s) effected by UPDATE can be compared to the safe contraction by the `loser' in the conflict as determined by the decision criterion, or, taking into account the derived degrees of entrenchment, even to the safe contraction by the contradictory conjunction f ∧ ¬f. The essential difference between safe contractions and UPDATE contractions is that the latter operate only on belief bases and then determine the resulting belief set as the closure of the smaller belief base. In this way, elements that are logically consistent and thus `safe' from a coherence viewpoint (Alchourron/Makinson's safe contractions belong to this tradition) will no longer be included in the new belief set if their justifications have been invalidated by the invalidation of the justifications of the loser in the conflict - another way of expressing the foundational stance of the present approach.
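The two derived degrees are simple aggregates, which the following lines (an illustration under our own naming, not the paper's code) make explicit:

```python
# Implicit entrenchment of derived beliefs under maximin (section 4.2):
# a conjunction is as entrenched as its weakest conjunct, a consequence as
# entrenched as its best assumption base.
def ee_conjunction(conjunct_degrees):
    """B = A1 ∧ … ∧ An gets the minimum of the Ai's degrees."""
    return min(conjunct_degrees)

def ee_consequence(base_degrees):
    """B derivable from several assumption bases gets the maximum of their
    (conjunction) degrees, i.e. the best bottleneck."""
    return max(base_degrees)
```

Composing the two recovers exactly the max-over-min structure of the maximin criterion.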


4.3 Minimal data requirements

Data requirements are lower than in many other belief revision schemes: a preference ordering only needs to be defined over assumptions, which are a subset of all beliefs. In the general AGM setting as portrayed by [Gardenfors 88], arguments for beliefs and the degrees of entrenchment of the beliefs used in these arguments set limits for the derived beliefs' degrees of entrenchment. But they do not fully specify these degrees, so every belief must be explicitly assigned a degree of entrenchment. Moreover, these may change in every new belief set, so a huge amount of data is required. In the present foundational setting, on the other hand, a derived belief depends on the argument(s) for it and nothing else. And just as the question of whether a (non-assumption) statement is believed at all depends on whether statements in possible arguments for it are believed, so any evaluation of the `entrenchment' of this belief must depend on the arguments for it and the entrenchment of the beliefs in the arguments. Applying this recursively means that ultimately, only degrees of entrenchment of assumptions are needed.

Taking this argument further, it becomes clear that an evaluation of the `entrenchment' of a belief is only needed if this belief becomes involved in a conflict in the course of reasoning. In other words, while on the logical level all assumptions have a degree of entrenchment, in a practical setting such as an interactive program, assignment of actual values of entrenchment to assumptions can be suspended; the user need only be prompted for a value if an assumption becomes involved in a conflict. This has been successfully realised in the program developed on the basis of this formal system.

5 Decision theory and belief revision

Implicit in the above is a particular view of how the `decision' what to continue believing in a conflict is made. As was explained above, in the present foundational system, only the assumptions and their degrees of entrenchment are explicitly given to the system (which then reasons with them). They are thus the only data which can form a basis of decision between conflicting derived beliefs.

Compare this to decision theory as employed in, for example, economics. Recently, [Doyle 92b] has strongly advocated the use of decision theory in AI. He mentions some connections between decision theory and belief revision, but regrets that little has been done so far in order to integrate the two. The system presented here hopes to give ideas as to how to fill this gap. A decision is to be made or a preference ordering to be established between actions. Consider the simpler case of decision under `true' uncertainty: only


different states of the world, but not their probabilities, are known. The outcomes of the actions in different states of the world and a preference ordering over the outcomes are known. The links can be simple (directly from the actions to final outcomes) or complex (from the actions to further action choices, where final outcomes are only reached as `leaf nodes' of these trees). Different decision criteria (stochastic dominance, maximin, maximax, ...) can then be employed in order to compare and evaluate the actions on the basis of the outcomes.

In the present model, belief revision is modeled in a similar way: a decision is to be made or a preference ordering to be established between derived beliefs. All that is known are the arguments leading from assumptions to these derived beliefs, and a preference ordering (epistemic entrenchment) over the assumptions. The links can be simple (directly from the derived beliefs to assumptions) or complex (via further derived beliefs). So different decision criteria can likewise be employed in order to compare and evaluate the derived beliefs on the basis of the assumptions.

An obvious difference seems to be the temporal relation between actions and outcomes on the one hand and derived beliefs and assumptions on the other. While actions precede outcomes, derived beliefs are by definition only `found' (derived) after the assumptions are known. However, on closer inspection, in a different sense derived beliefs precede assumptions temporally: the decision which of the two contradictory derived beliefs to retain determines which assumptions one can hold on to in the future. The temporal references for evaluation of the original preference orderings (over outcomes and assumptions) can be compared: in decision theory, the preference ordering over the outcomes is determined by some estimate of their relative values in the future.
Here, the preference ordering over the assumptions (epistemic entrenchment) is determined by the importance of a belief in the overall system of beliefs of the reasoner (see section 1.2). And this notion surely includes some component oriented towards the future: how important does one expect this particular belief to be? Caution is needed in simply transferring decision criteria from decision theory to belief revision. For example, using maximax instead of maximin would make it impossible to (even implicitly) satisfy the AGM postulate (EE 2). The ramifications of this need to be explored in further research. It would be interesting to explore possible parallels to decision under `risk', i.e. to cases where probabilities of states of the world are known. This would introduce another element of weighting into the decision scheme. Also, the question `How important does one expect this particular belief to be?' may suggest some links to expected utility. This too should be the subject of further research.


6 Conclusion

A belief revision system has been introduced which combines foundational and coherence approaches. On the basis of a meta-logic which ensures that every belief is either an assumption or justified by other beliefs, and which satisfies the AGM postulates, an algorithm has been presented which provides a policy for dealing with incoming information. With the help of a program based on the formal system presented here and formulated as a truth maintenance system, examples of belief revision found in the literature have been tested. This shows how the system can adequately represent the aspect of reasoning targeted in this paper, and how it can be used to build a decision aid which makes choices in the face of conflict, bases these choices on an ordering of epistemic entrenchment, and is able to explain every single step of its reasoning. Future research should investigate, among other things, possibilities of using a more expressive language for beliefs and different decision criteria.

References

[Alchourron / Makinson 85] C.E. Alchourrón and David Makinson, On the logic of theory change: Safe contraction, Studia Logica, 44, 1985, 405-422

[Alchourron et al. 85] C.E. Alchourrón, Peter Gärdenfors and David Makinson, On the logic of theory change: Partial meet contraction and revision functions, The Journal of Symbolic Logic, 50, 1985, 510-530

[Berendt 92] Bettina Berendt, Computing Belief Revision, M.Sc. Thesis, University of Edinburgh, Department of Artificial Intelligence, 1992

[Doyle 79] Jon Doyle, A Truth Maintenance System, Artificial Intelligence, 12, 1979, 231-272

[Doyle 92a] Jon Doyle, Reason Maintenance and Belief Revision: Foundations vs. Coherence Theories, in: Peter Gärdenfors (ed.), Belief Revision, Cambridge: Cambridge University Press, 1992, 29-51

[Doyle 92b] Jon Doyle, Rationality and its Roles in Reasoning, Computational Intelligence, 8, 1992, 376-409

[Festinger 57] Leon Festinger, A theory of cognitive dissonance, Evanston / Illinois, White Plains / New York: Row, Peterson and Company, 1957


[Gardenfors 88] Peter Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epistemic States, Cambridge / Mass.: MIT Press, 1988

[Gardenfors 92] Peter Gärdenfors (ed.), Belief Revision, Cambridge: Cambridge University Press, 1992

[Harman 86] Gilbert Harman, Change in View: Principles of Reasoning, Cambridge / Mass.: MIT Press, 1986

[Harper et al 87] R. Harper, F. Honsell and G. Plotkin, A Framework for Defining Logics, in: Proc. of the Second Symposium on Logic in Computer Science, 1987

[Konolige 80] Kurt Konolige, A first-order formalisation of knowledge and action for a multiagent planning system, Technical Note 232, SRI International, Menlo Park / Calif., December 1980

[Quine 70] Willard Van Orman Quine, Philosophy of Logic, Englewood Cliffs: Prentice Hall, 1970

[Wicklund / Brehm 76] Robert A. Wicklund and Jack W. Brehm, Perspectives on Cognitive Dissonance, Hillsdale / New Jersey: Lawrence Erlbaum, 1976

7 Appendix: Proofs of main results

7.1 Logical consequence

The meta-theory for BEL is faithful. The terms faithful and adequate are used here in the sense of [Harper et al 87]. The claim is that whenever BEL(f) is derivable in the meta-theory from a belief base, then f follows from the corresponding assumptions in first-order logic. The proof uses the fact that the axioms BELAX correspond to standard first-order inference rules.

The meta-theory for BEL is adequate for contradictions. By this we mean that if the assumptions made are contradictory in first-order logic, then there will be some basic formula f such that BEL(f) and BEL(¬f) will be derivable in the meta-theory from the corresponding belief base. Suppose that p1, p2, ... are the positive literals such that BEL(pi) is derivable in the meta-theory, and ¬q1, ¬q2, ... the negative literals with this property. Suppose that

{p1, p2, ...} ∩ {q1, q2, ...} = ∅,


so that no contradiction is derivable in the meta-theory. We provide a model of the original assumptions as follows. Take all the pi above to be true and all other basic formulae to be false. Now consider the assumptions. For any positive literal l that is an assumption, BEL(l) is immediately derivable. Therefore l = pi for some i, and it is assigned the correct truth value. Similarly, negated literals are correctly treated. Finally, consider an assumption of the form

A1 ∧ A2 ∧ ... ∧ An ⊃ (¬)B.

If all the Ai are true in the above assignment, then BEL(Ai) is derivable for each i. From this and BELAX we can conclude that BEL((¬)B) is also derivable in the meta-theory, and so (¬)B is assigned true, as is required.
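The model construction above can be traced on a tiny example. The following sketch is a hypothetical illustration with an invented, non-contradictory belief base: it forward-chains the derivable literals, makes exactly the derived positive literals true, and checks that the literal assumptions hold in that model (the conditional assumptions then hold by the chaining itself).

```python
# Invented belief base: two literal assumptions and two conditional
# assumptions of the form A1 ∧ ... ∧ An ⊃ (¬)B (premises are positive literals).
assumptions_lits = {"p", "¬q"}
rules = [({"p"}, "r"), ({"r"}, "¬s")]

# Naive fixpoint: collect all literals derivable via BEL-style chaining.
derived = set(assumptions_lits)
changed = True
while changed:
    changed = False
    for premises, concl in rules:
        if premises <= derived and concl not in derived:
            derived.add(concl)
            changed = True

positives = {l for l in derived if not l.startswith("¬")}
negatives = {l[1:] for l in derived if l.startswith("¬")}
assert positives.isdisjoint(negatives)  # no contradiction derivable

def true_in_model(lit):
    """Atoms derived positively are true; all other atoms are false."""
    return lit[1:] not in positives if lit.startswith("¬") else lit in positives

# Every literal assumption is satisfied by the constructed model.
assert all(true_in_model(l) for l in assumptions_lits)
```

Here derived = {p, ¬q, r, ¬s}, the positive and negative atoms are disjoint, and the valuation that makes exactly p and r true satisfies every assumption, mirroring the adequacy argument.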

7.2 Termination

UPDATE(K, A) terminates.

There are three possible cases:

1. UPDATE(K, A) = K+A: This terminates after one step.

2. UPDATE(K, A) = K: This terminates after one step.

3. UPDATE(K, A) = UPDATE(K−a2, A): The assumption base of K−a2 has one element less than that of K. So for a finite belief base (cf. definition 2.3), the limiting case for a recursive call of UPDATE is BASE(K) = ∅. Now CnBEL(∅ ∪ {A}) = CnBEL({A}) cannot be inconsistent if A is consistent. This latter condition is true, as no well-formed formula is inconsistent on its own.
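The three cases can be sketched as a small recursive procedure. This is a hypothetical sketch of UPDATE's structure only: the consistency check and the PREFER stub below are toy versions over bases of literals, and all names and entrenchment values are invented. Termination mirrors the argument above, since each recursive call removes one assumption from the base.

```python
def neg(l):
    """Syntactic negation of a literal."""
    return l[1:] if l.startswith("¬") else "¬" + l

def consistent(s):
    # Toy check for a base of literals: no atom occurs with both signs.
    return all(neg(l) not in s for l in s)

entrenchment = {"q": 1, "¬q": 2}  # invented preference ordering

def prefer(base, new):
    """Toy PREFER: retract the least entrenched conflicting base assumption."""
    conflicts = [a for a in base if neg(a) in base | {new}]
    if not conflicts:
        return None
    return (new, min(conflicts, key=entrenchment.get))

def update(base, new):
    tentative = base | {new}
    if consistent(tentative):
        return tentative                    # case 1: simple expansion
    choice = prefer(base, new)
    if choice is None:
        return base                         # case 2: reject the input
    _, a2 = choice
    return update(base - {a2}, new)         # case 3: contract by a2, retry

print(update({"q"}, "¬q"))  # the less entrenched q is given up
```

Each pass through case 3 strictly shrinks the assumption base, so for a finite base the recursion must bottom out in case 1 or 2.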

7.3 Result: A non-absurd belief state

UPDATE(K, A) leads to a non-absurd belief state. This holds if

1. any inconsistency can be determined, and


2. if there is an inconsistency in K, then PREFER(a1, a2) can be proved for some a1, a2, and

3. PREFER(a1, a2) will start a chain of recursive calls of UPDATE that will eventually lead to a consistent belief state.

(1) (completeness) was shown in section 7.1. (2) and (3) will be shown in sections 7.3.1 and 7.3.2, respectively. Recall that we assume our initial belief base is consistent before we attempt to integrate a newly acquired belief.

7.3.1 If BEL(f) and BEL(¬f), there are a1, a2 such that PREFER(a1, a2)

Let an inconsistency in K be that f as well as ¬f are believed. We will show that a PREFER(a1, a2) is then derivable. Remember that by (MINBEL), only assumptions and beliefs derived from these via the axioms for BEL are believed. As mentioned in section 2.2, the details of the definition of PREFER are rather long and tedious. The proof of this section's lemma depends on the definition of PREFER and will therefore be omitted here. It suffices to say that PREFER(a1, a2) must be defined in such a way that it is derivable whenever there is an f such that BEL(f) and BEL(¬f), and a1 and a2 are assumptions of which one is part of an assumption base of f and the other of an assumption base of ¬f.15

7.3.2 PREFER(a1, a2) will start a chain of recursive calls of UPDATE that will eventually lead to a consistent belief state

We know from section 7.2 that the UPDATE procedure terminates. It is therefore enough to show that the non-recursive calls result in consistent belief states. There are two cases:

1. There are no a1, a2 such that PREFER(a1, a2). By sections 7.1 and 7.3.1, this means that the belief set returned is consistent.

2. There are a1, a2 such that PREFER(a1, a2). Again, given that meta-reasoning allows us to find such a1, a2, the non-recursive case returns a subset of the input belief set, expanded by the new assumption iff this expansion is consistent. This is therefore also consistent.

15 The details of the proof can be found in [Berendt 92], pp. 140 ff.


7.4 Principle of minimum change

If a belief base and a new assumption are consistent, then this assumption will be added (case (1) of UPDATE). This ensures that contractions are only effected if they are necessary, i.e. if the tentative belief set is inconsistent.
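This minimum-change behaviour can be shown in a one-step sketch. It is a hypothetical illustration with invented names; the consistency check is a toy stub for a base of literals, and the stub merely declines to expand on conflict, whereas the paper's UPDATE would contract instead.

```python
def neg(l):
    """Syntactic negation of a literal."""
    return l[1:] if l.startswith("¬") else "¬" + l

def consistent(s):
    # Toy check for a base of literals: no atom occurs with both signs.
    return all(neg(l) not in s for l in s)

def minimal_update(base, new):
    tentative = base | {new}
    # Contraction is considered only if the tentative set is inconsistent.
    return tentative if consistent(tentative) else base

assert minimal_update({"p"}, "q") == {"p", "q"}  # consistent: plain expansion
assert minimal_update({"p"}, "¬p") == {"p"}      # conflict: no blind expansion
```

The first assertion is exactly case (1): when nothing conflicts, the new assumption is added and no existing belief is disturbed.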