Belief revision: cognitive constraints for modeling more realistic agents

Fabio Paglieri

Working paper - Amsterdam, December 2003

Abstract. In this paper, some of the current formal accounts of belief revision (cf. 2.2) are discussed from the point of view of their expressive power concerning cognitive features of realistic agents. The framework from which such features are derived is a well-known model of cognitive social action (cf. 2.1). Seven basic requirements are proposed to characterize belief revision in cognitive agents: a distinction between information and belief (cf. 3.1); an account of information as a highly structured domain (cf. 3.2); a description of information update, both external and internal, as a complex process (cf. 3.3); a definition of information's quality, and corresponding degree of belief, depending on factual credibility, epistemic importance, and pragmatic relevance (cf. 3.4); a well-defined procedure of belief selection, to be applied over informations (cf. 3.5); an account of gradual change in informations and in the corresponding beliefs (cf. 3.6); a multi-layered approach to contradiction management (cf. 3.7). These cognitive requirements are discussed in relation to the existing formalisms of belief revision, emphasizing limitations and drawbacks in the current models. An alternative account is partially outlined during the discussion and briefly summarized at the end of this work (cf. 4). Finally, some guidelines for future research are proposed (cf. 5), with the purpose of developing formal models of belief revision explicitly oriented toward cognitive and social issues.

Keywords: belief revision, cognitive models of social action, logical formalisms.

1. Introduction

This preliminary paper points at some cognitive features of belief revision, claiming that (1) such features have not received an exhaustive logical account so far, although (2) they can actually be modeled by logical formalisms, and moreover (3) it would be appropriate to do so. In spite of the impressive amount of theoretical work devoted to the logic of belief revision in the last two decades, the results are not yet fully satisfactory, as far as realistic cognitive agents are concerned. The current formalisms, whatever their level of technical refinement, may be well suited to deal with one or more of the relevant features highlighted in this paper, but they systematically fail to account for other, no less relevant, aspects of cognitive belief revision. This argument can be extended to the logical treatment of social action in general (Castelfranchi, 1997b; Castelfranchi, 1998a): despite the vast array of formalisms recently employed to model groups of autonomous agents and describe their interaction (a framework best known as multi-agent systems, MAS from now on; for a survey, see Wooldridge, 2002; Wooldridge & Jennings, 1995), these approaches still fall short of the target when they are applied to model cognitive social action. Most of them have been developed for quite specific purposes (e.g. linguistic communication, coordination, knowledge update), thus abstracting from other relevant cognitive features involved in social dynamics. Moreover, there is no evidence so far that different formalisms could be employed jointly to overcome such limitations: in other words, we do not yet know whether different logical tools (e.g. BDI logics for planning and AGM-style belief revision systems) could be integrated into the same operational framework (e.g. MAS), or how such integration could be achieved.

It should be understood that these remarks are not meant as criticisms or complaints toward the existing formalisms, which were never conceived to deal with the whole array of cognitive features involved in social action. But since improving our formal understanding of social dynamics is one of the main reasons for the interest in logical formalisms recently shown by social scientists (Conte & Gilbert, 1995; Conte, Hegselmann & Terna, 1997; Malsch, 2001; Sichman, Conte & Gilbert, 1998), it seems only appropriate to compare the expressive power of such formalisms with problems raised in a different context, i.e. the cognitive social sciences, pointing out the most evident limitations and hopefully opening the way for further improvements.

This paper is specifically devoted to the topic of belief revision, and tries to take a first step toward a 'cognitive enhancement' of the existing logical formalisms. It is a preliminary survey of the field, which does not provide any technical solution, but only suggests some promising directions for overcoming current limitations. It is therefore meant to outline a general cognitive framework for my future research, in which I will try to focus on some of the topics mentioned here (cf. 5), and to work out specific improvements and modifications in the corresponding logical formalisms. However, even when working at a more fine-grained level of analysis, the general facts presented here should be taken into account, to ensure that specific solutions to a given problem remain compatible with other cognitive features involved in social action and belief revision. For instance, in considering changes in the set of the agent's beliefs, we cannot abstract from the more basic notion of information (cf. 3.1), otherwise we would not be able to express the distinction between memory and reasoning; similarly, we cannot describe beliefs without dealing with their pragmatic relevance for planning (cf. 3.4 and 3.8), otherwise we would fail to capture crucial interactions between beliefs and goals. This yields a methodological claim for this line of work: the general framework that we want to study (cognitive social action) forces some relevant constraints on any single formalism at a more detailed level. These constraints are likely to be viewed, while working on the specific issue, as 'extra-logical', since they depend on cognitive features other than the ones currently under consideration. However, insofar as the ultimate goal is to improve our general understanding of social action in relation to individual cognition, these 'extra-logical nuisances' should be endured: they are the price for developing a richer and more comprehensive account of social dynamics in cognitive systems.

2. Preliminaries

In this section I briefly introduce the two main research traditions that will be compared throughout the rest of this work: the cognitive model of social action developed by Castelfranchi and colleagues (cf. 2.1), and the formal treatment of belief revision advocated in the last two decades by many different, but often convergent, approaches (cf. 2.2).

2.1. A cognitive model of social action

The general framework of my research is a well-known cognitive model of social action (from now on, CMSA), first developed by Cristiano Castelfranchi and colleagues at the Institute for Cognitive Science and Technologies (ISTC-CNR, Rome). Over the years, the CMSA has been applied to many different features of social action: the emergence of social order from interaction among individuals (Castelfranchi, 1997b; Castelfranchi, 1999b; Castelfranchi, 2003; Castelfranchi et al., 2003; Conte & Dellarocas, 2001), the origin and development of normative behavior (Castelfranchi, 1998b; Conte & Castelfranchi, 1995), the dynamics of trust and deception (Castelfranchi & Falcone, 1998; Falcone & Castelfranchi, 1999; Falcone, Pezzulo & Castelfranchi, 2003), the effect of emotions on cognition and defense mechanisms (Miceli, 1992; Miceli & Castelfranchi, 1998a; 2000; 2002), verbal interaction and discourse analysis (Parisi & Castelfranchi, 1976; Castelfranchi, 1992), the integration of different sources of information and information update (Castelfranchi, 1997a), social mechanisms of mutual influence and goal adoption (Castelfranchi, 1998a; Castelfranchi, 1999a), issues of social reputation (Conte & Paolucci, 2002), and agent autonomy (Castelfranchi, 1995).

A comprehensive description of the CMSA is not needed for the purposes of this paper, since the relevant connections with belief revision will be highlighted and discussed in the following sections, every time the need arises. So here I will provide only a rough account of the basic features of the CMSA, referring the reader to Conte and Castelfranchi (1995) for a fuller introduction to the model. The CMSA is a symbolic architecture of (multi)agency, i.e. it describes the behavior of autonomous agents as determined by their internal states and their interaction with the environment and with each other. The internal states of the CMSA are mentalistic features, mainly consisting of beliefs and goals. A belief is an internal representation of a state of things on which the agent bases its action, i.e. one that is considered reliable for planning and performing further actions. A goal is an anticipatory representation of a state of things which has the power of driving and shaping the agent's behavior, i.e. the agent is willing to behave in such a way as to modify the state of the world according to its anticipatory representation of it. Both beliefs and goals are gradual notions, characterized by different degrees of intensity: namely, strength for beliefs and value for goals (see also 3.4). On the whole, a rational agent is expected to act toward its goals, on the basis of its beliefs. Further refinements and internal distinctions are also employed in the CMSA, concerning both beliefs (e.g. distinguishing different levels of belief) and goals (e.g. distinguishing between proper goals and pseudo-goals, intentions and expected effects, uses, destinations, and functions, etc.). Since none of these technicalities will be needed in this preliminary work, I will not discuss them any further. Finally, the treatment of beliefs in the CMSA (e.g. Castelfranchi, 1997a) implies a more basic notion: information. This notion will be explicitly defined in the following sections (cf. 3.1 and 3.2), and its role in the CMSA, with special reference to belief revision, will also be stressed.

Despite the interest aroused by the CMSA in the multi-agent community (Castelfranchi & Werner, 1994; Castelfranchi & Müller, 1995; Müller et al., 1998; Sichman, Conte & Gilbert, 1998), there has been no thorough formalization of the model so far, except for the partial attempt (quite limited, by the authors' own admission) made by Conte and Castelfranchi (1995: 185-190), applying a simplified version of the multi-modal logic devised by Cohen and Levesque (1990). Nor do I propose here to formalize the CMSA − indeed, I doubt it could be done successfully as a whole, since the model was not originally conceived for such a purpose. I would rather suggest using some valuable insights provided by the CMSA to put 'cognitive pressure' on the existing logics for belief revision, in order to narrow the gap between formal models and human performance in cognitive social action. To this purpose, my main (cognitive) reference concerning belief revision will be Castelfranchi's work on belief supports and the integration of different sources of knowledge (Castelfranchi, 1996; 1997a).
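To make this architecture slightly more tangible, here is a minimal sketch, in Python, of graded beliefs and goals driving action selection. The names (Belief, Goal, Agent, strength, value, the 0.5 threshold) are illustrative choices of mine, not part of the CMSA or of any existing implementation; the sketch only assumes what is stated above, namely that beliefs carry a strength, goals carry a value, and a rational agent acts toward its goals on the basis of its beliefs.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    content: str      # internal representation of a state of things
    strength: float   # graded intensity of the belief (0.0 to 1.0)

@dataclass
class Goal:
    content: str      # anticipatory representation of a desired state
    value: float      # graded importance of the goal

@dataclass
class Agent:
    beliefs: list[Belief] = field(default_factory=list)
    goals: list[Goal] = field(default_factory=list)

    def reliable_beliefs(self, threshold: float = 0.5) -> list[Belief]:
        """Beliefs strong enough to be used as a basis for planning."""
        return [b for b in self.beliefs if b.strength >= threshold]

    def next_goal(self) -> Goal | None:
        """A rational agent pursues its most valued goal, given its beliefs."""
        return max(self.goals, key=lambda g: g.value, default=None)

# Example: an agent that believes the door is open and wants to leave the room.
agent = Agent(
    beliefs=[Belief("the door is open", strength=0.9)],
    goals=[Goal("be outside the room", value=0.8)],
)
print(agent.reliable_beliefs(), agent.next_goal())
```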

2.2. A short introduction to belief revision

Belief revision deals with changes in the beliefs of an agent, i.e. the update and revision of the set of propositions that the agent considers reliable bases for further reasoning, planning, and action. The formal treatment of belief revision has become, in the last two decades, an extremely active area, especially in philosophy and artificial intelligence. The first formal approach to belief revision, and still one of the most influential, is the so-called AGM theory, from the names of the authors who developed it (Alchourrón, Gärdenfors & Makinson, 1985; Gärdenfors, 1988; 1992) − although they essentially provided an elegant formalization of some claims first put forward by the American philosopher Isaac Levi (1967; 1980; 1991; for a thorough introduction to Levi's works, see Tamminga, 2001a). Shortly after the appearance of the AGM theory, a distinction was proposed between belief revision and belief update (Winslett, 1990; Katsuno & Mendelzon, 1991), and the more general expression 'belief change' was adopted to include both (e.g. Liberatore, 2000). Although quite different approaches have been developed to deal with these two categories of belief change, conceptually the distinction is quite simple: it boils down to the source of change in the epistemic state. Belief revision deals with modifications that arise when the agent realizes that its own beliefs are somehow mistaken or incomplete, compared with the state of the world; belief update, on the other hand, is triggered by actual changes in the world, which make the agent's beliefs obsolete and force it to reconsider its epistemic state. Even if such a distinction has come to be more or less standard in the literature, I agree with Boutilier that "one difficulty with the separation of revision and update is the fact that routine belief change, that is the change of an agent's belief state in response to some observation, typically involves elements of both. (...) A given observation often calls for belief change that reflects a response to changes in the world as well as incorrect or incomplete prior beliefs" (Boutilier, 1998: 282). More generally, the very definition of belief revision is hard to frame in a realistic context unless we assume that something has changed in the situation as well, and that the agent has become aware of it. Otherwise, how would it be able to realize that its prior knowledge is now 'somehow mistaken or incomplete'? Besides, I will suggest in 3.3 that it is not difficult, at least in principle, to treat belief revision and belief update in the same conceptual framework, and that the explanation of even the simplest process of information update requires such an integrated approach. In fact, several approaches in the literature capture both revision and update using the same formal machinery (Goldszmidt & Pearl, 1996; Boutilier, 1998; Friedman & Halpern, 1997; 1999a). Therefore, in this paper I will not assume the distinction between revision and update to be particularly relevant, and I will often use the expression 'belief revision' as a synonym for 'belief change', i.e. the whole process of integrating new information and reconsidering prior knowledge.

A comprehensive account of the AGM theory is beyond the scope of this paper, and the reader is referred to Gärdenfors (1992) for a detailed introduction − or, for a comparison with other models of belief revision, to Friedman and Halpern (1997; 1999a; 1999b) and Wassermann (2000: 19-38). Basically, the AGM approach distinguishes three kinds of belief change: expansion, contraction, and proper revision − which roughly consists of an expansion of the belief state with a new proposition, followed by a contraction to remove all propositions inconsistent with the new one (the order in which expansion and contraction are performed gives rise to the distinction between internal and external revision, according to Hansson, 1992, but this is of no consequence here). For each operation of belief change, a set of postulates is proposed, such as to ensure the rationality of the process. Here I will mention only the postulates concerning revision, by way of example. Other characteristic notions of the AGM framework (e.g. epistemic entrenchment, belief state, etc.) will be defined and briefly discussed in the following sections. Given a language L, a belief state K, a new formula φ, the expansion operator + and the revision operator *, the basic AGM postulates for revision are the following:

K*1. K*φ is a belief state (closure)
K*2. φ ∈ K*φ (success)
K*3. K*φ ⊆ K+φ (inclusion)
K*4. If ¬φ ∉ K, then K+φ ⊆ K*φ (preservation)
K*5. K*φ = L iff ⊢ ¬φ (consistency)
K*6. If ⊢ φ ↔ ψ, then K*φ = K*ψ (equivalence)
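To give a concrete, if deliberately simplified, feel for these operations, here is a toy sketch in Python of expansion, contraction, and revision over a finite base of propositional literals, where the only inconsistency is the co-presence of a literal and its negation. This restriction is my own simplification: the AGM operators are defined on belief sets closed under logical consequence, which the sketch does not attempt to model.

```python
# Toy belief base: a set of literals such as "p" or "~p".
# This only illustrates the expansion/contraction/revision pattern,
# not the AGM operators on deductively closed belief sets.

def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("~") else "~" + literal

def expand(base: frozenset[str], phi: str) -> frozenset[str]:
    """K+phi: add phi without checking consistency."""
    return base | {phi}

def contract(base: frozenset[str], phi: str) -> frozenset[str]:
    """Give up phi (trivial here, since the base is just a set of literals)."""
    return base - {phi}

def revise(base: frozenset[str], phi: str) -> frozenset[str]:
    """External-revision pattern: expand by phi, then contract the conflicting literal."""
    return contract(expand(base, phi), negate(phi))

K = frozenset({"p", "q"})
print(sorted(revise(K, "~p")))   # ['q', '~p']  -- success and consistency hold
print(sorted(expand(K, "~p")))   # ['p', 'q', '~p']  -- mere expansion may be inconsistent
```

Even this toy version makes visible the difference between mere expansion, which may leave the base inconsistent, and revision, which restores consistency by giving something up.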

A discussion of the rationale underlying these postulates is beside the point here, but it is worth mentioning that all of them have in turn been criticized, and numerous refinements proposed (e.g. Nebel, 1989; Doyle, 1991; Boutilier, Friedman & Halpern, 1998; Tamminga, 2001a). These criticisms usually emphasized the high degree of computational complexity that the AGM theory implies, which in turn makes any implementation in real computational systems difficult, if not impossible. A shortlist of the most common criticisms can be found in Wassermann (2000: 27). However, although ensuring a manageable degree of computational complexity is without doubt a precondition for implementation, the reasons for the partial, but indeed remarkable, divorce between AGM approaches and realistic cognitive agents run deeper than that. The fact is that the AGM theory is idealistic in principle, not just by accident − a point which is not always given due importance in the literature (an exception is Wassermann, 2000). So it should come as no surprise that the needs involved in modeling belief change in realistic cognitive agents are quite different from the priorities expressed by the AGM approach − an approach first devised to capture the abstract requirements of rational belief change, not the cognitive dynamics (and flaws) underlying the integration of new information received from actual (and possibly misleading) sources.

Nonetheless, the AGM theory is still the standard reference in this field of study. This historical accident, due to the remarkable elegance of the formal approach proposed by Gärdenfors and colleagues, has had the side effect of transforming the topic of belief change into a rather technical domain, more or less reserved to logicians, computer scientists, mathematicians, and some philosophers − usually extremely 'logic-oriented'. In spite of the name 'belief revision', researchers usually acquainted with cognitive phenomena in human agents (e.g. psychologists and social scientists) have scarcely been involved in the discussion − and most of them would not have been able to follow it anyway, due to the high degree of specialization of most of the relevant literature. Recently, the all-embracing category of cognitive science has somehow managed to give the impression that there is a great deal of ongoing interdisciplinary cooperation in many converging fields, including logic and social science: this is true for many other topics (e.g. social software; Pauly, 2001), but it is not yet the case for belief revision (the most significant exception, as far as I know, is Dragoni & Giorgini, 2003). Obviously, this preliminary paper cannot aim to challenge such a well-established trend. The standard comparison with the AGM theory, as well as with other accounts of belief revision, will be carried out, though with more emphasis on the expressive limitations of the existing models regarding cognitive features than on their formal advantages and drawbacks. However, I hope that a close comparison between formal models and cognitive requirements will help to reconsider the methodological habit of taking a fixed point (i.e. the AGM approach) as the standard reference for any further inquiries into belief change. In fact, for the purpose of developing more realistic agents for social simulation (cf. 1), such a premise could turn out to be quite misleading (cf. 5).

3. Cognitive requirements for realistic belief revision

In this section I propose seven cognitive requirements for belief revision: for each of them, I explain why it is indeed relevant from a cognitive and social viewpoint, and in what respects it is not yet covered by the existing logical formalisms. As I mentioned before (cf. 1), some of these requirements have already been discussed in the literature, so their connection with previous works will be highlighted as well. But I also want to stress that no formalism has yet taken into account all of these requirements, or even most of them: typically, when one cognitive feature was addressed, others were utterly ignored. I try to show that this pattern is indeed widespread in the existing literature on belief revision (a partial exception is Dragoni & Giorgini, 2003). Finally, the last subsection of this section discusses some more general cognitive constraints that are not specific to belief revision, but nonetheless have relevant consequences for it, and therefore should be taken into account in its formalization.

3.1. Informations and beliefs, memory and reasoning

There is a self-evident distinction between the set of informations that are available to us, and the set of facts that we actually believe: in other words, there is a distinction between (stored) informations and beliefs. For example, I could have been told that the Earth is flat, that God exists, that my girlfriend is unfaithful to me, without actually believing any of these claims − since these informations were rejected (not accepted) as beliefs. Nevertheless, all these informations are still available to me: I do not forget about having been told that my girlfriend is betraying me just because I do not believe it at present. Indeed, this information could play a crucial role in my reasoning (and in my life) at some time in the future, provided that I preserve it in my memory: I could develop beliefs concerning the source of this information (e.g. he was just trying to deceive me, because he is also in love with my girlfriend), or I could reconsider my rejection of this same information (e.g. after finding new incriminating evidence of my girlfriend's misconduct, like suspicious presents wrapped in silk and bunches of red roses from anonymous admirers), and so on. More generally, the distinction between information and belief plays a crucial role in defining the different properties of memory, which roughly consists of all the informations stored and retrievable by the agent, and reasoning, which is based on beliefs, i.e. those informations that are accepted as reliable (Castelfranchi, 1997a).

Such a distinction has been widely overlooked in the literature on belief revision, with few exceptions (e.g. the multi-layered architecture devised by van Eijk et al., 1998, and the principle of recoverability in Dragoni, Mascaretti & Puliti, 1995; Dragoni & Giorgini, 2003). In the standard AGM-style approach (e.g. Alchourrón, Gärdenfors & Makinson, 1985; Gärdenfors, 1988; Nebel, 1989; Doyle, 1991), the level of information is not mentioned at all, so there is no explicit theory concerning where new beliefs come from (cf. 3.3), and rejected beliefs are treated as if they had never existed at all, as far as the agent is concerned. In other words, these approaches implicitly assume that rejected informations are lost (forgotten) by the agent, and can be restored only if the same belief comes up anew. This assumption places too heavy a limitation on the cognitive skills of the agent, depriving it of any memory of disbelieved informations (for an alternative solution to this problem, see Gomolinska & Pearce, 1999). Moreover, as I will discuss in 3.6, it prevents us from considering any instance of gradual change, i.e. the progressive weakening or strengthening of a given information, and possibly of its corresponding belief. Boutilier, Friedman and Halpern (1998) mention a similar distinction when they develop the notion of observation in comparison to the stronger notion of belief: "We assume that an agent has access to a stream of observed propositions, but that it is under no obligation to incorporate any particular observed proposition into its belief set. Generally, a proposition will be accepted only if the likelihood that the proposition is true given the agent's current sequence of observations 'outweighs' the agent's prior belief that it was false" (1998: 128).
Although they do not provide any formal account of the way in which these observations are stored and retrieved, they hint at something similar to memory, commenting on the idea of dismissing an observation as incorrect: "'Dismiss' is too strong a word, for an observation that is not incorporated into the agent's belief set will still have an impact on its epistemic state, for instance, by predisposing it to the future acceptance of that proposition" (1998: 127). However, for our purposes this approach presents a theory of information update which is still too general in its treatment of sources and noise (cf. 3.3); it makes only a few claims concerning the structure of the observation set (cf. 3.2); and it shares the general drawback of all probabilistic approaches to degrees of belief, i.e. it considers credibility alone as a source of belief strength (cf. 3.4).

More recently, Tamminga (2001a; 2001b) advocated the need for two levels of explanation in dealing with belief revision, namely information and belief itself. He then proceeds to describe belief revision as a two-step process: first, information revision (applying the paraconsistent monotonic logic of first-degree entailment); second, belief extraction (ensuring nonmonotonicity, consistency, and closure under logical consequence). Here the main focus is placed on inconsistency of information vs. consistency of beliefs: "We argue for a distinction between information and belief. On the one hand, we shall set forth interrelated techniques for representing, expanding, contracting, and revising information. Information may, of course, be inconsistent. Henceforth, the devices representing our information can contain contradictory and even inconsistent sentences. (...) On the other hand, operations are offered to extract beliefs from the represented information. These beliefs will always be consistent and are closed under logical consequence" (Tamminga, 2001a: 63). In its general outline, Tamminga's approach is similar to the one proposed here, at least concerning the distinction between informations and beliefs, and the idea of conceiving belief revision as a two-step process: information revision plus belief selection. However, his description of information sets does not offer enough internal structure for our purposes (cf. 3.2), there is no explicit theory of the role played by information sources in information update (cf. 3.3), and the treatment of inconsistency is quite different from the one I will discuss here (cf. 3.7).

Summarizing the content of this subsection, the distinction between informations and beliefs is claimed to be crucial for developing realistic cognitive agents. Information here roughly means a proposition stored in the agent's mind that is somehow supported by some evidence or reason; the source of such evidence or reason can be either external to the agent's mind (perception and communication) or internal (reasoning), and both cases must be considered in detail (cf. 3.3). A belief is an accepted information, i.e. an information that the agent selects (possibly automatically and without deliberate decision; Castelfranchi, 1997a; Dragoni & Giorgini, 2003) as a reliable basis for its action: the principles and dynamics of such selection must be explored as well (cf. 3.4 and 3.5). However, without a clear preliminary distinction between informations and beliefs, all these further refinements would be impossible. Moreover, such a distinction helps to clarify the relation between automatic, unconscious aspects of belief revision (e.g. selecting informations to be believed) and more deliberate features of the same phenomenon (e.g. applying reasoning rules to our beliefs). Therefore, from now on I will discuss most of the typical claims concerning belief revision with special reference to the level of information, where new evidence is acquired, integrated, weighed, and finally accepted (or rejected) as beliefs.
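As a concrete illustration of the two-level picture defended in this subsection, here is a minimal sketch in Python of an information store in which every item ever received is kept in memory, while only an accepted subset functions as beliefs; rejected items remain retrievable and can later be reconsidered. Class and method names are mine and purely illustrative, not taken from any existing formalism.

```python
class InformationStore:
    """Memory keeps every information ever acquired; beliefs are the accepted subset."""

    def __init__(self) -> None:
        self.memory: set[str] = set()     # all informations stored by the agent
        self.accepted: set[str] = set()   # informations currently accepted as beliefs

    def acquire(self, info: str, accept: bool) -> None:
        """Store a new information; optionally accept it as a belief."""
        self.memory.add(info)
        if accept:
            self.accepted.add(info)

    def reject(self, info: str) -> None:
        """Stop believing an information without forgetting it."""
        self.accepted.discard(info)       # it stays in memory for future reconsideration

    def beliefs(self) -> set[str]:
        return set(self.accepted)

store = InformationStore()
store.acquire("my girlfriend is unfaithful", accept=False)   # rejected, but remembered
store.acquire("John has been killed", accept=True)
store.reject("John has been killed")                          # disbelieved, still in memory
print(store.memory)     # both informations are still available
print(store.beliefs())  # empty: nothing is currently believed
```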

3.2. Internal structure of epistemic states

Belief revision usually deals with epistemic states, which in turn are described as collections of sentences: "a simple way of modeling the epistemic state of an individual is to represent it by a set of sentences" (Gärdenfors, 1988: 21). Although it has been argued that such a definition is indeed too simple (e.g. Friedman & Halpern, 1999b), the proposed improvements have been limited to making stronger assumptions on the ordering of the sentences in the set − or, sometimes, constraining the ordering of the different sets of propositions that could result from revision. A partial exception is Nebel's distinction between belief bases and belief sets (1990), but such a distinction is still quite poor: it mainly serves the purpose of reducing the size of the belief set (infinite and closed under consequence in the AGM theory), by identifying a subset of propositions (the belief base) from which all the others included in the belief set can be derived. More generally, in AGM-style belief revision all the structural properties of an epistemic state boil down to some principles governing the internal ordering of sentences (or sets), which in turn constrains belief revision: such an ordering has been represented in a variety of ways, from the original notion of epistemic entrenchment (Gärdenfors, 1988) to Nebel's theory of epistemic relevance (1989), from Doyle's partial preorderings based on economic preferences (1991) to Spohn's ranking functions (1987), and many more (cf. 3.4).

While determining an internal ranking governing belief revision, i.e. assessing which informations have to be accepted as beliefs, and which beliefs are to be held more dearly than others, still remains a crucial concern (and will be discussed thoroughly in 3.5), reducing all the structural properties of an information base to such a ranking would be an oversimplification. Even the most straightforward cognitive account of an information's credibility is based on the assumption that such a value is positively affected by the number and credibility of all supporting informations, and negatively affected by the number and credibility of all contrasting informations (Castelfranchi, 1997a; Dragoni & Giorgini, 2003). As long as we do not want to rule out contradictions at the information level (cf. 3.7, and see also Tamminga, 2001a; 2001b), but instead want to allow the agent to weigh them in its mind, some minimal structural properties are required: (1) a distinction between supporting and contrasting informations, and (2) specific algorithms for evaluating an information's credibility depending on supports and contrasts. In this view, an information state is not just a set of sentences, but a set of sentences characterized by a specific network of mutual relations. This raises the question of which relations are appropriate for capturing the notions of informational support and contrast, and how such relations are generated in the agent's mind. The second problem is briefly addressed in 3.3, where I will argue that, whenever a new piece of information is acquired (either from an external source or by way of reasoning), not only is a new information node generated, but also its characteristic links with other nodes. As for the most appropriate level of description for modeling relations between different informations, it depends on the level of cognitive refinement required.

Let us start at a basic level: since the distinction between support and contrast has been introduced as a way of assessing the credibility of an information, it seems reasonable to define such relations in terms of the effect that the credibility of information α (in symbols: c(α)) has on the credibility of the related information β (in symbols: c(β)). So the fact that α supports β (in symbols: α ⇒ β) means that c(β) is proportional to c(α), while the reverse does not necessarily hold (e.g. c(β) could be increased by other supports, without thereby affecting c(α)). On the other hand, α contrasts β (in symbols: α ⊥ β) whenever c(β) is proportional to 1/c(α), and the reverse also holds (for the sake of simplicity, here we are assuming that contrast relations are always symmetrical). This formalism is still extremely poor, but it is enough to describe a case like the following.

α : John has been killed
β : John has been buried
χ : John is dead
δ : Mary has just met John in the street
ε : John is alive

Relations: {α ⇒ χ, β ⇒ χ, δ ⇒ ε, χ ⊥ ε}

In this example, an agent is confronted with some independent informations (α, β, δ) which support two mutually contrasting claims (χ, ε). How can the credibility of the contrasting claims be evaluated? By comparing the credibility of every supporting information with the credibility of every contrasting information, i.e. with the credibility of its supports. Hence, c(χ) will be proportional to c(α), c(β) and 1/c(δ), while c(ε) will be proportional to c(δ), 1/c(α) and 1/c(β). However, not surprisingly, this basic account fails to capture many relevant features of informational structures.

Consider the aforementioned example as unfolding in time. I have been told about John's assassination by a distant relative of mine; I have also received a written invitation to the funeral of my friend. Then Mary comes to visit me, and announces that she has just met the very same John in the street. Let us assume that Mary knows John very well, and that she is not prone to daydreaming and delusion, so I consider her quite reliable on the matter (cf. 3.3). Now the information that John is indeed alive, together with its support, is suddenly brought to my attention. Even if I do not take Mary's testimony for granted, it will have the effect of weakening (cf. 3.6) the credibility of the previous information concerning John's death. So far, our elementary model still holds: before meeting Mary, c(χ) was proportional to c(α) and c(β), while now it is also proportional to 1/c(δ), hence the weakening. But what should happen to c(α) and c(β), once c(χ) has been weakened for independent reasons (i.e. Mary's testimony)? More generally, given the support relation α ⇒ β, what should happen to the credibility of the supporting information α, once the credibility of the supported information β is weakened by some independent factor? In our model, so far nothing happens to c(α) when c(β) is changed, since the support relation only predicts the behavior of c(β) in terms of c(α), while no claim is made concerning the other direction. But then the support relation does not seem to be strong enough: in fact, we would expect that, when I am given reasons to doubt that John is dead, the informations concerning John's assassination and burial are also severely questioned, and their credibility decreased. In other words, a negative feedback on the credibility of the supporting information must be embedded in the support relation, to apply whenever the credibility of the supported information is weakened for independent reasons. Moreover, the credibility of these 'independent reasons', i.e. new informations contrasting with old ones, must also be weighed against pre-existing evidence: in our example, I will have some reasonable doubt concerning Mary's claim (no matter how much I trust her), due to previous informations concerning John's death.

A numerical example, assigning positive numbers to credibility and using only elementary mathematical functions, will suffice to show that such dynamics are not captured by the current account. Assume c(α) = 1 and c(β) = 3, e.g. I have been told about John's assassination by a not too reliable friend, while I received a written invitation to the burial service. Assume c(χ) = (c(α) + c(β)), divided by the credibility of contrasting informations. Since there is not yet any contrasting information, c(χ) = 4. Then Mary arrives with her piece of news: assume c(δ) = 2, more or less a measure of Mary's reliability. Now c(χ) = ((c(α) + c(β)) / c(δ)) = 2, while c(ε) = (c(δ) / (c(α) + c(β))) = 0.5. But notice that the credibility of all supports has not changed − in fact, if it had changed, we would not have been able to determine c(χ) and c(ε). That means that the credibility of the new information δ has not been impaired by the existence of contrasting informations, nor has the credibility of α and β been shaken by the new contrasting evidence provided by Mary. Both results are highly implausible.

A possible solution to this problem consists in pushing contrast relations down to the lowest level (bottom-line informations), i.e. reducing a contrast between two informations to a contrast between their supports. In 3.3 I will show that this leads to examining contrasts directly between different sources of information, evaluating their different degrees of reliability (Castelfranchi, 1997a). We could argue as follows: whenever α and β respectively support two contrasting informations φ and γ, α and β are also in contrast: e.g. the information that Mary has just met John is in contrast with the information that John has been killed, since they support contrasting claims (John is alive vs. John is dead). Moreover, considering contrast relations at the level of supporting informations does not affect the value of credibility for the corresponding supported informations − while it does affect the credibility of the supporting informations, in such a way as to satisfy the aforementioned requirements. How do we determine the credibility of bottom-line informations? Since they are not supported by any other information, we must assume some prior value for their credibility, which intuitively represents the reliability assigned to the source which provided such information (this claim will be elaborated in 3.3). Notice that we had to assign prior values to unsupported informations in our previous account as well, so here we are not making any further assumption. Bottom-line informations can be in contrast with other unsupported informations, and this affects their credibility in the usual way: therefore, given a bottom-line information α, c(α) is equal to its prior credibility, divided by the credibility of all contrasting (unsupported) informations.

Now we apply this line of reasoning to our example, leaving the numerical values unchanged. What changes, instead, is the set of relations used to represent the information structure, since now we have: {α ⇒ χ, β ⇒ χ, δ ⇒ ε, α ⊥ δ, β ⊥ δ}. Before meeting Mary, my prior for α is 1 and for β is 3, and since there are no contrasting informations so far, also c(α) = 1 and c(β) = 3, which results in c(χ) = 4. The prior for Mary's claim (her reliability as a source) is 2, but when I receive her information, I have to reconsider the credibility of all my bottom-line informations. Now c(α) = (1 / 2) = 0.5 and c(β) = (3 / 2) = 1.5, and also c(δ) = (2 / (1 + 3)) = 0.5. As a result, c(χ) = 2 and c(ε) = 0.5. Here the credibility of χ and ε is the same as before, and it still fulfills the same basic principle − applied now to the priors of the supporting and contrasting informations, rather than to their credibility. But now the credibility of the supporting informations is weakened as well, due to the contrasting claims − which is exactly the kind of negative feedback we wanted to ensure.

Obviously, this basic model is not enough to capture many other interesting features which seem to be embedded in the informational structure.
Imagine for instance that, at some time in the future, I find out beyond any doubt that John is indeed dead, e.g. because I am visited by his heartbroken mother. Applying our account, this would have a negative effect on c(δ), which is to be expected, but there would be no positive effect on c(α) and c(β) − which sounds logical, since John could be dead without having been killed and even without being buried anywhere (maybe his ashes were dispersed over the ocean), but is in fact quite implausible from a cognitive perspective. Since I first concluded that John was dead because I was told of his assassination and his funeral, I would now not expect him to have died for different reasons, nor to be at rest in the depths of the sea. Here we deal with the distinction between sufficient reasons to conclude something and additional supports for the same information: such a distinction, and other equally relevant ones (e.g. the difference between making a guess and applying a universal rule), cannot be expressed in the rough account outlined here, although they are likely to play a crucial role in improving our understanding of belief revision, and they seem in principle treatable with logical tools (e.g. applying some notion of minimality or circumscription). Moreover, here the feedback from an information to its supports affects only their contingent credibility (i.e. prior credibility weighed against the prior credibility of contrasting informations), not the priors themselves. But we would want to allow intelligent agents to modify their priors as well, i.e. to change their trust in a given source of information according to past experience: that will require further refinements of the account outlined here. Finally, so far I have focused only on the relation between informational structure and credibility: but I will argue in 3.4 that the selection of informations to be believed is not based on credibility alone, but also on epistemic importance and pragmatic relevance. Since the importance of an information φ will be defined as depending on the number and credibility of all the other informations that have to be changed whenever φ is changed (Castelfranchi, 1997a), it is clear that such a measure depends on the level of connectivity of the information φ − that is, on the structure in which such information is embedded. Therefore, the relation between informational structure and epistemic importance, which so far has been neglected, will have to be explored as well.

However, the purpose of this section was not to develop a satisfactory account of the structural properties of information bases, but rather to claim that (1) such properties are worthy of further inquiry, since (2) the current notions of epistemic state are too vague in describing informational structure, although (3) it is this structure that gives us an insight into relevant cognitive features, like the integration of different sources of information (cf. 3.3), the evaluation of single information nodes (cf. 3.4), the gradual processes of mutual influence (cf. 3.6), the comparison between contrasting informations (cf. 3.7), and issues of supporting feedback (Castelfranchi, 1997a). If we look at the information base of an agent as a structured domain, rather than a simple set of items constrained by some ordering, we are also implying that informations are not collected at random, but rather stored in the agent's mind according to some organizational principles. Part of the next section is devoted to shedding some light on the nature of such principles.
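The following is a minimal sketch, in Python, of the revised calculation just described: bottom-line informations receive prior credibilities (the reliability of their sources), contrasts are evaluated at the bottom-line level, and the credibility of a supported information is the summed priors of its supports divided by the summed priors of the informations contrasting with those supports. The data structures and function names are illustrative choices of mine; the numbers reproduce the example above.

```python
# Bottom-line informations with their prior credibility (source reliability).
priors = {"alpha": 1.0, "beta": 3.0, "delta": 2.0}

# Contrast relations, pushed down to the bottom-line level (symmetric).
contrasts = {
    "alpha": ["delta"],
    "beta": ["delta"],
    "delta": ["alpha", "beta"],
}

# Support relations: supported information -> list of supporting informations.
supports = {
    "chi": ["alpha", "beta"],   # John is dead, supported by killing and burial
    "epsilon": ["delta"],       # John is alive, supported by Mary's report
}

def credibility(info: str) -> float:
    """Prior divided by the summed priors of contrasting informations (if any)."""
    contrast_weight = sum(priors[c] for c in contrasts.get(info, []))
    return priors[info] / contrast_weight if contrast_weight else priors[info]

def supported_credibility(info: str) -> float:
    """Summed priors of the supports, divided by the summed priors of the
    distinct informations contrasting with those supports."""
    own = sum(priors[s] for s in supports[info])
    rivals = {c for s in supports[info] for c in contrasts.get(s, [])}
    against = sum(priors[c] for c in rivals)
    return own / against if against else own

print(credibility("alpha"))              # 0.5  (1 / 2)
print(credibility("beta"))               # 1.5  (3 / 2)
print(credibility("delta"))              # 0.5  (2 / (1 + 3))
print(supported_credibility("chi"))      # 2.0  ((1 + 3) / 2)
print(supported_credibility("epsilon"))  # 0.5  (2 / (1 + 3))
```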

3.3. Complex dynamics of information update

From a cognitive viewpoint, belief change is triggered when a new piece of information is made available to the agent: this information update can result in an expansion of the original set of beliefs (not necessarily including the new item, contrary to AGM's success postulate; Boutilier, Friedman & Halpern, 1998; Hansson, 1999), or in its contraction, or in both − what is properly called a 'revision' in the AGM approach. Notice that an information update can even have no effect whatsoever on beliefs, i.e. the new information can be so unreliable and irrelevant that it does not produce any change in belief selection (cf. 3.5 and 3.6). Nonetheless, it is stored in the information state, and could be involved in further updates. Another obvious impulse to belief change arises when a stored information sinks into oblivion, i.e. it is forgotten by the agent − which is somewhat analogous to AGM contraction, applied to information bases. But here forgetting p has nothing to do with rejecting p as a belief (cf. 3.1), except for the very general claim that an information p cannot be forgotten as long as p is currently believed by the agent. However, investigating the dynamics of belief change brought about by oblivion is beyond the aim of this preliminary paper, so I will only mention it as a possible topic for further development, in which cognitive considerations will again play a key role − for instance, requiring agents to be able to believe p even once they have forgotten its supports (Castelfranchi, 1997a): e.g. "I am sure this proof is correct, but I do not remember what makes me so sure". In this paper, I assume that agents have perfect memory, i.e. they remember all the informations received during their life − although they can simultaneously access only a restricted subset of them, due to limitations of the attentional focus (cf. 3.8). This assumption is highly unrealistic, and it is made only provisionally and for the sake of clarity: future work should aim to describe belief change in agents with imperfect memory as well. However, assuming perfect memory, belief change is initiated only by positive information update, i.e. whenever the agent receives or finds out a new piece of information.

A new information can be acquired by the agent in basically two different ways: either from an external source (perception or communication), or via internal reasoning, i.e. by working out the consequences of previously stored informations. I will first consider the case of external evidence. In the original AGM approach, information sources are not considered at all. To begin with, the theory deals directly with beliefs, without considering the informations on which such beliefs are based (cf. 3.1). Moreover, the new belief considered by the agent in expanding its epistemic state is by assumption both truthful and preferred to any pre-existing contrasting belief − a condition expressed by the so-called 'success postulate'. Hence, the reliability of the source is usually not an issue in AGM-style belief revision. In contrast with this view, Boutilier, Friedman and Halpern (1998) propose to take into account the role played by 'unreliable observations' in belief revision, and develop a formal treatment of this issue based on Spohn's k-rankings (1987) to assign different degrees of plausibility to the observations made by the agent. While these authors succeed in presenting an account of belief revision more general and more expressive than the original AGM theory, their treatment of information reliability is still inadequate for a cognitive model of information update. In particular, the orderings used to determine plausibility are taken as primitives (cf. 3.5), without investigating why a particular observation should be considered more plausible by the agent − which means there is no explicit theory of the connection between the credibility of an information and the reliability of its source (in contrast, see Galliers, 1992). Moreover, Boutilier, Friedman and Halpern do not draw any clear distinction between unreliable observations and noisy observations − in fact, they seem to consider the two categories identical.
But the reliability of the source and the noise in the encoding are very distinct properties of the external input, and they produce different effects in the agent's mind. Finally, the model proposed by Boutilier, Friedman and Halpern assumes that past observations are somehow memorized by the agent and can have future effects on its epistemic state, but it does not allow the agent to keep track of the origin (i.e. the sources) of its observations. This places a heavy constraint on the cognitive skills of the agent since, as I mentioned before (cf. 3.2), our assessment of the reliability of a source is deeply linked to the fate of the informations received from that source: e.g. whenever such informations prove to be wrong (or right), I want to be able to trace back their source, and revise my judgment concerning its reliability. In order to do so, information sources must be represented in the agent's mind, and linked to every information they have provided.

If we now turn to cognitive models of information update, we are faced with a fairly rich picture: whenever the agent receives an informational input from the environment, it produces a complex and structured mental object, consisting of (1) a trace of the input, e.g. a visual image of a scene, (2) an information assigning a source to the input, e.g. perception, (3) an information on the reliability of the source, and finally (4) an information concerning the content of such input (Castelfranchi, 1996). From now on, I will refer to (2) as S-information (information on the source), to (3) as R-information (information on the reliability of the source), and to (4) as O-information (object information, i.e. the information provided by the content of the input). Here I will not consider the role played by the perceptual trace of the input, since it is a crucial issue only if we consider agents interacting in the real world (e.g. robots), while social simulation is more often interested in artificial agents interacting in a virtual environment. Both S-information and R-information support the O-information: the credibility of the new O-information depends on the reliability of its source (R-information), and on the fact that such a source is indeed the one responsible for the update (S-information). The properties of the new informations are determined both by the nature of the input and by the pre-existing information structure.

Roughly speaking, an input can be conceived as a signal characterized by three properties: source, noise, and content. While the source of communication is another agent, the source of perception is obviously the real world, but its 'reliability' will be a measure of the trust that the agent assigns to its own perceptual apparatus − which is usually extremely high, but not always: if I am drunk, or the environmental conditions are compromised (e.g. there is little light), I will be much less confident in my own senses. In general, inputs are labeled according to their source, that is, the source is known to the agent. There can also be anonymous inputs (e.g. Charlie Brown finding an unsigned valentine card in his mailbox), but I will not discuss this case here. However, the degree of certainty with which the source is determined (i.e. the credibility of the S-information) may depend on the noise of the input: e.g. if the phone line is scrambled, I am not sure whether I am talking with Jack or Daniel. But most of the time the noise directly affects the understanding of the content: e.g. I am not sure whether Jack told me that he is 'sad', or that he is 'mad'. Here I will consider the effects of noise only on the identification of the source, not on the content of the input.
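Before this encoding is spelled out more formally below, here is a minimal sketch in Python of the step just described: an external input characterized by source, noise, and content is turned into an S-information, an R-information, and an O-information, together with the support link from the first two to the third. The data structures, the default reliability for unknown sources, and the way noise discounts the S-information are illustrative assumptions of mine, not part of the cited models.

```python
from dataclasses import dataclass

DEFAULT_RELIABILITY = 0.3   # leap-of-faith credibility assigned to unknown sources

@dataclass
class Input:
    source: str    # e.g. "Mary", or "perception"
    noise: float   # 0.0 = perfectly clear signal, 1.0 = pure noise
    content: str   # e.g. "John is walking in the street"

def encode(inp: Input, known_reliability: dict[str, float]) -> dict:
    """Turn an external input into S-, R-, and O-informations plus their support link."""
    s_info = f"src({inp.source}, {inp.content})"   # S-information: who provided the content
    r_info = f"rel({inp.source})"                  # R-information: how reliable the source is
    o_info = inp.content                           # O-information: the content itself
    credibility = {
        s_info: 1.0 - inp.noise,                   # noise only weakens the source attribution
        r_info: known_reliability.get(inp.source, DEFAULT_RELIABILITY),
        o_info: None,                              # to be computed later from its supports
    }
    support = ((s_info, r_info), o_info)           # (S-information & R-information) => O-information
    return {"credibility": credibility, "support": support}

# Usage: Mary, a known and fairly reliable source, reports on John with no noise.
print(encode(Input("Mary", 0.0, "John is walking in the street"), {"Mary": 0.8}))
```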
Now we have enough elements to sketch a rough model of information update from external sources: provided with an input from the source s with noise n and content p, inp(s, n, p), the agent encodes the informations {I(src(s, p)), I(rel(s)), I(p)}, and the support relation {(I(src(s, p)) & I(rel(s))) ⇒ I(p)} − where I(src(s, p)) is the S-information, I(rel(s)) the R-information, and I(p) the O-information. The support relation has the form "(S-information & R-information) supports O-information", to express the fact that such supports are dependent on each other: the reliability of s supports p only if s is acknowledged as the source of p, and being the source of p supports p only if s is considered reliable. Hence, the conjunction operator & will have to be defined for informations, linking the credibility of both conjuncts to the credibility of their conjunction: this point is not further discussed here, but a definition like c(α & β) = min(c(α), c(β)) could probably provide a suitable interpretation for &.

The credibility of the O-information depends on the credibility of its supports (and on the credibility of pre-existing informations concerning content p, if there are any): but how is the credibility of such supports determined, i.e. the credibility of the S-information and the R-information? The S-information is (negatively) affected only by the noise of the input: the greater the noise, the lower the credibility of the S-information. Assuming little or no noise at all, which in most contexts is a safe assumption, the S-information will be taken for granted, i.e. the agent will assign it the maximum degree of credibility. What about the credibility of the R-information? Here we must distinguish two cases: when the agent already has information concerning the reliability of that particular source, and when it has not. The first case is trivial: here the R-information is exactly the pre-existing information on the reliability of the source, which has already been assigned a value of credibility, and obviously such a value represents the credibility of the R-information. The second case is much more interesting, since 'there is a first time for everything' − that is, each source known to the agent must have been an unknown one at some time in the past. So the problem of assessing the reliability of a new source is a general one: if the agent does not make a choice at this stage, it will never be able to distinguish between reliable and unreliable sources. Since here we are assuming that the agent has no relevant knowledge concerning the new source, it is forced to make a leap of faith, assigning by default a given credibility to the R-information. It is worth noticing that such a value, which roughly describes the agent's degree of confidence in strangers, is a matter of individual variation (cf. 3.8): different agents will assign different values to unknown sources (i.e. a different default value will be specified in different agents). Moreover, we can exploit such a parameter to express relevant cognitive features, e.g. how suspicious the agent is toward new stimuli from unknown entities. The emerging interaction between more trusting agents and more suspicious ones, especially concerning new sources of information, could lead to effective models of cross-cultural integration, xenophobia, and social conflict.

So far, the discussion of information update has been confined to the case of new external inputs received by the agent: but information update can also be triggered by internal reasoning, when the agent draws new conclusions from old informations (internal information update). In order to ensure a realistic distinction between beliefs and informations (cf. 3.1), I propose to confine explicit reasoning to beliefs, i.e. to assume that there is no deliberate reasoning at the level of information. Here 'reasoning' means the application of a set of axioms or derivation rules (not discussed in this paper, and possibly different for different agents; Cherniak, 1986) to a set of propositions, so as to generate new propositions.
If in our model we force the agent to apply such axioms exhaustively (i.e. all the valid conclusions are drawn from the set of available premises), then internal information update can only be a derived form of belief change, since it will always require a previous external stimulus to modify the informations accepted as beliefs, changing in turn the beliefs generated by the reasoning rules. If instead the agent is not committed to drawing all possible conclusions from its current beliefs (which seems more realistic), internal information update can be a truly endogenous process: a belief q, which was not generated at time t0 from the belief set B, is reasoned out at time t1 from the same belief set (see also Alechina & Logan, 2002; Cherniak, 1986). However, in both cases the new belief must also be encoded at the level of information, in order to retain in the agent's memory the history of its origin, and all the relevant connections with its sources − in this case, the premises from which the new belief has been drawn. Internal information update basically consists in this process of feedback mapping, from beliefs to information nodes, and from logical consequence to support relations.

To show a simple interaction between external and internal information update, I will turn back to the example discussed in 3.2, concerning my discovery of John's death. Let us focus on the first information that I am provided with, the fact that John has been killed. Assume I was told so by a friend of John whom I had never met before: I will therefore assign him a default value of reliability, depending on my personal inclination toward new sources of information. Assume also that there is no noise at all in this communication, so the S-information is beyond doubt. The information structure generated by the new input is the following:

s-α : John's friend told me that John has been killed
r-α : John's friend is reliable
α : John has been killed

Relation: {(s-α & r-α) ⇒ α}

At this stage, assuming there is no previous knowledge on the matter, no more information is generated: in particular, since reasoning does not take place at the level of information, the claim that John is dead is not automatically generated. Such a conclusion can be drawn only if the information α is accepted as a belief, i.e. only if I consider its source reliable enough to accept the idea that John could indeed have been killed. If I do not accept such information as believable, I will have no reason to work out its consequences, and I will not do so, in order to save time and cognitive resources (on resource-bounded belief revision, see also Wassermann, 2000; Alechina & Logan, 2002). This constraint seems realistic; otherwise the agent would be forced to reason out all the consequences of any new information, no matter how implausible, and its mind would be encumbered with a lot of useless, unreliable knowledge (a principle referred to by Harman, 1986 as 'clutter avoidance'; see also Cherniak, 1986). For the sake of the example, assume that in this case α is accepted as a belief, so that the agent can now apply its reasoning rules to the new belief. To draw the conclusion that John is dead, the agent must have a belief (i.e. an accepted information) concerning the general relation between 'being killed' and 'being dead', and a rule of reasoning similar to the K axiom, such that, whenever the agent believes that p, and also that p implies q, it is able to conclude q. We can roughly represent the situation as follows:

B(α) : John has been killed
B(type(α) → type(χ)) : If someone is killed, then he is dead
Hence, B(χ) : John is dead

Now a new piece of information has been generated at the level of belief, and it must also be updated in the information structure; moreover, the way in which the new information has been generated must be preserved as well (internal information update). So, when the agent concludes B(χ), not only is χ mapped back into the information structure, but also the support relation (α & (type(α) → type(χ)) ⇒ χ). As the result of both an external and an internal information update, the information structure is now the following:

s-α : John's friend told me that John has been killed
r-α : John's friend is reliable
α : John has been killed
type(α) → type(χ) : If someone is killed, then he is dead
χ : John is dead

Relations: {(s-α & r-α) ⇒ α, α & (type(α) → type(χ)) ⇒ χ}

It is important to stress the supporting link between the general knowledge applied (e.g. type(α) → type(χ)) and the specific consequence drawn from that knowledge, because a future weakening of that consequence might lead the agent to challenge the validity of the general rule: e.g. if it turns out that John has actually been killed, but somehow managed not to die (quite unlikely, in the present case), then the agent should be able to see that the general rule must be flawed, and its confidence in that rule should be weakened. On the other hand, when the general rule is beyond doubt (as it is in this example), it is the other premise that will be affected by any negative feedback from its consequence: e.g. if I find out that John is indeed still alive, then I will conclude that the information concerning his assassination was false, and in turn I might reconsider the reliability of the source of that information. If we now apply this treatment of information update to the whole example sketched in 3.2, we obtain a (slightly) more complex picture of the agent's information structure:

s-α : John's friend told me that John has been killed
r-α : John's friend is reliable
α : John has been killed
type(α) → type(χ) : If someone is killed, then he is dead
s-β : The burial office informed me that John has been buried
r-β : The burial office is reliable
β : John has been buried
type(β) → type(χ) : If someone is buried, then he is dead
χ : John is dead
s-δ : Mary told me that John is walking in the street
r-δ : Mary is reliable
δ : John is walking in the street
type(δ) → type(ε) : If someone is walking in the street, then he is alive
ε : John is alive

Support relations: {(s-α & r-α) ⇒ α, α & (type(α) → type(χ)) ⇒ χ, (s-β & r-β) ⇒ β, β & (type(β) → type(χ)) ⇒ χ, (s-δ & r-δ) ⇒ δ, δ & (type(δ) → type(ε)) ⇒ ε}

Contrast relations: {(s-α & r-α) ⊥ (s-δ & r-δ), (s-β & r-β) ⊥ (s-δ & r-δ)}

Here all contrast relations have been pushed down to the level of S-informations and R-informations, for the reasons explained in 3.2. If we do not allow in the model innate informations (i.e. informations hardwired in the agent's mind since the beginning of its life) or mystical intuitions (i.e. informations that, at some point, just pop out of the blue in the agent's mind), then it is easy to see that all informations can be traced back to some external source, and the evaluation of conflicting informations is reduced to weighing the reliability of contrasting sources against each other. In the aforementioned example, the sources of the general rules are not specified, so they might seem to be innate. However, although nothing prevents us from equipping an agent with a store of preliminary knowledge (in most cases, it is necessary to do so), agents should also be able to develop general rules from empirical observations (i.e. learning), after having witnessed the same factual connection a certain number of times, e.g. the fact that whenever someone is killed he is also dead, while the converse is not necessarily the case. Moreover, other agents can often be the source of general rules − indeed, most teaching consists in informing other agents of the existence of general rules that they would not have suspected otherwise, and that they may be willing to accept without much questioning, as long as the teacher is considered reliable.

To summarize the contents of this section, a preliminary account of information update, both external and internal, has been presented. This account aims to stress the rich cognitive (and implicitly social) nature of information update, and to suggest the value of representing in the information structure (i.e. storing in the agent's memory) not only the new piece of knowledge that the agent acquires, but also the way in which such knowledge was achieved. This will enable agents to revise, should the need arise, the whole process of information acquisition, and not only its results. Intuitively, this view leads us to focus on the most natural meaning of the expression ‘belief revision': the process of reconsidering one's own reasoning, possibly step by step, and possibly affecting the belief set, i.e. which available informations are to be considered worthy, and to what degree. As far as I know, the formal treatment of information update in the literature on belief change has not yet provided a comprehensive account of the features summarized in this section, so this seems a promising field of work in the general attempt to develop logical formalisms for more realistic cognitive agents (cf. 1). However, the outline described so far is by no means claimed to provide a complete or satisfactory picture of the problem. First of all, the treatment of information update proposed here is far from being properly formal − in fact, formalizing some of these features will be one of the aims of my future work (cf. 5). Besides, this approach (deliberately) deals with information update in an oversimplified fashion − at least from a cognitive viewpoint. Many aspects of the phenomenon are not taken into account at all: e.g. the notion of reliability is considered a primitive one, while it is obviously the result of at least two distinct and interacting features, namely competence (how much do I think the source knows on the matter?)
and trust (how much do I think the source really wants to help me?), which obey different dynamics; moreover, internal information update is here not subject to any consideration concerning the reliability of one's own mental processes, although it arguably should be, since different people show different degrees of confidence in their own reasoning skills; for similar reasons, the reliability of communication should always be filtered by the reliability of perception, since there is no communication without perception (excluding telepathy); and so on. Regardless of such limitations, which hopefully might be handled by future refinements of the model, the picture of information update presented here is claimed to be a reasonable compromise between oversimplified formal models of belief change and unformalized cognitive theories of the same process. Since the main purpose of this preliminary paper is to give some hints concerning how to bridge the existing gap between these approaches, this brief outline of information update might suffice, both as a general overview and as a starting point for future work.
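To make the preceding example more concrete, the following sketch shows one possible encoding of such an information structure, with source and reliability nodes, support and contrast relations, and the feedback mapping of a derived belief back into the structure. It is only an illustration of the ideas discussed above, not part of the model: the class names (Node, InformationStructure), the default credibility values, and the way relations are stored are all assumptions introduced here for the sake of the example.

```python
# Minimal illustrative sketch (not the model itself): an information structure
# with nodes, support relations, contrast relations, and the feedback mapping
# of a belief derived by reasoning back into the structure.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                 # e.g. "s-a", "r-a", "a"
    content: str               # natural-language gloss of the information
    credibility: float = 0.5   # assumed default value for new informations

@dataclass
class InformationStructure:
    nodes: dict = field(default_factory=dict)
    supports: list = field(default_factory=list)   # (premise labels, conclusion label)
    contrasts: list = field(default_factory=list)  # (labels, labels) in mutual contrast

    def add(self, label, content, credibility=0.5):
        self.nodes[label] = Node(label, content, credibility)

    def add_support(self, premises, conclusion):
        self.supports.append((tuple(premises), conclusion))

    def add_contrast(self, group_a, group_b):
        self.contrasts.append((tuple(group_a), tuple(group_b)))

info = InformationStructure()

# External information update: John's friend reports that John has been killed.
info.add("s-a", "John's friend told me that John has been killed", 1.0)
info.add("r-a", "John's friend is reliable", 0.5)   # default reliability for a new source
info.add("a", "John has been killed")
info.add_support(["s-a", "r-a"], "a")

# Internal information update: once 'a' is accepted as a belief and the general
# rule is believed, the derived conclusion is mapped back into the structure
# together with the support relation that produced it.
info.add("a->chi", "If someone is killed, then he is dead", 0.9)
info.add("chi", "John is dead")
info.add_support(["a", "a->chi"], "chi")

# A later external update (Mary's report) contrasts, at the level of sources,
# with the earlier testimony.
info.add("s-d", "Mary told me that John is walking in the street", 1.0)
info.add("r-d", "Mary is reliable", 0.5)
info.add_contrast(["s-a", "r-a"], ["s-d", "r-d"])
```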

3.4. Degree of belief as a complex notion

The notion of degree of belief expresses the fact that an agent does not consider all its beliefs as having the same value, but rather assigns to each of them a different shade of confidence, reflecting the trust it puts in them. The standard formal treatment of belief degree makes use of probabilities (Bacchus, 1990; Fagin & Halpern, 1991; 1994; van Fraassen, 1995; Friedman & Halpern, 1997; 1999a; Goldszmidt & Pearl, 1996; Halpern, 1991; Kooi, 2003a; 2003b; Vickers, 1976). In this line of work, the degree of belief in p is assumed to depend on the agent's uncertainty concerning the fact p, and on nothing else. Such degree of (subjective) certainty is usually referred to as ‘plausibility', and it is represented and handled by means of probabilities. A belief is conceived as the result of a belief function, i.e. a mapping from states of the world to a numerical value between 0 and 1. For instance, Friedman and Halpern (1997) define the belief function B on the set of worlds W as a function B : 2^W → [0, 1] constrained by some specific axioms (in this case, the ones described by Shafer, 1976). The non-probabilistic approach to degree of belief, instead, consists in applying an ordering to the belief set, and constraining belief revision by such ordering, i.e. beliefs with a lower ranking will be revised before beliefs with a higher ranking. This is the approach originally proposed in the AGM theory, under the name of epistemic entrenchment (Gärdenfors, 1988): since then, refinements and updates of the same notion have been put forward, like Nebel's idea of epistemic relevance (1989; 1990), Doyle's suggestion to use partial pre-orderings determined by economic preferences (1991), and Spohn's ranking functions applied to AGM (1999).

A thorough discussion of these formalisms is beyond the aim of this paper, especially since I do not intend to express a definite preference between probabilistic and non-probabilistic accounts of belief degree. On the contrary, I will argue that so far they share the same cognitive limitations (with the partial exception of Doyle, 1991), which do not concern their different technical approaches, but rather their common assumptions. These problematic assumptions (at least, as far as realistic cognitive agents are concerned) are the following: (1) plausibility is assumed to be the only reason that motivates an agent to accept an information as a belief, and the only factor that determines the agent's degree of confidence in such belief; (2) the process of ranking formation is not represented ‘from the inside', i.e. showing how a given value comes to be assigned to a certain information, but rather given as a pre-condition for belief revision − since probability distributions in probabilistic approaches, as well as ordering criteria in non-probabilistic accounts, are treated as primitives: "given a specific probability distribution (respectively, ordering criterion), the following is expected to happen concerning belief revision...". I will discuss the latter assumption in 3.5: now, I will turn to the first one. Since plausibility is a strict equivalent of what is here called ‘credibility', from now on I will refer to it as credibility − analogously, what I have argued so far concerning credibility is supposed to apply to plausibility as well.

Why should it be problematic to consider the degree of belief as solely determined by credibility? Clearly, it is not problematic at all from a formal point of view − on the contrary, it makes things easier, since we have to handle a single value instead of many. But when the goal is to capture and represent the notion of belief in realistic cognitive agents, this assumption is an oversimplification. Human agents select their beliefs, i.e. the informations on which they will base their reasoning and action, not only by considering how much they know about the fact that such informations are actually true (credibility), but also by taking into account how useful such informations are for the agent's purposes (pragmatic relevance), and how many other informations the agent would be forced to reconsider and possibly reject, were these to be discarded (epistemic importance). These additional criteria have very little to do with credibility, and are best understood as separate, if not independent, features (Castelfranchi, 1996). The impact of (epistemic) importance and (pragmatic) relevance on belief selection should be quite conspicuous, even without providing any specific experimental evidence. In fact, a proper account of importance and relevance is crucial to explain the well-known psychological phenomena of self-deception, cognitive fixation and tunnel vision, plus any obstinate refusal of self-evident facts (down to pathological denial; Miceli & Castelfranchi, 1998b), and several social processes of mutual influence and persuasion. Concerning relevance, one of the clearest examples is the creation and maintenance of social status: believing oneself to be ‘a certain way' (e.g. reliable, trustworthy, and compassionate − or, in a different culture, smart, ruthless, and selfish) is culturally linked to the agent's goal of being accepted and praised in its social context. Actually, fulfilling a given social status usually becomes a goal in itself, which can be viewed as an extreme case of relevance, since achieving the goal p exactly amounts to believing p, and vice versa. Such a relation with the agent's goals puts pressure on the selection of the corresponding information as a belief: since it is subjectively desirable to believe p, I am more inclined to do so, regardless of (or with little concern for) the credibility of the information p. Other examples of similar phenomena abound in our daily experience: the prolonged refusal of a husband to acknowledge the existence of his wife's lover, despite overwhelming evidence, is a typical instance of the negative effect that relevance can have on belief selection; the same holds for our natural (and fallacious) tendency to avoid unpleasant informations, with the strong preference often accorded to good news over bad ones; and so on. On the other hand, epistemic importance plays a role in enhancing the value of terminal beliefs, i.e.
beliefs that are based on complex chains of reasoning, and possibly on many sources of information: since rejecting or reconsidering the information at the top of the chain would force the agent to reconsider all the previous steps and supports, with considerable costs in terms of resources and possibly a good deal of emotional distress, the agent naturally hesitates to do so. However, the effects of importance and relevance (and of credibility as well) are often deeply intertwined in belief selection, as I will further discuss in 3.5. Therefore, most of the interesting cases emphasize their interaction, rather than their distinction: e.g. religious beliefs, and in general any matter of faith, have little to do with credibility − due to an intrinsic lack of evidence on the subject. Instead, they rely heavily on both importance and relevance. Information concerning the existence of a caring, almighty, supernatural being, although not much grounded in empirical observation, can have a remarkable degree of belief (indeed, it can be one of the most cherished convictions in the agent's mind), both for pragmatic reasons (for instance, it makes me feel deeply in touch with other people who share the same belief) and for its epistemic importance, especially if complex arguments to support such belief have been developed − and the attention given to such arguments throughout the history of western philosophy is well known. More generally, belief selection is here conceived as resulting from the interaction of all the characteristic features of information, namely credibility, importance, and relevance. By focusing on credibility alone, the existing formalisms of belief revision simply overlook the other relevant factors, thereby failing to describe accurately the behavior of realistic cognitive agents.

This claim could be disputed, on the ground that importance and relevance, after all, are neither important nor relevant − that is, that we do not need them to model the behavior of rational agents. Since their main effect on human reasoning seems to be a misleading one, why should we bother to reproduce such shortcomings of natural selection and cultural development in our artificial agents? Importance and relevance, as they have been described so far, do nothing but interfere with belief selection, making the agent vulnerable to emotional, unsupported, mistaken pressures, and diverting its attention from what really counts − evaluating how likely it is that a given information turns out to be actually true. Therefore, there would be no reason to include a treatment of importance and relevance in our account of belief revision, since they would only encumber the agent with unwanted flaws and biases.

As far as rational behavior is concerned, this line of reasoning could still be justified. But it does not apply at all to the present discussion, since the focus here is not on abstract rational agents, but rather on realistic cognitive ones. In fact, we do want to model agents with flaws and biases, insofar as these flaws and biases prove to give rise to interesting cognitive and social dynamics, like the ones mentioned before. Moreover, the idea that the effects of importance and relevance are just nuisances, and nothing more, is questionable in itself. Epistemic importance expresses a principle of economy, in the sense that agents try to avoid expensive revisions (i.e. revisions that would involve many different beliefs) in favor of simpler ones − a principle which is quite similar, in its general formulation if not in its application, to the idea of minimal change, or informational economy, underlying AGM-style approaches to belief revision (for a critical review, see Rott, 1999). On the other hand, pragmatic relevance, as the name suggests, focuses on the relevance of a given information for the goals of the agent, implying that an agent should not waste time and energy in reasoning out, or acting on, a piece of knowledge which is of no consequence for its purposes − again, an obvious principle of economy (see also Cherniak, 1986; Harman, 1986).
Since realistic agents are, by definition, resource-bounded (Wassermann, 2000; Alechina & Logan, 2002), including importance and relevance in belief revision seems indeed a natural way to bridge the existing gap between formal models and cognitive theories of individual and social action. In the literature on belief revision, as far as I know, the only explicit attempt to consider issues of importance and relevance has been made by Jon Doyle (1991), but not much attention seems to have been paid to his work, since this particular article is almost never quoted by other authors interested in belief revision. More recently, Dragoni and Giorgini (2003) have also mentioned the need for more comprehensive accounts of informational properties, but they did not inquire further into the subject of importance and relevance. Doyle summarizes the problem as follows: "It would be valuable to have some more flexible way of specifying preferences for guiding contraction and revision [of belief states]. If we look to the usual explanations of why one revision is selected over another, we see that many different properties of propositions influence whether one proposition is preferred to another. For example, one belief might be preferred to another because it is more specific, or was adopted more recently, or has longer standing (was adopted less recently), or has higher probability of being true, or comes from a source of higher authority. These criteria, however, are often partial, that is, each may be viewed as a preorder (...). For example, there are many different dimensions of specificity, and two beliefs may be such that neither is more specific than the other. Similarly, probabilities need not be known for all propositions, and authorities need not address all questions. Moreover, none of these are comprehensive criteria that take all possible considerations into account. If we want contraction and revision to be truly flexible, we need some way of combining different partial, noncomprehensive orderings of propositions into complete global orderings of belief states" (1991: 171-172).

In his article, Doyle represents such preorderings as sets of preferences concerning propositions, and their integration is achieved by means of an aggregation policy, i.e. a way of electing a winning preordering, inspired by Arrow's desiderata for social choice (1963). I will further discuss this solution in the next section. Here I want to stress the similarities and differences between Doyle's account and the one outlined in this paper. Doyle considers sets of economic preferences to be natural candidates for representing preorderings over propositions, since they do not constrain the choice of such preorderings at all: in his model, we can assume as many (and as different) sets of preferences as we want. The approach suggested here is different, not only because it is focused on the properties that can give rise to the orderings, rather than on the orderings themselves, but also because such properties are claimed to be general and exhaustive − that is, credibility, importance and relevance are supposed to be taken into account in all instances of belief selection (possibly in different ways), and nothing else should be considered (for a weakening of this claim, cf. 3.5). This restriction fails to capture some of the possible criteria mentioned by Doyle (e.g. criteria concerning the temporal order of received informations), but these criteria seem to lack true generality − as Doyle himself points out. So we could always recover them, under specific conditions, by imposing some independent constraint over belief selection, e.g. imposing a temporal bias on the agent's choice, with a strong preference either for the latest or for the oldest information. In fact, this seems equivalent to the standard solution in Doyle's approach, since imposing a set of preferences, without an explanation of how such preferences were generated, amounts to defining a constraint over the agent's choice. The model outlined here aims to be a little more expressive: external constraints can be used to represent factors influencing the agent's belief selection in specific situations, but a more comprehensive theory of the rationale underlying the acceptance of informations as beliefs is provided.
In particular, we commit to the claim that the main reasons for believing or disbelieving an information amount to credibility, importance, and relevance, and a formal treatment of their interaction is advocated (cf. 3.5) as the most natural way to express degree of belief as an emergent notion. In this section, I have tried to show that the usual approach to the degree of belief is too simplistic for cognitive purposes, mainly because it does not take into account other features involved in belief selection, i.e. epistemic importance and pragmatic relevance. Since in this approach belief selection is performed over the set of informations stored by the agent (and currently included in its attentional focus; cf. 3.8), importance and relevance must apply to informations, and not directly to beliefs. An information is selected as a belief depending on its credibility, importance, and relevance, but the resulting belief (if any) does not retain these features as separate; instead, they are integrated into a single numerical value, defined as the strength of that particular belief, i.e. the global degree of confidence that the agent puts in that piece of knowledge (cf. 3.5).

So far, importance and relevance have been defined only intuitively, but we can also provide a (slightly) more formal definition. Given an agent x and its information structure I, the importance of an information φ ∈ I is defined as a measure of the number and credibility of all the informations in I whose credibility is affected (either positively or negatively) by changes in the credibility of φ. Given an agent x and its goal structure G, the relevance of an information φ ∈ I is defined as a measure of the number and value of all the goals in G which would be negatively affected by changes in the credibility of φ. Both definitions give rise to similar problems (not addressed here), basically concerning how to determine such measures for each information. However, it is clear that complete and finite procedures for calculating credibility, importance, and relevance are needed in this approach, and they will require both a more detailed account of the relational properties of the information structure (for credibility and importance) and a parallel discussion of the goal structure and its feedback on informations (to assess relevance). While a general outline of the resulting model is summarized in 4, all these topics remain open for future work (cf. 5).
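As a purely illustrative companion to these definitions, the sketch below computes rough importance and relevance scores for an information from an information structure and a goal structure. The dictionaries, the dependency maps, and the additive way of combining counts with credibilities and goal values are assumptions made here for the example, not commitments of the model.

```python
# Illustrative sketch: rough measures of importance and relevance for an
# information phi, following the informal definitions given in the text.

# Credibility of each information in the structure I (hypothetical values).
credibility = {"a": 0.7, "chi": 0.6, "beta": 0.8}

# For each information, the set of informations whose credibility is affected
# (positively or negatively) by changes in its credibility.
affected_informations = {"a": {"chi"}, "beta": {"chi"}, "chi": set()}

# Value of each goal in the goal structure G, and the informations on whose
# credibility each goal depends (hypothetical).
goal_value = {"attend_funeral": 0.9, "meet_john_tomorrow": 0.4}
goal_depends_on = {"attend_funeral": {"chi"}, "meet_john_tomorrow": {"chi"}}

def importance(phi):
    """Summed credibility of the informations affected by changes in phi
    (one rough measure of their number and credibility)."""
    affected = affected_informations.get(phi, set())
    return sum(credibility[psi] for psi in affected)

def relevance(phi):
    """Summed value of the goals affected by changes in the credibility of phi
    (one rough measure of their number and value)."""
    goals = [g for g, deps in goal_depends_on.items() if phi in deps]
    return sum(goal_value[g] for g in goals)

print(importance("a"))    # 0.6: only chi depends on a
print(relevance("chi"))   # 1.3: both goals depend on chi
```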

3.5. Processes of ranking formation

In AGM-style approaches, the ordering of propositions in an epistemic state is assumed, not explained. Regardless of the specific notion involved (epistemic entrenchment, epistemic relevance, k-rankings, etc.), the basic line of reasoning is the following: assuming the ordering r over the set B, we can constrain the results of the belief revision of B with proposition φ according to postulates concerning the ranking of φ in r (to be more precise, the ordering is often given not over propositions, but over different outcomes of belief revision, i.e. over different sets of propositions; however, the general approach does not change, since such ordering is still simply assumed and not explained). This is an elegant way of connecting degree of belief and belief change, but it does not offer any explanation of how an agent is supposed to develop a ranking of propositions, i.e. of why and how some beliefs come to be considered more worthy than others. Moreover, it is not always clear how AGM-style theories propose to deal with the modifications induced in the orderings by belief revision itself − a problem that was especially severe in the original AGM approach, where belief revision mapped a belief state with its ordering into a new belief state without any definite ordering, precluding the possibility of iterated revision (on the problem of iterated revision in the AGM framework, see Boutilier, 1996; Darwiche & Pearl, 1997; Lehmann, 1995; Friedman & Halpern, 1999b).

Probabilistic accounts of belief degree are usually better equipped to deal with ranking modification, and therefore with iterated revision as well, since they easily update probability measures (e.g. degrees of belief) by conditioning the prior value on the probability assigned to a new relevant observation (Boutilier, Friedman & Halpern, 1998; Fagin & Halpern, 1991; Friedman & Halpern, 1997; 1999a). However, probabilities give us an insight into the dynamics of ranking update, but not into its origin. Where do the priors come from? This depends on the assumptions made by each model, and the problem is that such assumptions are often quite unrealistic, at least from a cognitive perspective. For instance, many probabilistic models of knowledge imply that "the agents have a common prior, which means that if they were to forget everything they have learned, then they would agree on all the probabilities" (Kooi, 2003b: 392). Unfortunately, this requirement is far too strong for any realistic social simulation: there is no reason why all agents should share the same ‘innate view of the world', i.e. a common distribution of priors. On the contrary, individual differences in belief ranking do not depend only on experience, but also on the different inclinations shown by each agent toward new sources of information (cf. 3.3) and learning processes.

In the approach suggested here, the reliability assigned to new sources of information is somewhat analogous to prior probabilities, and the adjustments made in the information structure to compare supporting and contrasting informations (cf. 3.2) serve the same purpose as Bayesian conditioning in probabilistic approaches. However, there are also several differences − besides the simple fact that importance and relevance are taken into account along with credibility, and that belief selection (and ranking) results from their interaction. Moreover, source reliability does not depend on any primitive probability distribution, completely or partially shared by the agents; instead, it depends on what (if anything) is known to the agent concerning both the source and the content of the new evidence, and also on the subjective inclinations of the agent itself. As I have already mentioned, more trustful agents will assign greater reliability to new sources, while suspicious agents will be much more careful − with all possible cases in between. Finally, further adjustments in credibility assessment through experience and learning do not need to be identical, in scope and pace, across different agents. In other words, we do not want always to apply the same function (e.g. Bayesian conditioning) in updating the information structure of every agent. The reason is that different agents can have different learning strategies, more or less successful, and an understanding of such a range of variation is needed to model and compare different patterns of individual adaptation to the same environmental conditions − which is a basic requirement for any social simulation based on evolutionary dynamics (cf. 3.8). More generally, a computational model of social action should be committed to allowing different quantitative specifications of the same qualitative architecture, in order to capture not only broad generalizations, but also their interaction with individual features. This last concern seems to be underestimated by current probabilistic approaches to belief revision.

The idea of assigning to information several different ‘reasons to be believed' (Castelfranchi, 1996), and then explaining the interaction of these measures in producing belief selection and ranking, is the core of the approach presented here. A similar intuition has already been applied to belief revision by Doyle (1991), with different methodologies and different results.
Since I have already discussed Doyle's account of ranking criteria as sets of preferences (cf. 3.4), I will now turn to his notion of aggregation policy, i.e. the way in which different ranking criteria are supposed to blend together. The problem here concerns how to make a single global ordering emerge from a set of partial orderings. This can be considered analogous to the problem of determining belief selection and ranking in our model, given the values of credibility, importance and relevance assigned to each information. Doyle conceives an aggregation policy as a typical case of social choice, treating each partial preordering as the expression of an individual set of preferences, and therefore applying Arrow's desiderata for social choice (1963): collective rationality, unanimity (Pareto principle), independence of irrelevant alternatives, nondictatorship, and conflict resolution (Doyle, 1991: 172). Not surprisingly, having so restricted the aggregation function, Doyle is able to prove that Arrow's theorem also holds for aggregation policies in belief revision: that is, if we consider more than two alternative preorderings (which is often the case in belief revision), then no aggregation policy for such partial preorderings will be able to satisfy all of Arrow's desiderata. Doyle claims that there are a number of possible ways around this theorem, but they are not discussed with reference to belief revision; instead, the reader is referred to his joint work with Wellman on nonmonotonic reasoning (Doyle & Wellman, 1991).

Although such a connection between belief revision and Arrow's theory of social choice is quite fascinating, I am not able to fully appreciate its practical uses. In particular, I see no precise reason why Arrow's desiderata for social choice should necessarily be the proper criteria for capturing internal cognitive processes of belief selection and ranking. For instance, let us consider the principle of nondictatorship, which states that no partial preordering solely determines the global ordering − in Doyle's words, "there is no ‘dictator' whose preferences automatically determines the group's, no matter how the other individual orderings are varied" (1991: 172). While this principle is perfectly sensible as far as social choice and individual preferences are concerned, there is no reason to extend its validity to the integration of different ranking criteria for informations. On the contrary, in principle we want to allow ‘dictatorial functions' as well, since they express the most single-minded cognitive attitudes concerning belief selection: e.g. the ultimate fanatic can be conceived as an agent utterly devoted to delusion, whose belief selection and ranking are determined solely by the relevance of the corresponding information to its goals; on the other hand, the evangelical figure of the Apostle Thomas, who wanted to put his finger in the wound of Jesus before believing in his resurrection, is a paradigmatic example of an individual dominated by an overwhelming concern with credibility; and so on. More generally, there seems to be no reason to constrain belief selection and ranking at all, once we have identified the variables involved in the process (credibility, importance, and relevance) and the specific dynamics of their interaction (represented by a function of acceptance, which I will presently discuss). In a sense, all the necessary constraints are already embedded in the mathematical function we use to determine belief selection and ranking, and no additional requirement is needed − unless the contrary is proved, or we want to impose on the resulting ordering some specific condition which cannot be captured by the interaction of credibility, importance, and relevance (as shortly discussed in 3.4). On the whole, in comparison with other approaches in the literature, the account presented here shows some peculiar differences.
In contrast to probabilistic theories, here belief selection and ranking are determined by a variety of factors, not by credibility alone, and the outcome may depend not only on objective conditions and previous experience, but also on the individual inclinations characteristic of each agent. In contrast to Doyle's approach, no external restriction is imposed on belief selection and ranking, and the attention is focused on the features which characterize single information nodes (credibility, importance, and relevance) and on their interaction in mapping informations to beliefs, rather than on partial preference preorderings and their integration. More precisely, such interaction is supposed to be expressed by a particular function, called the ‘function of acceptance'. The function of acceptance is here conceived as a function having an information's credibility, importance and relevance as its arguments, and a belief's strength as its result. In other words, a function of acceptance is a mapping from informations to beliefs. Different agents can have the same function of acceptance, but this is not necessarily the case. Moreover, the same agent can have more than one function of acceptance, each to be applied in different contexts or under different conditions − although the dynamics governing the switch from one function to another, i.e. from one preferential criterion of belief selection to another, are beyond the aim of this paper. However, despite this broad range of variation, each function of acceptance must present the following characteristics:

(1) it must map an information φ, with its credibility, importance, and relevance, into a belief φ, with its strength − that is, the contents of the information are preserved in the corresponding belief, but the three different indexes are integrated into a single value, called ‘strength', which expresses the agent's overall degree of confidence in φ;

(2) it must have a threshold over a given condition, concerning either credibility, importance, relevance, or a combination of them, which qualitatively determines its behavior, i.e. the function will give rise to the belief φ only if the corresponding information φ scores over the threshold; otherwise the information will not be accepted as a belief (it will not be believed at all, though it will remain stored in the information structure);

(3) the resulting belief set must be a subset of the corresponding set of information nodes, i.e. the function cannot determine any belief φ without the corresponding information φ being provided.

A fourth requirement, implicit throughout this paper, is that the function of acceptance is concerned with an information's credibility, importance, and relevance − and nothing else. However, this claim reflects a commitment to a specific theory of belief selection, which could be disputed or improved without abandoning the general approach outlined here. In principle, other ‘reasons to believe' (i.e. different criteria of belief selection) could be added to the arguments of the function, still preserving the same theoretical framework − possibly including in the function an ‘internal representation' of some of Doyle's alternative criteria (cf. 3.4). Therefore, the general form of the function of acceptance can so far be summarized as follows: given a set of informations I, where each information φ ∈ I is characterized by cφ, iφ, rφ, and a set of beliefs B, where each belief φ ∈ B is characterized by sφ, a function of acceptance A with threshold k over condition C is a mapping I → B that respects the following properties:

if C(cφ, iφ, rφ) ≤ k    then    φ ∉ B
if C(cφ, iφ, rφ) > k    then    φ ∈ B and sφ = A(cφ, iφ, rφ)

where C indicates the combination of credibility, importance, and relevance which is required to exceed the threshold k. Usually, condition and threshold are expressed together, e.g.

C: (c + r) > 1 means that the credibility of an information plus its relevance must be greater than 1 in order to give rise to the corresponding belief. Condition C and function A can obviously take different forms: e.g. the pair C: c > k and A: (c + r + i) / 3 describes belief selection in an agent that will not believe any information with credibility less than or equal to k, but which will also pay considerable attention to relevance and importance, once the credibility minimum is satisfied (a prudent attitude, on the whole). In order to represent a variety of cognitive attitudes, we are free to use different combinations of function, condition, and threshold, and different mathematical integrations of credibility, importance and relevance in both function and condition. Therefore, this approach should allow a fair degree of flexibility in capturing individual variation within the same general architecture (cf. 3.8).

Finally, a short remark on the expression ‘function of acceptance' is in order. Here the word ‘acceptance' simply refers to the fact that some informations are accepted as beliefs, while others are (provisionally) rejected. But the resulting selection is by no means conceived as the expression of a deliberate choice, as if the agent were free to literally decide which informations it would be better to believe. Quite the contrary: belief selection and ranking are usually understood as an automatic process, and often even an unconscious one. The individual inclinations concerning belief revision expressed by the function of acceptance are not the result of the agent's ‘strategy' or ‘choice', but just an emerging effect of cognitive features (mainly preferences) of which the agent may or may not be aware, and which are not usually under its control. More generally, the automatic, non-deliberate nature of belief selection and ranking is one more reason to support the distinction between informations and beliefs (cf. 3.1). In fact, such distinction provides a framework to capture the separation between automatic and deliberate features of belief revision, since the former are described at the level of information, while the latter take place only for beliefs; moreover, systematic interactions between these different levels are also accounted for in the model (e.g. internal information update; cf. 3.3).

However, although belief selection is often automatic, this is not always the case. The most remarkable instance of deliberate belief selection is a well-known piece of logical thinking, but one crucial also to holiday planning, idle daydreaming, and children's play − that is, of course, the process of making assumptions. To assume something as a hypothesis means exactly to accept an information φ as a belief ‘for the sake of the argument', i.e. for the purpose of checking out some of its consequences. Psychologically, φ is not exactly believed, just assumed − a distinction roughly mirrored in our account by the fact that here φ has been forced into the belief set, bypassing the function of acceptance, and the agent seems to be aware of the special nature of this ‘belief without proper credentials'. However, the point here is not to provide an account of hypothetical thinking, but to stress that its very nature strongly supports the claim that logical rules are applied in deliberate reasoning at the level of beliefs, not of informations (cf. 3.3). In fact, in theorem proving as in daily life, we must assume something, i.e. upgrade it to the level of beliefs, if we want to derive whatever follows (possibly a contradiction), since a minimal degree of commitment to the premises is always required in order to draw their consequences. At the level of information, where there is no such commitment, logical derivation will not be performed.
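The following sketch shows one possible rendering of such a function of acceptance, using the example pair C: c > k and A: (c + r + i) / 3 discussed above; the numerical values and the Python encoding are, of course, only assumptions chosen for the purpose of illustration.

```python
# Illustrative sketch of a function of acceptance: an information with
# credibility c, importance i and relevance r is accepted as a belief with
# strength s only if the condition C exceeds the threshold k.

def make_acceptance(condition, strength, k):
    """Build an acceptance function from a condition C, a strength function A
    and a threshold k, as described in the text."""
    def accept(c, i, r):
        if condition(c, i, r) <= k:
            return None              # not believed; stays as information only
        return strength(c, i, r)     # believed, with this strength
    return accept

# The 'prudent' agent of the example: C: c > k with k = 0.6, A: (c + r + i) / 3.
prudent = make_acceptance(
    condition=lambda c, i, r: c,
    strength=lambda c, i, r: (c + r + i) / 3,
    k=0.6,
)

print(prudent(0.5, 0.9, 0.9))   # None: credibility below the threshold
print(prudent(0.8, 0.6, 0.4))   # accepted: strength is roughly 0.6 (average of the three indexes)
```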

3.6. Gradual change: weakening and strengthening of beliefs

In the original AGM theory, belief revision boils down to adding or subtracting (or both) one or more propositions from a given belief set. The issue of gradual change, i.e. the weakening or strengthening of prior beliefs, is not specifically addressed. When changes in the ordering criteria are accounted for, they can be viewed as reflecting the effect of belief revision on the degree of belief, since they indirectly express such a notion (cf. 3.5). However, in AGM approaches this is usually conceived as a side-effect, needed only to allow the treatment of iterated belief revision (Boutilier, 1996; Darwiche & Pearl, 1997; Lehmann, 1995; Friedman & Halpern, 1999b). An explicit account of belief degree is proposed by probabilistic approaches to belief revision, which are therefore naturally concerned with issues of gradual change. In fact, probability theory has proved to be well suited to deal with this kind of gradual modification in the agent's beliefs. However, I have already argued about the limitations of these approaches, as far as realistic cognitive agents are concerned: first, we need to be able to model gradual change in the highly structured domain of information (cf. 3.2), while probabilistic approaches do not commit to any structural description of observations and sources (Boutilier, Friedman & Halpern, 1998); moreover, gradual changes should affect informations directly, and beliefs only indirectly (cf. 3.1 and 3.5), while probabilistic approaches lack this distinction; finally, gradual changes are expected to affect all the different facets of information, i.e. credibility, importance, and relevance (cf. 3.4), while probabilistic approaches only consider plausibility.

The model outlined in this paper is strongly focused on the issue of gradual change, both in informations and in beliefs − the latter being understood as a consequence of the former. More precisely, the addition of new information to the agent's knowledge base, i.e. information update (cf. 3.3), is claimed always to affect the degree of credibility, importance, and relevance of all related informations. Therefore, information update implies gradual change, and vice versa − since the value of an information (and the strength of the corresponding belief) is not supposed to change without reason, and this ‘reason' can only be a new piece of evidence, external or internal (Castelfranchi, 1996). An instance of this dynamic was already presented by the example in 3.2: when a new piece of information is acquired by the agent (e.g. the fact that Mary claims to have just seen John in the street), not only is a new information generated, but it also has an effect on all the other related informations − in this case, weakening the prior informations concerning John's death, his assassination and burial service, and the reliability of the corresponding sources. Strengthening or weakening an information may or may not affect its selection as a belief, depending on the prior situation and on the function of acceptance applied by the agent. Given an information φ, assuming it was already accepted as the corresponding belief φ°, a strengthening of φ (no matter whether in credibility, importance, or relevance) will result in a strengthening of φ°, while a weakening of φ will result either in a weakening of φ°, or in a full-blown rejection of it as a belief (depending on the condition and threshold expressed by the function of acceptance).
If we instead assume that the information φ was not yet accepted as a belief, its strengthening may or may not give rise to the corresponding belief φ°, again depending on condition and threshold; its weakening, on the other hand, will have no direct effect at the level of beliefs. The rough account of information structure provided in this paper is by no means adequate to fully capture the cognitive dynamics of gradual change involved in belief revision (cf. 3.2). Many desired features of mutual change are not accounted for: e.g. the positive feedback from a confirmed information to its supports, and the effects of further observations on the prior reliability assigned to different sources. In fact, developing a better account of such issues is one of the expected outcomes of future investigations on belief revision in cognitive agents (cf. 5), and a major concern of my research − which, in a sense, is entirely based on the idea of capturing spreading mechanisms of gradual change over structured domains, as opposed to set-theoretical approaches to belief revision.
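A small sketch of how such strengthening and weakening might propagate from the level of information to the level of beliefs, simply by re-applying a function of acceptance after a credibility change, is given below; the threshold-based rule and the numbers are assumptions chosen only to reproduce the cases just described.

```python
# Illustrative sketch: the effect of weakening/strengthening an information phi
# on the corresponding belief, obtained by re-applying an acceptance function.

K = 0.6  # assumed acceptance threshold over credibility

def accept(c, i, r):
    """Toy acceptance function: believe phi with strength (c+i+r)/3 iff c > K."""
    return (c + i + r) / 3 if c > K else None

def revise(old_c, new_c, i, r):
    """Compare belief status before and after a change in credibility."""
    before, after = accept(old_c, i, r), accept(new_c, i, r)
    if before is not None and after is None:
        return "belief rejected"
    if before is None and after is not None:
        return "belief created"
    if before is None:
        return "still not believed"
    return "belief strengthened" if after > before else "belief weakened"

print(revise(0.9, 0.7, 0.5, 0.5))  # weakening of phi -> belief weakened
print(revise(0.7, 0.5, 0.5, 0.5))  # stronger weakening -> belief rejected
print(revise(0.5, 0.8, 0.5, 0.5))  # strengthening of a non-believed phi -> belief created
```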

3.7. Contradiction management at different levels

Most of the current approaches to belief revision share the same view concerning contradictions: they simply should not be allowed to arise in the agent's belief state (a remarkable exception is Wassermann, 2000). This requirement is particularly strong in the original AGM theory, as is to be expected, since this theory was first devised to describe not just any instance of belief revision, but rational belief revision − and it is clear that a revision resulting in a contradictory belief state can hardly be regarded as particularly ‘rational'. However, the same prescription against contradictions is also present, in a weaker form, in more realistic accounts of belief revision: e.g. Boutilier, Friedman and Halpern (1998) allow observations to be possibly inconsistent (although they do not elaborate on the matter), but insist that all contradictions must be ruled out by the revision function that maps an observation sequence into a belief state; similarly, Tamminga (2001a) emphasizes that information states may well be inconsistent, but again all contradictions are supposed to be kept out of belief states, and specific extraction operators are devised to guarantee such consistency (Tamminga, 2001a: 81-88).

Realistic accounts of contradiction management in cognitive agents are forced to be more complex, since the treatment of contradiction in human reasoning is not so straightforward. In the model proposed here, contradictions are considered at two different levels: informations and beliefs. Contradictory informations are not only permitted, but positively exploited: it is the knowledge of the existing contrast between, say, φ and ψ that allows the agent to weigh all φ-supports and ψ-supports against each other, in order to achieve a well-founded judgment concerning the credibility of both. Since c(φ) is proportional to 1/c(ψ), the very nature of their contradictory relationship often prevents them from being believed together − at least as long as the condition and threshold of the function of acceptance rely heavily on credibility. But contradictory informations can happen to be believed simultaneously, for instance when the agent is more concerned with importance and relevance than with credibility alone. In fact, it is easy to show that the importance of two contradictory informations is by definition always the same, and in many cases this is true of their relevance as well. So the question arises: what happens when contradictory informations are actually believed by the agent?

The basic idea underlying this approach is simple, and not even new (see for instance Harman's principle of recognized inconsistency, 1986, and also Levi, 1991; Gomolinska & Pearce, 1999; Wassermann, 2000): to be rational, an agent does not need to be spared in principle from any contradictory belief; rather, it needs to be able to manage such contradictions once they have arisen − and once it has become aware of them. In other words, specific axioms must be devised to deal with contradictions at the level of beliefs. In this view, it would be irrational for the agent simply to ignore contradictory beliefs, pretending they are not a problem − they are, since they cannot be used as a basis for further reasoning and action. So a rational agent is expected to deal with them, making up its mind concerning the contradiction: either it retains one of the contradictory beliefs and rejects the other, or it rejects both. To exclude in principle the emergence of contradictory beliefs is not, from a cognitive viewpoint, a safeguard against irrationality, but rather an undue limitation of the agent's reasoning. Cognitive agents know better − that is, they can handle contradictions. The axioms of contradiction management, like any other reasoning rule (cf. 3.3), will not be universal, but rather characteristic of each agent. All axioms will have in common an underlying temporal dimension, since contradiction management, like any other inferential process, takes place in time: at t0 the agent is faced with a contradiction, at t1 the solution has been provided. Different strategies of contradiction management, expressed by different axioms, can be applied. Consider for instance the following axioms, given with the provisional notation Bip (reading: ‘the agent believes p with strength i'), only for the sake of the example:

1. (Bnp ∧ Bm¬p) → ∀n∀m (¬Bnp ∧ ¬Bm¬p)
2. (Bnp ∧ Bm¬p) → ∀n≥m (Bnp ∧ ¬Bm¬p) ∧ ∀n<m (¬Bnp ∧ Bm¬p)
3. (Bnp ∧ Bm¬p) → ∀|n−m|≤k (¬Bnp ∧ ¬Bm¬p) ∧ ∀n>(m+k) (Bnp ∧ ¬Bm¬p) ∧ ∀m>(n+k) (¬Bnp ∧ Bm¬p)

The first axiom describes the behavior of an agent that always rejects both contradictory beliefs, no matter their relative strength. The second axiom expresses the opposite attitude: here the agent always makes a decision, accepting the belief with the higher strength and rejecting the other, no matter how small the difference in strength. The third axiom explicitly considers such a difference ∆, compares it with a given value k (possibly different in different agents), and predicts the agent's behavior accordingly: when ∆ is smaller than or equal to k, i.e. the contradictory beliefs have similar or identical strength, the agent distrusts both of them; when ∆ is greater than k, the agent chooses the belief with the higher strength. The third axiom is a clear generalization of the previous two: axiom 1 covers the case where k = ∞, while axiom 2 (with minor adjustments) is equivalent to the case where k = 0. Finally, contradiction management needs to apply only to beliefs of which the agent is currently aware: beliefs (and corresponding informations) which fall outside the attentional focus of the agent are not supposed to undergo any process of contradiction management − in a sense, they are free to be as inconsistent as they wish, as long as the agent does not focus its attention on them. This claim clearly implies a notion of limited awareness, which will be introduced in the following section.
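Since the third axiom generalizes the other two, it lends itself to a compact procedural rendering; the sketch below is merely illustrative, with the tolerance value k and the tuple-based encoding of the outcome being assumptions of convenience rather than part of the model.

```python
# Illustrative sketch of the third contradiction-management axiom: given two
# contradictory beliefs with strengths n (for p) and m (for not-p), decide
# which of them, if any, survives at time t1.

def manage_contradiction(n, m, k):
    """Return (keep_p, keep_not_p) according to the generalized axiom 3."""
    if abs(n - m) <= k:      # similar or identical strength: distrust both
        return (False, False)
    if n > m + k:            # p is clearly stronger: keep p, reject not-p
        return (True, False)
    return (False, True)     # not-p is clearly stronger: keep not-p, reject p

# Axiom 1 is recovered with k = infinity, axiom 2 (up to ties) with k = 0.
print(manage_contradiction(0.8, 0.3, k=0.2))          # (True, False)
print(manage_contradiction(0.6, 0.5, k=0.2))          # (False, False)
print(manage_contradiction(0.8, 0.3, k=float("inf"))) # (False, False): axiom 1 behavior
```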

3.8. Asides: general constraints interacting with belief revision

Here I will briefly discuss some general constraints required for modeling knowledge in realistic cognitive agents: such constraints are not specific to belief revision, but they apply to it as well. Some of them have already been mentioned in previous sections, so the following list will provide only a short summary of their main features.

Attentional focus and limited awareness. This is a well-known issue in belief revision (Cherniak, 1986; Fagin & Halpern, 1988; Wassermann, 2000), often linked to, and sometimes confused with, the distinction between implicit and explicit knowledge (Harman, 1986; Lakemeyer, 1991; Levesque, 1984). Realistic cognitive agents are by definition resource-bounded: they have limitations not only in available time, reasoning skills, and mental energies, but also in the set of informations and beliefs that they can access at a given time. No agent ever performs its reasoning over its whole information base: a more limited portion of that base is selected and exploited, namely the one currently included in the attentional focus of the agent − a process of selection similar, but not identical, to the distinction between long-term and short-term memory. The attentional focus is usually governed by automatic rules of contextual relevance, i.e. the agent automatically focuses on the subset of informations which are likely to be most useful in the current situation. However, sometimes the agent must be allowed to revert to a form of deliberate control over its attentional focus, e.g. when a new context is faced, and an appropriate information subset must be located and retrieved from the global knowledge base. Describing the dynamics of the attentional focus is usually considered beyond the aims of belief revision − although higher processes of belief revision typically involve ‘changing perspective on the matter', i.e. trying a different focalization of the information needed to solve a given problem. However, my purpose here is only to remark that the account of belief revision presented so far must be framed within the more general notion of limited awareness. Belief revision is neither completely local, since it always involves structural relations, nor global, since the whole knowledge base is not accessible to the agent: in fact, if we are willing to borrow a neologism from political science, belief revision should usually be conceived as a ‘glocal' process.

Individual variation via parameter setting. Social simulation is naturally interested in modeling a wide range of different ‘cognitive types', since human communities are not formed by clusters of anonymous, undifferentiated replicants. While large populations of identical, simple-minded agents can be surprisingly effective and ‘smart' in their performance as a group (e.g. the outstanding applications of ‘swarm intelligence' to complex problems of resource management), their interaction does not shed much light on social and cognitive dynamics among humans. This is the reason why the issue of individual variation has been mentioned so often in this paper, and some examples of such variation given (cf. 3.3, 3.5 and 3.7). More generally, the formal architectures in which social scientists are interested must include both well-defined universal requirements, identical for every agent (from now on, principles), and more specific features, which are a matter of individual variation (from now on, parameters). In the previous sections, introducing a cognitive-oriented model of belief revision, both principles and parameters were implicitly discussed.
The distinction between informations and beliefs; the notions of support and contrast; the definition of credibility, importance, and relevance; the confinement of logical reasoning to the level of beliefs; the multi-layered treatment of contradictions; the definition of the accepted parameters, and their range of variation − all of these are principles of the model, assumed to hold for every agent. Examples of parameters, on the contrary, are the following: the specific functions used to assess an information's credibility, importance and relevance; the degree of reliability assigned by default to new sources; the function of acceptance, with its threshold and condition; the axioms of contradiction management; the scope and flexibility of the attentional focus; and so on. Given a universal architecture of the qualitative features involved in belief revision (shortly summarized in 4), we are interested in different quantitative specifications of most of these features. This will allow us to describe agents that can be compared with each other, since they share the same basic architecture, but which also remain distinct, since their ‘cognitive equipment' is different. In particular, we can exploit parameters in two major ways. First, we can try to define parameter settings which mirror specific ‘cognitive types', e.g. selfish, trustful, overcautious, delusional, generous agents (and many more), in order to study their interaction in different contexts, thereby addressing classical problems in social studies (e.g. given a group of n agents working together to reach goal p, how many cheaters can the group afford before becoming unable to achieve p?). On the other hand, parameters give us the possibility of exploring evolutionary dynamics in highly symbolic systems − a point further discussed below.

Evolutionary dynamics (randomness over parameters). Evolutionary dynamics have gained increasing popularity in social simulation over the last two decades, since they make it possible to understand and model the cognitive skills involved in social action as the result of evolution, rather than as innate abilities simply hard-wired by the researchers into the agent's mind. The nature and functions of the human brain, and the corresponding basic properties of our mind, have clearly been shaped by evolution: being able to describe artificial minds within the same theoretical framework is widely recognized as a major improvement brought about by cognitive science in social studies (Cummins & Allen, 1997; Pinker, 1997; Nolfi & Floreano, 2000). Usually, evolutionary dynamics are linked to simulations with neural networks, since it was in this field that they were first extensively applied to populations of artificial agents (Rumelhart & McClelland, 1986; McClelland, 1989; Belew, McInerney & Schraudolph, 1991). Being based on the idea of roughly reproducing the brain's biological structure in the agent's architecture, neural networks are naturally suited to deal with evolutionary pressures. On the contrary, symbolic architectures like the one presented here, i.e. architectures based on representational notions like ‘beliefs' and ‘goals', and modeled using logical formalisms, are usually considered at a loss with evolutionary dynamics, since they have no clear way of representing the effects of environmental feedback on the agent's architecture (for a critical review of this claim, see Broda, d'Avila Garcez & Gabbay, 2002). However, I believe this conception, and the underlying opposition between subsymbolic and symbolic models, to be quite misleading − not only because hybrid approaches are both possible and useful (e.g. Copycat and Metacat by Hofstadter and colleagues, and DUAL by Kokinov and colleagues; Hofstadter et al., 1995; Kokinov, 1994; Kokinov, Nikolov & Petrov, 1996), but also because evolutionary dynamics can be exploited in symbolic systems as well.
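By way of illustration only, the sketch below collects some of the parameters listed above into a single structure and applies random variation to them, which is the minimal representational ingredient that an evolutionary use of this symbolic architecture would require; every field name, range, and the Gaussian mutation rule are assumptions made here, not prescriptions of the model.

```python
# Illustrative sketch: individual parameters of the belief-revision architecture
# collected in one structure, with a simple random-mutation operator over the
# numerical ones (functions and axioms would need a pool of admissible variants).
import random
from dataclasses import dataclass, replace

@dataclass
class AgentParameters:
    default_source_reliability: float = 0.5   # reliability assigned to new sources
    acceptance_threshold: float = 0.6         # threshold k of the acceptance function
    contradiction_tolerance: float = 0.2      # value k of the contradiction axiom

def mutate(params, sigma=0.05):
    """Return a 'genetic copy' of params with small Gaussian variations,
    keeping every value within the [0, 1] interval."""
    def wiggle(x):
        return min(1.0, max(0.0, x + random.gauss(0.0, sigma)))
    return replace(
        params,
        default_source_reliability=wiggle(params.default_source_reliability),
        acceptance_threshold=wiggle(params.acceptance_threshold),
        contradiction_tolerance=wiggle(params.contradiction_tolerance),
    )

parent = AgentParameters()
offspring = mutate(parent)   # introduced into the environment for the next cycle
print(parent, offspring, sep="\n")
```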
Evolution, reduced to its core, is a stochastic process, which basically requires two interacting forces: a pool of random variations, and a selection principle over such a pool. Without randomness, there would be no endogenous change in the genetic heritage, hence no evolution; without natural selection, evolution would lack direction, and no functional advantage would be derived from it. Therefore, such a stochastic process does not necessarily need a neural network in order to be applied to artificial agents: what is needed is to allow a range of random variation over specified cognitive features of the agent, and a way of determining the fitness of each agent depending on such features.
In the model outlined here, the parameters are natural candidates as loci of random variation, while well-known techniques can be used for the selection of the fittest (e.g. genetic algorithms over populations). In such a framework, different settings are randomly assigned to the agent's parameters at the beginning of its life; the agents are then confronted with a common environment, where they are supposed to perform a given task (individual or social), and their survival depends on their performance on the task. After a given time, the agents who are still alive are allowed to reproduce before dying, i.e. a genetic copy of them is introduced into the environment, with some variations in parametrical settings to account for mutations; then a new cycle begins, and so on. After enough generations have passed, an evolutionary pattern will emerge: some cognitive features (i.e. some parameters) will have been shaped and honed by repeated selection for the task, while others will have proved to be inessential, or even harmful. More generally, researchers will be able to relate precisely (1) the nature of the task, (2) the environmental conditions, and (3) the complex evolution undergone by the cognitive features of the agents. The artificial setting will ensure a high degree of control over all the variables involved, and an adequate evolutionary rate − since significant evolutionary changes in artificial agents can be reproduced in a matter of hours, rather than the ages required by natural selection.

Applying the same line of reasoning to belief revision, it should be possible to study it on an experimental basis as well, addressing a variety of specific questions: e.g., what kind of belief revision is best suited for a given task? Under which conditions is one belief revision strategy to be preferred over another? When different agents apply different strategies, how do they interact with each other? What is the evolutionary outcome of belief revision systems equipped with supposedly rational axioms, e.g. the AGM postulates? Do they fare better or worse than less 'rational' agents? What are the social factors (e.g. group size, cooperation policies, rate of interaction, etc.) which mainly affect belief revision?

However, quite obviously, the mere suggestion that parameters should be allowed to vary at random in evolutionary simulations is not enough to claim that we have devised a model for evolving different strategies of belief revision. So far, I have presented at least three different types of individual parameters: numerical values (e.g. the reliability assigned by default to new sources), mathematical functions (e.g. the function of acceptance), and logical formulae (e.g. the axioms of contradiction management). While numerical values lend themselves easily to random variation, the same is not true for functions and formulae: here we would have to assume a given set of admissible alternatives, and apply random variation over that set − a procedure that places severe limitations on the usual course of evolution.
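To make the idea of parameters as loci of random variation more concrete, here is a minimal sketch of the selection cycle described above; the fitness function is a mere placeholder, and the fixed set of admissible acceptance rules is an assumption introduced for this example, not part of the model.

```python
import random

# A 'formula-like' parameter can only vary within a fixed set of admissible alternatives.
ADMISSIBLE_ACCEPTANCE_RULES = ["threshold", "majority_of_supports", "lexicographic"]

def random_agent():
    """Randomly initialised parametrical setting (two numerical values, one rule)."""
    return {
        "default_source_reliability": random.random(),
        "acceptance_threshold": random.random(),
        "acceptance_rule": random.choice(ADMISSIBLE_ACCEPTANCE_RULES),
    }

def mutate(parent, sigma=0.05):
    """Genetic copy of a parent, with small numerical perturbations and rare rule switches."""
    child = dict(parent)
    for key in ("default_source_reliability", "acceptance_threshold"):
        child[key] = min(1.0, max(0.0, child[key] + random.gauss(0.0, sigma)))
    if random.random() < 0.1:  # occasional 'macro-mutation' over the admissible set
        child["acceptance_rule"] = random.choice(ADMISSIBLE_ACCEPTANCE_RULES)
    return child

def fitness(agent):
    """Placeholder: here the agent's performance on the common task would be measured."""
    return random.random()

def evolve(generations=50, population_size=40, survivors=10):
    """Random initialisation, selection of the fittest, reproduction with mutation."""
    population = [random_agent() for _ in range(population_size)]
    for _ in range(generations):
        fittest = sorted(population, key=fitness, reverse=True)[:survivors]
        offspring = [mutate(random.choice(fittest))
                     for _ in range(population_size - len(fittest))]
        population = fittest + offspring
    return population
```

Inspecting which acceptance rules and which numerical ranges dominate the surviving population after `evolve()` has run would be the artificial analogue of the evolutionary pattern described above.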
Moreover, as soon as we want to compare the evolution of the species with individual learning, we will have to find a way of encoding environmental feedback in the agent's parameters, i.e. to make it able to understand when its actions are successful or pointless, and to trace the causes of this outcome back to its own internal architecture. In neural networks this is achieved by a variety of methods (e.g. backpropagation), and similar solutions will have to be devised for parameter adjustment in the symbolic architecture outlined here − a task open for further developments (cf. 5).
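Purely as an illustration of the kind of solution that might be explored (nothing here is part of the model, and the update rule is an arbitrary assumption), a single numerical parameter could be nudged after each action according to the observed outcome:

```python
def adjust_reliability(reliability, source_was_right, learning_rate=0.1):
    """Toy error-driven update of one numerical parameter: the default reliability
    of a source drifts toward 1.0 when information from that source led to a
    successful action, and toward 0.0 when it led to failure."""
    target = 1.0 if source_was_right else 0.0
    updated = reliability + learning_rate * (target - reliability)
    return min(1.0, max(0.0, updated))

# Example: a source initially trusted at 0.5 misleads the agent twice in a row.
r = 0.5
for outcome in (False, False):
    r = adjust_reliability(r, outcome)
# r is now approximately 0.405
```

This is only meant to show that symbolic parameters can, in principle, register environmental feedback; whether such a rule is cognitively plausible is precisely the open question mentioned above.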
Connections with planning (goals). The relation between epistemic processes and the agent's goals has already been introduced through the notion of relevance (cf. 3.4), which expresses the degree of pragmatic salience of an information, i.e. how much that information is needed for the purposes of the agent. However, the link between epistemic notions (informations and beliefs) and goals is likely to be deeper than that. A thorough discussion of this topic would involve the interaction between beliefs and goals in planning, which is beyond the aims of this paper. Nonetheless, it should be clear that belief revision in cognitive agents is not to be understood as idle speculation, but rather as purposeful behavior, as much goal-oriented as it is data-driven. We do not revise our beliefs just for the sake of it, nor only out of a general commitment to achieve better knowledge of the facts − although such a commitment is likely to be a useful concern in most cases, and a common guideline for belief revision (as expressed by the notion of credibility). Cognitive agents often have more specific reasons to revise their beliefs, and such reasons are bound to affect the outcome of belief revision itself (e.g. the 'knowledge-seeking behavior' briefly introduced in 4). Therefore, the connection with the agent's goals should not be underestimated in any formal model of belief revision (including the one suggested here), as far as realistic cognitive agents are concerned.
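As a deliberately simplified sketch of how such pragmatic salience might be assessed (the actual function for determining relevance is left open here), one could measure how many active goals mention the state of things an information is about:

```python
def relevance(information_content, goals):
    """Toy relevance measure: the fraction of active goals whose conditions
    mention at least one proposition represented by the information.
    `information_content` and each goal are sets of atomic propositions."""
    if not goals:
        return 0.0
    touched = sum(1 for goal in goals if goal & information_content)
    return touched / len(goals)

# Example: 'the bridge is closed' matters for one of the agent's two goals.
info = {"bridge_closed"}
goals = [{"reach_office", "bridge_closed"}, {"buy_groceries"}]
# relevance(info, goals) == 0.5
```

In this toy measure, an information touching none of the agent's goals simply receives zero relevance, whatever its credibility and importance may be.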

4.

Outline of the model

Throughout the previous discussion of the existing formalisms for belief revision, I have tried to suggest some features of a possible alternative model, more explicitly cognitive-oriented. A general outline of such an architecture (far from complete) is provided in Figure 1. (Figure 1 is provided in a separate sheet: fig1.pdf) The basic mentalistic features of this model are informations, beliefs, and goals (for further details on beliefs and goals, see the reference in 2.1). Information is defined as an internal representation of a state of things, stored in the memory of the agent, regardless of whether its content is believed or not (cf. 3.1). Each information is characterized by a degree of credibility, importance, and relevance (cf. 3.4), and different informations are connected to each other via supporting and contrasting relations (cf. 3.2). The definitions of beliefs and goals are given in 2.1: each belief is characterized by a degree of strength, while each goal is assigned a certain value for the agent. In this paper, belief revision has been discussed as mainly concerned with the updating of information with new data from the environment, the interaction between informations and beliefs, and the influence of goals over informations. External information update (cf. 3.3) describes the encoding of new environmental data, via perception and communication. The function of acceptance described in 3.5 is the core element in belief selection and ranking. Internal information update (cf. 3.3) takes care of mapping new beliefs, inferred from previous knowledge, back into the information structure. The relevance of information is determined by comparing the content of information nodes with goal states (cf. 3.4).
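To fix ideas, here is one possible (and purely hypothetical) rendering of the ingredients just listed: information nodes carrying credibility, importance, and relevance, connected by support and contrast links, with a threshold-based function of acceptance deciding which informations are selected as beliefs. The aggregation formula and the threshold value are arbitrary placeholders, not parts of the proposal.

```python
from dataclasses import dataclass, field

@dataclass
class Information:
    """An internal representation of a state of things, believed or not (cf. 3.1)."""
    content: str
    credibility: float   # factual credibility (cf. 3.4)
    importance: float    # epistemic importance (cf. 3.4)
    relevance: float     # pragmatic relevance, derived from goals (cf. 3.4)
    supports: list = field(default_factory=list)   # supporting informations (cf. 3.2)
    contrasts: list = field(default_factory=list)  # contrasting informations (cf. 3.2)

def quality(info: Information) -> float:
    """Arbitrary aggregation of the three dimensions into a single score."""
    return (info.credibility + info.importance + info.relevance) / 3

def accept(info: Information, threshold: float = 0.6) -> bool:
    """Toy function of acceptance: an information is selected as a belief
    when its overall quality exceeds the threshold."""
    return quality(info) > threshold

# Example: a credible and highly relevant information is accepted as a belief.
rain = Information("it is raining", credibility=0.8, importance=0.5, relevance=0.9)
# accept(rain) is True, since quality(rain) is about 0.73
```

Support and contrast links appear here only as placeholders; how they should feed into credibility and importance is exactly the kind of issue raised in 3.2 and 3.4.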
In this preliminary proposal, the interaction between beliefs and goals in determining reasoning and action is assumed to have little impact on belief revision, at least directly. However, this assumption is only made for the sake of clarity, and it should be regarded as provisional (cf. 3.8). More complex dynamics of interaction between different modules of the architecture will have to be investigated, once the basic notions are sufficiently clear and well-defined. In fact, planning often requires a certain base of informations: when such a base is not already available to the agent, deliberate actions are likely to be performed to acquire the relevant data. In this case, the corresponding belief revision will be triggered and oriented by the explicit goals of the agent, beyond the sole effect of relevance. Therefore, such instances of 'knowledge-seeking behavior' will require a richer account of the interplay between goals, beliefs, informations, and action. However, further refinements in this direction are beyond the rather limited scope of this preliminary work: the reader interested in the part of the model concerning beliefs, goals, and action is advised to refer to the general framework of the CMSA (cf. 2.1), and possibly compare it with the vast literature on formal models of epistemic and intentional states (see for instance Bacon, 1975; Bell, 1995; Brewka, 1996; Cohen & Levesque, 1990; Dunin-Keplicz & Verbrugge, 2003; Fagin et al., 1995; Georgeff & Ingrand, 1989; Georgeff et al., 1998; Hintikka, 1962; Konolige, 1986; Konolige & Pollack, 1989; Meyer & van der Hoek, 1995; Rao & Georgeff, 1991; Reiter, 2001; Singh, 1994; Singh & Asher, 1993; Wooldridge, 2000; Wooldridge & Parsons, 1998).

5.

Conclusions and future work

Due to the introductory nature of my work, I do not have much to offer in terms of concluding remarks − indeed, my 'conclusions' are much more concerned with future openings than with well-established results. However, this general survey of belief revision 'through the eyes of a cognitive scientist interested in social action', so to speak, should have at least confirmed that this field is remarkably rich, complex, and fascinating. Many well-known formalisms (sometimes converging, sometimes competing) have been devised in the last twenty years to deal with belief revision, and more are likely to come, so that the topic is without any doubt one of the liveliest at the intersection between logic and cognition. On the other hand, none of the existing formalisms turned out to be adequate to our purposes, i.e. none of them fulfills all, or even most, of the cognitive desiderata proposed here. On the whole, there seems to be a peculiar detachment between the formal treatment of belief revision as a technical issue and its meaning in a cognitive and social perspective. This gap probably reflects disciplinary boundaries: cognitive psychologists and social scientists are likely to have a hard time deciphering the formal languages employed in technical papers (and most of them will never try), while logicians, mathematicians, and computer scientists might fail to consider as relevant those features of belief revision that would appear self-evident to the former. This is not to say that the existing formal models are not concerned at all with issues of 'cognitive realism', because clearly this is not the case − as this long overview should have emphasized. Nonetheless, stronger interdisciplinary collaborations are likely to be required, in order to improve the existing formalisms of belief revision as plausible tools for cognitive social modeling. Hopefully, my future work will explore such an interdisciplinary perspective further. In particular, jointly with Cristiano Castelfranchi and colleagues, I am planning to develop a better account of information structure (cf. 3.2), focusing more on information relations and their impact on the properties of single information nodes (cf. 3.4).
Further typologies of information relations will have to be introduced, in order to account for different instances of gradual change (cf. 3.6). On the other hand, I am aware that such distinctions (e.g. introducing different kinds of support and contrast), although needed, are bound to affect the way in which credibility, importance, and relevance are determined (cf. 3.2 and 3.4). Therefore, much attention will have to be given to the problem of ensuring a finite procedure to assess these properties for every information stored by the agent − otherwise, it would be impossible to apply the function of acceptance (cf. 3.5), and the whole model would be useless. More generally, a proper formal language will have to be provided, in order to specify all these different features within the same framework. However, these immediate priorities only cover a fraction of the future work needed to bring about the model of belief revision outlined here. Many open research questions were merely suggested in passing in the previous sections, like the opportunity of studying agents with imperfect memory (cf. 3.1), the interest of comparing different cognitive types and their emerging strategies of belief revision, and the possibility of assessing the effects of social pressure and evolution on belief revision (cf. 3.8). Moreover, other interesting developments were not even mentioned in this work, like the possible connections between belief revision and dynamic logic (van Benthem, 1996), with special reference to dynamic epistemic logic (Baltag, 2002; van Ditmarsch & Labuschagne, 2003; Gerbrandy & Groeneveld, 1997; Reiter, 2001) − a topic of extreme interest, and for the most part yet to be explored (Segerberg, 1999). In the end, the only possible conclusion of any preliminary work is that much more work is still needed: a perspective that might leave the reader unsatisfied, but that usually sounds promising to the writer. In fact, after more than twenty years of studies, belief revision can still be regarded as an open problem − and a very crucial one, if we want to understand better why and how people change their minds.

References

Alchourrón, C., Gärdenfors, P., Makinson, D. (1985). "On the logic of theory change: partial meet contraction and revision functions". Journal of Symbolic Logic 50, pp. 510-530.
Alechina, N., Logan, B. (2002). "Ascribing beliefs to resource bounded agents". In: Proceedings of AAMAS'02.
Arrow, K. J. (1963). Social choice and individual values. Yale University Press, 2nd edition: New Haven (CT).
Bacchus, F. (1990). Representing and reasoning with probabilistic knowledge: a logical approach to probabilities. The MIT Press: Cambridge (MA).
Bacon, J. (1975). "Belief as relative knowledge". In: Anderson, A. R., Marcus, R. B., Martin, R. M. (eds.), The logical enterprise, Yale University Press, New York-London, pp. 189-210.
Baltag, A. (2002). "A logic for suspicious players: epistemic actions and belief updates in games". Bulletin of Economic Research 54, pp. 1-45.
Belew, R. K., McInerney, J., Schraudolph, N. N. (1991). "Evolving networks: using the genetic algorithm with connectionist learning". In: C. G. Langton et al. (eds.), Proceedings of the second conference on artificial life, Addison-Wesley, Reading (MA).
Bell, J. (1995). "Changing attitudes". In: Wooldridge, M. J., Jennings, N. R. (eds.), Intelligent agents: ECAI-94 workshop on agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 40-55.
van Benthem, J. (1996). Exploring logical dynamics. CSLI Publications: Stanford, CA.
Boutilier, C. (1996). "Iterated revision and minimal change of conditional beliefs". Journal of Philosophical Logic 25, pp. 262-305.
Boutilier, C. (1998). "A unified model of qualitative belief change: a dynamical systems perspective". Artificial Intelligence 98, pp. 281-316.
Boutilier, C., Friedman, N., Halpern, J. Y. (1998). "Belief revision with unreliable observations". In: Proceedings of the fifteenth national conference on Artificial Intelligence (AAAI'98), pp. 127-134.
Brewka, G. (ed.) (1996). Principles of knowledge representation. CSLI Publications: Stanford, CA.
Broda, K. B., d'Avila Garcez, A., Gabbay, D. (2002). Neural-symbolic learning systems: foundations and applications. Springer: London.
Castelfranchi, C. (1992). "No more cooperation, please! In search of the social structure of verbal interaction". In: Ortony, A., Slack, J., Stock, O. (eds.), Communication from an AI perspective, Springer-Verlag, Berlin, pp. 205-227.
Castelfranchi, C. (1995). "Guarantees for autonomy in cognitive agent architecture". In: Wooldridge, M. J., Jennings, N. R. (eds.), Intelligent agents: ECAI-94 workshop on agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 56-70.
Castelfranchi, C. (1996). "Reasons: belief support and goal dynamics". Mathware & Soft Computing 3, pp. 233-247.
Castelfranchi, C. (1997a). "Representation and integration of multiple knowledge sources: issues and questions". In: Cantoni, Di Gesù, Setti, Tegolo (eds.), Human & Machine Perception: Information Fusion, Plenum Press.
Castelfranchi, C. (1997b). "Principles of individual social action". In: Holmström-Hintikka, G., Tuomela, R. (eds.), Contemporary action theory, vol. II, Kluwer Academic Publishers, Dordrecht, pp. 163-192.
Castelfranchi, C. (1998a). "Modeling social action for AI agents". Artificial Intelligence 103, pp. 157-182.
Castelfranchi, C. (1998b). "Simulating with cognitive agents: the importance of cognitive emergence". In: Sichman, J. S., Conte, R., Gilbert, N. (eds.), Multi-agent systems and agent-based simulation, Springer-Verlag, Berlin, pp. 26-44.
Castelfranchi, C. (1999a). "Prescribed mental attitudes in goal-adoption and norm-adoption". AI and Law 7, pp. 37-50.
Castelfranchi, C. (1999b). "From conventions to prescriptions. Towards an integrated view of norms". AI and Law 7, pp. 323-340.
Castelfranchi, C. (2003). "Formalising the informal? Dynamic social order, bottom-up social control, and spontaneous normative relations". Journal of Applied Logic 1, pp. 47-92.
Castelfranchi, C., Falcone, R. (1998). "Towards a theory of delegation for agent-based systems". Robotics and Autonomous Systems 24, pp. 141-157.
Castelfranchi, C., Giardini, F., Lorini, E., Tummolini, L. (2003). "The prescriptive destiny of predictive attitudes: from expectations to norms via conventions". In: Proceedings of CogSci 2003, 25th Annual Meeting of the Cognitive Science Society, Boston (MA).
Castelfranchi, C., Müller, J. P. (eds.) (1995). From reaction to cognition: 5th European workshop on modelling autonomous agents in a multi-agent world. Springer-Verlag: Berlin.
Castelfranchi, C., Werner, E. (eds.) (1994). Artificial social systems: 4th European workshop on modelling autonomous agents in a multi-agent world. Springer-Verlag: Berlin.
Cherniak, C. (1986). Minimal rationality. The MIT Press: Cambridge (MA).
Cohen, P. R., Levesque, H. J. (1990). "Intention is choice with commitment". Artificial Intelligence 42, pp. 213-261.
Conte, R., Castelfranchi, C. (1995). Cognitive and social action. UCL Press: London.
Conte, R., Dellarocas, C. (2001). Social order in multiagent systems. Kluwer Academic Publishers: Dordrecht.
Conte, R., Gilbert, N. (1995). Artificial societies: the computer simulation of social life. UCL Press: London.
Conte, R., Hegsellman, R., Terna, P. (1997). Simulating social phenomena. Springer: Berlin.
Conte, R., Paolucci, M. (2002). Reputation in artificial societies: social beliefs for social order. Kluwer: Boston (MA).
Cummins, D., Allen, C. (eds.) (1997). The evolution of mind. Oxford University Press: New York (NY).
Darwiche, A., Pearl, J. (1997). "On the logic of iterated belief revision". Artificial Intelligence 89, pp. 1-29.
van Ditmarsch, H. (2003). "Prolegomena to dynamic logics for belief revision". Revised unpublished draft, http://www.cs.otago.ac.nz/staffpriv/hans/prolegomena.pdf. Consulted in December 2003.
Doyle, J. (1991). "Rational belief revision". In: J. Allen, R. Fikes, E. Sandewall (eds.), Principles of knowledge representation and reasoning: proceedings of the second international conference (KR91), Morgan Kaufmann Publishers, San Mateo (CA), pp. 163-174.
Doyle, J., Wellman, M. P. (1991). "Impediments to universal preference-based default theories". Artificial Intelligence 49, pp. 97-128.
Dragoni, A. F., Giorgini, P. (2003). "Distributed belief revision". Autonomous Agents and Multi-Agent Systems 6, pp. 115-143.
Dragoni, A., Mascaretti, F., Puliti, P. (1995). "A generalized approach to consistency-based belief-revision". In: Gori, M., Soda, G. (eds.), Topics in Artificial Intelligence, Springer-Verlag, Berlin.
Dunin-Keplicz, B., Verbrugge, R. (eds.) (2003). FAMAS'03 - Formal approaches to multi-agent systems. Proceedings of an international workshop at ETAPS 2003, April 5-13, Warsaw, Poland.
van Eijk, R. M., de Boer, F. S., van der Hoek, W., Meyer, J.-J. Ch. (1998). "Information-passing and belief revision in multi-agent systems". In: Müller, J. P., Rao, A. S., Singh, M. P. (eds.), Intelligent agents V: agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 29-45.
Fagin, R., Halpern, J. Y. (1988). "Belief, awareness, and limited reasoning". Artificial Intelligence 34, pp. 39-76.
Fagin, R., Halpern, J. Y. (1991). "Uncertainty, belief, and probability". Computational Intelligence 6, pp. 160-173.
Fagin, R., Halpern, J. Y. (1994). "Reasoning about knowledge and probability". Journal of the ACM 41, pp. 340-367.
Fagin, R., Halpern, J. Y., Moses, Y., Vardi, M. Y. (1995). Reasoning about knowledge. The MIT Press: Cambridge (MA).
Falcone, R., Castelfranchi, C. (1999). "The dynamics of trust: from beliefs to action". In: Autonomous Agents '99 workshop on "Deception, fraud and trust in agent societies", Seattle, pp. 41-54.
Falcone, R., Pezzulo, G., Castelfranchi, C. (2003). "A fuzzy approach to a belief-based trust computation". In: Falcone, R., Barber, S., Korba, L., Singh, M. (eds.), Trust, reputation, and security: theories and practice, Springer-Verlag, Berlin, pp. 73-86.
van Fraassen, B. C. (1995). "Fine-grained opinion, probability, and the logic of full belief". Journal of Philosophical Logic 24, pp. 349-377.
Friedman, N., Halpern, J. Y. (1997). "Modeling beliefs in dynamic systems. Part I: foundations". Artificial Intelligence 95, pp. 257-316.
Friedman, N., Halpern, J. Y. (1999a). "Modeling beliefs in dynamic systems. Part II: revision and update". Journal of AI Research 10, pp. 117-167.
Friedman, N., Halpern, J. Y. (1999b). "Belief revision: a critique". Journal of Logic, Language and Information 8, pp. 401-420.
Galliers, J. R. (1992). "Autonomous belief revision and communication". In: P. Gärdenfors (ed.), Belief revision, Cambridge University Press, Cambridge (UK), pp. 220-246.
Gärdenfors, P. (1988). Knowledge in flux: modeling the dynamics of epistemic states. The MIT Press: Cambridge (MA).
Gärdenfors, P. (ed.) (1992). Belief revision. Cambridge University Press: Cambridge (UK).
Georgeff, M. P., Ingrand, F. F. (1989). "Decision-making in an embedded reasoning system". In: Proceedings of IJCAI-89, pp. 972-978.
Georgeff, M., et al. (1998). "The Belief-Desire-Intention model of agency". In: Müller, J. P., Rao, A. S., Singh, M. P. (eds.), Intelligent agents V: agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 1-10.
Gerbrandy, J., Groeneveld, W. (1997). "Reasoning about information change". Journal of Logic, Language, and Information 6, pp. 147-196.
Goldszmidt, M., Pearl, J. (1996). "Qualitative probabilities for default reasoning, belief revision, and causal modeling". Artificial Intelligence 84, pp. 57-112.
Gomolinska, A., Pearce, D. (1999). "Disbelief change". In: B. Hansson, S. Halldén, N.-E. Sahlin, W. Rabinowicz (eds.), Spinning ideas: internet festschrift for Peter Gärdenfors, http://www.lucs.lu.se/spinning/. Consulted in December 2003.
Halpern, J. Y. (1991). "The relation between knowledge, belief, and certainty". Annals of Mathematics and AI 4, pp. 301-322.
Hansson, S. (1992). "Reversing the Levi identity". Journal of Philosophical Logic 22, pp. 637-639.
Hansson, S. (1999). "A survey of non-prioritized belief revision". Erkenntnis 50, pp. 413-427.
Harman, G. (1986). Change in view: principles of reasoning. The MIT Press: Cambridge (MA).
Hintikka, J. (1962). Knowledge and belief: an introduction to the logic of the two notions. Cornell University Press: Ithaca-London.
Hofstadter, D., et al. (1995). Fluid concepts and creative analogies. Basic Books: New York (NY).
Katsuno, H., Mendelzon, A. O. (1991). "On the difference between updating a knowledge base and revising it". In: J. Allen, R. Fikes, E. Sandewall (eds.), Principles of knowledge representation and reasoning: proceedings of the second international conference (KR91), Morgan Kaufmann Publishers, San Mateo (CA), pp. 387-394.
Kokinov, B. (1994). "The DUAL cognitive architecture: A hybrid multi-agent approach". In: A. Cohn (ed.), Proceedings of the eleventh European Conference on Artificial Intelligence (ECAI), John Wiley & Sons, London, pp. 203-207.
Kokinov, B., Nikolov, V., Petrov, A. (1996). "Dynamics of emergent computation in DUAL". In: A. Ramsay (ed.), Artificial intelligence: methodology, systems, applications, IOS Press, Amsterdam, pp. 303-311.
Konolige, K. (1986). A deduction model of belief. Morgan Kaufmann Publishers: Los Altos, CA.
Konolige, K., Pollack, M. E. (1989). "Ascribing plans to agents". In: Proceedings of IJCAI-89, pp. 924-930.
Kooi, B. P. (2003a). Knowledge, chance, and change. ILLC dissertation series DS-2003-01: Amsterdam.
Kooi, B. P. (2003b). "Probabilistic dynamic epistemic logic". Journal of Logic, Language and Information 12, pp. 381-408.
Lakemeyer, G. (1991). "On the relation between explicit and implicit belief". In: J. Allen, R. Fikes, E. Sandewall (eds.), Principles of knowledge representation and reasoning: proceedings of the second international conference (KR91), Morgan Kaufmann Publishers, San Mateo (CA), pp. 368-375.
Lehmann, D. (1995). "Belief revision, revised". In: Proceedings of the fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95), pp. 1534-1540.
Levesque, H. (1984). "A logic of implicit and explicit belief". Technical report n. 32, Fairchild Laboratory for AI Research: Palo Alto (CA).
Levi, I. (1967). Gambling with truth. Alfred A. Knopf: New York.
Levi, I. (1980). The enterprise of knowledge. The MIT Press: Cambridge (MA).
Levi, I. (1991). The fixation of belief and its undoing. Cambridge University Press: Cambridge (UK).
Liberatore, P. (2000). "The complexity of belief update". Artificial Intelligence 119, pp. 141-190.
Malsch, T. (2001). "Naming the unnamable: socionics or the sociological turn of/to Distributed Artificial Intelligence". Autonomous Agents and Multi-Agent Systems 4, pp. 155-186.
McClelland, J. L. (1989). "Parallel distributed processing: implications for cognition and development". In: R. Morris (ed.), Parallel distributed processing: implications for psychology and neurobiology, Oxford University Press, New York (NY), pp. 9-45.
Meyer, J.-J. C., van der Hoek, W. (1995). Epistemic logic for AI and computer science. Cambridge University Press: Cambridge (UK).
Miceli, M. (1992). "How to make someone feel guilty: strategies of guilt inducement and their goals". Journal for the Theory of Social Behaviour 22, pp. 81-104.
Miceli, M., Castelfranchi, C. (1998a). "How to silence one's conscience: cognitive defenses against the feeling of guilt". Journal for the Theory of Social Behaviour 28, pp. 287-318.
Miceli, M., Castelfranchi, C. (1998b). "Denial and its reasoning". British Journal of Medical Psychology 71, pp. 139-152.
Miceli, M., Castelfranchi, C. (2000). "Nature and mechanisms of loss of motivation". Review of General Psychology 4, pp. 238-263.
Miceli, M., Castelfranchi, C. (2002). "The mind and the future: the (negative) power of expectations". Theory & Psychology 12, pp. 335-366.
Nebel, B. (1989). "A knowledge level analysis of belief revision". In: R. J. Brachman, H. J. Levesque, R. Reiter (eds.), Proceedings of the first international conference on principles of knowledge representation and reasoning, Morgan Kaufmann Publishers, San Mateo (CA), pp. 301-311.
Nebel, B. (1990). Reasoning and revision in hybrid representation systems. Springer-Verlag: Berlin.
Nolfi, S., Floreano, D. (2000). Evolutionary robotics: the biology, intelligence, and technology of self-organizing machines. The MIT Press: Cambridge (MA).
Parisi, D., Castelfranchi, C. (1976). The discourse as a hierarchy of goals. Documenti di lavoro e pre-pubblicazioni, Università di Urbino, Centro internazionale di semiotica e di linguistica: Urbino.
Pauly, M. (2001). Logic for social software. ILLC dissertation series DS-2001-10: Amsterdam.
Pinker, S. (1997). How the mind works. Norton: New York (NY).
Rao, A. S., Georgeff, M. (1991). "Modeling rational agents within a BDI-architecture". In: J. Allen, R. Fikes, E. Sandewall (eds.), Principles of knowledge representation and reasoning: proceedings of the second international conference (KR91), Morgan Kaufmann Publishers, San Mateo (CA), pp. 463-484.
Reiter, R. (2001). Knowledge in action: logical foundations for specifying and implementing dynamical systems. The MIT Press: Cambridge (MA).
Rott, H. (1999). "Two dogmas of belief revision". In: B. Hansson, S. Halldén, N.-E. Sahlin, W. Rabinowicz (eds.), Spinning ideas: internet festschrift for Peter Gärdenfors, http://www.lucs.lu.se/spinning/. Consulted in December 2003.
Rumelhart, D. E., McClelland, J. L. (1986). Parallel distributed processing: explorations in the microstructure of cognition. The MIT Press: Cambridge (MA).
Segerberg, K. (1999). "Two traditions in the logic of belief: bringing them together". In: Ohlbach, H. J., Reyle, U. (eds.), Logic, Language, and Reasoning, Kluwer Academic Publishers, Dordrecht, pp. 135-147.
Shafer, G. (1976). A mathematical theory of evidence. Princeton University Press: Princeton (NJ).
Sichman, J. S., Conte, R., Gilbert, N. (eds.) (1998). Multi-agent systems and agent-based simulation. Springer-Verlag: Berlin.
Singh, M. P. (1994). Multiagent systems: a theoretical framework for intentions, know-how, and communications. Springer-Verlag: Berlin.
Singh, M. P., Asher, N. M. (1993). "A logic of intentions and beliefs". Journal of Philosophical Logic 22, pp. 513-544.
Spohn, W. (1987). "Ordinal conditional functions: a dynamic theory of epistemic states". In: Harper, W. L., Skyrms, B. (eds.), Causation in decision, belief change and statistics, vol. 2, D. Reidel Publishing Company, Dordrecht, pp. 105-134.
Spohn, W. (1999). "Ranking functions, AGM style". In: B. Hansson, S. Halldén, N.-E. Sahlin, W. Rabinowicz (eds.), Spinning ideas: internet festschrift for Peter Gärdenfors, http://www.lucs.lu.se/spinning/. Consulted in December 2003.
Tamminga, A. (2001a). Belief dynamics: (epistemo)logical investigations. ILLC dissertation series DS-2001-08: Amsterdam.
Tamminga, A. (2001b). "Expansion and contraction of finite states". Studia Logica 68, pp. 1-16.
Vickers, J. M. (1976). Belief and probability. D. Reidel Publishing Company: Dordrecht.
Wassermann, R. (2000). Resource-bounded belief revision. ILLC dissertation series DS-2000-01: Amsterdam.
Winslett, M. (1990). Updating logical databases. Cambridge University Press: Cambridge (UK).
Wooldridge, M. (2000). Reasoning about rational agents. The MIT Press: Cambridge (MA).
Wooldridge, M. J. (2002). An introduction to multiagent systems. John Wiley & Sons: Chichester, UK.
Wooldridge, M. J., Jennings, N. R. (1995). "Agent theories, architectures, and languages: a survey". In: Wooldridge, M. J., Jennings, N. R. (eds.), Intelligent agents: ECAI-94 workshop on agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 1-39.
Wooldridge, M., Parsons, S. (1998). "Intention reconsideration reconsidered". In: Müller, J. P., Rao, A. S., Singh, M. P. (eds.), Intelligent agents V: agent theories, architectures, and languages, Springer-Verlag, Berlin, pp. 63-79.