Groups and Societies: One and the Same Thing?

Eduardo Alonso

Department of Computer Science, University of York, York YO10 5DD, United Kingdom. E-mail: [email protected]

Abstract. To answer this question, we propose a general model of coordination in Multi-Agent Systems. Autonomous agents first recognise how they depend on each other (they may need or prefer to interact about the same or different goals), and then, in the negotiation phase, exchange offers in the form of commissive speech acts. Finally, agents adopt social, interlocking commitments if an agreement is reached. Joint plans are seen as deals, and team activity as a special case of social activity in which, since the agents share a common goal, every possible deal is profitable. Consequently, notions traditionally involved in Cooperative Problem Solving, such as help and joint responsibility, apply to any social interaction. Therefore, the answer to our question is yes.

1 Introduction

The main concern in Distributed Artificial Intelligence (DAI) is how to design interaction protocols so that agents coordinate their behaviour. In Multi-Agent Systems (MAS), autonomous agents are devised by different designers, and have individual motivations to achieve their own goals and to maximise their own utility. Thus, no assumptions can be made about agents working together cooperatively; on the contrary, agents will cooperate only when they can benefit from that cooperation. Most of the formal models presented in MAS are centred on analysing isolated aspects of the coordination problem, such as dependence nets [5, 14], joint intentions [11, 6], social plans [12], or negotiation models [17, 18, 10, 15]. As far as we know, only Wooldridge and Jennings [21] have tried to represent the whole process, in Cooperative Problem Solving (CPS) domains, where autonomous agents happen to have a common goal and then acquire social attitudes before forming a group. A more comprehensive coordination framework has been presented in [3]: agents with possibly disparate and even conflicting goals reason about their dependence relations and exchange offers following "social" strategies until they reach a deal. The resulting conditional commitments oblige them to abide by the agreements in societies.

(This research has been supported by the Ministerio de Educación y Cultura del Gobierno Español, EX 97 30605211.)

Therefore, there is no need for the agents to swear to act as a group in a team-formation stage. The purpose of this paper is to prove that this model of coordination accounts for teams as well as for societies, and that the rules governing team action do not diverge from those governing societies. We will illustrate that social notions traditionally related to CPS domains, such as help or joint responsibility, are applicable both to teams and to societies. Therefore, all kinds of groups can be represented with a single model.

The remainder of the paper is structured as follows. In the second section we introduce our concept of autonomous agent; in the third section an analysis of dependence relationships is presented; in the fourth section the negotiation process is described; finally, we define societies as the result of the coordination process and show that the terms of the agreements explain any "social" concept. For simplicity, the model is presented in two-agent task-oriented domains. Due to space restrictions we do not define the language in full here; readers are referred to [2].

2 Autonomous Agents

Agents are autonomous but possibly non-autosufficient entities. Each agent is viewed as an independent "cognitive object" with his own beliefs, abilities, goal, and utility function. In our model goals are not fixed: agents can compare and make decisions about plans and/or deals that satisfy different subgoals or that satisfy their goals only partially. This ability to relax one's goal opens up new opportunities for agreement and enlarges the space of cooperation.

In order to model agents' behaviour we use a branching tree structure [7] (as is common in the MAS literature [12, 20]), where each branch depicts an alternative execution path $\pi_i$. Each node in the structure represents a certain state of the world and each transition an action. Formally, $\pi_i = (s_0, \ldots, s_{i-1}, a_i, s_i, \ldots, s_n)$. The set of actions associated with a path is defined as $act(\pi_i) = \{a_1, \ldots, a_{n-1}\}$. We can identify goal/subgoal structures with particular paths through the tree, each leaf labelled with the utility obtained by traversing that path. The leaves with the highest worth can be thought of as those that satisfy the full goal, while others, with lower worth values, satisfy the goal only partially. The rationality of a behaviour is assessed according to its utility on the scale of preferences, given a maximising policy. Nevertheless, in order to avoid uncontextualised decisions, utilities are defined with regard to agents' (sub)goals (Haddawy showed in [8] that, for simple step utility functions, choosing the plan that achieves a goal leads to choosing the plan that maximises utility). Following [13],

Definition 1. The utility of a (sub)path for an agent is the difference between the worth of the (sub)goal achieved by executing this path and its cost. Therefore, if $GOAL(x, g_i)$ and $ACH(\pi_i, g_i)$, and $cost(x, \pi_i)$ is the cost of $\{a_i : a_i \in \pi_i \wedge Ag(a_i, x)\}$ (the actions of the path that x himself executes), then

$utility(x, \pi_i) = worth(x, g_i) - cost(x, \pi_i)$

Definition 2. The solution set for a goal is $Sol(g_i) = \{\pi_i \mid ACH(\pi_i, g_i)\}$. That is,

$Sol(g_i)$ defines the space of all possible solutions of $g_i$. These solutions are ordered according to their utility on the scale of preferences.
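To make Definitions 1 and 2 concrete, here is a minimal Python sketch; the names (Path, ag, worth) and the unit cost per action are illustrative assumptions, not part of the paper's formalism.

```python
# Sketch of Definitions 1-2. Assumptions: unit action costs, and a predicate
# ag(a, x) standing in for Ag(a, x) ("agent x executes/can execute action a").
from dataclasses import dataclass
from typing import Callable

@dataclass
class Path:
    actions: list          # act(pi_i) = {a_1, ..., a_{n-1}}
    achieves: bool         # ACH(pi_i, g_i): the path satisfies the goal
    worth: float           # worth(x, g_i) of the (sub)goal this path reaches

def cost(x: str, path: Path, ag: Callable[[str, str], bool]) -> float:
    """Cost for x: here, one unit per action of the path that x executes."""
    return sum(1.0 for a in path.actions if ag(a, x))

def utility(x: str, path: Path, ag) -> float:
    """Definition 1: utility(x, pi_i) = worth(x, g_i) - cost(x, pi_i)."""
    return path.worth - cost(x, path, ag)

def solution_set(x: str, paths: list, ag) -> list:
    """Definition 2: Sol(g_i), ordered on x's scale of preferences."""
    return sorted((p for p in paths if p.achieves),
                  key=lambda p: utility(x, p, ag), reverse=True)
```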

3 Dependence Analysis

In the first phase of the coordination process agents try to recognise how they depend on each other. This recognition stage is crucial since it expresses agents' motivations and explains why they might be interested in coordinating their actions. Consequently, this dependence analysis establishes the rules under which deals are arranged and guides the entire coordination process. First, we consider agents' condition.

Definition 3. An agent is autosufficient for a given goal according to a set of paths if each path in this set achieves this goal and the agent is able to execute every action appearing in it. Henceforth $Sol(g_i) = \{\pi_1, \ldots, \pi_n\}$ and $\{\pi_w, \ldots, \pi_x\} \subseteq \{\pi_1, \ldots, \pi_n\}$:

$AUTOSUFFICIENT(x, \{\pi_w, \ldots, \pi_x\}, g_i)$ iff $\forall \pi_i \in \{\pi_w, \ldots, \pi_x\}\ \forall a_i \in act(\pi_i)\ Ag(a_i, x)$

On the other hand, there will probably be other paths in the goal's solution set that the agent is not able to execute. For these paths, the agent is said to be deficient:

$DEFICIENT(x, \{\pi_w, \ldots, \pi_x\}, g_i)$ iff $\forall \pi_i \in \{\pi_w, \ldots, \pi_x\}\ \exists a_i \in act(\pi_i)\ \neg Ag(a_i, x)$
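The two conditions admit a direct rendering; continuing the sketch above (same assumed Path and ag):

```python
# AUTOSUFFICIENT: x can execute every action of every path in the set.
def autosufficient(x, paths, ag) -> bool:
    return all(ag(a, x) for p in paths for a in p.actions)

# DEFICIENT: every path in the set contains some action x cannot execute.
def deficient(x, paths, ag) -> bool:
    return all(any(not ag(a, x) for a in p.actions) for p in paths)
```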

We now define how agents depend on each other: $N^c(x, y, \{a_j, \ldots, a_k\}, g_i)$ and $W^c(x, y, \{a_j, \ldots, a_k\}, g_i)$ mean that x needs, or "weakly" depends on, y with regard to $\{a_j, \ldots, a_k\}$ for achieving $g_i$. The superscript $c \in \{+, -\}$ is the "charge" of the relation: $+$ means that the dependence relationship is about the execution of the actions related to $g_i$, whilst $-$ is about their omission. There are four possible basic dependence relations. (For simplicity, we have constrained the model to one relation at a time, but agents can depend on each other in many different, intermixed ways. For example, two agents with the same goal can need each other for two subpaths and depend weakly on one another for the rest of the path at issue. Moreover, different relations can arise if alternative paths are taken into consideration.)

1. $N^+(x, y, \{a_j, \ldots, a_k\}, g_i) \leftrightarrow \forall \pi_i \in \{\pi_1, \ldots, \pi_n\}\ DEFICIENT(x, \pi_i, g_i) \wedge \exists \delta_i = \{\alpha, \{a_j, \ldots, a_k\} \subseteq act(\{\pi_1, \ldots, \pi_n\})\}$

That is, an agent positively needs the other agent to execute a set of actions if and only if he is not able to execute any subpath in his goal's solution set and there exists a deal such that y executes a subset of the actions associated with this goal. This is a purposely vague definition, because the actions involved in the deal depend on how y is affected by x. The important thing is to realise that x needs y not only because of his own inefficiency but also because there is space for cooperation. The deal guarantees the required space of cooperation, since every deal is supposed to be individual-rational, that is, it must improve both agents' positions.

2. $W^+(x, y, \{a_j, \ldots, a_k\}, g_i) \leftrightarrow AUTOSUFFICIENT(x, \{\pi_w, \ldots, \pi_x\}, g_i) \wedge \exists \delta_i = \{\alpha, \{a_j, \ldots, a_k\} \subseteq act(\{\pi_1, \ldots, \pi_n\})\}$

Here $\{a_j, \ldots, a_k\}$ can be the actions associated with any path in the solution set. The agreed path will be a subset of a path satisfying x's goal. The only condition is that $utility(x, \delta_k) > utility(x, \pi_i \in \{\pi_w, \ldots, \pi_x\})$.

3. $N^-(x, y, \{a_j, \ldots, a_k\}, g_i) \leftrightarrow AUTOSUFFICIENT(x, \{\pi_w, \ldots, \pi_x\}, g_i) \wedge INHIBIT(\{a_j, \ldots, a_k\}, \{\pi_w, \ldots, \pi_x\}) \wedge \exists \delta_i = \{\alpha, \neg\{a_j, \ldots, a_k\}\}$

where a set of actions inhibits a set of paths if and only if there is no extension of these paths containing members of the set of actions. Probably, stand-alone paths are preferred to the ones resulting from the deal, but x has no choice. Therefore, $\{a_j, \ldots, a_k\}$ must be optional in y's goal solution set; otherwise, there is an open conflict. Usually, these actions will be adopted to be used as threats.

4. $W^-(x, y, \{a_j, \ldots, a_k\}, g_i) \leftrightarrow AUTOSUFFICIENT(x, \{\pi_w, \ldots, \pi_x\}, g_i) \wedge INHIBIT(\{a_j, \ldots, a_k\}, \{\pi_j, \ldots, \pi_k\} \subseteq \{\pi_w, \ldots, \pi_x\}) \wedge \exists \delta_i = \{\alpha, \neg\{a_j, \ldots, a_k\}\}$

In this case, the set of actions inhibits some of the autosufficient paths. However, there is a deal from which the inhibited paths can be executed and whose utility is greater than the utility of the non-inhibited paths.
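A rough classifier for the four relations can be sketched as follows. Whether a suitable deal exists, and which of x's stand-alone paths y's actions inhibit, are domain-specific, so they are passed in precomputed (an assumption made to keep the sketch short; the utility comparison of relation 2 is likewise elided):

```python
def dependence(x, goal_paths, own_paths, inhibited, ag, deal_exists):
    """Classify x's basic dependence on y. goal_paths is Sol(g_i), own_paths
    x's autosufficient paths, inhibited the subset of them blocked by y."""
    if deficient(x, goal_paths, ag) and deal_exists:
        return "N+"        # 1: x cannot reach the goal alone; a deal exists
    if autosufficient(x, own_paths, ag) and deal_exists:
        if inhibited and len(inhibited) == len(own_paths):
            return "N-"    # 3: y's actions inhibit every stand-alone path
        if inhibited:
            return "W-"    # 4: only some stand-alone paths are inhibited
        return "W+"        # 2: a deal would simply improve on going it alone
    return None            # no basic relation: independence or open conflict
```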

One significant feature of our model is that only bilateral relations are allowed. Unlike in [5, 14, 21], an agent cannot act in society exclusively according to his individual needs or preferences, or offer deals to achieve his goal without taking into account others' motivations. We can analyse the space of interaction according to the "charge" of the relations as follows:

1. We can say that social interaction takes place in three possible cases (a similar approach is presented in [13]): (a) a cooperative situation is one in which each agent welcomes the existence of the other agent, that is, when they depend positively on each other. This is usual in mutual relations (when agents share the same goal) because it is always profitable for both agents to share the load of executing the associated plan; (b) a compromise situation is one in which both agents would prefer to be alone (they depend on each other negatively). However, since they are forced to cope with the presence of the other, they will agree on a deal. This is typical when one agent's gain always entails the other's loss; (c) a neutral situation is one in which one agent is in a cooperative situation and the other is in a compromise one.

2. On the other hand, we say that social co-action happens in two circumstances: (a) a conflict situation is one in which agents come across each other but there is no deal that resolves their possible interactions, for example, when agents have parallel goals or in killer-victim relations. In such cases, although they do depend on each other, their relation is not subject to coordination. It is true that they need to reason about each other's behaviour (the victim will try to anticipate the killer's behaviour in order to escape, and vice versa), but the nature of their dependence is a-social; (b) agents are independent if at least one of them has a goal that is not affected by the other's goal. As an example, we have the parasite agent, who waits for the other to achieve his goal.

According to the "weight" of each agent in the interaction, there are two types of social interaction: symmetric situations, SYM, in which both agents need or "weakly" depend on each other; and asymmetric situations, ASYM, in which x needs y and this second agent only "weakly" depends on the first. In that case, y is said to have power (POW) over x. Accordingly, an agent x does not have power over another agent y merely because y needs x (unlike in [4]): the powerful agent must also have some motivation to exploit his dominating status. If either of the agents is not interested in interacting, talking about dependence relationships is pointless. (A schematic sketch of this taxonomy follows the comparison list below.)

If we compare our model with others in the MAS literature, we substantially enlarge the space of cooperation:

- we allow agents to negotiate out of need or preference (only [21] studies both cases);
- agents can negotiate not only about common goals, but also about disparate and even conflicting goals. This is because agents are allowed to relax their initial goals and negotiate about subgoals and/or degrees of satisfaction of their respective goals. Imagine two hunters trying to catch the same hare: they have common, compatible subgoals (to catch the prey) but two parallel final goals (to eat it). So they will cooperate on that subgoal (because coordinating their attack increases their chances), and then compete openly;
- deals can be about the execution or the omission of actions (the possibility of "non-negative contribution" is only pointed out in [9]).
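Read operationally, the taxonomy above maps the pair of dependence charges to a situation. A minimal sketch, assuming the N+/W+/N-/W- labels of the previous section, with conflict and independence handled before this mapping applies:

```python
POSITIVE, NEGATIVE = {"N+", "W+"}, {"N-", "W-"}

def situation(charge_x: str, charge_y: str) -> str:
    """Cooperative, compromise, or neutral, from the two agents' charges."""
    if charge_x in POSITIVE and charge_y in POSITIVE:
        return "cooperative"          # each welcomes the other's existence
    if charge_x in NEGATIVE and charge_y in NEGATIVE:
        return "compromise"           # both would rather be alone
    return "neutral"                  # one of each

def weight(charge_x: str, charge_y: str) -> str:
    """SYM if both need or both weakly depend; otherwise ASYM (power)."""
    needs_x, needs_y = charge_x.startswith("N"), charge_y.startswith("N")
    return "SYM" if needs_x == needs_y else "ASYM"
```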

4 Negotiation Process

Once agents have a model of the interaction situation, they exchange offers directly. Negotiation is a process in which, at each point in time, one agent, say x, proposes an agreement from the negotiation set (NS), and the other agent, y, either accepts the offer or does not. If the offer is accepted, then the negotiation ends with the implementation of the agreement. Otherwise, the second agent has to make a counteroffer, or reject x's offer and abandon the process. We are not introducing here a detailed model of the negotiation procedure (see [3, 2]). In this paper our only concern is with those aspects of the model that will help us to show that groups are just societies of agents that share a goal. One of these aspects has to do with how joint commitments are understood. We consider that, in order to avoid references to irreducible "social" notions, the contents of joint intentions must be tracked throughout the coordination process, and the conditions of individual social commitments must be expressed as arguments in the offers. We say that an agent x offers a deal if he requests the other agent y to be socially committed to execute some action and asserts that, if y confirms such commitment, he will commit himself to execute another action. Formally (the speech-act operators come from [16]):

1. $OFF(x, y, \delta_i = \{\alpha, \beta\}) \equiv REQ(x, y, SOC\text{-}COM(y, x, \beta, \alpha)) \wedge ASS(x, y, (CONF(y, x, SOC\text{-}COM(y, x, \beta, \alpha)) \rightarrow SOC\text{-}COM(x, y, \alpha, \beta)))$

2. Using this definition, counteroffers are easily defined as a refusal followed by another offer.

The content of such commitments must specify the social conditions under which these engagements persist or are dropped. We say that an agent is socially committed to another agent to execute an action if he has the intention of executing it until he believes that his part of the deal is true or will never be true, at which point he adopts the goal of having this situation mutually believed.

Moreover, an agent's social commitment will also be abandoned if he believes that his partner has failed to execute his part. In this case, the goal of having this fact mutually believed is interpreted as an acknowledgement.
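Procedurally, the offer exchange described at the start of this section is an alternating-offers loop. The following sketch abstracts the agents' dependence-driven strategies behind two assumed callables (accepts and next_offer), so it shows only the protocol's skeleton:

```python
def negotiate(x, y, ns, accepts, next_offer, max_rounds=100):
    """ns: the negotiation set. Returns the agreed deal, or None if a party
    abandons the process (next_offer returning None models abandonment)."""
    proposer, responder = x, y
    offer = next_offer(proposer, ns, None)
    for _ in range(max_rounds):
        if offer is None:
            return None                    # rejection and abandonment
        if accepts(responder, offer):
            return offer                   # confirmation: the deal is implemented
        # refusal followed by another offer, with the roles swapped
        proposer, responder = responder, proposer
        offer = next_offer(proposer, ns, offer)
    return None
```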

Definition 4. $SCOM(x, y, \alpha, \beta) =_{def}$ UNTIL $BEL(x, SCOM\text{-}C(x, \alpha, \beta))$ $INT(x, \alpha)$ WHEN $GOAL(x, MBEL(x, y, SCOM\text{-}C(x, \alpha, \beta)))$

Definition 5. $SCOM\text{-}C(x, \alpha, \beta) \equiv [BEL(x, \alpha) \vee BEL(x, \neg\alpha) \vee BEL(x, \neg\beta)]$, where $\beta$ is the other agent's part.

Therefore, social commitments are conditional. This is because in MAS, where benevolence is not assumed, negotiation is only understood according to this quid pro quo policy. When an agent x makes an offer, the actions requested are those he believes he depends on y for; on the other hand, the actions he promises to be committed to if y accepts the offer are those actions he believes y depends on him for. That is, negotiation steps are created according to dependence relationships. Thus, unlike in Wooldridge and Jennings's proposal [21], there is no need for a team-formation phase.

To what agreements can agents come? As we study asymmetric relations, the only a priori condition for a bargain to be in the space of deals is that it has to be individual-rational. It would be "unfair" to ask a dominant agent to accept a Pareto-optimal deal if he can obtain more profit from another deal. By individual-rational we mean that both agents must improve their position with the deal: if an agent has a stand-alone plan, the deal must not decrease his utility; if the agent has no such alternative to the negotiated agreement, the deal must give him non-negative utility. The search for "fair" deals is presented in two ways:

Strict Mechanism: First, we define a "fairness" one-to-one function from the set of situations to the set of deals, $f : SIT \rightarrow NS$, with the values depicted in Fig. 1 (where SYMN means a symmetric necessity relation, and so on):

1. If the agents are in a symmetric situation and they need each other, then the only "fair" solution consists in exchanging the actions involved.

2. If the agents are in a symmetric situation and they depend weakly on each other, then the deal can be: 2.1. the Pareto-optimal deal, if it is unique; 2.2. if there are several Pareto-optimal deals, the agreement will be chosen at random among the set of "fair" deals. For example, autosufficient agents with a common goal may be indifferent about how tasks should be distributed, or there may be two Pareto-optimal deals implying an odd number of actions.

1. $f(SYMN(x, y, \alpha, \beta)) = \delta_i = \{\alpha, \beta\}$

2. $f(SYMW(x, y, \alpha, \beta)) = \delta_i$ if $P\text{-}Opt_{\{x,y\}}(NS) = \delta_i$; $\delta_i$ if $P\text{-}Opt_{\{x,y\}}(NS) = \{\delta\} \wedge r(\{\delta\}) = \delta_i$

3. $f(POW(x, y, \alpha, \beta)) = \delta_i$ if $greatest_x(NS) = \delta_i$; $\delta_i$ if $maximal_x(NS) = \{\delta\} \wedge greatest_y(\{\delta\}) = \delta_i$; $\delta_i$ if $maximal_x(NS) = \{\delta\} \wedge flat(y, \{\delta\}) \wedge r(\{\delta\}) = \delta_i$

Fig. 1. The "fairness" function ($r$ picks a deal at random; $flat$ expresses an agent's indifference).

3. If one agent has power over the other, then the "fair" deal will be: 3.1. the one maximising his utility, the greatest in his scale of preferences; 3.2. if this is not unique, the one most preferred by the dominated agent among the dominant agent's set of maximals; 3.3. if the dominated agent is indifferent, one of the deals in this set, chosen at random.
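The case analysis of Fig. 1 can be sketched as follows; the Pareto-optimality test, each agent's preference order, and the indifference test are assumed helpers supplied by the caller, and x is taken to be the dominant agent in the POW case:

```python
import random

def fair_deal(situation, ns, pareto_opt, greatest, flat):
    """pareto_opt(ns) -> list of deals; greatest(agent, deals) -> that
    agent's preferred sublist; flat(agent, deals) -> True if indifferent."""
    if situation == "SYMN":
        return ns[0]                       # 1: the exchange deal is unique
    if situation == "SYMW":
        cands = pareto_opt(ns)             # 2: Pareto-optimal deal(s)
        return cands[0] if len(cands) == 1 else random.choice(cands)
    if situation == "POW":
        cands = greatest("x", ns)          # 3.1: the dominant agent's maximals
        if len(cands) == 1:
            return cands[0]
        if flat("y", cands):               # 3.3: dominated agent indifferent
            return random.choice(cands)
        return greatest("y", cands)[0]     # 3.2: dominated agent's favourite
    raise ValueError(situation)
```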

Tolerant Mechanism: It is more useful to apply a tolerant mechanism in dynamic environments, in which failures in the execution of agreements will occur quite often. We introduce an ordered set of deals $(\Delta, \geq)$ representing the "fair" deal associated with each possible situation derivable from the actual one. As a result, agents agree not only on the specific deal that is carried out first, but also on the deals in reserve. So, in case of failure, agents will use the following automatic rule:

RENEG: In case of failure in the execution of a specific deal, eliminate it from the set of deals and apply the "fairness" function according to the situation generated.

Example 1. Imagine that the two agents share the same goal and that there is a SYMW relationship between them. The deal is, therefore, assumed to be a simple task distribution. We have the following set of possible deals: $\Delta = \{\{\alpha, \beta\}, \{\beta, \alpha\}, \{\emptyset, (\alpha, \beta)\}, \{(\alpha, \beta), \emptyset\}\}$. As the agents depend weakly on each other, the "fair" deal must be Pareto-optimal, one of the first two deals in $\Delta$. Imagine that $\{\alpha, \beta\}$ is chosen at random and that $FAILS(x, \alpha)$. Then there will be two possible deals left, namely $\{\beta, \alpha\}$ and $\{\emptyset, (\alpha, \beta)\}$. Moreover, the relation between the agents has changed, and now y has power over x. Therefore the first deal, the one maximising y's utility, is chosen. We can go one step further and suppose that x fails again, executing $\beta$. In this case, the only possible deal to be carried out is $\{\emptyset, (\alpha, \beta)\}$.
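Example 1 can be replayed mechanically. In the sketch below a deal is a pair (x's part, y's part); after FAILS(x, alpha), every deal still asking x to execute alpha is dropped, and a stand-in fairness function picks among the survivors (here simply the head of the list, standing for the power-based choice):

```python
def reneg(deals, failer, failed_action, fair):
    """RENEG: drop the deals made infeasible by the failure, then re-apply
    the fairness function to the situation generated."""
    part = 0 if failer == "x" else 1
    survivors = [d for d in deals if failed_action not in d[part]]
    return fair(survivors), survivors

A, B = "alpha", "beta"
deals = [((A,), (B,)), ((B,), (A,)), ((), (A, B)), ((A, B), ())]
# x fails executing alpha: {alpha, beta} and {(alpha, beta), {}} disappear;
# y now has power, and the deal maximising y's utility is chosen.
chosen, left = reneg(deals, "x", A, fair=lambda ds: ds[0])
print(chosen)   # (('beta',), ('alpha',))
print(left)     # [(('beta',), ('alpha',)), ((), ('alpha', 'beta'))]
```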

5 After Negotiation

In our proposal each agent is still considered independent after negotiation. Joint plans are understood as deals through which agents make social commitments to execute particular actions, not to act as a group. We can now define joint commitments simply as the conjunction of the social commitments involved.

Definition 6. $JCOM(x, y, \delta_i) =_{def} SCOM(x, y, \alpha, \beta) \wedge SCOM(y, x, \beta, \alpha)$

Societies are seen as groups of agents with a joint commitment to execute the agreed deal. For us, a group of agents forms a society when they reach an agreement, not when they decide to act as such and jointly try to reach an agreement. That is, the notion of society is a result of coordination, not its precondition. Our approach explains certain unclear aspects of teamwork. It is common in the MAS literature to refer to the team as a whole when there is no way of attaching individual attitudes to its members.

Example 2. Consider a team of two pilots whose goal is to carry as many helicopters to a point as possible. Following the joint intentions framework, it is sufficient if the team reaches that point; each individual need not do so individually. In so doing, the team is reified and the task-distribution enigma arises [1]. We think that our approach is more natural. In this case, we have three possible deals, $\{\{\alpha, \beta\}, \{\alpha, \emptyset\}, \{\emptyset, \beta\}\}$, where each part means an agent reaching that point. The order of preference is $\{\alpha, \beta\} > \{\alpha, \emptyset\} = \{\emptyset, \beta\}$. The RENEG rule is applied automatically, each deal satisfying the goal to some degree: $\{\alpha, \beta\}$ satisfies the goal completely, as both pilots reach the point, whereas $\{\alpha, \emptyset\}$ and $\{\emptyset, \beta\}$ satisfy the goal only partially. In any case, each agent achieves the goal if one of the deals is accomplished.

We adopt this individualistic approach to stress the bargaining nature of social interactions, and the fact that agents coordinate their behaviour and form societies with regard to common interests, not common goals, as their motivations can be very disparate. From this point of view, agents do not agree and form groups to achieve a goal, but to execute deals that will achieve (perhaps partially) their (common or not) goals. Therefore, if the negotiation protocol ends in agreement, agents will adopt a joint commitment and form a society.

Definition 7. $SOCIETY(\{x, y\}, \delta_i) =_{def} MBEL(x, y, SIT(x, y, g_i)) \wedge MBEL(x, y, f(SIT) = \delta_i) \wedge MBEL(x, y, JCOM(x, y, \delta_i))$
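Definitions 6 and 7 compose mechanically from the earlier notions. The sketch below reduces MBEL to shared membership in two finite belief sets, a deliberate simplification of mutual belief made for illustration only:

```python
def mbel(beliefs_x, beliefs_y, fact) -> bool:
    return fact in beliefs_x and fact in beliefs_y   # finite stand-in for MBEL

def society(beliefs_x, beliefs_y, sit, deal) -> bool:
    """Definition 7: mutual belief in the situation, in the fair deal for it,
    and in the joint commitment (Definition 6) to execute that deal."""
    return (mbel(beliefs_x, beliefs_y, ("SIT", sit)) and
            mbel(beliefs_x, beliefs_y, ("FAIR", sit, deal)) and
            mbel(beliefs_x, beliefs_y, ("SCOM", "x", deal)) and
            mbel(beliefs_x, beliefs_y, ("SCOM", "y", deal)))
```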

Of course, this notion of society is quite basic. Things are more complicated in real environments, where team action can involve notions of social justice and social welfare. In the end, groups will behave according to the ideology or interests of the designer. However, it is worth pointing out that we are working with systems (MAS) where the utilitarian point of view is closely related to liberalism. If notions of global utility are taken into account, agents will stop being autonomous, and some kind of community spirit will control our designs.

6 Groups

In this final section we exemplify how "help" and "joint responsibility" are understood in our model, and conclude that both concepts are applicable to teams and societies alike. According to Tuomela [19], one of the most important notions of cooperative activity is that of help, in the sense of (extra) actions strictly contributing to other participants performing their parts well. In MAS, agents are not assumed to be benevolent and will therefore cooperate and help each other only when they can benefit from that cooperation (that is, when the cost of the "helping" actions does not exceed the gains accruing from them). So, as long as agents have common interests, they will keep executing RENEG. Since everything is arranged before execution starts, no action can be interpreted as altruistic help.

Why are groups so special? We saw in Example 1 that when agents share a common goal their preferences are highly positively correlated. Whenever a collection of agents has the same goal, it is in their own interest to help each other. This is because in teams the problem of coordination is, in practice, the problem of how to distribute the goal's tasks. Even if x is suddenly unable to execute any action, y will execute the entire plan, because he has nothing to lose: the deal will be equal to his stand-alone plan, the goal's plan structure. He cannot refuse to execute it because of x's failure; otherwise, he himself will not achieve the goal. Having a common goal, the agents are in a situation in which they are destined to cooperate and act jointly. However, help is not unique to groups: in societies where agents have different goals, the renegotiation rule will be applied until the corresponding deal is no longer individual-rational.

What about joint responsibility? Agents with joint responsibility are supposed to be equally rewarded or blamed for the actions they execute as a collective.

Example 3. Imagine a football team playing a crucial match. Each player will receive a medal if the team wins, or will be fired if they lose. Suppose that they have never played together before, so that they agree on a set of deals, the first consisting of eleven actions representing the usual distribution of tasks (goalkeeper, defender, sweeper, striker, etc.) and five different empty sets for the substitutes. Suppose now that the striker, number 9, is sent off, so that the agents must apply the renegotiation rule and execute the second plan according to the new circumstances; for example, the wingers must move to more central positions on the pitch and try to score. Then, if the team wins, should number 9 be rewarded? And if they lose, should he be fired? Should the substitutes be rewarded or blamed? The intuitive answer to these questions is "yes". Now we have a model that explains why: as the agents agreed on the other deals as part of the "general" deal, everyone in the team is responsible for the outcome.

7 Conclusions

In this paper we have presented a model of coordination in which agents first recognise how they depend on each other and then exchange offers and counteroffers until they reach an agreement. Agreements can be executed following a strict mechanism, or can involve the use of a renegotiation rule that provides different deals. In the latter case, agents agree on a set of deals, so that deals are executed in a given order according to the changing conditions and the abilities of the agents. Using this mechanism we have illustrated how notions linked to CPS, such as help and joint responsibility, are explained without mentioning any social attitude: everything is settled in the terms of the agreements. Therefore, we have concluded that there is no point in adopting different coordination mechanisms for teams and for societies. Any collective will follow the same coordination mechanism, regardless of whether the agents share the same goal or not.

There are several issues to be addressed in future work, the most obvious of which is the need for refinement of the model. Moreover, the model should cope better with uncertainty: agents can have incomplete knowledge and different points of view, so argumentation turns out to be an essential part of coordination. Finally, the tolerant mechanism works well in games in which the rules are well known, but real-life social interactions are usually far more complicated than a football match. The study of multiple encounters and roles will hopefully allow us to identify and characterise the constant environmental factors required to find equilibria between efficiency and stability.

References

[1] E. Alonso. An uncompromising individualistic formal model of social activity. In M. Luck, M. Fisher, M. d'Inverno, N. Jennings, and M. Wooldridge, editors, Working Notes of the Second UK Workshop on Foundations of Multi-Agent Systems (FoMAS-97), pages 21-32, Coventry, UK, 1997.

[2] E. Alonso. Agents Behaving Well: A Formal Model of Coordination in Multi-Agent Systems. Technical Report YCS-98-?, Department of Computer Science, University of York, York YO10 5DD, UK, July 1998.

[3] E. Alonso. How individuals negotiate protocols. In Proc. ICMAS-98. IEEE Computer Science Press, 1998.

[4] C. Castelfranchi. Social power: A point missed in Multi-Agent, DAI and HCI. In Y. Demazeau and J.-P. Müller, editors, Decentralized A.I., pages 49-62, Amsterdam, The Netherlands, 1990. Elsevier Science Publishers B.V.

[5] C. Castelfranchi, M. Miceli, and A. Cesta. Dependence relations among autonomous agents. In E. Werner and Y. Demazeau, editors, Decentralized A.I. 3, Proc. MAAMAW-91, pages 215-227, Amsterdam, The Netherlands, 1992. Elsevier Science Publishers.

[6] P.R. Cohen and H.J. Levesque. Teamwork. Noûs, 25:487-512, 1991.

[7] E.A. Emerson and J. Srinivasan. Branching time temporal logic. In J.W. de Bakker, W.P. de Roever, and G. Rozenberg, editors, Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency, pages 123-172, Berlin, Germany, 1989. Springer-Verlag.

[8] P. Haddawy. Representing Plans Under Uncertainty. Springer-Verlag, Berlin, Germany, 1994.

[9] N.R. Jennings. On being responsible. In E. Werner and Y. Demazeau, editors, Decentralized A.I. 3, Proc. MAAMAW-91, pages 93-102, Amsterdam, The Netherlands, 1992. Elsevier Science Publishers.

[10] S. Kraus, J. Wilkenfeld, and G. Zlotkin. Multiagent negotiation under time constraints. Artificial Intelligence, 75:297-345, 1995.

[11] H.J. Levesque, P.R. Cohen, and J.H.T. Nunes. On acting together. Technical Report 485, SRI International, Menlo Park, CA 94025-3493, May 1990.

[12] A.S. Rao, M.P. Georgeff, and E.A. Sonenberg. Social plans: A preliminary report. In E. Werner and Y. Demazeau, editors, Proc. MAAMAW-91, pages 57-76, Amsterdam, The Netherlands, 1992. Elsevier Science Publishers.

[13] J.S. Rosenschein and G. Zlotkin. Rules of Encounter. The MIT Press, Cambridge, MA, 1994.

[14] J.S. Sichman, R. Conte, Y. Demazeau, and C. Castelfranchi. A social reasoning mechanism based on dependence networks. In A. Cohn, editor, Proc. ECAI-94, pages 173-177. John Wiley and Sons, 1994.

[15] C. Sierra, N.R. Jennings, P. Noriega, and S. Parsons. A framework for argumentation-based negotiation. In M.P. Singh, A. Rao, and M.J. Wooldridge, editors, Proc. ATAL-97, pages 177-192, Berlin, Germany, 1998. Springer-Verlag.

[16] I.A. Smith and P.R. Cohen. Toward a semantics for an agent communications language based on speech acts. In Proc. AAAI-96, pages 24-31, Cambridge, MA, 1996. AAAI Press/MIT Press.

[17] R.G. Smith and R. Davis. Frameworks for cooperation in distributed problem solving. IEEE Transactions on Systems, Man and Cybernetics, 11(1):61-70, 1981.

[18] K.P. Sycara. Persuasive argumentation in negotiation. Theory and Decision, 28:203-242, 1990.

[19] R. Tuomela. What is cooperation? Erkenntnis, 38:87-101, 1993.

[20] M. Wooldridge. Coherent social action. In A. Cohn, editor, Proc. ECAI-94, pages 279-283. John Wiley and Sons, 1994.

[21] M. Wooldridge and N.R. Jennings. Towards a theory of cooperative problem solving. In J.W. Perram and J.-P. Müller, editors, Proc. MAAMAW-94, Workshop on Distributed Software Agents and Applications, pages 40-53, Berlin, Germany, 1996. Springer-Verlag.