Published in Proceedings of the International Workshop on the Design of Cooperative Systems (COOP'95). Le Chesnay (France), 1995, pp. 96-108.

On Helping Behavior in Cooperative Environments

Paola Rizzo, Amedeo Cesta and Maria Miceli
IP-CNR, National Research Council of Italy
Viale Marx 15, I-00137 Rome, Italy
Fax: +39-6-824737
E-mail: {paola | amedeo | maria}@pscs2.irmkant.rm.cnr.it

Abstract

This paper concerns helping behavior in multi-agent systems. Helping actions represent an interesting and rich testbed for examining the reasons for cooperative behavior in multi-agent systems. Starting from previous studies on multi-agent dependence relations, a basic definition of help is given, and some of the motivations for helping are briefly discussed. Moreover, an experimental environment for simulating agents that can perform helping behavior is described, and some preliminary results are presented.

Résumé

This work concerns helping behavior in multi-agent systems. Helping actions represent an interesting testbed for examining the reasons for cooperative behavior in multi-agent systems. Starting from previous studies on multi-agent dependence relations, a basic definition of help is given, and some motivations for helping are briefly discussed. An experimental environment used to simulate agents that can help one another is then described, and preliminary results are presented.

Keywords: Dependence, Help, Cooperation, Multi-Agent Systems

1 Introduction

As many authors have already pointed out --see for instance [Martial 92, Durfee & Rosenschein 94, Rosenschein & Zlotkin 94a]--, Distributed Artificial Intelligence (DAI) has two distinct subfields: Distributed Problem Solving (DPS) and Multi-Agent Systems (MASs). A clear-cut division is sometimes difficult to draw, since DAI research can be classified along a continuum between the two extremes. In broad outline, however, the two approaches can be distinguished as follows:
• DPS is mainly concerned with distributing tasks and plans among modules in order to better exploit their different knowledge and resources, in view of a more efficient achievement of a common goal. In doing so, the existence of common goals among the agents and their reciprocal benevolence are generally taken for granted. Agents' autonomy, if present, is limited to the way the given goals are pursued: while agents can choose the appropriate course of action to take, they cannot choose whether or not to pursue the common goal.
• Conversely, in MASs attention is focused on heterogeneous agents interacting in a common environment, on the reasons for their interactions, and on their choices to cooperate, negotiate, and exchange. Their autonomy extends to the possibility of having goals of their own, distinct from, and even in conflict with, others'. So the pursuit of a common goal can result from a choice by the agent, and often depends on negotiation with other agents.
In a word, while DPS adopts a goal-centered view (and consequently is interested in building a system to better solve a particular problem), MASs prefer an agent-centered view [Martial 92], and the task or problem to solve is that of modeling real agents and contexts --teams, organizations, etc.
Cooperation among agents is considered to be the core of DAI. But relevant differences within the area influence the view and definition of cooperation. In DPS, the existence of a common goal (and also of a common designer [Durfee & Rosenschein 94]) makes the benevolence assumption quite plausible: since a goal is already given as common, agents can be viewed as benevolent toward each other, i.e., each one should want the other to achieve the shared goal, or some sub-goal in view of the common one. In MASs, cooperation and benevolence are not so obvious, and deserve a more careful analysis, in order to provide a theory of why and under what conditions autonomous agents should cooperate.
It is also worth stressing that in MASs autonomy does not imply that agents are self-sufficient. Autonomy implies that agents have their own goals, which do not necessarily coincide or positively interact with others', and that they are endowed with decision criteria to choose whether to interact (cooperate, negotiate, etc.). However, autonomous agents are also resource-bounded systems and may need external resources or actions performed by other agents: such need for others is a substantial reason for interacting.
Our main topic of interest is the conditions and constraints able to influence cooperative behavior in MASs. We are interested in principles, knowledge representation, and reasoning techniques for endowing agents with the ability to make autonomous decisions to cooperate, exchange, and negotiate. Our model is grounded on a theory of dependence [Castelfranchi et al. 92] in which dependence relationships basically frame the interactions among agents. According to this theory, agents look for cooperation only when they need others' resources; before asking somebody, they reason about others' abilities; and they cooperate on request only if they are in particular relations with the requesting agent. In this view autonomy is bounded and constrained both by objective conditions --e.g., agents' lack of power-- and by agents' current goals and plans.
Similar problems have been considered in other works: Huberman [Huberman & Hogg 88, Glance & Huberman 93] uses dynamical systems theory to describe collective behaviors in organizational systems; Rosenschein and Zlotkin [94b] use game-theoretic tools to develop a prescriptive theory for negotiation in MASs. Although still at a preliminary stage, our research aims at a more descriptive view of MASs, grounded on a model of agents that is both cognitively plausible and knowledge-based.
As a testbed for our model of multi-agent systems we have chosen helping behavior, which we view as an interesting and rich case study for examining the reasons for cooperative behavior. We are interested in a particular scenario, exemplified as follows: in a given world, agents have their own goals and continuously attempt to achieve them; however, an agent may be unable to perform an action and may ask other agents to perform the same or some other actions to help him.
Some questions arise: who are the right agents to ask for help (help-seeking)? Why should other agents help him (help-giving)? The goal of this paper is to illustrate the basic tools we have built so far to analyze constrained help. In previous work [Cesta & Miceli 93, Miceli & Cesta 93] we investigated strategic knowledge to be used in help-seeking, while in this paper we mainly consider help-giving. The instruments we illustrate are both theoretical --a basic vocabulary to describe social interactions and help in terms of dependence theory-- and simulative --an experimental setting to explore MAS scenarios.
The paper is structured as follows: Section 2 presents examples of helping scenarios that clarify possible applications of the theory; Section 3 briefly outlines the basics of the dependence theory; Section 4 presents a model of help-giving and discusses the link between help and cooperative behavior; Section 5 describes a simulative environment and presents some preliminary results that show the capabilities of the experimental setting. A concluding section closes the paper.

2 Scenarios for Constrained Help

This paper introduces a particular definition of help, and of cooperation too, as constrained activities. The idea is that, since help is a costly activity, it should be given, for example, when the environmental variables leave no choice to the agent or when there is a reasonable expectation of some reward. Our investigation may seem purely speculative; on the contrary, several situations that can be classified as cooperative may give rise to cases that fall within the above mentioned scenario. For example:
• In a work environment (e.g., an office) several people are performing their tasks, and are committed to particular goals that let them obtain a desired reward; one of the agents could require some help because he has run out of some resources or is temporarily unable to perform an action. Other agents could help him because all of them are part of the same organization and could apply a default cooperation rule. But, even when not considering cases of competitive behavior, it is plausible that agents waste time while helping and eventually lose their own rewards. In such a case, they should apply some decision criteria to assess whether or not to help.
• In a completely artificial scenario, some software agents are performing their own tasks in a distributed environment. If we suppose that agents have knowledge about each other's capabilities, it may happen that one asks another to do something; but each agent is built so as to maximize some expected utility function, and, reasonably enough, helping others does not increase that utility. If agents are also built in such a way as not to harm others [Weld & Etzioni 94], then how can they distinguish when it is appropriate to help?
The two examples are similar in structure. We are more interested in the latter, but the similarity between the two situations suggests that some common terminology could be helpful to describe both. Furthermore, the artificial example may also be seen as a general and abstract frame for several real-life situations (e.g., manufacturing environments, distributed decision making, etc.). Thanks to improvements in Information Technology, humans and machines can interact in a network, and each interactant can be endowed with an artificial agent in charge of its interactions with the network [Pan & Tenenbaum 91]. Problems related to the cooperation of different humans in a network can also be studied with a similar model [Lux et al. 93].

3 Dependence and Social Behavior

Our approach to multi-agent interactions is grounded on the notion of social dependence among agents [Castelfranchi et al. 92]. Agent x depends on agent y with regard to an act a useful for realizing a state p when p is a goal of x and x is unable to realize p while y is able to do so. In this context, y's action is a resource for x's achieving his goal.
We view dependence relationships as a powerful tool for both rational interaction and problem solving, and as a way of providing both justification for and solutions to the problem of interaction and communication control, well known in DAI research. In our approach, social interaction (or some of its relevant aspects -- namely benevolence, or common goals) is neither taken for granted nor traced back to varying degrees of overlap among the agents' mental attitudes (which is the dominant view in AI social studies; see [Grosz & Sidner 90, Pollack 90, Cohen & Levesque 90]). When in need of help, a cognitive agent does not resort to just anybody, nor does it limit itself to applying standard protocols for interacting. On the contrary, it is likely to reason about its knowledge of social and, in particular, dependence relations, and to act according to this reasoning. In the same vein, when deciding to give help, an agent analyzes the needs of the other, and a basic motivation for helping is given by recognizing the actual dependence of the other agent.
It should also be noted that information about one's dependence relationships is necessary, but not sufficient, for an agent to achieve his goals. For example, once it is known which agents are the right ones to resort to, one still needs those agents to do the required action(s). That is, the agent's knowledge about its dependence on others is likely to produce new goals in its mind. In the specific case, x will have the goal that y does the required action, since it is a resource for x's goal. But in order to do something, one should want to do it; so, the former goal induces in x the goal that y has the goal of performing the required action (a goal about someone else's mental state), i.e., the goal of influencing y to do what x needs. However, the goal of influencing is not yet sufficient for x to succeed in influencing y. The latter should in fact be likely to be influenced, or (from the perspective of the would-be influencer) x should be endowed with some power of influencing y (be it context- or other-dependent, or "intrinsic" -- i.e., persuasive abilities). In any case, knowledge and beliefs about dependence relationships are the basic motivations for agents' subsequent goals.
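To make the notion of dependence more concrete, the following is a minimal sketch of how a dependence relation could be represented and checked in code. The names (Agent, depends_on, the goals/abilities fields) and the toy example are our own illustrative assumptions, not the theory's original notation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative agent description: the states it wants and the acts it can perform."""
    name: str
    goals: set = field(default_factory=set)       # states the agent wants to realize
    abilities: set = field(default_factory=set)   # acts the agent is able to perform

def depends_on(x: Agent, y: Agent, act: str, p: str) -> bool:
    """x depends on y with regard to `act` for state p:
    p is a goal of x, x cannot perform the act useful for realizing p, while y can."""
    return p in x.goals and act not in x.abilities and act in y.abilities

# Toy example (anticipating the car-lending case of Section 4.1):
mary = Agent("Mary", goals={"be_at_airport"}, abilities={"drive"})
john = Agent("John", goals=set(), abilities={"drive", "lend_car"})
print(depends_on(mary, john, "lend_car", "be_at_airport"))  # -> True
```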

4 Help-giving

Dependence relations often underlie helping behavior, which in its most basic form refers to a dyadic interaction in which one agent contributes to the achievement of a goal by another. The latter is generally assumed to be dependent on the former for the achievement of that goal. The main questions to answer are the following:
• What is help?
• Why does an agent help another?
Here the first question is dealt with by giving a definition of help grounded on dependence, together with a brief analysis of the relation between help and cooperation; then a broad description of the motivations for helping is given. Finally, in Section 5, a simulative approach to the study of helping behavior is described.

4.1 What is help?

Helping is often viewed as synonymous either with prosocial or with altruistic behavior. However, while the notion of prosocial behavior is too broad, that of altruistic behavior is too narrow to cover the various instances of helping actions. In fact:
• prosocial behavior is a generic concept that covers all behaviors producing benefits in a society of agents, independent of the intentions of the behaving agents;
• altruistic behavior is a special kind of helping which is motivated only by the desire to increase the other's welfare and does not involve the anticipation of rewards.
Helping behavior, on the contrary, is intentional, and it includes the anticipation of possible rewards (as well as purely altruistic actions). Our interest is focused on this intermediate category.

A possible definition of help is the following: help is an action or a sequence of actions performed by the helper (y) in order to satisfy a recipient's (x's) goal. This definition comprises both the helper's mental state and its behavior, and implies a series of assumptions, which can be explained by means of a simple example in which John lends Mary his car because she needs it to go to the airport:
• y believes that x has the goal p: John believes that Mary's goal is to go to the airport. Even if John's belief were false, according to our definition his behavior should still be considered an instance of helping.
• y believes it is able to attain, or to contribute to attaining, p: John believes he has the car needed by Mary. Otherwise, he could not pursue his goal to help Mary (see next point).
• y has the goal q that x achieves p, i.e. y adopts p: John has the goal that Mary gets to the airport. If John merely had the goal of having someone test the car after some repairs, his behavior should not be considered an instance of helping. In this case Mary would be just a means for John to achieve his goal of testing the car, not a social agent with a goal that John wants to be achieved. For the notion of goal adoption see [Conte et al. 91].
• y does one of the following kinds of action in order to achieve its goal q: (a) it supplies x with the means to achieve p; (b) it directly performs the actions necessary to achieve p. In our example, John supplies Mary with the means (i.e. the car) to achieve her goal. John could also choose to take Mary to the airport. It is worth noting that attempted but failed actions are also covered by our definition of helping: for example, if John gives the car to Mary and it breaks down, John's behavior can still be considered a helping act.
• y's behavior is not caused by x's exercise of its power over y: if John gives the car to Mary under her threat, we would not say that John helps Mary (for a definition of "power over" see [Castelfranchi 90]).
• y believes that x does not possess or cannot access the means (actions or resources) necessary to achieve p; in other words, y believes x depends on y with regard to p. Generally speaking, in fact, help-giving (and help-seeking) is grounded on assumed dependence: y will be likely to help x if it assumes x is not able to achieve p by itself (in the same vein, x will look for y's help if x believes it depends on y's skills and actions for achieving p). One might object that this assumption is not strictly necessary for defining help, in that helping might include a form of strategic behavior (see Section 4.3) where y helps x even though y believes x to be self-sufficient; in fact, y's help can be given just in view of some reward (be it a generic advantage or reciprocation). However, apart from such peculiar forms of helping behavior, which imply a subtler and more sophisticated social planning, helping is generally grounded on assumed dependence. In particular, the reasons for helping, as we shall see, are likely to presuppose, and derive from, y's belief that x depends on y with regard to a certain goal.

4.2 Help and Cooperation

To immediately point to the relation between help and cooperation, we can define cooperation as mutual help in view of a common goal. Here, a number of specifications are needed. First of all, by common goal we mean a goal with respect to which there is mutual dependence [Conte et al. 91].
In our terminology, in fact, two agents, x and y, have an identical goal when each of them has the same goal (say, to have spaghetti cooked), while an identical goal is a common one if x and y depend on each other for achieving it, by means of a plan including at least two different acts such that x depends on y's doing a1, and y depends on x's doing a2 [Castelfranchi et al. 92]. Thus, since cooperation implies the existence of a common goal, it also implies mutual dependence.
Such a notion of cooperation might appear too "strong" or "constrained". In this perspective, in fact, cooperation and mutual dependence are strictly tied to each other: no cooperation would occur without mutual dependence. However, one might wish to leave some room for some form of "free" cooperation, unconstrained by dependence. Tuomela [in press], for instance, argues exactly this: that there might be cooperation without dependence, and that just a common (i.e., in our terms, an identical) goal would suffice for cooperation to occur. He takes the example of joint cooking, where each agent could do the cooking alone, but they prefer company. Here, at least a couple of answers can be given, according to the angle one takes. On one side, it can be argued that in fact those two agents are not mutually dependent with regard to the cooking, but they might depend on each other relative to some other goal: the goal of cooking under certain conditions -- say, within a given time (half an hour, rather than an hour) or with a given amount of resources (effort, fatigue, etc.). On the other side -- and this is what we wish to stress -- it can be answered that the two "cooks" are neither mutually dependent with regard to the cooking, nor are they cooperating. They are just "doing something together".
In our view, not all cases of "doing something together" are cases of cooperation. If you and I have the goal of listening to some music together and do so, even jointly preparing for it (I look for our favourite record, while you turn on the record player...), we are not cooperating (unless we are mutually dependent), but just doing something together. In other words, we see cooperation as a specific kind of (pro)social interaction, the one grounded on mutual dependence. We are in fact interested in finding a "place" for cooperation within the various forms of social behavior, without letting cooperation become an empty word, a sort of synonym for prosocial behavior. So, we see cooperation as always constrained: there is no "free" cooperation, unless one is addressing another possible meaning of the term.
As with "doing things together", not all cases of goal adoption and help are necessarily forms of cooperation (though they are often a prelude to it). In fact, help is implied by cooperation (mutual help in view of a common goal), but it can occur without cooperation. This is the case of reciprocal help in view of different individual goals (which occurs in exchange contexts), as well as of unilateral help. As far as unilateral help is concerned, it is worth observing that its possible relation with cooperation is closer than one might expect. So far, we have in fact referred to a notion of full, mutual cooperation. But other possible "levels" of cooperative behavior can be identified (see [Conte et al. 91]), among which is unilaterally intended cooperation, where just one agent believes there is mutual dependence between him and another, and helps the other in view of the common goal.
In spite of such similarities, however, the two kinds of behavior are to be kept distinct: while mutual dependence is still implied by unilaterally intended cooperation, "pure" unilateral help implies just the recipient's dependence.

4.3 Why Should an Agent Help?

Social studies (see [Brehm & Kassin 93]) identify at least three types of factors that can motivate human helping behavior:
• Biological factors. Natural selection could have produced the tendency toward assistance among relatives and toward reciprocal helping; furthermore, an altruistic personality could be genetically based.
• Emotional factors. For instance, the empathic concern produced by taking the perspective of a person in need makes people help in order to reduce the other person's distress.
• Social normative factors. These are based upon norms which constrain help-giving to specific contexts.
From the literature about social exchange, we would add a fourth motivation for helping:
• Strategic factors. A person could help others in order to gain personal advantages.
While biological and emotional factors cannot be easily modeled in a world of artificial agents, the last two factors appear immediately useful in practical systems.

4.3.1 Social Normative Factors

Two general norms appear to regulate helping behavior: (a) a norm of social responsibility, which prescribes helping those who are dependent on us; (b) a norm of reciprocity, which prescribes helping those who previously helped us. The two norms may happen to come into conflict, unless one assumes that the norm of reciprocity already implies that of social responsibility. This would mean that the reciprocated agent should be in need of the helper's help, i.e., it should depend on the helper. In such a case, the norm of reciprocity would always be stronger than the simple norm of social responsibility. So, if a would-be helper were to choose among a number of dependent agents asking for help, it would prefer those that have previously helped it.

4.3.2 Strategic Factors

A person could help others in order to achieve personal goals, such as obtaining some benefits resulting from the other's reciprocation, getting into the recipient's good graces, improving his self-image toward others, and so on. In other words, a reason for helping another can be a merely selfish advantage, independent of either the helper's interest in the recipient's actual welfare or the helper's compliance with some social norm. However, interestingly enough, strategic helping behavior often shows some connection with normative helping behavior. In particular, it can take into account, or better take advantage of, the norm of reciprocity. In fact, in a social setting where the norm of reciprocity is in force, an agent can help another in order to create a "debt" on the latter's part, so as to increase the probability of receiving its help in the future.
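As an illustration of how the two normative factors above could be operationalized, here is a minimal, hedged sketch (the function and set names are our own, not part of the paper's model): among the requesters a would-be helper believes to depend on it, it prefers those that have helped it before (norm of reciprocity), and otherwise falls back on mere social responsibility.

```python
from typing import List, Optional, Set

def choose_whom_to_help(requesters: List[str],
                        believed_dependent: Set[str],
                        past_helpers: Set[str]) -> Optional[str]:
    """Pick one requester to help, or None.

    - Norm of social responsibility: only requesters believed to depend
      on the would-be helper are candidates.
    - Norm of reciprocity: among the candidates, prefer an agent that
      has previously helped us (reciprocity overrides mere responsibility).
    """
    candidates = [a for a in requesters if a in believed_dependent]
    if not candidates:
        return None
    reciprocal = [a for a in candidates if a in past_helpers]
    return (reciprocal or candidates)[0]

# Illustrative use: B and C ask for help; both are believed dependent,
# but only C helped us in the past, so C is preferred.
print(choose_whom_to_help(["B", "C"],
                          believed_dependent={"B", "C"},
                          past_helpers={"C"}))  # -> "C"
```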

5 A Simulative Approach to the Study of Helping Behavior

As already mentioned, we are also interested in experimentally testing some of our intuitions about helping behavior in a simulated environment. Such an environment should represent the MAS scenario described in the introduction; that is, it should represent the situation in which some agents, behaving in a common environment with limited shared resources, can pursue different and autonomous goals, find themselves in dependence relations with one another, and use various decision criteria concerning help-seeking and help-giving. With this aim, we constructed a simulative scenario consisting of a two-dimensional grid where some food is randomly located; this world is populated by simple agents that need to look for food and eat in order to survive, and that can interact with one another. The dependence relationships among agents consist of the differences among their powers, which continuously change as a side-effect of their actions. Currently our agents have a set of built-in characteristics which make helping behavior possible without resorting to a complex belief system; in the future, we intend to augment the agents' capabilities, to make their behavior more complex and to bring it closer to the previously outlined model. In the following, the experimental setting is described and some preliminary results are illustrated.

5.1 The Experimental Setting

The agent architecture is quite simply composed of a visual sensor, a set of effectors, a goal generator, and a planning module. The sensor lets the agent perceive both the food units and the other agents within a limited sensorial area (currently set to a 7 x 7 square). The goal generator chooses a goal to pursue on the basis of the sensorial information and the agent's internal state; the latter is related to the energy level, which ranges over integer values from 0 to 100, and has a lower threshold at 20 and an upper threshold at 60. Agents die when some action causes their energy to go below 0. The internal states are actually symbolic labels attached to the various intervals of energetic values. The relationship between the energetic levels and the internal states is represented in Figure 1.

Internal State:    Danger              Hunger              Normality
Energetic Level:   0 ...... 20 (lower threshold) ...... 60 (upper threshold) ...... 100

Figure 1. Relationship between Energetic Level and Internal State
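As a concrete illustration of the mapping in Figure 1, the following is a minimal sketch in Python. The function and label names are our own, the thresholds are those reported above, and the exact handling of values falling on a threshold is our assumption:

```python
LOWER_THRESHOLD = 20   # below this level the agent is in danger
UPPER_THRESHOLD = 60   # above this level the agent is in a normal state

def internal_state(energy: int) -> str:
    """Map an energy level (0..100) to the symbolic internal state."""
    if energy < LOWER_THRESHOLD:
        return "danger"      # social agents will look for help
    elif energy < UPPER_THRESHOLD:
        return "hunger"      # the agent will look for food
    else:
        return "normality"   # the agent may give help or keep looking for food
```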

The planner is charged with the tasks of generating a plan suitable for pursuing the agent's goal, and of controlling its successful execution. At present, the planning module limits itself to choosing the right plan from a set of pre-established ones; each plan is composed of a sequence of actions, as, for instance, in the plan for giving help to a needy agent, which requires looking for food, choosing a food unit, going toward it, taking it, going to the recipient, and giving the food to the recipient. Actions have either a procedural implementation or a direct translation in terms of commands for the effectors. Finally, the effectors can execute the following elementary actions: moving (one location at a time), taking, giving, and eating (one food unit at a time), and signalling a needy state to the other agents (by changing one's own appearance). Each action affects the agent's internal state by lowering the energetic level by a specified amount, except for the action of eating, which increases it; "being still" (which occurs, for example, when an agent waits to be helped) is also considered an action, even if it does not involve any effector, because it decreases the energetic level.
Two types of agents have been defined:
• "lonely" agents, which simply ignore one another, so that there is no interaction among them; their goal is always to individually find food;
• "social" agents, which have different goals according to their different internal states; more precisely, in case of danger, their goal generator activates the goal of looking for help; when hungry, their goal is to find food; and finally, in the normal state, if there are any visible needy agents, the goal of giving help is activated, otherwise they go on looking for food (see the sketch after Figure 2).
In other words, at different times during a simulation the same social agent could either look for help, find food, or give help (if help is needed by other agents), depending on the variations of its energetic level.

The relationships among internal states, types of agent, and goals are summarized in Figure 2:

Internal State    Lonely agent    Social agent
Danger            Find food       Look for help
Hunger            Find food       Find food
Normal            Find food       Give help (if needy agents are visible), otherwise find food

Figure 2. Relationships among Internal State, Type of Agent and Goals
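A minimal sketch of the goal-selection rule summarized in Figure 2 follows; the function name, goal labels, and the boundary handling at the thresholds are illustrative choices of ours:

```python
def choose_goal(agent_type: str, energy: int, needy_agent_visible: bool) -> str:
    """Select the current goal of an agent, following the scheme of Figure 2."""
    if agent_type == "lonely":
        return "find_food"             # lonely agents always look for food on their own
    # social agents: the goal depends on the internal state (see Figure 1)
    if energy < 20:                    # danger
        return "look_for_help"         # signal neediness and wait to be helped
    if energy < 60:                    # hunger
        return "find_food"
    # normality: help a visible needy agent, otherwise keep looking for food
    return "give_help" if needy_agent_visible else "find_food"

# Illustrative use: a social agent with full energy that sees a needy agent gives help.
print(choose_goal("social", energy=80, needy_agent_visible=True))  # -> "give_help"
```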

The simulation is implemented using the MICE testbed [Montgomery & Durfee 90], which allows the creation of two-dimensional worlds populated by user-defined agents. Our world is a 15 x 15 grid that contains 60 food units and 30 agents, all randomly located. The number of food units is kept constant throughout the simulation: food randomly reappears on the grid each time one or more agents eat some. The action of choosing a food unit among the perceived ones is performed in a slightly different way depending on the pursued goal: when executing the plan for finding food, an agent simply selects the food unit nearest to itself, while in the case of giving help an agent selects the food unit that is nearest with respect to both itself and the recipient.

5.2 Preliminary Results

When simulating helping behavior, a first variable that can easily be studied is the agents' survival; it is important because it gives an indication of the utility of helping behavior for the whole social network. In order to test the effect of helping behavior on the agents' survival, we ran a set of preliminary simulations varying the type of agent (social vs. lonely). The results obtained by setting the energetic value of food to 20 (i.e., the amount by which an agent's energy increases each time it eats a food unit) are presented in Figure 3, where the percentage of alive agents is plotted against time; each point represents the mean value across 10 simulations.

[Plot omitted: lonely vs. social agents; x-axis: time steps t50 to t500; y-axis: % of alive agents, from 80 to 100.]

Figure 3 - Percentage of alive agents as a function of time

In this figure, it is apparent that the social condition increases the percentage of surviving agents compared with the non-social condition. Furthermore, the high percentage of alive agents in the social condition remains stable from the start, whereas the percentage of alive lonely agents keeps decreasing. Similar results have also been obtained by changing the energetic value of food, as illustrated in Figure 4, where the percentages of alive social and lonely agents after 500 time steps are plotted against the food energetic values; each point represents the mean value across 10 simulations.

[Plot omitted: lonely vs. social agents; x-axis: food energetic value from v5 to v100; y-axis: % of alive agents, from 0 to 100.]

Figure 4 - Percentage of alive agents at the end of the simulation as a function of the food energetic value

In this figure, it is interesting to notice that the social condition increases (in a statistically significant way) the percentage of surviving agents compared with the non-social condition, for every food energetic value but the lowest one. From these results, it can be concluded that lonely agents find themselves in competition over limited food resources; in other words, agents located near one another probably choose to go toward, and try to eat, the same food unit, thereby wasting energy and decreasing their probability of survival, because only one of them will succeed in eating the chosen food. On the other hand, the advantage of social agents over lonely agents seems to have two reasons: firstly, in case of danger, needy agents do not move until they die or receive some food from another agent, thus decreasing the number of agents competing over the food resources; secondly, the helping acts performed by normal agents in favour of the needy ones increase the probability of survival of the latter. And since each agent can become a needy one in given circumstances, helping behavior turns out to be a powerful strategy for increasing the probability of survival of the entire social network.

5.3 Further Developments

The previous simulations describe what happens in a simplified artificial world in which agents help each other almost automatically (in fact, the helping behavior has been constrained just by the existence of a request for help and by the amount of energy possessed by the would-be helper). Such simulations have shown that in particular conditions help-giving is useful for the survival of the whole social network, and that the testbed is quite sensitive to several parameters (e.g. the food energetic value, but also the number of food units, the sensorial range, etc.) that can be experimentally manipulated.

Apparently, a gap exists between our model of help and the current simulation; however, the latter is to be considered just as a basis for an experimental study of help in which to gradually introduce several variables (social knowledge, cost/benefit considerations, planning, and so on). In the future it will be necessary to further constrain the helping behavior by introducing different conditions for helping, derived from the previous theoretical analysis, and to see how they affect both the helping behavior (for instance, in terms of frequency of helping instances) and the social network, in terms of the agents' life length and of the stability or increase of their power.
Another improvement of our simulation will be to endow our agents with anticipatory abilities. At present, both at the moment of its commitment to help and at each step of its helping behavior, the helper checks its amount of energy and, in the case of a dangerous decrease, gives up helping (and possibly eats the food initially intended for the recipient). However, one can easily see the risks of ineffectiveness and waste of energy of such a behavior (the helper can spend energy for nothing, because its "help" is not carried through, and it can even risk personal injury). The helper's calculation of the amount of energy required by all the steps of a planned helping behavior before embarking on it would of course avoid such risks.
A further constraint we intend to introduce in the near future is the motivation for reciprocating, according to the norm of reciprocity. While the norm of social responsibility (to help those who depend on us) is already in force in our simulation (though in a very simplified and non-cognitive form, in that it is built into the agents' behavior), the norm of reciprocity does not play any role in it. To allow the application of such a norm, our agents should be endowed with a memory of the help given and received. Moreover, they should have a "notion" of help which includes at least some aspects of the definition of help we have suggested, such as intentionality. In fact, an agent should be able to distinguish between accidental advantages received from another and real, intentional help; only the latter would be recorded as an instance of help received.
Finally, in addition to the agents' survival, other quantitative and qualitative variables should be studied. For example, it would surely be important to qualitatively differentiate dependence within our social network, by introducing different skills and abilities in the agents, besides their different levels of energy. This will allow us to endow the agents with more stable "powers" (while at present, as we know, the individual amount of energy -- which is the agents' only kind of power -- is continuously changing). Stabler "powers", implying stabler dependence relations, can in turn favor recurring and specific interactions among certain individual agents (for instance, an agent's offer of a particular performance or resource in exchange for the performance or resource possessed by another agent, and vice versa).
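As a rough illustration of two of these planned extensions (they are not part of the current simulation), the following sketch checks the total energetic cost of a helping plan before committing to it, and keeps a simple memory of help received for later reciprocation. All names, and in particular the per-action costs, are illustrative assumptions, not the testbed's actual values:

```python
from typing import Dict, List

# Assumed per-action energy costs (illustrative, not those of the testbed).
ACTION_COST: Dict[str, int] = {
    "move": 1, "take": 1, "give": 1, "eat": 0, "signal": 1, "be_still": 1,
}

def plan_cost(plan: List[str]) -> int:
    """Total energy the helper would spend executing the whole plan."""
    return sum(ACTION_COST.get(action, 1) for action in plan)

def safe_to_commit(energy: int, plan: List[str], lower_threshold: int = 20) -> bool:
    """Commit to helping only if executing the full plan would not push
    the helper's own energy below the danger threshold."""
    return energy - plan_cost(plan) >= lower_threshold

# Simple memory of intentionally received help, enabling the norm of reciprocity later on.
help_received: Dict[str, int] = {}

def record_help(from_agent: str) -> None:
    """Record one instance of (intentional) help received from another agent."""
    help_received[from_agent] = help_received.get(from_agent, 0) + 1
```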

6 Conclusions

Although preliminary in its results, our investigation touches on some interesting aspects. A basic vocabulary to describe the agents' autonomous decision to help has been devised. Such interest in the topic of help is justified by its crucial role in MAS contexts: help is in fact a basic component of most social interactions, both in cooperation and in exchange. Our model requires further improvement, especially regarding the analysis of the motivations for helping. Consequently, the simulation should be improved by introducing more sophisticated criteria concerning the autonomous decision to help, and by measuring the global performance of the multi-agent system according to different variables. The general aim of this research is to synthesize some computationally simple decision rules that improve the behavior and performance of agents in multi-agent environments.

Acknowledgements A preliminary version of this paper has been presented at the Cooperative Knowledge Based Systems Working Conference (CKBS'94) held on June 15-17, 1994 at Keele (UK). The authors would like to thank Cristiano Castelfranchi for encouraging this research, the CKBS '94 conference participants for their stimulating comments and discussions, and the COOP '95 anonymous reviewers for useful remarks. The authors participate in the "Project for the Simulation of Social Behavior" at IP-CNR. The work is realized in the framework of the ESPRIT III Working Group No.8319 "A Common Formal Model of Cooperating Intelligent Agents (ModelAge)". Paola Rizzo is supported by a scholarship from CNR "Comitato Nazionale per la Scienza e le Tecnologie dell'Informazione".

References

Brehm, S., Kassin, S.M. (1993). Social Psychology, 2nd ed. Houghton Mifflin: Boston, MA.
Castelfranchi, C. (1990). Social Power: A Point Missed in Multi-Agent, DAI and HCI. In Y. Demazeau, J.-P. Muller (Eds.), Decentralized A.I. North Holland: Amsterdam, The Netherlands.
Castelfranchi, C., Miceli, M., Cesta, A. (1992). Dependence Relations among Autonomous Agents. In E. Werner, Y. Demazeau (Eds.), Decentralized A.I. - 3. North Holland: Amsterdam, The Netherlands.
Cesta, A., Miceli, M. (1993). In Search of Help: Strategic Social Knowledge and Plans. Proceedings of the 12th International Workshop on Distributed AI, Hidden Valley, PA.
Cohen, P. R., Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42, 213-261.
Conte, R., Miceli, M., Castelfranchi, C. (1991). Limits and levels of cooperation: Disentangling various types of prosocial interaction. In Y. Demazeau, J.-P. Muller (Eds.), Decentralized A.I. - 2. Elsevier: Amsterdam, The Netherlands.
Durfee, E.H., Rosenschein, J.S. (1994). Distributed Problem Solving and Multi-Agent Systems: Comparisons and Examples. Proceedings of the 13th International Workshop on Distributed AI, Seattle, WA.
Glance, N.S., Huberman, B.A. (1993). Organizational Fluidity and Sustainable Cooperation. Pre-Proceedings of the 5th European Workshop on Modeling Autonomous Agents in a Multi-Agent World, Neuchatel, Switzerland.
Grosz, B. J., Sidner, C. L. (1990). Plans for discourse. In P. R. Cohen, J. Morgan, M. E. Pollack (Eds.), Intentions in Communication. MIT Press: Cambridge, MA.
Huberman, B.A., Hogg, T. (1988). The Behavior of Computational Ecologies. In B.A. Huberman (Ed.), The Ecology of Computation. Elsevier: Amsterdam, The Netherlands.
Lux, A., de Greef, P., Bomarius, F., Steiner, D. (1993). A Generic Framework for Human Computer Cooperation. Proceedings of the International Conference on Intelligent and Cooperative Information Systems (ICICIS 1993), IEEE Computer Society Press: Piscataway, NJ.
Martial, F. v. (1992). Coordinating Plans of Autonomous Agents. LNAI 610, Springer-Verlag: Berlin.
Miceli, M., Cesta, A. (1993). Strategic Social Planning: Looking for Willingness in Multi-Agent Domains. Proceedings of the 15th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum: Hillsdale, NJ.
Montgomery, T. A., Durfee, E. H. (1990). Using MICE to study intelligent dynamic coordination. Proceedings of the IEEE Conference on Tools for Artificial Intelligence, IEEE Computer Society Press: Piscataway, NJ.
Pan, J.Y.C., Tenenbaum, J.M. (1991). An Intelligent Agent Framework for Enterprise Integration. IEEE Transactions on Systems, Man and Cybernetics, 21(6), 1391-1408.
Pollack, M. E. (1990). Plans as complex mental attitudes. In P. R. Cohen, J. Morgan, M. E. Pollack (Eds.), Intentions in Communication. MIT Press: Cambridge, MA.
Rosenschein, J.S., Zlotkin, G. (1994a). Designing Conventions for Automated Negotiation. AI Magazine, Fall, 29-46.
Rosenschein, J.S., Zlotkin, G. (1994b). Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. MIT Press: Cambridge, MA.
Tuomela, R. (in press). Cooperation: A Philosophical Study. Stanford University Press: Stanford, CA.
Weld, D., Etzioni, O. (1994). The First Law of Robotics (a call to arms). Proceedings of AAAI-94, Seattle, WA. AAAI Press: Menlo Park, CA.