Anticipating and Resolving Conflicts During Organisational Change

Martin J. Kollingbaum, Timothy J. Norman
Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3UE
{mkolling, tnorman}@csd.abdn.ac.uk

Technical Report AUCS/TR0505, June 21, 2005

Abstract

Virtual organisations can be regarded as sets of roles that introduce normative directives for the agents that join such organisations by adopting these roles. An agent in a specific role holds a set of obligations, permissions and prohibitions that govern its practical reasoning. In adopting such a role, it commits to fulfil its obligations. Organisational change poses a specific problem: in such a situation, the normative directives within the organisation may change, and an agent may experience normative changes to the role (or set of roles) it has adopted. Such changes can create conflicts – the agent may not be able to determine whether the actions it deploys for fulfilling its obligations are allowed or forbidden options. Means for disambiguating the normative position of an agent become a necessity. In this paper, the NoA model of norm-governed agency is described. Agents based on this model are governed by norms in their practical reasoning. The model specifically deals with organisational change, with the goal of making an agent robust against possible norm conflicts. Based on the characteristics of the NoA model, a classification of possible conflicts is presented and means for their detection and resolution are provided.

1 Introduction

Virtual organisations can be modelled in terms of a set of roles that are characterised by normative directives for agents joining such organisations by adopting those roles (see, for example, [4, 34]). The role of a social individual within such an organisation may be characterised by a set of obligations, permissions and

prohibitions that determine its responsibilities and privileges within such a social context. In this view, therefore, the essential building blocks of a virtual society are roles described as sets of norms. In our model of virtual organisations, agents establish agreements with each other that assign roles to agents. By adopting such a role, an agent commits to act in accordance with the norms that characterise its role (or set of roles). The agents recruited into such roles bring their own capabilities into the community of agents. Whereas the set of roles determines the structure of an organisation, the capabilities of the agents determine the overall behaviour of the community.

Virtual organisations are situated in a changing world and may, therefore, need to adapt to changes that occur. This dynamic nature of organisations has to be taken into account in the design of the agents recruited into such an organisational structure. A common change within an organisation is a change in the responsibilities and privileges of an agent in a specific role. Specific actions the agent could deploy previously may suddenly be forbidden – the social or normative position of the agent changes. If, in the course of such an organisational change, new norms are introduced that create a conflict – the agent suddenly holds, at the same time, a prohibition and a permission for a specific act it decides to perform to fulfil its obligations – then it is rendered incapable of determining whether its options for action are consistent with its set of norms. One solution would be to try to prove, for each change, that the complete set of norms is free of conflicts (see, for example, [13]). We follow a different approach: we accept that organisations are situated in a changing world and, therefore, try to make the agent robust against such conflict situations.
In order to introduce robustness in the face of organisational change, a model of an agent is needed that can detect and resolve norm conflicts. The NoA model of norm-governed agency [20, 24] is specifically designed to deal with such problems. NoA takes inspiration from classical BDI models [35], with the following key distinctions: (a) norms are introduced as a governing concept into the practical reasoning of an agent and (b) a specific form of deliberation, called informed deliberation, is introduced. An agent based on the NoA model will analyse whether it can fulfil its obligations in a norm-consistent way – are all options of action for such an obligation allowed, or at least one of them, or would the agent be forced to violate other norms if it wants to fulfil this obligation? NoA agents do not filter out options for action that are norm-inconsistent. Instead, the deliberation process of the agent is informed about the consistency situation of an obligation. With such norm-informed deliberation, a NoA agent becomes norm-autonomous [6] – it can decide whether to honour its norms or act against them. It is important to note that the detection of consistency of actions can take place only if there are no conflicts within the set of norms – the agent must be able to create a complete partitioning of the options for action in terms of consistency. Such conflicts may be introduced during a change in the organisation – for example, a role receives new responsibilities (obligations), new restrictions (prohibitions) or new privileges (permissions). With that, the agent may

already hold a prohibition to achieve a state of affairs, but receive a new permission for actually achieving this state of affairs due to organisational changes. An important aspect of the NoA model is to make agents robust against such organisational changes and the potential threat of introduced conflicts. Allowing conflicts in the first place has practical benefits in engineering such multi-agent systems – exceptional situations do not have to be anticipated in advance; instead, the agents themselves are equipped to deal with them. For that, NoA introduces mechanisms for detecting and classifying conflicts and proposes conflict resolution strategies the agent can employ to disambiguate its normative position so that it can decide how to fulfil its obligations.

In section 2, the NoA model of norm-governed agency is discussed and outlined in detail. Specific attention is given to (a) problems of consistency between the agent's actions and its currently held norms and (b) conflicts within this set of norms. These are important problems agents may encounter during organisational change. A specific form of deliberation, called informed deliberation, is employed by NoA to inform an agent about problems of consistency and conflict. This form of deliberation makes the agent norm-autonomous – the agent can decide whether to honour its norms. In section 3, this model of informed deliberation is used to investigate issues of norm conflict during organisational change. Sections 4 and 5 conclude this paper.

2 A Model of Norm-governed Agency

If we regard virtual organisations as governed by a set of rules or regulations that determine what an agent in a specific role is obliged, permitted or prohibited to do, then a specific form of agent is required that can take such regulations or norms into account in its decision making – it must be able to reason about its current social or normative position. To provide an agent with such abilities, a set of issues must be investigated:

• What are the normative concepts that an agent takes into account in its practical reasoning?

• How are the norms and actions of an agent represented?

• How do norms influence the practical reasoning of the agent?

• How do agents resolve conflicts between adopted norms, and inconsistencies between actions and norms, that have arisen, for example, due to organisational changes?

Conflicts between norms, and inconsistencies between the agent's possibilities for action and its currently held set of norms, may occur especially in the case of organisational change. An important aspect of a model of a norm-governed agent must therefore be concepts for resolving such problems. The NoA model and architecture for norm-governed practical reasoning [20, 24] has been designed specifically for the development of agents that are able to

reason about their normative position and to detect and resolve norm conflicts. A detailed outline will therefore be given of this model and of conflict situations and their resolution.

2.1 Normative Positions

The work of Kanger [19], Lindahl [25], Jones and Sergot [18, 32] and others on the representation of rights in human interaction, and the concept of the normative position of an individual in a social environment, provide strong inspiration for the modelling of normative concepts and their interpretation in the NoA model of norm-governed practical reasoning. Lindahl, for example, formulates three basic liberties that underpin the action of social individuals, using the deontic operator May (denoting permission) and an action operator Do (denoting that an agent "sees to it that" [32, 28]):

• the liberty of agent p to achieve a state of affairs F: May Do(p, F)

• the liberty of agent p to achieve a state of affairs ¬F: May Do(p, ¬F)

• the liberty of agent p to remain passive: May(¬Do(p, F) ∧ ¬Do(p, ¬F))

These three courses of action are mutually exclusive – the agent may either achieve F, achieve ¬F or remain passive, but it will do exactly one of them. By creating consistent conjunctions of these three basic liberties (and shortening the resulting statements according to equivalences), Lindahl introduces seven individualistic one-agent types of legal or normative positions. This formulation of normative positions states that, for each tuple ⟨p, F⟩, where p is a specific agent and F is a proposition, exactly one of these seven positions is true (the operator May, expressing permission, serves as the dual of Shall according to the equivalence May φ ≡ ¬Shall ¬φ). Figure 1 shows the seven individualistic one-agent types of normative positions according to [25]. The diagram shows that a specific normative position is determined by the liberties the agent currently holds. It also shows that the agent can move from a position with more liberties into one with fewer liberties, and vice versa.
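The count of seven position types can be checked with a small sketch. The encoding below (one boolean per liberty) is my own illustration, not part of Lindahl's formalism:

```python
from itertools import product

# The three basic liberties; a position type assigns each a truth value.
LIBERTIES = ("May Do(p, F)", "May Do(p, not-F)", "May remain passive")

# Denying all three liberties is inconsistent: the agent will do exactly
# one of the three things, so at least one of them must be permitted.
positions = [combo for combo in product((True, False), repeat=len(LIBERTIES))
             if any(combo)]

print(len(positions))  # 7 -- Lindahl's individualistic one-agent types
```

The seven positions are simply the non-empty subsets of the three liberties: 2³ − 1 = 7.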
Lindahl clearly inspires the design of the representation of normative concepts within NoA – operators expressing "obligation", "permission" and "prohibition" are combined with a representation of activities. An important distinction, however, between the representation of norms in NoA and that in the models developed by Lindahl, Sergot and others is that a NoA agent may both achieve states of affairs and perform actions [28]. This captures the idea that an agent may, for example, be obliged to achieve some goal or to perform a specific act, and that these are clearly distinct norms.

2.2 The NoA Model

The NoA model of norm-governed agency also takes inspiration from the classic BDI model of agency [35], which gives an account of practical reasoning in terms of mental notions such as beliefs, desires (or goals / states of affairs) and intentions (plans under execution).

[Figure 1 appears here, showing the seven positions T1–T7 as combinations of the three basic liberties.]

Figure 1: Lindahl's individualistic one-agent normative positions.

NoA extends this basic (and solipsistic) model through the introduction of normative concepts to enable norm-governed practical reasoning. In the NoA model, the norms held by the agent are its obligations, permissions and prohibitions:

• Obligations are the principal motivators for an agent – they motivate the agent either to achieve a state of affairs or to perform an action. Based on such a motivation, a norm-governed agent may select appropriate plans for execution.

• Prohibitions require the agent not to achieve a state of affairs or not to perform an action – the agent is forbidden to pursue a specific activity. Prohibitions are not motivators for the agent, but explicitly restrict the choice of activities the agent can, ideally, employ.

• Permissions explicitly allow the achievement of a state of affairs or the performance of an action.

A NoA agent maintains a set BELIEFS as a representation of its view of the current state of the world, a set PLANS of pre-specified plans, a set NORMS representing its adopted norms, and a set ROLES comprising all those roles the agent has adopted by signing contracts with members of one or more societies. The agent will, therefore, also hold a set CONTRACTS. Each role r ∈ ROLES is declared in a contract c ∈ CONTRACTS and is characterised by a set of norms – the adoption of a specific role leads to the

adoption of all the norms describing this role. The norm specifications of all the adopted roles together comprise the set NORMS. To allow the unique identification of elements within these sets, the concept of an identifier is introduced.

Definition 1 The set I^Norms = {n1, ..., nn} is a finite set of norm identifiers, I^Plans = {p1, ..., pn} a finite set of plan identifiers, I^Roles = {r1, ..., rn} a finite set of role identifiers, I^Agents = {x1, ..., xn} a set of agent identifiers and I^Contracts = {c1, ..., cn} a finite set of contract identifiers. IDENTIFIERS = I^Norms ∪ I^Plans ∪ I^Roles ∪ I^Agents ∪ I^Contracts is the set of all identifiers, where I^Norms, I^Plans, I^Roles, I^Agents and I^Contracts are mutually disjoint sets.

In outlining the NoA model, we will use the concept of expressions, comprising first-order predicates and the operators ∧, ∨ and ¬:

Definition 2 The set EXPRESSIONS is the set of all possible expressions built from first-order predicates over variables and constants and the operators ∧, ∨ and ¬.
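As a rough illustration of such expressions evaluated against ground beliefs, consider the sketch below. The tagged-tuple encoding ("and" / "or" / "not") is my own assumption, not NoA syntax:

```python
# Evaluate a ground EXPRESSIONS term against a set of ground beliefs.
# Compound expressions are tuples tagged with their operator; anything
# else is treated as a ground first-order predicate.
def holds(expr, beliefs):
    op = expr[0]
    if op == "and":
        return all(holds(e, beliefs) for e in expr[1:])
    if op == "or":
        return any(holds(e, beliefs) for e in expr[1:])
    if op == "not":
        return not holds(expr[1], beliefs)
    return expr in beliefs  # ground predicate, e.g. ("at", "a1", "lab")

beliefs = {("at", "a1", "lab"), ("holds", "a1", "key")}
print(holds(("and", ("at", "a1", "lab"),
                    ("not", ("door", "open"))), beliefs))  # True
```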

2.3 Roles and Contracts

A virtual organisation is created when a set of agents sign a contract that assigns roles and, with that, sets of norms to the signatories. A contract is an instantiation of a specific contract template that specifies a set of roles and a set of norms, where each norm specification is related to a specific role. The set CONTRACTS is the set of contracts currently signed by the agent. The set ROLES is the set of roles the agent has currently adopted by signing contracts. The set NORMS is the set of norms the agent has currently adopted by signing contracts and adopting one or more roles within those contracts.

Definition 3 A role ρ is defined as a tuple ⟨c, r⟩, where c ∈ I^Contracts and r ∈ I^Roles. A role is, therefore, related to a specific contract.
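A minimal sketch of this machinery is given below; the template layout and field names are illustrative assumptions, not the NoA contract language:

```python
# Signing a contract instantiates a template; adopting role <c, r>
# (Definition 3) adds every norm addressed to r to the agent's NORMS.
contract_template = {
    "roles": ["buyer", "seller"],
    "norms": [
        ("obligation", "seller", "achieve(delivered(Goods))"),
        ("permission", "buyer", "perform(inspect(Goods))"),
    ],
}

def adopt(contract_id, role, template):
    rho = (contract_id, role)  # Definition 3: a role is a tuple <c, r>
    norms = [n for n in template["norms"] if n[1] == role]
    return rho, norms

rho, adopted = adopt("c1", "seller", contract_template)
print(rho)      # ('c1', 'seller')
print(adopted)  # only the norm addressed to the seller role
```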

2.4 Norms

Norms in the NoA model are the obligations, permissions and prohibitions the agent adopts when it takes on a specific role within a society. An important aspect in the specification of norms is the distinction between the achievement of a state of affairs and the performance of an action (see, for example, [28]). In the context of NoA, a norm-governed agent is described as pursuing either a state-oriented or an action-oriented activity. Norm declarations, therefore, contain a so-called activity specification. This activity specification is the centre-piece of the norm declaration.

Definition 4 An activity A determines either the achievement of a state of affairs, called a state-oriented activity, or the performance of an action, called an

action-oriented activity. The expression achieve(p) expresses the achievement of a state of affairs p. The expression perform(σ) expresses the performance of an action σ, where σ describes the signature of a pre-specified plan procedure formulated in the NoA language.

An agent can be allowed, forbidden or required to achieve or not achieve a state of affairs (or its negation):

• "achieve a state of affairs p": achieve(p)

• "achieve a state of affairs ¬p": achieve(¬p)

• "not achieve a state of affairs p": ¬achieve(p)

• "not achieve a state of affairs ¬p": ¬achieve(¬p)

An agent can also be obliged, forbidden or allowed to perform or not perform an action:

• "perform action σ": perform(σ)

• "not perform action σ": ¬perform(σ)

This definition of an activity shows the possible norm declarations in terms of either states or actions: the achievement / non-achievement of a state of affairs, the achievement / non-achievement of its negation, and the performance / non-performance of an action σ. The activity achieve(¬p) means that the agent pursues any activity that leads to a state of affairs where p does not hold – that is, a state in which the agent's current set of beliefs, BELIEFS, does not contain p. The set BELIEFS is defined in the following way:

Definition 5 An agent holds a set of beliefs BELIEFS = {b1, ..., bn}, where the bi are first-order ground terms.

Describing the set of beliefs not just as propositions but as first-order ground terms has fundamental consequences for the NoA model. In contrast to Lindahl, who uses an action modality based on propositional logic, an activity in NoA, expressed as achieve(p), uses first-order predicates to express the achievement of a state of affairs or goal. Currently, NoA does not allow the formulation of the achievement of a conjunction of goals – an activity statement achieve(p ∧ q) cannot be interpreted by NoA.
Only activity statements over first-order predicates or negated first-order predicates are allowed.

Definition 6 The set ACTS is the set of all possible activity statements.

Norm specifications include activity statements, expressing that a state or action is obliged, permitted or prohibited. A label is introduced to identify a norm specification as either an obligation, a permission or a prohibition.
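The six activity forms can be represented as plain records. The class layout below is an assumption for illustration, not the NoA language itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    kind: str                      # "achieve" or "perform"
    content: str                   # first-order predicate or plan signature
    content_negated: bool = False  # achieve(not-p)
    negated: bool = False          # not-achieve(p), not-perform(sigma)

forms = [
    Activity("achieve", "p"),                        # achieve(p)
    Activity("achieve", "p", content_negated=True),  # achieve(not-p)
    Activity("achieve", "p", negated=True),          # not-achieve(p)
    Activity("achieve", "p", True, True),            # not-achieve(not-p)
    Activity("perform", "sigma"),                    # perform(sigma)
    Activity("perform", "sigma", negated=True),      # not-perform(sigma)
]
print(len(set(forms)))  # 6 distinct activity statements
```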


Definition 7 The set L^Norms = {obligation, permission, prohibition} is the set of labels used to identify obligations, permissions and prohibitions¹.

A norm specification can then be defined in the following way:

Definition 8 A norm specification, expressing an obligation, permission or prohibition, is a tuple ⟨n, i^Roles, A, a, e⟩, where

• n ∈ L^Norms is a label,

• i^Roles ∈ I^Roles is a role identifier naming the norm addressee,

• A ∈ ACTS is the activity specification,

• a ∈ EXPRESSIONS is the activation condition, and

• e ∈ EXPRESSIONS is the expiration condition.

Norms in NoA are conditional entities – they are relevant to an agent under specific circumstances only. Our model of norm-governed agents includes a concept of explicit norm activation and deactivation: specifications of norms carry two conditions, an activation condition and an expiration condition. These two conditions allow an exact specification of the circumstances under which a norm becomes active and, therefore, relevant to the agent. A separate expiration condition allows a more precise specification of the circumstances under which a norm is actually active:

• As soon as the activation condition holds, the norm is activated and becomes relevant to the agent.

• The norm continues to be activated even if the activation condition ceases to hold.

• A norm is transferred from an activated into a deactivated state only if the expiration condition holds.

With that, the two conditions test two events – the occurrence of a state of affairs that activates the norm and the occurrence of a state of affairs that deactivates it. A norm cannot be activated if the expiration condition holds at the same time as the activation condition (the norm is immediately deactivated). Therefore, a norm is transferred from an inactive state into an active state if the activation condition holds and the expiration condition does not hold.
It is transferred from an active state into an inactive state if the expiration condition holds, regardless of the activation condition.

The NoA approach of using two separate conditions is in contrast to other proposals for the specification of norms, where conditional norms are usually modelled with a single activation condition (see, for example, [11]). This approach to defining conditional norms is subsumed by the activation / expiration condition pair in NoA. For example, if we assume p as the activation condition and ¬p as the expiration condition, then the norm is activated and relevant to the agent if and only if p holds. Alternatively, if the activation condition is p and the expiration condition is declared as FALSE (the norm will never expire), then the occurrence of p is a triggering condition that determines under what circumstances a norm begins to be relevant without ever expiring – a case that cannot be expressed by a single condition. If, as another alternative, we assume that the expiration condition is ¬p ∨ q, then the expiration condition is not just the negation of the activation condition (testing the absence of p), but there is another state of affairs, expressed by q, that also deactivates the norm. Such situations cannot be expressed by approaches based on a single condition.

According to the above definition, normative statements can contain negated activity statements or be negated themselves. It is important at this point to make clear how the NoA interpreter behaves when it encounters such negated specifications. Every norm held by a NoA agent is a positive occurrence of a permission, prohibition or obligation, specifying a positive occurrence of an activity statement – the set NORMS contains only such positive representations of norms. Formulations of negated normative statements are transformed by the NoA interpreter into their equivalent positive formulation. For example, if the designer of the agent specifies a negated permission to not achieve a state of affairs, then it is treated by NoA as an obligation. This transformation is important, as an obligation is a motivator for a NoA agent to act, whereas a permission does not motivate any activity.

¹ A label "sanction" exists as syntactic sugar; it is an obligation for an agent in the role of a so-called "authority" to pursue certain activities that represent such sanctions (see [21] for more details).
The NoA interpreter transforms norm specifications according to the following equivalence relationships²:

• ¬permission(i^Roles, ¬achieve(p), a, e) ≡ obligation(i^Roles, achieve(p), a, e)

• ¬permission(i^Roles, ¬achieve(¬p), a, e) ≡ obligation(i^Roles, achieve(¬p), a, e)

• ¬permission(i^Roles, achieve(p), a, e) ≡ prohibition(i^Roles, achieve(p), a, e)

• ¬permission(i^Roles, achieve(¬p), a, e) ≡ prohibition(i^Roles, achieve(¬p), a, e)

• ¬permission(i^Roles, ¬perform(σ), a, e) ≡ obligation(i^Roles, perform(σ), a, e)

• ¬permission(i^Roles, perform(σ), a, e) ≡ prohibition(i^Roles, perform(σ), a, e)

An obligation to not see to it that a state of affairs is achieved is equivalent to a prohibition to see to it that this state of affairs is achieved:

obligation(i^Roles, ¬achieve(p), a, e) ≡ prohibition(i^Roles, achieve(p), a, e)

An obligation to not see to it that an action is performed is equivalent to a prohibition to see to it that this action is performed:

obligation(i^Roles, ¬perform(σ), a, e) ≡ prohibition(i^Roles, perform(σ), a, e)

A negated permission containing a negated activity statement expresses an obligation for the agent. Obligations with a negated activity are not treated as motivators, but as prohibitions. In the same way, a prohibition with a negated activity specification is equivalent to an obligation:

prohibition(i^Roles, ¬achieve(p), a, e) ≡ obligation(i^Roles, achieve(p), a, e)

prohibition(i^Roles, ¬perform(σ), a, e) ≡ obligation(i^Roles, perform(σ), a, e)

For NoA, it is important to distinguish between norms that are motivators for action and norms that prescribe the range of activities that are allowed or forbidden. A motivating obligation will initiate the deliberation process of the agent – the selection of a specific plan for execution. Permissions and prohibitions contribute to this deliberation by partitioning the set of plans into allowed and forbidden options. It is therefore important to perform such transformations. The following specifications comprise a special case:

• permission(i^Roles, ¬achieve(p), a, e)

• permission(i^Roles, ¬achieve(¬p), a, e)

• permission(i^Roles, ¬perform(σ), a, e)

A permission to abstain from achieving a state of affairs (or its negation) or from performing an action does not have any impact on the practical reasoning of a NoA agent. Such a permission is, therefore, ignored by the NoA interpreter.

² These equivalence relationships are inspired by deontic logic.
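The transformation into positive norms can be sketched as a rewriting function. The encoding of a specification as (norm negated?, label, activity negated?, activity) is my own:

```python
# Rewrite a possibly negated norm specification into its positive
# equivalent, following the equivalences above. Returns None for the
# special case permission(not-A), which the interpreter ignores.
def normalise(norm_negated, label, activity_negated, activity):
    if norm_negated:                           # not-permission(...) forms
        if activity_negated:
            return ("obligation", activity)    # not-P(not-A) == O(A)
        return ("prohibition", activity)       # not-P(A)     == F(A)
    if activity_negated:
        if label == "obligation":
            return ("prohibition", activity)   # O(not-A) == F(A)
        if label == "prohibition":
            return ("obligation", activity)    # F(not-A) == O(A)
        return None                            # P(not-A): no impact
    return (label, activity)                   # already positive

print(normalise(True, "permission", True, "achieve(p)"))   # ('obligation', 'achieve(p)')
print(normalise(False, "prohibition", True, "perform(s)")) # ('obligation', 'perform(s)')
print(normalise(False, "permission", True, "achieve(p)"))  # None
```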

2.5 Plans

The design of the NoA architecture is based on reactive planning [14, 15, 1], but also takes inspiration from production systems such as CLIPS or Jess [16]. Characteristic of a reactive planning system is the provision of pre-specified plan procedures at design time as the behavioural repertoire of an agent. A NoA agent adopts a set of such plans as its set PLANS.

Obligations can motivate the agent either to achieve a state of affairs or to perform an action, and plan procedures in NoA have to serve both cases. If an action-oriented activity is required, plans are selected directly according to their signature. In the case of a state-oriented activity, plans have to be selected according to the effects they produce. To allow such a selection, plan specifications contain explicit effect specifications. Similar to norms, plans are relevant to an agent under specific circumstances only – a plan is applicable if specific conditions hold. Consequently, NoA plan specifications contain a set of preconditions. These preconditions are important for instantiating a plan to make it ready for execution.


Definition 9 A plan is defined as a tuple P = ⟨σ, precondition, effects, body⟩, where:

• σ is the signature of the plan specification, with σ = ⟨i, {par1, ..., parn}⟩ comprising a plan identifier i ∈ I^Plans and a set of parameters,

• precondition ∈ EXPRESSIONS is an expression over predicates and the operators ∧, ∨, ¬; if the set BELIEFS reflects a state of affairs that evaluates the precondition to true, the plan becomes activated,

• effects is a list of predicates expressing possible effects occurring during the execution of the plan body, and

• body is an executable specification of the plan.

Definition 10 The set PLANS comprises the set of plan specifications currently adopted by the agent.
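Definition 9 can be mirrored directly as a record; the concrete field types and the example plan below are illustrative assumptions:

```python
from dataclasses import dataclass

# Sketch of Definition 9's plan tuple.
@dataclass
class Plan:
    signature: tuple     # (plan identifier, list of parameters)
    precondition: tuple  # an EXPRESSIONS term (here: one ground predicate)
    effects: list        # predicates the body may bring about
    body: object = None  # executable specification, omitted in this sketch

deliver = Plan(
    signature=("deliver", ["Goods", "Place"]),
    precondition=("at", "Goods", "depot"),
    effects=[("delivered", "Goods", "Place")],
)

# A plan activates when BELIEFS makes its precondition true.
beliefs = {("at", "Goods", "depot")}
activated = deliver.precondition in beliefs
print(activated)  # True
```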

2.6 Activation and Instantiation

The concept of activation is essential in NoA, as it determines which norms, especially obligations, are relevant to the agent at any time. Active obligations motivate the agent to select a specific plan for execution; the set of currently activated and, therefore, instantiated plans is the basis for this selection process. A change in the set BELIEFS leads to such an activation. The unification of the conditions in norms and plans with the set of beliefs can lead to multiple instantiations of both norms and plans, characterised by sets of bindings for the variables contained in these conditions. The currently activated norms determine the agent's current normative position; the currently activated plans comprise the agent's current behavioural repertoire. Two sets are introduced in NoA to express the current activation state of an agent: (a) the set INSTNORMS, representing the set of activated and, therefore, instantiated norms, and (b) the set INSTPLANS, representing the set of activated and, therefore, instantiated plans.

Definition 11 The set INSTNORMS = INSTOBL ∪ INSTFOR ∪ INSTPER is the set of currently activated and, therefore, instantiated norms. Its subsets INSTOBL, INSTPER and INSTFOR are the sets of currently instantiated obligations, permissions and prohibitions.

Definition 12 The set INSTPLANS is the set of currently activated and, therefore, instantiated plans.

Activation and deactivation can be represented by a set of functions that create these two sets.

Definition 13 The function activatePlan : BELIEFS × PLANS → INSTPLANS takes the set of declared plan procedures and creates full instantiations of plans according to the current set of beliefs (all variables receive bindings). A plan activation occurs if the precondition holds.

Definition 14 The function deactivatePlan : BELIEFS × INSTPLANS → INSTPLANS deletes existing plan instantiations and creates a new set INSTPLANS. A deactivation occurs if the precondition no longer holds.

Definition 15 The function activateNorm : BELIEFS × NORMS → INSTNORMS takes the set of declared norm specifications and creates partial instantiations of norms, based on the current set of beliefs. An activation occurs if the activation condition holds.

Definition 16 The function deactivateNorm : BELIEFS × INSTNORMS → INSTNORMS removes norm instantiations according to their expiration condition. A deactivation occurs if the expiration condition holds.

As norms have separate activation and expiration conditions, a new set INSTNORMS is formed by (a) adding newly activated norms to the current set INSTNORMS and (b) removing from this new set any norm whose expiration condition holds according to the current set of beliefs. Performing these two steps in sequence accounts for the fact that a norm is deactivated whenever the expiration condition holds, regardless of the activation condition – if both conditions hold at the same time, the norm is deactivated and must be removed from the set INSTNORMS. The function newNormSet performs this activity.

Definition 17 The function newNormSet : BELIEFS × NORMS × INSTNORMS → INSTNORMS creates a new set INSTNORMS, taking activations and deactivations of norm instantiations into account. It is defined in the following way:

newNormSet(BELIEFS, NORMS, INSTNORMS) := deactivateNorm(BELIEFS, (INSTNORMS ∪ activateNorm(BELIEFS, NORMS)))

The sets INSTNORMS and INSTPLANS change whenever the set BELIEFS changes. With that, a set of obligations INSTOBL ⊆ INSTNORMS becomes instantiated, comprising the set of motivators for the agent.
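The two-step formation of the norm set can be sketched over norms encoded as (activation, expiration) condition pairs evaluated against ground beliefs; this encoding is my own simplification:

```python
def activate_norms(beliefs, norms):
    """Definition 15 analogue: norms whose activation condition holds."""
    return {n for n in norms if n[0] in beliefs}

def deactivate_norms(beliefs, inst_norms):
    """Definition 16 analogue: drop norms whose expiration condition holds."""
    return {n for n in inst_norms if n[1] not in beliefs}

def new_norm_set(beliefs, norms, inst_norms):
    """Definition 17: activate first, then deactivate, so a norm whose
    activation and expiration conditions hold at once is never active."""
    return deactivate_norms(beliefs, inst_norms | activate_norms(beliefs, norms))

n1 = ("p", "q")  # activates on p, expires on q
n2 = ("p", "p")  # both conditions hold at once
result = new_norm_set({"p"}, {n1, n2}, set())
print(result)    # {('p', 'q')}: n2 is immediately deactivated
```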
Definition 18 The set INSTOBL ⊆ INSTNORMS is the set of currently instantiated obligations.

Definition 19 The function getObligations : INSTNORMS → INSTOBL selects the subset of obligations from the set INSTNORMS.

The agent has to select options, or candidates, for action from the set of currently instantiated plans, INSTPLANS. NoA forms a set CANDIDATES ⊆ INSTPLANS that contains all the candidates for each obligation o ∈ INSTOBL. A single element of the set CANDIDATES can be a candidate for multiple obligations.

Definition 20 The set CANDIDATES ⊆ INSTPLANS is the set of plan instantiations that are candidates for obligations in the set INSTOBL.

To address the subset of candidates that are options for a specific obligation, the function options(o), with o ∈ INSTOBL, is defined:

Definition 21 For a specific instantiated obligation o ∈ INSTOBL, the function options(o) describes the subset of elements of the set CANDIDATES that are candidates for obligation o:

options(o) = { c_o | c_o ∈ CANDIDATES ∧ o ∈ INSTOBL ∧ is_candidate(c_o, o) }

Traditionally, practical reasoning agents based on a reactive planning architecture achieve a given goal or state of affairs by selecting one specific candidate for execution from a set of plan options. In the case of NoA, the set CANDIDATES represents this set of options. A function selectCandidates expresses the formation of such a set of options or candidates.

Definition 22 The function selectCandidates : INSTOBL × INSTPLANS → CANDIDATES selects the set of plan instantiations that are possible candidates for fulfilling the instantiated obligations in the set INSTOBL.

According to the activity stated in the specification of an obligation, plan instantiations are selected either to achieve a state of affairs or to be performed as the intended action:

• if the obligation demands the achievement of a state of affairs, elements of INSTPLANS are selected according to their effect specifications,

• if the obligation demands the performance of an action, elements of INSTPLANS are selected according to their signature.

In the case of the achievement of a state of affairs, instantiations of many different plan declarations can be candidates.
In case of a performance of an action, only instantiations of a single plan declaration are potential candidates – as norms can be partially instantiated, maybe more than one instantiation of a single plan declaration can be a potential candidate.
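The candidate-formation rules above can be sketched as follows. This is a minimal illustration, not the NoA implementation; the Plan and Obligation classes and the select_candidates function are hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Plan:
    """A plan instantiation: a signature (action name) plus effect states."""
    name: str
    effects: frozenset = frozenset()

@dataclass(frozen=True)
class Obligation:
    """An obligation demands either to 'achieve' a state or 'perform' an action."""
    activity: str   # "achieve" or "perform"
    target: str     # a state of affairs or an action name

def select_candidates(obligations, inst_plans):
    """Form the set CANDIDATES: plans chosen by their effect specifications
    for 'achieve' obligations and by their signature for 'perform' obligations."""
    candidates = set()
    for o in obligations:
        for p in inst_plans:
            if o.activity == "achieve" and o.target in p.effects:
                candidates.add(p)
            elif o.activity == "perform" and o.target == p.name:
                candidates.add(p)
    return candidates

plans = {Plan("shift_ab", frozenset({"on_a_b"})),
         Plan("shift_ac", frozenset({"on_a_c"}))}
obl = Obligation("achieve", "on_a_b")
cands = select_candidates({obl}, plans)   # only shift_ab achieves on_a_b
```

Note that a single plan instantiation may be added as a candidate for several obligations at once, mirroring the remark above that an element of CANDIDATES can serve multiple obligations.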

2.7 Norm Consistency

In a process of deliberation, the agent applies certain strategies to select one specific candidate plan for execution. With the introduction of norms and norm-governed practical reasoning, certain actions of the agent may violate norms it currently holds. For example, to achieve an obliged state of affairs, the agent may hold a set of candidates where some candidates produce side-effects that are (a) forbidden or (b) counteract one of its other obligations. A norm-governed agent must, therefore, be enabled to reason about the consistency of its actions with its currently held norms. The agent must be able to investigate whether a specific candidate plan instantiation will fulfill a given obligation without violating other norms. The agent will be successful in this investigation only if it can clearly determine for each candidate whether it is allowed or forbidden to be executed. If there are conflicts between norms in its set INSTNORMS – for example, a prohibition and a permission for a specific state of affairs are activated at the same time – it will have to apply conflict resolution strategies to determine those norms that actually contribute to its current normative position. In essence, two problems have to be investigated:

• possible conflicts between permissions and prohibitions from the set INSTNORMS;

• possible inconsistencies between candidates from the set CANDIDATES and currently activated norms from the set INSTNORMS.

Permissions and prohibitions configure the normative position of an agent, either allowing or forbidding certain elements of the set CANDIDATES to be executed. Permissions and prohibitions, therefore, introduce a partitioning of the set CANDIDATES into allowed and forbidden options. With regard to a specific instantiated obligation, three distinct cases can be observed: the possible candidates as options for action are (a) all allowed, (b) at least partially allowed (at least one option is allowed) or (c) all forbidden. To allow the definition of norm-consistent plan execution, a set of concepts is introduced in the following.

Definition 23 The set SO describes those states of affairs obliged by currently active obligations contained in the set INSTOBL, whereas the set TO describes actions obliged by currently active obligations contained in the set INSTOBL. Similarly, the sets SF and SP and the sets TF and TP describe states of affairs prohibited / permitted and actions prohibited / permitted by currently active norms.

To allow the investigation of the possible effects of an instantiated plan p ∈ INSTPLANS, a function effects(p) is provided.
Definition 24 For a plan instantiation p ∈ INSTPLANS, the function effects(p) provides the set of fully instantiated effect specifications:

effects(p) = { e | e is an effect of plan instantiation p ∈ INSTPLANS }

A second function is needed that allows us to refer to states of affairs that are the negation of the states expressed by plan effects. The function producing this set is called neg_effects(p).

Definition 25 For a plan instantiation p ∈ INSTPLANS, the function neg_effects(p) describes a set that contains a negated version of each element e of the set described by effects(p):


neg_effects(p) = { n | e ∈ effects(p) ∧ n = ¬e }

With these definitions in place, the norm-consistent execution of a plan can be expressed in the following way:

Definition 26 The execution of a plan instantiation p ∈ INSTPLANS, with p ∉ TF (p is not a currently forbidden action), is consistent with the current set of active norms, INSTNORMS, of an agent, if none of the effects of p is currently forbidden and none of the effects of p counteracts any currently active obligation:

consistent(p, TF, SF, SO)  iff  p ∉ TF  and  SF ∩ effects(p) = ∅  and  SO ∩ neg_effects(p) = ∅

An investigation into consistency takes place in NoA according to this definition. This consistency information is important for the deliberation of a NoA agent. To make an agent norm-autonomous, inconsistent options are not simply filtered out but are provided together with consistent options to the deliberation process of the agent. NoA, therefore, introduces a labelling mechanism to indicate the consistency of plan options.

Definition 27 A label, expressing the consistency / inconsistency of a plan instantiation c ∈ CANDIDATES, is a tuple L = ⟨c, MOTIVATORS, PROHIBITORS⟩, where

• c ∈ CANDIDATES is the labelled candidate for a set of motivating obligations

• MOTIVATORS = { o | o ∈ INSTOBL ∧ c ∈ CANDIDATES ∧ effects(c) ∩ SO ≠ ∅ } ∪ { o | o ∈ INSTOBL ∧ c ∈ CANDIDATES ∧ c ∈ TO } is the set of obligations that motivated the addition of this candidate to the set CANDIDATES, because (a) one of its effects achieves the state of affairs demanded by such an obligation or (b) it is the action demanded by such an obligation

• PROHIBITORS = { f | f ∈ INSTFOR ∧ c ∈ CANDIDATES ∧ c ∈ TF } ∪ { f | f ∈ INSTFOR ∧ c ∈ CANDIDATES ∧ effects(c) ∩ SF ≠ ∅ } ∪ { o | o ∈ INSTOBL ∧ c ∈ CANDIDATES ∧ neg_effects(c) ∩ SO ≠ ∅ } is the set of conflicting prohibitions or obligations

By annotating the elements of the set CANDIDATES, it is transformed into a labelled set CANDIDATES_L. The function checkConsistency() performs this transformation.

Definition 28 The function checkConsistency() checks each element of CANDIDATES for consistency and annotates it with a label L = ⟨c, MOTIVATORS, PROHIBITORS⟩:

checkConsistency : CANDIDATES × INSTNORMS → CANDIDATES_L

From this labelling, the agent can derive the consistency of each candidate. For a candidate c ∈ CANDIDATES_L, a label expresses consistency if the set PROHIBITORS is empty:

• Label expressing consistency: ⟨ c, MOTIVATORS, {} ⟩

By characterising the consistency of candidate plans, we can define the consistency of an obligation. For a specific obligation o ∈ INSTNORMS, a specific subset of the set CANDIDATES_L represents the set options(o) of possible candidates. There are three possible configurations for this set: (a) all elements in options(o) are labelled consistent, (b) at least one element in options(o) is labelled consistent or (c) all elements are labelled inconsistent. According to these three possibilities, we introduce three so-called consistency levels for a specific obligation:

• Strong Consistency. An obligation is strongly consistent if all elements of options(o) ⊆ CANDIDATES_L are labelled as consistent:

strong_consistent(o, SF, SO, TF) iff ∀p ∈ options(o). consistent(p, TF, SF, SO)

• Weak Consistency. An obligation is weakly consistent if at least one candidate in the set options(o) is labelled as consistent:

weak_consistent(o, SF, SO, TF) iff ∃p ∈ options(o) s.t. consistent(p, TF, SF, SO)

• Inconsistency. An obligation is inconsistent if no candidate in the set options(o) is labelled as consistent:

inconsistent(o, SF, SO, TF) iff ∀p ∈ options(o). ¬consistent(p, TF, SF, SO)
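Definitions 26–28 can be sketched as follows. This is a minimal illustration with hypothetical names; states are encoded as plain strings and negation as a "not " prefix, which is an assumption of this sketch, not the NoA representation:

```python
def neg_effects(effects):
    """Negate each effect state: "s" <-> "not s" (definition 25)."""
    return {e[4:] if e.startswith("not ") else "not " + e for e in effects}

def consistent(p_name, p_effects, TF, SF, SO):
    """Definition 26: p is not a forbidden action, produces no forbidden
    state, and none of its negated effects counteracts an obliged state."""
    return (p_name not in TF
            and not (SF & p_effects)
            and not (SO & neg_effects(p_effects)))

def consistency_level(options, TF, SF, SO):
    """Consistency level of one obligation; options maps each candidate
    plan name to its effect set."""
    flags = [consistent(n, e, TF, SF, SO) for n, e in options.items()]
    if all(flags):
        return "strong"
    if any(flags):
        return "weak"
    return "inconsistent"

SO = {"s"}      # obliged states of affairs
SF = {"t"}      # forbidden states of affairs
TF = set()      # forbidden actions
options = {"p1": {"s", "t"}, "p2": {"s"}}
level = consistency_level(options, TF, SF, SO)  # p1 produces forbidden t, p2 is clean
```

Here the obligation is only weakly consistent: p1 is ruled out by its forbidden side-effect t, while p2 remains an allowed option.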

2.8 Default Normative Position

An essential question for the agent is the consistency situation of its actions if it does not hold any norms – are its actions allowed or forbidden? To resolve this situation, a default normative position must be introduced. The agent has to ground its norm-governed behaviour in a specific assumption about the permissibility of its actions.

An agent that does not hold any currently activated permissions and prohibitions will not have any information about actions or states of affairs being allowed or forbidden. In such a case, the agent needs a default strategy. An agent in such a situation could be regarded as being allowed to employ its capabilities freely – it has complete freedom to act. In contrast, the agent could also be regarded as not having such a default freedom, so that it can only act if it is explicitly allowed to. These are the two classic points of view or implicit assumptions a society or organisation can choose for the grounding of its norms:

• Implicit-permission assumption. If an agent has the capability to perform an action or achieve a state of affairs, then it has an implicit (or default) permission to do so.

• Implicit-prohibition assumption. Regardless of its current capabilities, the agent is implicitly prohibited from acting.

If we assume the implicit-permission assumption as our default normative position regarding states of affairs or actions, then the agent acts as if it holds implicit permissions for any state or action it is supposed or willing to achieve or perform. As soon as it adopts a specific prohibition that forbids the achievement of a specific state or the performance of an action, this prohibition is regarded as "overriding" the implicit permission.

Figure 2: A simple Blocks World.
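The default grounding can be sketched as a permissibility check in which explicit norms override the chosen default. The is_allowed function and its encoding are hypothetical, introduced only to illustrate the two assumptions:

```python
def is_allowed(action, permissions, prohibitions, default="permit"):
    """Explicit prohibitions override the implicit permission; under the
    implicit-prohibition assumption an explicit permission is required."""
    if action in prohibitions:
        return False
    if action in permissions:
        return True
    return default == "permit"

# Implicit-permission assumption: no norms held, acting is allowed by default.
allowed_default = is_allowed("shift_ab", set(), set())
# An adopted prohibition overrides the implicit permission.
forbidden = is_allowed("shift_ab", set(), {"shift_ab"})
# Implicit-prohibition assumption: without an explicit permission, no action.
strict = is_allowed("shift_ab", set(), set(), default="prohibit")
```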

2.9 Norm Conflicts

If the agent holds a set of adopted norms, then the investigation into the consistency of candidate plans and their labelling requires that there are no conflicts between these norms. If there is a conflict between a permission and a prohibition regarding a specific state of affairs or action, then it is impossible for the agent to determine if a specific candidate plan for achieving such a state or for performing such an action is allowed or forbidden. A norm-governed practical reasoning agent must be able to (a) detect such a conflict and (b) inform the deliberation process about such conflicts. During deliberation, the agent can then apply conflict resolution strategies to determine the normative situation for a specific candidate plan. To be able to discuss the basic cases of conflict, we use a simple blocks world scenario as depicted in figure 2.


The following two norms are a typical example of a conflict between a permission and a prohibition:

prohibition ( robot, perform shift("a","b","c"), TRUE, FALSE )
permission ( robot, perform shift("a","b","c"), TRUE, FALSE )

The prohibition forbids the agent to perform the action shift("a","b","c"), whereas the permission, at the same time, allows the agent to perform this action. As the scope of influence of both norm specifications is this single action, both are in conflict regarding this action – the normative situation of this action is unclear.

The example above shows a special case of how norms can be specified and also the simplest form of a norm conflict – only a single action is involved in the conflict between the two norms. In general, norm specifications in NoA can contain universally quantified variables. As the following example shows, a more careful investigation is needed to identify which actions are involved in the conflict between such norm specifications. We consider the following permission and prohibition:

prohibition ( robot, perform shift("a",Y,Z), TRUE, FALSE )
permission ( robot, perform shift("a","r",Z), TRUE, FALSE )

Because of the universally quantified variables, these norm specifications address sets of permitted or prohibited actions. The activity statements within these norm specifications are regarded as partially instantiated, referring to the fact that only a part of the parameters are constant values. To provide more insight into the conflict situation in terms of partially instantiated norm specifications, we introduce the so-called instantiation graph as a device to map out all possible (partial) instantiations of actions (as well as states) and to explain and visualise possible conflict scenarios.

Figure 3: Instantiation Graph and Scope of Influence of Norms.

Figure 3 shows a part of a graph that outlines all partial and full instantiations of the action shift(X,Y,Z). It also shows the scope of influence of the prohibition for action shift("a",Y,Z). This prohibition is regarded as explicitly specified for shift("a",Y,Z) and propagated to each node contained in its scope – each of these nodes represents a specific partial instantiation of shift(X,Y,Z), and each of these partial instantiations is regarded as being explicitly forbidden. The instantiation set in this depiction is the set of full instantiations that correspond to shift("a",Y,Z). These full instantiations are regarded as inheriting their normative status from their antecedents; they represent those actions that are explicitly forbidden because of the adoption of a prohibition whose activity specification addresses a whole set of actions. The instantiation set represents the set of actions (or states) that are actually allowed or forbidden.

With this representation, we can regard norms as being explicitly introduced for a specific partial instantiation of an action (or state), represented as a node in this graph, and being propagated to all nodes in the scope of the norm. Nodes are interconnected according to their (partial) instantiation, with leaf nodes in this graph representing full instantiations. The instantiation graph is used as a device to investigate possible conflict situations. Conflicts occur if norms are adopted with scopes of norm influence that overlap.
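The notions of scope and overlapping scopes can be sketched as follows. The tuple encoding of partially instantiated actions – with None standing for an unbound, universally quantified variable – and the function names in_scope and scopes_overlap are hypothetical, not part of NoA:

```python
def in_scope(spec, inst):
    """inst lies in the scope of spec if every bound (constant) parameter
    of spec matches the corresponding parameter of inst."""
    return all(s is None or s == i for s, i in zip(spec, inst))

def scopes_overlap(spec_a, spec_b):
    """Two partially instantiated specs overlap if their constants agree
    wherever both are bound; then some full instantiation satisfies both."""
    return all(a is None or b is None or a == b for a, b in zip(spec_a, spec_b))

prohibited = ("a", None, None)   # prohibition on shift("a", Y, Z)
permitted  = ("a", "r", None)    # permission on shift("a", "r", Z)

# a full instantiation of the permission also lies in the prohibition's scope
contained = in_scope(prohibited, ("a", "r", "u"))
overlap = scopes_overlap(prohibited, permitted)
disjoint = scopes_overlap(("a", None, None), ("b", "r", None))
```

Under this encoding the two example norms above overlap (in fact, the permission's scope is contained in the prohibition's), while specs with clashing constants, such as shift("a",Y,Z) and shift("b","r",Z), cannot conflict.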
In terms of the instantiation graph, norms are regarded as being introduced for different nodes within this graph at the same time, where (a) a norm addresses a specific partial instantiation of a state or action that is contained within the scope of another norm, (b) the scopes of two norms intersect or (c) a norm is adopted for a specific action that conflicts with norms adopted for states of affairs that are effects of this action. Three main categories of conflicts emerge:

• Containment. The scope of a norm is contained within the scope of another norm. The norms themselves can be regarded as having a specialisation relationship – one norm contains an activity specification that addresses a subset of the actions or states addressed by the second norm.

• Intersection. The scope of a norm intersects the scope of another norm. There is no specialisation relationship between the norms. The actions or states in the intersection of both scopes inherit both norms at the same time.

• Indirect Conflict. A norm is adopted for a specific action that conflicts with norms adopted for states of affairs that are effects of this action.

In the following discussion, this instantiation graph is used to introduce these cases of conflict in more detail.

2.9.1 Containment

Using the instantiation graph again as a means of visualising conflict situations, we can regard norms as being declared for any partial or full instantiation of an activity (action or state). Containment describes a situation where the scope of normative influence of one norm lies within the scope of another norm (see figure 4). With that, a norm may be declared for a more specialised partially instantiated action that is, at the same time, within the scope of another norm for a more general partially instantiated action. This situation can be interpreted as expressing a kind of specialisation relationship between conflicting norms.

According to the example in figure 5, a prohibition is introduced for the actions described by shift("a",Y,Z). With that, all partially instantiated action specifications within the scope of this prohibition inherit this norm. The example also shows the introduction of a permission for the actions described by shift("a","r",Z). This is a permission with an action specification that (a) addresses a smaller set of actions and (b) is fully contained within the scope of the prohibition. Both norms are in conflict for the subset of specific actions that they both address at the same time. In this situation, the permission introduces a normative situation for an activity specification that is (a) within the scope of the prohibition and (b) more specific than the activity specification of the prohibition – a specialisation relationship between these two norms. In its deliberation, the agent has to decide whether the more general norm overrides the more specific norm or the more specific overrides the more general. The benefit of choosing an overriding strategy where the more specific norm overrides the more general one is the possibility to specify exceptions to a norm. An example for such a request would be the following:

Figure 4: Conflicts due to Containment of Scopes

Consider an agent with the ability to query information sources. Without any restrictions in place, it will be able to access the data. Suppose that a norm is established by the owner of the information source that explicitly prohibits the agent from obtaining access. This agent should not access the resource but, following a special agreement with the owner, it may download a specific document. As soon as this download has taken place, the temporary permission expires and the original prohibition regains its influence on the normative state of the agent. In terms of our instantiation graph, this corresponds to a situation where a permission is annotated to a specific node representing an action description that is also a member of the scope of a more general prohibition. More abstractly, this can be expressed as a request such as

All actions described by shift("a",Y,Z) are, in general, prohibited from being executed, except action shift("a","s","u")

To allow such a request, we require a strategy of exception or specificity – more specific norms override less specific norms:

• Specificity. A norm that is adopted for a more specific (partial) instantiation of an action (plan) or state of affairs overrides inherited norms at this node and in the scope of this node.

This has certain consequences for how we regard the propagation / inheritance of norms within the instantiation graph. Assuming specificity as a strategy means that the propagation of a norm is regarded to stop if it meets an explicit annotation at a descendant node within its own scope.

Figure 5: Conflicts due to Containment of Scopes

Speaking in terms of the instantiation graph, a strategy of specificity means that the propagating norm does not override an explicitly annotated norm at a descendant node within its scope and, therefore, will not be able to reach all nodes of its instantiation set. Parts of its instantiation set are "covered" by the norm annotating this descendant node.

2.9.2 Intersection

The second case of conflict can be described as an intersection of the scopes of influence of two norms. Two norms have intersecting scopes of influence if they do not have a specialisation relationship as shown in the previous case of containment. Figures 6 and 7 show a permission and a prohibition at two different antecedent nodes of the action shift("a","r",Z). This is a situation of multiple inheritance: a node in the instantiation graph has multiple antecedent nodes, where these antecedent nodes may have different norm annotations and no relationship of specialisation can be established. These two norm specifications describe two intersecting sets of actions. Conflicts occur in the intersection of both sets (figure 6). As no relationship between the two conflicting norms can be identified, the agent will not be able to determine if one of the conflicting norms overrides the other according to a specificity relationship. The elements contained in the intersection of both scopes of norm influence inherit two conflicting norms.

Figure 6: Conflict due to Intersecting Influence of Norms

A special case of intersecting norm influence is the case where two norms have identical scope. Figure 8 shows two norm specifications with identical scopes of norm influence. A conflict occurs for all partial specifications of actions within both scopes of influence. The agent is not able to derive any form of specialisation relationship between the competing norms and will, therefore, rely on conflict resolution strategies as discussed in a following section.

2.9.3 Indirect Conflicts

The last important case of norm conflict is that of indirect conflict. An indirect conflict occurs if, for example, a specific action is allowed, but one of its effects is a forbidden state of affairs. Figure 9 shows the conflict between norms imposed on actions and states. It shows a fragment of the instantiation graph of the action shift(X,Y,Z) as well as of the state on(X,Z). Let us assume that the agent holds a plan shift(X,Y,Z) and that this plan is specified with the following syntax according to definition 9:

plan shift ( X, Y, Z )
  precondition ( on(X,Y), clear(Z), test X Z )
  effects ( on(X,Z), not on(X,Y), clear(Y), not clear(Z) )
  {
    achieve clear(X);
    primitive doMove(X,Z);
  }


Figure 7: Conflicts due to Intersection of Scopes

This declaration shows that one of the explicitly defined effects is on(X,Z). In figure 9, a prohibition is established for the agent to achieve the state of affairs on("a","u"). At the same time, the action shift("a","s","u") is allowed by an explicit permission. Suppose that the agent adopts the following obligation, which is immediately active:

obligation ( robot, achieve clear("s"), TRUE, FALSE )

To fulfill this obligation, the agent can either perform action shift("a","s","u") or shift("a","s","v"). Both actions produce the forbidden effect on("a","u"). If a strategy is chosen where the prohibition for this effect overrules the permission for shift("a","s","u"), then the agent has no option for action, as action shift("a","s","v") is forbidden as well. The prohibition removes any possibility for the obligation to be fulfilled. On the other hand, if the agent decides that the permission for this single action overrides the prohibition for one of its effects, then the agent has at least this single action for fulfilling its obligation.
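The indirect conflict in this example can be detected mechanically by intersecting a plan's effects with the set of prohibited states. The encoding below (the plan_effects function and tuples for states and actions) is a hypothetical sketch, not the NoA representation:

```python
def plan_effects(x, y, z):
    """Effects of shift(X, Y, Z) as listed in the plan declaration above."""
    return {("on", x, z), ("not on", x, y), ("clear", y), ("not clear", z)}

prohibited_states = {("on", "a", "u")}            # prohibition: achieve on("a","u")
permitted_actions = {("shift", "a", "s", "u")}    # permission: perform shift("a","s","u")

effects = plan_effects("a", "s", "u")
# The action itself is explicitly permitted, yet one of its effects is a
# prohibited state of affairs -- an indirect conflict.
indirect_conflict = (("shift", "a", "s", "u") in permitted_actions
                     and bool(prohibited_states & effects))
```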


Figure 8: Identical Scopes of Norm Influence (Competing Norms)

2.10 Conflict Resolution Strategies

If the agent holds a set of norms and there are conflicts among these norms, then the agent cannot decide the consistency of a specific candidate in its current set CANDIDATES. The function checkConsistency() (definition 28) will, in such a case, fail to determine the consistency situation of this candidate. Nevertheless, it can create a label that records additional information about conflicts between permissions and prohibitions. This information can then be used in the deliberation phase to first resolve such conflicts by applying conflict resolution strategies. The resolution of such a conflict between any two norms establishes the dominance of one of the norms regarding a specific candidate plan. For each candidate with a disputed normative situation, a resolution of these conflicts opens the possibility to determine its consistency situation. Such a "ranking" of norms provides an agent with the means to select a norm – out of a set of conflicting norms – that would be appropriate to adhere to.

It is important to note that a complete conflict resolution strategy is needed. Such a strategy must be able to create a complete partitioning of the space of actions the agent can perform and states the agent can achieve into allowed and forbidden actions and states. A conflict resolution strategy is incomplete if such a decision cannot be made for at least one action or state. The following strategies are under investigation in the context of the NoA model:

• Arbitrary decision
• Recency
• Seniority
• Cautiousness
• Boldness
• Social power
• Negotiate with norm issuer
• Specificity

Figure 9: Indirect Conflict between Norms for States and Actions.

These are conflict resolution strategies that can be employed during the agent's deliberation to achieve a complete partitioning of its candidate set into allowed and forbidden plans. Arbitrary decision is the simplest form of conflict resolution, as it does not take any information about the conflict situation into account. If the agent chooses recency or seniority, then a form of time stamp is required that records the activation time of a norm. With that, a ranking according to activation time can be established and selections according to recency or seniority can take place. The agent pursues a cautious strategy if prohibitions always overrule permissions, and a bold strategy if permissions always overrule prohibitions. If a case of containment of norm scopes as described above can be detected, then this can be interpreted as a specialisation relationship between the conflicting norms. The agent can resolve such a conflict according to specificity. A conflict resolution strategy according to social power would require a substantial extension of the role model within NoA to express relationships of dependency and influence between roles. Such relationships can be used to determine whether a norm is "more powerful" and overrides a conflicting norm. Such a strategy can be used to resolve the multiple-inheritance situation that occurs if norms have intersecting scopes of norm influence (see above). An agent can also renegotiate specific norms and reach agreements to either revoke prohibitions or receive additional permissions that override prohibitions. Finally, the strategies described so far may not be sufficient to disambiguate the normative position of the agent if the issuer of norms, acting in a position of power, issues multiple conflicting norms. The agent, despite being able to detect such conflicts, will not be able to resolve the conflict according to social power, as all conflicting norms are issued by the same source. The agent may claim that this source is inconsistent and require it to resolve these conflicts and reissue a set of norms without conflicts. Such a situation can be regarded as a distributed conflict resolution strategy.
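Several of the strategies just discussed can be sketched as a simple pairwise resolution function. The Norm structure and the resolve function are hypothetical illustrations under the assumption that activation time and a specificity measure (e.g. the number of bound parameters in the activity specification) are available:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    kind: str          # "permission" or "prohibition"
    activated_at: int  # time stamp recording when the norm was activated
    specificity: int   # e.g. number of constant parameters in its activity spec

def resolve(a, b, strategy):
    """Return the dominant norm of two conflicting norms under a strategy."""
    if strategy == "cautious":      # prohibitions always overrule permissions
        return a if a.kind == "prohibition" else b
    if strategy == "bold":          # permissions always overrule prohibitions
        return a if a.kind == "permission" else b
    if strategy == "recency":       # most recently activated norm wins
        return a if a.activated_at >= b.activated_at else b
    if strategy == "seniority":     # earliest activated norm wins
        return a if a.activated_at <= b.activated_at else b
    if strategy == "specificity":   # more specific norm overrides the general one
        return a if a.specificity >= b.specificity else b
    raise ValueError("unknown strategy: " + strategy)

pro = Norm("prohibition", activated_at=1, specificity=1)  # e.g. shift("a",Y,Z)
per = Norm("permission",  activated_at=2, specificity=2)  # e.g. shift("a","r",Z)

dominant_cautious = resolve(pro, per, "cautious").kind
dominant_specific = resolve(pro, per, "specificity").kind
```

Each branch yields a decision for every pair of norms, so each single strategy is complete in the sense defined above; arbitrary decision, social power and negotiation would require additional structure not modelled here.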

3 Organisational Change

A change of an organisation can result in conflicting norms. Agents must be able to deal with these conflicts, and the NoA model provides the necessary means for an agent to (a) detect and (b) resolve such conflicts. NoA agents are, therefore, well suited to deal with organisational change. In our model, organisations are described as sets of roles, where these roles are determined by a set of norm specifications.

Definition 29 An organisation of agents according to the NoA model of norm-governed agency is a set of roles. A role is determined by a set of norms.

Changes in organisations can occur in a variety of ways. We distinguish two main categories:

• The organisational structure remains unchanged – the set of agents adopting roles within this organisation changes

• The organisational structure changes – roles are added or removed, and roles can change in terms of their sets of norms

If the organisational structure remains unchanged, then changes occur when the set of agents adopting norms changes. This can be described as a process of recruiting – a specific agent is recruited and adds capabilities to the organisation. As the agent employs these capabilities in fulfilling its obligations, indirect conflicts (see above) may occur. Whether such conflicts occur therefore depends on the set of capabilities an agent holds. By maintaining the set of capabilities of


an agent – the agent adopts new plans with different effects or removes plans – conflicts can be avoided. This can be described as a policy of training. If the agent adopts new roles in the same or in different organisations, then potential conflicts between the norm sets of such roles can occur. The policy in the NoA model is not to guarantee that the norm sets of an agent are free of conflict, but that the agent can deal with such conflicts.

A change in the organisational structure means a change in the set of roles. With the addition of roles, potential conflicts with other roles can occur on the basis of norms. If agents with specific sets of capabilities are recruited into such roles, additional indirect conflicts can occur. If a role is removed, then the organisation loses capabilities as well – responsibilities connected to this role will no longer be fulfilled. For the removal of roles, it has to be investigated whether the organisation still comprises the minimum set of roles required and can still operate. Roles can change in terms of their sets of norms, especially if they assimilate norms from roles that are removed from an organisation.
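A sketch of how an agent might screen a new role's norm set for direct permission/prohibition conflicts with the norms it already holds when adopting a role. The pair encoding of norms and the direct_conflicts function are hypothetical; as stated above, NoA does not guarantee conflict-free norm sets but detects conflicts so the agent can deal with them:

```python
def direct_conflicts(held, new_role_norms):
    """Return the norms of the new role whose opposite-kind counterpart over
    the same activity is already held by the agent."""
    opposite = {"permission": "prohibition", "prohibition": "permission"}
    return {(kind, act) for kind, act in new_role_norms
            if (opposite[kind], act) in held}

# norms encoded as (kind, activity) pairs -- a hypothetical flat encoding
held = {("permission", "shift_a_r"), ("prohibition", "access_db")}
role = {("prohibition", "shift_a_r"), ("permission", "query_web")}

conflicts = direct_conflicts(held, role)   # the new prohibition clashes
```

A fuller treatment would also run the indirect-conflict check against the effects of the agent's plans, as described in section 2.9.3.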

3.1 Informed Deliberation

Informed deliberation is the mechanism within NoA to deal with conflicts occurring during organisational change. The labelling mechanism deployed in the NoA architecture records aspects of norm inconsistencies and conflicts and annotates this information to candidates in the set CANDIDATES. From this set of candidates, the agent has to select one specific candidate for execution. NoA employs a specific form of informed deliberation for this purpose, using the information recorded in the label to (a) resolve conflicts and (b) select one specific candidate. The consistency label of candidates is currently used in NoA in its most essential and simplest way – candidate plans for execution are simply identified as either consistent or inconsistent. According to definition 27, the consistency label contains – in its simplest form – a set of MOTIVATORS (the motivating obligations) and a set of PROHIBITORS (the set of prohibitions and obligations that prohibit the execution of the candidate).

3.1.1 Example

For example, let us assume that an agent holds two plans p1 and p2 as its current (instantiated) capabilities. We assume that these plans will produce the following states of affairs as their effects during execution:

• plan p1: effects(p1) = { s, t }
• plan p2: effects(p2) = { s }

We also assume that the agent adopts two roles, ROLES = { r1, r2 } and, consequently, the two sets of norms annotated to these roles. If we use specific syntactic forms to express norm specifications according to definition 8 (see

[24, 20] for details), then we can describe the two role-related norm sets in the following way:

• role r1: { obligation(r1, achieve(s), φ, ψ), prohibition(r1, perform(p2), φ, ψ) }
• role r2: { prohibition(r2, achieve(t), φ, ψ) }

According to definition 8, norm specifications are characterised by a reference to a role, an activity specification and two conditions, the activation and expiration condition (denoted here as φ and ψ). For the following discussion, we assume that these two norm sets are issued by two different normative authorities, Ax and Ay. With that, the agent's set INSTNORMS, comprising these two role-related norm sets, contains norms issued by different normative authorities. The agent is motivated by its obligation obligation(r1, achieve(s), φ, ψ) to achieve this state of affairs. Consequently, it forms the set CANDIDATES. Plan p1 as well as p2 produce s as one of their effects and, therefore, comprise the set CANDIDATES:

• CANDIDATES = { p1, p2 }

The investigation of consistency yields the following problems: candidate p1 is inconsistent with the prohibition to achieve state t, as t ∈ effects(p1), and candidate p2 is inconsistent with the prohibition to perform action p2. A set of labels emerges, characterising these inconsistencies (see definition 27):

• Lp1 = ⟨ p1, { obligation(r1, achieve(s), φ, ψ) }, { prohibition(r2, achieve(t), φ, ψ) } ⟩
• Lp2 = ⟨ p2, { obligation(r1, achieve(s), φ, ψ) }, { prohibition(r1, perform(p2), φ, ψ) } ⟩

In both labels, the set MOTIVATORS (see definition 27) contains the one motivating obligation. In both cases, the set PROHIBITORS is not empty but contains prohibitions that mark the corresponding candidate as inconsistent. Therefore, for the motivating obligation, there is no consistent candidate – the obligation itself is inconsistent.
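The labelling step of this example can be sketched in code. This is a simplified illustration only, not the NoA implementation; the data structures (string norm names, tuples for labels) are assumptions made for the sketch:

```python
# Illustrative sketch of the labelling mechanism: each candidate plan is
# annotated with its motivating obligation (MOTIVATORS) and any norms that
# forbid it (PROHIBITORS). Data structures are assumptions, not NoA code.

def label_candidates(candidates, effects, motivator, prohibitions):
    """Build <candidate, MOTIVATORS, PROHIBITORS> labels.

    prohibitions: list of (kind, target) pairs, where kind is
    'achieve' (forbidden state) or 'perform' (forbidden plan).
    """
    labels = {}
    for plan in candidates:
        prohibitors = []
        for kind, target in prohibitions:
            if kind == "achieve" and target in effects[plan]:
                prohibitors.append((kind, target))
            if kind == "perform" and target == plan:
                prohibitors.append((kind, target))
        labels[plan] = (plan, [motivator], prohibitors)
    return labels

effects = {"p1": {"s", "t"}, "p2": {"s"}}
# Motivated by the obligation to achieve s, both plans are candidates.
candidates = [p for p in effects if "s" in effects[p]]
labels = label_candidates(candidates, effects, "obligation(r1, achieve(s))",
                          [("achieve", "t"), ("perform", "p2")])

# Both candidates carry a non-empty PROHIBITORS set, so the motivating
# obligation itself is inconsistent, as argued above.
print(labels["p1"][2])  # [('achieve', 't')]
print(labels["p2"][2])  # [('perform', 'p2')]
```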
In this situation, the agent has two options:

• although the agent holds an obligation that is in a state of inconsistency, it acts by selecting one of the inconsistent candidates for execution;
• the agent tries to improve its level of consistency regarding this specific obligation, so that at least one of the candidates becomes a consistent option.


3.1.2 Improving the Level of Consistency

As outlined before, the consistency of candidate plans defines the consistency of an obligation. For a specific obligation o ∈ INSTNORMS, a specific subset of the labelled set CANDIDATESL represents the set options(o) of possible candidates for this obligation. This set can be in one of the three following states: (a) all candidates are consistent, (b) at least one of them is consistent or (c) none of them is consistent. According to the consistency situation of options(o), the obligation o is either strongly consistent, weakly consistent or inconsistent. An obligation can be fulfilled without violating other norms if it is at least weakly consistent. In the previous example, the set MOTIVATORS for the two candidates p1, p2 contains one obligation to achieve a state of affairs s:

obligation(r1, achieve(s), φ, ψ)

According to the labelling outlined in the example, none of the candidate plans for this obligation is consistent – with regard to this obligation, the agent is at a level of inconsistency. If the agent decides to fulfill this obligation in a norm-consistent way, then it must try to upgrade the level of consistency of this obligation. This means freeing – maybe temporarily – at least one of the candidate plans from its prohibitors. The agent can achieve this by engaging in a dialogue with the authority that issued the prohibitors and reaching an agreement, which can be one of the following:

• the authority revokes the prohibiting norms;
• the authority issues a permission that temporarily overrides the existing prohibition.

If the authority issues a (temporary) permission, then a situation of conflict occurs with the existing norms contained in the set PROHIBITORS of at least one of the candidates for this obligation. In this case, such a conflict is intentional – during the dialogue with the authority, the agent negotiates the release of such a permission, using knowledge about the possible classes of conflict (as outlined above) between norms.
After receiving such a permission, the agent relies on its set of conflict resolution strategies to achieve the correct overriding between norms. Let us assume that the agent could convince the authority to issue the following permission:

permission(r1, achieve(t), φ, ψ)

In our example, the agent has adopted two roles, r1 and r2. Let us assume that the agent receives this permission for its role r1. As this permission allows the achievement of state t, which is forbidden by the existing prohibition, a conflict of intersecting scopes of norm influence occurs. From its set of conflict resolution strategies, the agent can choose, for example, recency to make the permission the dominant norm. With that, the agent allows the prohibitor for candidate plan p1 to be overridden – this candidate becomes a consistent choice. Candidate p2 is still inconsistent; therefore, the agent can fulfill this obligation at a level of weak consistency.
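The consistency levels and the recency-based overriding discussed in this section can be sketched as follows. This is a simplified illustration under assumed data structures (prohibitor lists per candidate), not the NoA code:

```python
# Illustrative sketch: classify an obligation's consistency from the
# PROHIBITORS of its candidate plans, then upgrade it by letting a newer
# permission override a prohibition (the 'recency' strategy).

def consistency_level(prohibitors_per_candidate):
    """Return 'strong', 'weak' or 'inconsistent' for an obligation."""
    consistent = [p for p in prohibitors_per_candidate if not p]
    if len(consistent) == len(prohibitors_per_candidate):
        return "strong"        # all candidates are consistent
    if consistent:
        return "weak"          # at least one candidate is consistent
    return "inconsistent"      # no candidate is consistent

def apply_recency(prohibitors, permitted_states):
    """Drop prohibitors whose target state is covered by a newer permission."""
    return [(kind, t) for kind, t in prohibitors if t not in permitted_states]

# Before the permission is issued: both candidates are blocked.
labels = {"p1": [("achieve", "t")], "p2": [("perform", "p2")]}
print(consistency_level(list(labels.values())))  # inconsistent

# permission(r1, achieve(t)) overrides the prohibition on t by recency.
labels = {p: apply_recency(pr, {"t"}) for p, pr in labels.items()}
print(consistency_level(list(labels.values())))  # weak
```

As in the example, only p1 is freed from its prohibitor; p2 remains inconsistent, so the obligation reaches weak rather than strong consistency.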

3.2 Role Assimilation

Role assimilation occurs if norms from one role are moved to another role and become assimilated by that role. Reasons for such a role assimilation can be, for example, the transfer of responsibilities (obligations) from one role to another, or the complete removal of a specific role and the assimilation of the responsibilities that were originally assigned to it. Role assimilation can produce norm conflicts – a norm transferred from role r1 to role r2 may conflict with norms already assigned to role r2.

3.2.1 Example

We assume the following organisational structure:

• role r1: { obligation(r1, achieve(r), φ, ψ), prohibition(r1, achieve(s), φ, ψ) }
• role r2: { obligation(r2, achieve(s), φ, ψ), permission(r2, achieve(t), φ, ψ) }
• role r3: { obligation(r3, achieve(s), φ, ψ), obligation(r3, achieve(t), φ, ψ) }

If we remove role r2 and intend its norms to be assimilated by the remaining roles, this can lead to conflicts. Role r1 holds a prohibition to achieve state s. If it assimilates role r2, then an obligation is added to role r1 that demands an agent in this role to achieve state s, which is forbidden by the prohibition contained in this role. This new obligation will be inconsistent because, whatever plan the agent selects for fulfilling it, the required effect of executing that plan is prohibited. By investigating role r3, we see that no conflict occurs, because it already contains an obligation to achieve state s and the permission added from role r2 will not contradict this obligation (taking into account the agents with their current capabilities in these roles would add additional complications that are omitted here for reasons of simplicity). If an organisation changes, such issues have to be taken into account. The change can be guided by issues of conflict and consistency. Based on this information about conflicts and consistency, roles may be changed by adding permissions or removing prohibitions so that obligations are moved from a level of inconsistency to a level of – at least – weak consistency.
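The conflict check for role assimilation can be sketched as follows. This is an illustrative sketch, not the NoA implementation; norms are represented here as simple (kind, state) pairs, which is an assumption of the example:

```python
# Illustrative sketch: detect conflicts when a removed role's norms are
# assimilated by a remaining role. A conflict here is an incoming norm
# whose deontic kind clashes with an existing norm on the same state
# (obligation vs. prohibition, permission vs. prohibition).

def assimilation_conflicts(target_norms, incoming_norms):
    """Return the incoming norms that clash with the target role's norms."""
    clash = {"obligation": "prohibition",
             "prohibition": "obligation",
             "permission": "prohibition"}
    return [(kind, state) for kind, state in incoming_norms
            if (clash[kind], state) in target_norms]

# Role r1 from the example above, and the norms of the removed role r2.
r1 = {("obligation", "r"), ("prohibition", "s")}
r2_norms = [("obligation", "s"), ("permission", "t")]

# Moving r2's obligation to achieve s into r1 clashes with r1's
# prohibition to achieve s; the permission on t is unproblematic here.
print(assimilation_conflicts(r1, r2_norms))  # [('obligation', 's')]

# Role r3 already obliges achieving s and t, so nothing clashes.
r3 = {("obligation", "s"), ("obligation", "t")}
print(assimilation_conflicts(r3, r2_norms))  # []
```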

4 Related Work

Norms have found increasing attention in the research community as a concept that drives the behaviour of agents within virtual societies. Conte and Castelfranchi [5, 3] investigate in detail how agents within a society reason about

norms regarding their actions and what motivates them to honour their obligations and prohibitions and fulfill their commitments. Conte et al. [5, 6] argue that, for a computational model of norm-governed agency, the internal representation of norms and normative attitudes and models of reasoning about norms are a necessity. Norm-governed agents must be able to recognise norms as a social concept, represent them as mental objects and resolve possible conflicts among them. Such agents should, in the words of [6], be truly norm-autonomous – they must be able to take a "flexible" approach towards norms: know existing norms, learn / adopt new ones, negotiate norms with peers, convey / impose norms on other agents, control and monitor other agents' norm-governed behaviour, and be able to decide whether to obey or violate them. Cavedon and Sonenberg [4] use Castelfranchi's concept of social commitment to investigate mechanisms of commitment, power and control within agent societies. As in NoA, obligations of agents in their model are attached to the concept of a role. By adopting such a role, the agent commits to pursue the attached obligations or "goals". Such an adoption takes place if agents engage in a specific relationship based on a social contract that assigns specific roles to the contracting partners. To resolve conflicts between obligations that arise when an agent adopts multiple roles, a concept of role influence is used – one role has more influence on the agent's actions than other roles and, therefore, translates into a stronger social commitment for the agent. Panzarasa et al. [31, 30] discuss the influence of a social context on the practical reasoning of an agent.
They point out that the concept of social commitment, as introduced by Castelfranchi and investigated by Cavedon and Sonenberg, has to be extended to include how social commitments and regulations inform and shape the internal mental attitudes of an agent, in order to overcome the solipsistic nature of current BDI models. Work pursued by Broersen et al. [2], Dastani and van der Torre [10, 9], the model of a normative agent described by Lopez et al. [27] and, specifically, the NoA system as presented in this paper and elsewhere [24, 20] introduce concepts of norm influence into practical reasoning agents to make this transition from solipsistic to social agents. The NoA model of norm-governed agents draws strong inspiration from the work of Kanger [19], Lindahl [25] and Jones and Sergot [18, 32] on the representation of rights and the concept of a normative position. Members of a society adopt norms and, ideally, operate according to them. Adopted norms determine the social or normative position of an individual [25], expressing duties, powers, freedom etc. under specific legal circumstances. This normative position can change at any time as new norms come into existence or old ones are removed. These normative positions are important for legal relationships between individuals – one person's duty is another person's privilege, and one person's power is another person's disability to escape such power [17]. Relationships of power create organisational structures and hierarchies within a society, assigning specific roles to members of an organisation [18, 29]. Dignum et al. [12] describe three basic aspects in the modelling of virtual societies of agents: (a) the overall purpose of such a community of agents, (b) an organisational structure based on a set of roles and (c) norms for regulating the actions and interactions of the agents adopting such roles. In line with our previous

argument that the solipsistic nature of agents has to be overcome for virtual organisations, they also emphasise the importance of introducing a collective perspective on an agent's actions in a specific role within a society – the agent cannot be driven solely by internal motivations, but has to be socially aware in its practical reasoning. As also described in [7], agents take on roles and responsibilities and are determined in their actions by external influences in the form of social regulations and norms. We argue in this paper that norm-governed agents are a necessary prerequisite for creating virtual organisations, where the actions and interactions of agents are guided by normative directives such as obligations, permissions and prohibitions. Such an argument is also reflected in the work of Pacheco and Carmo [29]. They describe the modelling of complex organisations and organisational behaviour based on roles and normative concepts. The creation of virtual societies is based on contracts between agents. Such a contract describes the set of norms that specify roles; agents adopting such roles commit to act according to these norms. Pacheco and Carmo emphasise the importance of these contracts as the central element that binds agents into societies. The contract specification language they put forward includes role specifications, deontic characterisations of these roles, the attribution of roles to agents and the relations between roles. Organisational change and the impact of these social dynamics on the normative position of the agent, as addressed in this paper and in previous work [22, 24, 23, 20], also find attention in the work of Esteva et al. [13], Lopez and Luck [26] and Skarmeas [33]. Dastani et al. [8] investigate conflicts that can occur during the adoption of a role by an agent. Esteva et al. [13] present a computational approach for determining the consistency of an electronic institution.
As shown in this paper, the NoA model includes a detailed classification of conflict situations that allows the deliberation of the agent to be informed about problems of norm conflicts and inconsistencies between the agent's actions and its norms. With that, a NoA agent does not require a conflict-free set of norms to be operable, as it is provided with conflict resolution strategies to deal with conflicting norm sets. This is an important contribution for dealing with organisational change that is not found in other models.

5 Conclusion

In this paper we claimed that a specific form of agent is required to handle changes in virtual organisations. As we assume that the actions and interactions of agents within such a virtual organisation are based on normative directives – roles, as the structural elements of such organisations, describe the obligations, permissions and prohibitions of agents adopting them – a form of agent is required that can take such norms into account in its practical reasoning. To be operable within virtual organisations that undergo changes over time, such a norm-governed agent needs means to detect problems that arise from these changes and has to be equipped to resolve them. The NoA


model of norm-governed agency presented in this paper incorporates such features. It introduces a specific form of informed deliberation, where the decision process of the agent is informed about conflicts between the norms the agent holds according to its social position and possible inconsistencies between its actions and its norms. With that, the set of norms held by a NoA agent does not have to be free of conflicts for the agent to be operable – the agent holds conflict resolution strategies to deal with conflicting norm sets. A detailed model that classifies conflict situations is outlined and a set of conflict resolution strategies is discussed.

References

[1] M.E. Bratman, D.J. Israel, and M.E. Pollack. Plans and Resource-bounded Practical Reasoning. Computational Intelligence, 4(4):349–355, 1988.

[2] J. Broersen, M. Dastani, J. Hulstijn, Z. Huang, and L. van der Torre. The BOID Architecture: Conflicts between Beliefs, Obligations, Intentions and Desires. In Proceedings of Autonomous Agents 2001, pages 9–16, 2001.

[3] C. Castelfranchi. Modelling Social Action for AI Agents. Artificial Intelligence, 103:157–182, 1998.

[4] L. Cavedon and L. Sonenberg. On Social Commitments, Roles and Preferred Goals. In Proceedings of ICMAS 98, 1998.

[5] R. Conte and C. Castelfranchi. Cognitive and Social Action. UCL Press, 1995.

[6] R. Conte, R. Falcone, and G. Sartor. Agents and Norms: How to Fill the Gap? Artificial Intelligence and Law, 7(1), March 1999.

[7] M. Dastani, F. Dignum, and V. Dignum. Organizations and Normative Agents. In Proceedings of the First Eurasian Conference on Advances in Information and Communication Technology (EurAsia-ICT), 2002.

[8] M. Dastani, V. Dignum, and F. Dignum. Role-Assignment in Open Agent Societies. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003.

[9] M. Dastani and L. van der Torre. A Classification of Cognitive Agents. In Proceedings of the 24th Annual Meeting of the Cognitive Science Society (CogSci 2002), pages 256–261, August 2002.

[10] M. Dastani and L. van der Torre. What is a Normative Goal? In Proceedings of the International Workshop on Regulated Agent-Based Social Systems: Theories and Applications (AAMAS 2002 Workshop), 2002.

[11] F. Dignum, D. Morley, E.A. Sonenberg, and L. Cavedon. Towards Socially Sophisticated BDI Agents. In Proceedings of the Fourth International Conference on Multiagent Systems (ICMAS 2000), pages 111–118. IEEE Computer Society, July 2000.

[12] V. Dignum, J.-J. Meyer, H. Weigand, and F. Dignum. An Organization-Oriented Model for Agent Societies. In Proceedings of AAMAS 2002, 2002.

[13] M. Esteva, W. Vasconcelos, C. Sierra, and J.A. Rodriguez-Aguilar. Verifying Norm Consistency in Electronic Institutions. In Proceedings of the AAAI 2004 Workshop on Agent Organisations: Theory and Practice, 2004.

[14] R.J. Firby. An Investigation into Reactive Planning in Complex Domains. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 809–815, 1987.

[15] M.P. Georgeff and A. Lansky. Reactive Reasoning and Planning. In Proceedings of AAAI-87, pages 677–682, Seattle, WA, 1987.

[16] J. Giarratano and G. Riley. Expert Systems: Principles and Programming, 2nd Edition. PWS Publishing, 1993.

[17] W.N. Hohfeld. Fundamental Legal Conceptions as Applied in Judicial Reasoning. Yale University Press, 1923 (reprinted 1964).

[18] A.J.I. Jones and M. Sergot. A Formal Characterisation of Institutionalised Power. Journal of the IGPL, 4(3):429–445, 1996.

[19] S. Kanger and H. Kanger. Rights and Parliamentarism. Theoria, 32:85–115, 1966.

[20] M.J. Kollingbaum. Norm-governed Practical Reasoning Agents. PhD thesis, University of Aberdeen, 2005.

[21] M.J. Kollingbaum and T.J. Norman. Supervised Interaction – Creating a Web of Trust for Contracting Agents in Electronic Environments. In C. Castelfranchi and W. Johnson, editors, First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), pages 272–279. ACM Press, 2002.

[22] M.J. Kollingbaum and T.J. Norman. Norm Consistency in Practical Reasoning Agents. In M. Dastani and J. Dix, editors, PROMAS Workshop on Programming Multiagent Systems, 2003.

[23] M.J. Kollingbaum and T.J. Norman. Norm Adoption and Consistency in the NoA Agent Architecture. In M. Dastani, J. Dix, and A. Seghrouchni, editors, Programming Multiagent Systems: Languages, Frameworks, Techniques, and Tools, volume 3067 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 2004.

[24] M.J. Kollingbaum and T.J. Norman. Strategies for Resolving Norm Conflict in Practical Reasoning. In ECAI Workshop CEAS 2004, 2004.

[25] L. Lindahl. Position and Change: A Study in Law and Logic. D. Reidel Publishing Company, 1977.

[26] F. Lopez y Lopez and M. Luck. Towards a Model of the Dynamics of Normative Multi-Agent Systems. In Proceedings of the International Workshop on Regulated Agent-Based Social Systems: Theories and Applications (RASTA'02), Bologna, 2002.

[27] F. Lopez y Lopez, M. Luck, and M. d'Inverno. Constraining Autonomy through Norms. In Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 647–681, 2002.

[28] T.J. Norman and C.A. Reed. Delegation and Responsibility. In C. Castelfranchi and Y. Lesperance, editors, Intelligent Agents VII, volume 1986 of Lecture Notes in Artificial Intelligence, pages 136–149. Springer-Verlag, 2001.

[29] O. Pacheco and J. Carmo. A Role Based Model for the Normative Specification of Organized Collective Agency and Agents Interaction. Autonomous Agents and Multi-Agent Systems, 6(2):145–184, 2001.

[30] P. Panzarasa, N.R. Jennings, and T.J. Norman. Social Mental Shaping: Modelling the Impact of Sociality on the Mental States of Autonomous Agents. Computational Intelligence, 17(4):738–782, 2001.

[31] P. Panzarasa, T.J. Norman, and N.R. Jennings. Modelling Sociality in the BDI Framework. In Intelligent Agent Technology: Systems, Methodologies, and Tools – Proceedings of the First Asia-Pacific Conference on Intelligent Agent Technology, pages 202–206. World Scientific Publishing, 1999.

[32] M. Sergot. A Computational Theory of Normative Positions. ACM Transactions on Computational Logic, 2(4):581–622, 2001.

[33] N. Skarmeas. Organisations through Roles and Agents. In Proceedings of the International Workshop on the Design of Cooperative Systems (COOP'95), 1995.

[34] W. Vasconcelos. Norm Verification and Analysis in Electronic Institutions. In AAMAS 2004 Workshop on Declarative Agent Languages and Technologies (DALT 2004), 2004.

[35] M. Wooldridge. An Introduction to MultiAgent Systems. John Wiley & Sons, Ltd, 2002.
