Provision of Robust Behaviour in Teams of UAVs – a Conceptual Model

Dennis Jarvis, Jacqueline Jarvis, Ralph Rönnquist and Martyn Fletcher
Agent Oriented Software Limited, Wellington House, East Road, Cambridge CB1 1BH

1st SEAS DTC Technical Conference, Edinburgh, 2006

Abstract

Current approaches to the provision of autonomous team behaviour assume that team members are able to communicate with each other whenever they wish. This assumption does not always hold in the UAV domain. Environmental conditions, such as weather or terrain, can invalidate it, as can damage to the UAV itself. Mission requirements may also mandate that there are periods when no communication is to occur. The potential inability to communicate within the team at critical times leads to a reconsideration of the underlying team architecture and of the way in which behaviours are modelled and executed. It also emphasizes the need for the ability to dynamically reassign roles within the team when required. In this paper, we present a conceptual model to underpin the provision of robust team behaviour in the presence of such perturbations. The conceptual model extends an existing teams infrastructure (JACK™ Teams), which in turn is based on the BDI model of agency.

Keywords: Agents, Teams, C2

Introduction

There is currently wide interest in both the military and civil domains in expanding the usage of Unmanned Air Vehicles (UAVs) and the range of roles that they perform. However, their potential utility is still severely limited by a dependence on humans on the ground to control the vehicle and to plan and replan the mission. Moreover, this operational approach does not extend to the management of teams of UAVs, as it becomes too difficult to control and manage the individual vehicles as well as the interaction and coordination between them. Intelligent agent technology is being explored as one way of providing the autonomy that UAVs will require if this reliance on human controllers is to be significantly reduced. In this approach, a UAV would be tasked to achieve an operational objective, such as finding the location of survivors from a capsized yacht. The UAV would then reason autonomously about how best to achieve the goal and automatically adapt to changing

circumstances as the mission proceeds. This approach is currently being trialled for the Codarra Avatar UAV [1]; the agents were developed using the JACK™ Intelligent Agents (JACK) product [2].

Existing teamwork models assume that team members are able to communicate with each other whenever they wish, an assumption which does not always hold for the provision of autonomous team behaviour in the UAV domain. Environmental conditions, such as weather or terrain, can invalidate this assumption, as can damage to the UAV itself. Mission requirements may also mandate that there are periods when no communication is to occur. The potential inability to communicate within the team at critical times leads to a reconsideration of existing architectures and of the way in which behaviours are modelled and executed. It also emphasizes the importance of being able to dynamically reassign roles within the team when required.

The work described in this paper was conducted as part of the Reconfiguring Task Teams for Intelligent Mission


Management, and Command and Control (MP005) project. This project was a component of the Autonomous and Semi-autonomous Vehicles Theme within the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence. The primary objective of the MP005 project was to develop a framework that supports the modelling and execution of team behaviours in situations where it is not always possible for team members to communicate with each other, either temporarily or permanently. Note that while the focus of this paper is on UAVs, the model that we develop is generally applicable to teams of autonomous vehicles, regardless of the operational environment.

Previous Work

In artificial intelligence, philosophy and cognitive science, there is general consensus that the collective activity of a group is more than the aggregate of the domain-oriented actions performed by the individuals belonging to the group. However, there is ongoing debate as to how collective behaviour should be modelled. A key issue is whether collective intentionality requires a different kind of mental-state construct, namely an intentional attitude that, although individually held, is different from and not reducible to an "ordinary" intention. Opposing views have been presented by Searle [3] (for) and by Bratman [4] (against). From a multi-agent perspective, this tension is reflected in the Joint Intention theory of Cohen and Levesque [5] and the SharedPlans theory of Grosz and her collaborators [6]. In the Joint Intention theory, a team is defined as "a set of agents having a shared objective and a shared mental state". Joint intentions are held by the team as a whole, and require each agent to commit to informing other team members whenever it detects that the common goal has been achieved, or has become impossible to

achieve, or that, because of changing circumstances, the goal is no longer relevant. By way of contrast, individuals in the SharedPlans theory deploy two intentional attitudes – intending to do an action and intending that a proposition holds. The former is used to represent an agent's commitment to its own actions; the latter relates to group activity and the actions of its team members in service of that activity. Such commitments lead an agent to engage in what Grosz refers to as intention cultivation. This process includes reasoning about other team members' actions and intentions and about ways that the agent can contribute to their success in the context of the group activity.

Both Joint Intention theory and SharedPlans theory have provided the theoretical basis for successful implementations, most notably the team-oriented programming (TOP) model of TEAMCORE [7]. The TOP model is based on Joint Intention theory and incorporates aspects of SharedPlans. Other implementations not grounded in the above theories include cooperation domains and JACK Teams. The concept of cooperation domains, as implemented in [8], encapsulates the interaction that occurs over the lifecycle of a team created to execute a particular task. Team member behaviours are external to the cooperation domain; the cooperation domain only provides the protocol for team interaction. While cooperation domains adopt a similar modelling viewpoint to that of the SharedPlans and Joint Intention theories, where team behaviour is an individual agent concern, in JACK Teams team behaviour is modelled separately from individual behaviour, using explicit team entities. The adoption of this approach has provided significant benefits from a software engineering perspective by enabling team behaviour to be modelled and understood as a top-down process, rather than a bottom-up process as in other approaches.


JACK Teams extends the BDI model of agency to include teams as explicitly modelled entities – that is, a team has its own beliefs, desires and intentions and is able to reason about team behaviour. Team behaviour is specified independently of individual behaviour through the use of roles. A role is an interface definition that declares what an entity that claims to implement the role must be capable of doing. It has two aspects – the goals involved in fulfilling the role and the beliefs that are to be exchanged. A team member commits to the performing of a particular role – there is no notion of shared intention from the team member's perspective, or of commitment to other team members. As we shall see later in this paper, the contract concept in the Robust Teams model can be viewed as providing team members with an expression of joint intention.

With the exception of JACK Teams, team behaviour has been viewed as an individual agent concern – that is, an agent possesses the capability of either acting individually or participating as a member of a team. Consequently, there is no notion of a team as a distinct software entity that is able to reason about and coordinate team member behaviour. For example, in TEAMCORE, joint intentions are represented as plans, but each individual agent executes its own copy. Reasoning regarding team behaviour is the responsibility of the individual team members; infrastructure is provided to propagate changes in mutually held beliefs between team members. A focus on interaction at the expense of behaviour is also evident in the concept of cooperation domains and the work of FIPA [9].

Treating team behaviour as an individual agent responsibility has intuitive appeal, as humans do not separate the teaming aspects of reasoning from the individual aspects – for example, "if I have role x in team y then I do z". However, such an approach introduces significant difficulties from a software engineering perspective, as the development of behaviours becomes

extremely complex and brittle. For example, if an individual agent can assume many roles within a variety of teams, changes to the potential team structures will result in extensive redesign, and the agent code quickly becomes unmanageable. JACK Teams overcomes these difficulties by declaring teaming aspects separately from agent aspects. The role concept then provides the means to connect the two aspects. This declarative specification of team behaviour in a single modelling construct (the team plan) reflects a heritage of tactics modelling [10,11], where a critical issue is Subject Matter Expert (SME) involvement in tactics capture and validation.

The philosophical implications of representing a team as a distinct entity have not been explored, although there are similarities with Koestler's notion of a holon [12]. Koestler did not distinguish between a team and an individual – rather, he described system behaviour in terms of holons. As defined by Koestler, a holon is an identifiable part of a system that has a unique identity yet is made up of subordinate parts and in turn is part of a larger whole. Holons can then be thought of as self-contained wholes looking towards the subordinate level and/or as dependent parts looking upward. This duality can be equated at a software level to the JACK Teams role concept and at a philosophical level to Grosz's intentional attitudes of intending-to and intending-that.

In all of the above implementations and theories, communication between team members is not explicitly modelled. The implication is that dealing with its non-availability is a behavioural issue – it is the responsibility of the team members (and in the case of JACK Teams, the team as well) to incorporate appropriate strategies within their existing reasoning processes. If this approach were to be adopted, the same software engineering issues that arise if one implements team reasoning as the concern of individual agents would apply.


In JACK Teams, these issues were resolved by employing a novel architecture in which teams are explicitly represented and are able to reason about team behaviour. The hypothesis behind this work is that, in order to adequately address the issues arising from communications unavailability during teamed UAV operation, an architecture is required that encapsulates communication behaviour and separates it from team behaviour and individual behaviour.

The solution principle being explored in this project is one in which a team member resident on a particular execution platform is represented on all other execution platforms by proxies. The team itself is replicated on all platforms. The concept is that a team on a particular execution platform (and the team member associated with that platform) do not interact directly with team members on other platforms – rather, they interact indirectly via their proxies. Thus, from the perspective of a team, all interaction is local. Note that this approach assumes that teams are represented as explicit software entities, as in JACK Teams. Indeed, the approach will be conceptualised and implemented as an extension of the JACK Teams model. The particular research issues that such a solution presents include the following:

• How are the team behaviours on different platforms synchronised?
• What reasoning is required of the team and the proxies? What are the generic aspects of this reasoning?
• Can communication-related behaviour be specified separately from team and individual behaviour?
• What modelling support is required to underpin proxy and team reasoning?
• How does the approach scale in terms of team size? What are the design decisions that need to be made in terms of team structure?

A Conceptual Model for Robust Team Behaviour

The conceptual model being developed in this project extends the existing JACK Teams model to provide support for robust team behaviour in situations when team members are incapacitated and when communications between team members become unavailable for periods of time. The JACK Teams model in turn builds on the JACK Agent model, and both implement the BDI model of agency. Consequently, the conceptual model for robust team behaviour consists of four layers:

1. a BDI layer
2. a JACK Agent layer
3. a JACK Teams layer and
4. a Robust Teams layer

The BDI Layer

The Belief/Desire/Intention (BDI) model is a paradigm that explicates rational decision making about actions. Based on work by Bratman on situated rational agents [13], this approach has been applied in a wide range of demanding applications [14]. Central to the notion of BDI is the recognition of intention as a distinct mental attitude, different in character from both desire and belief, and the recognition that intentions are vital for understanding how agents with limited resources, such as human beings, plan ahead and coordinate their actions. In the BDI model, agents have:

• Beliefs about their environment, about other agents and about themselves.
• Desires that they wish to satisfy.
• Intentions, which are commitments to act towards the fulfilment of selected desires. These intentions must be consistent both with the agent's beliefs and with its other intentions.

The BDI model was first formalised by Rao and Georgeff [15]. In order to ensure computational tractability, they proposed


an abstract model that incorporated several simplifications to their theory, the most important being that only beliefs are represented explicitly. Desires are reduced to events that are handled by predefined plan templates, and intentions are represented implicitly by the runtime stack of executing plans. The execution loop contains three phases:

• the selection of a plan to achieve the current goal,
• the execution of that plan, and
• the updating of mental attitudes.

Although variations to the abstract model have been introduced in implemented architectures (e.g. flexible phase execution and explicit goal representation), the abstract model has provided the conceptual basis for all major BDI architectures [16]. The success of the BDI approach can be attributed to the availability of commercial-strength architectures (PRS [17], dMARS [18] and, most recently, JACK) and to the fact that, while the approach is not formally grounded in cognitive science, it provides an intuitive framework for the modelling of behaviour.
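For concreteness, the abstract execution loop can be sketched as follows. This is a minimal illustration under our own naming; it is not the code of PRS, dMARS or JACK.

    // Illustrative sketch of the Rao and Georgeff abstract execution loop.
    // All names are hypothetical; only beliefs are represented explicitly,
    // desires are reduced to events, and intentions live on the call stack.
    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    class Event {}                                    // a desire, reduced to an event
    class Beliefset {}                                // the agent's explicit beliefs
    interface Plan { void execute(Beliefset b, Queue<Event> events); }

    abstract class AbstractBdiInterpreter {
        private final Queue<Event> events = new ArrayDeque<>();
        private final Beliefset beliefs = new Beliefset();

        void post(Event e) { events.add(e); }         // from plans, beliefs or the environment

        void run() {
            while (!events.isEmpty()) {
                Event goal = events.poll();
                List<Plan> applicable = applicablePlans(goal, beliefs); // phase 1: plan selection
                if (applicable.isEmpty()) continue;                     // no way to pursue this desire
                Plan chosen = applicable.get(0);
                chosen.execute(beliefs, events);                        // phase 2: execution (may post events)
                updateMentalAttitudes(beliefs);                         // phase 3: update mental attitudes
            }
        }

        abstract List<Plan> applicablePlans(Event goal, Beliefset beliefs);
        abstract void updateMentalAttitudes(Beliefset beliefs);
    }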

The JACK Agent Layer

At a conceptual level, the JACK execution model is similar to Rao and Georgeff's abstract model – that is, the execution loop is driven by events corresponding to desires. The handling of an event requires the generation of the set of plans that are applicable to the event (the applicable plan set) and then the selection of one plan instance from the set for execution. Plans consist of plan steps; these are executed atomically and may result in the modification of the agent's beliefs, the sending of messages to other agents, actions being performed on the environment or events corresponding to new desires being posted. A plan may fail, in which case the initiating event may be reposted, depending on the behaviour that is required. In addition to plan step execution, event posting can happen on belief modification and from the environment. Note that the plan selection process itself can result in events being posted (JACK supports meta-level reasoning) – these events are handled as above.

The basic structural elements of the JACK agent model are agents, events, plans and beliefsets. These are augmented with views and capabilities. Views are data abstraction mechanisms that allow agents to use heterogeneous data sources without being concerned with the details of the interface. In essence, they make the interface to an external data source the same as a beliefset. Capabilities are used to organise the functional components of agents so that components can be re-used across agents. The relationship between these structural elements is illustrated in Figure 1.

Figure 1: JACK structural elements. (The figure shows that an agent has events, plans, beliefsets/views and capabilities; a capability in turn has and uses its own events, plans and beliefsets/views.)
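These relationships can be pictured in plain Java as follows. JACK itself provides dedicated language constructs for these elements, so the classes below are our schematic rendering of Figure 1, not JACK syntax.

    // Schematic rendering of the Figure 1 elements; illustrative names only.
    import java.util.List;

    class SurveyGoal {}                        // an event corresponding to a desire

    interface SurveyPlan {                     // a plan that handles SurveyGoal
        boolean relevant(SurveyGoal goal);     // applicability test for plan selection
        void body();                           // plan steps, executed atomically
    }

    class PositionBeliefs {}                   // a beliefset, or a view over external data

    class NavigationCapability {               // groups reusable events, plans and beliefs
        List<SurveyPlan> plans;
        PositionBeliefs beliefs;
    }

    class UavAgent {                           // an agent aggregates all of the above
        NavigationCapability navigation;
        void post(SurveyGoal goal) { /* enters the event-driven execution loop */ }
    }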

In developing agent-based execution systems, we have found it useful to view the agent model for such applications as consisting of separate behaviour and embodiment sub-models [19]. Behaviour models encapsulate the reasoning that underpins agent activity. In performing this reasoning, an agent has knowledge of the actions that it can perform – for example, a UAV agent might be able to take off, change heading, change attitude, fly, observe, land etc. Embodiment models implement the actions that are available to the behaviour model. These actions could be performed by a model of the physical system if we are developing a simulation, or by the actual physical system.
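A minimal sketch of this separation, under the assumption of a simple action interface of our own devising (the interface and its operations are illustrative, not part of JACK or of [19]):

    // Hypothetical behaviour/embodiment split: the behaviour model reasons
    // over these actions; the embodiment implements them.
    interface UavEmbodiment {
        void takeOff();
        void changeHeading(double degrees);
        void changeAttitude(double pitch, double roll);
        void observe();
        void land();
    }

    // A simulated embodiment can stand in for the physical vehicle during
    // development; the behaviour model is unaware of the substitution.
    class SimulatedUav implements UavEmbodiment {
        public void takeOff()                            { System.out.println("sim: take off"); }
        public void changeHeading(double degrees)        { System.out.println("sim: heading " + degrees); }
        public void changeAttitude(double p, double r)   { System.out.println("sim: attitude"); }
        public void observe()                            { System.out.println("sim: observe"); }
        public void land()                               { System.out.println("sim: land"); }
    }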


The JACK Teams Layer

A JACK team is very similar to a JACK agent in that it is a separate, autonomous reasoning entity with its own beliefs, desires and intentions. This reasoning is also defined in terms of events, plans, beliefs etc. A team is characterised by the roles it performs and the roles that it requires other teams to perform on its behalf. A role models the interaction between two teams as an exchange of goals and beliefs. Like an interface in Java, a role contains no implementation – only a description of the facilities that the participants in the role relationship must provide. Furthermore, the role relationship distinguishes between two types of participants – role tenderers, which are teams that require teams to fill particular roles, and role performers, which are teams that can perform particular roles. A team can participate in multiple roles and can be a role tenderer, a role performer or both. A team maintains separate lists known as role containers for each role that the team tenders. A role container records the teams that have expressed interest in performing that particular role. A team can modify its role containers dynamically to reflect changes in interest by role performers. A team will normally be part of a larger structure in which the relationships between teams are "tenders role x" and "performs role y". This structure is known as the role obligation structure and is not explicitly represented as a single data structure. Rather, the information that such a data structure would contain is stored in the role containers for each participating team.
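The following sketch illustrates the role and role container concepts in plain Java; the role, its goal and the container bookkeeping are our hypothetical examples, not JACK Teams syntax.

    // Illustrative role (an interface-like declaration of required facilities)
    // and role container (the performers that have expressed interest).
    import java.util.ArrayList;
    import java.util.List;

    interface ScoutRole {
        void surveyArea(String areaId);     // a goal the role performer must handle
        // beliefs to be exchanged (e.g. contact reports) would also be declared here
    }

    class TenderingTeam {
        // One role container per tendered role; membership changes dynamically
        // as performers express or withdraw interest.
        final List<ScoutRole> scoutContainer = new ArrayList<>();

        void register(ScoutRole performer) { scoutContainer.add(performer); }
        void withdraw(ScoutRole performer) { scoutContainer.remove(performer); }
    }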

Team behaviour is specified in terms of the roles that are required to execute the behaviour and is encapsulated in team plans. When a team plan is executed, a binding between roles and team instances is made and a task team is created dynamically to execute the specified behaviour. That is, the team determines which of the role performers that have expressed interest in performing the roles required by this plan will in fact be used. This process involves the selection of team instances from the appropriate role containers – these instances are referred to as sub-teams. The reasoning associated with the selection process is encapsulated within the team plan's establish() method. A default establish method is provided that binds a role to the first member of its corresponding role container. However, this method can be overridden and reasoning appropriate to a particular application (perhaps involving negotiation with the role performers) provided. Task team structure is summarised in Figure 2 – note that the relationships that appear in Figure 2 are dynamic, not static.

Figure 2: JACK Teams task team structure. (The figure shows that a task team has teams as its members.)
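While establish() is the JACK Teams hook named above, the surrounding scaffolding in the sketch below – including the ScoutRole from the earlier sketch and the fuel-based selection criterion – is our own assumption, intended only to show the kind of reasoning that an overridden establish method might perform.

    // Hypothetical override of the task team selection step: instead of the
    // default binding to the first container member, pick the performer with
    // the most remaining fuel.
    import java.util.Comparator;
    import java.util.List;

    class SurveyTeamPlan {
        List<ScoutRole> scoutContainer;   // performers that registered interest
        ScoutRole boundScout;             // the sub-team chosen for this execution

        boolean establish() {
            boundScout = scoutContainer.stream()
                    .max(Comparator.comparingDouble(this::fuelRemaining))
                    .orElse(null);
            return boundScout != null;    // fail task team formation if no performer
        }

        double fuelRemaining(ScoutRole r) { return 0.0; /* query performer beliefs */ }
    }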

Within a team plan, a team can have access (via the appropriate role) to a synthesized beliefset that is derived from the beliefs of its sub-teams. JACK supports the definition of filters that determine if and when the propagation should occur, and what subset of beliefs should be propagated to the containing team. Similarly, sub-teams can inherit (via the appropriate role) a subset of the beliefs of the containing team. Belief propagation is triggered by changes to a team or sub-team's beliefset.
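Such a filter might look as follows. JACK Teams supports filters of this kind; the predicate shape, the belief record and the propagation condition are our illustrative assumptions.

    // Illustrative belief-propagation filter: only high-confidence contact
    // beliefs are propagated to the containing team, and only while in transit.
    import java.util.function.Predicate;

    record ContactBelief(String trackId, double confidence) {}

    class BeliefFilter {
        static final Predicate<ContactBelief> TO_CONTAINING_TEAM =
                b -> b.confidence() > 0.8;

        static boolean shouldPropagate(ContactBelief b, boolean inTransit) {
            return inTransit && TO_CONTAINING_TEAM.test(b);
        }
    }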


The structural elements for JACK Teams are summarised in Figure 3. In contrast to agents, the team execution model incorporates a team initialisation stage as well as an execution loop. In the initialisation stage, the role containers for the team being initialised are populated with teams that are able to perform the required roles. This information is usually specified as JACOB initialisation objects (JACOB is an object modelling language that is provided as part of the JACK™ Intelligent Agents product suite). Note that not all teams that perform a given role need to be included in this process – it merely reflects an initial interest, and teams can be added and removed dynamically.

Figure 3: JACK Teams structural elements. (The figure shows that a team has events, teamplans/plans, teamdata/beliefsets/views, roles and capabilities; a capability in turn has and uses its own events, teamplans/plans and teamdata/beliefsets/views.)

Once the team initialisation phase has completed, an execution cycle very similar to the agent execution cycle is initiated. The key difference is that team execution includes a belief propagation step that handles dissemination of information between role tenderers and role performers. Team plan execution is initiated in the same way as for a JACK agent, i.e. in response to the processing of an event. The first step in team plan execution is task team formation. Task teams are formed for each team plan that is to be executed. While this approach provides for maximum flexibility, it is often the case that a particular binding of roles to teams holds for multiple tasks. For example, in the absence of any untoward events requiring team reconfiguration, a fourship would be expected to retain its team structure for an entire mission. JACK Teams allows such persistence to be readily modelled, but it requires the role obligation structure to mirror the desired task team structure. Team reconfiguration is then achieved by the addition and removal of teams from the appropriate role containers, rather than the selection of an alternative task team from the role containers.

The role concept allows for a clear distinction to be made between potential team structure and actual team structure. Sporting teams provide an apt example – the team that plays on match day (the actual team structure) is selected from a squad of players (the potential team structure). Within the match day team, there is a greater or lesser degree of flexibility allowed in dynamic reconfiguration of the team depending on the sport – for example, some sports allow for rotation off the bench.

Previously, we introduced the notion of an agent having a behaviour and an embodiment. In the JACK Teams model, the embodiment for a team can be provided either by other teams (through the role concept) or by itself (through either an actual physical embodiment or a simulated embodiment). If the team provides its own embodiment, we will refer to it as a grounded team, and as an ungrounded team otherwise.
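To make the reconfiguration mechanism described above concrete, the sketch below shows role containers mirroring a fourship structure, with reconfiguration performed by editing the containers rather than re-forming the team. The structure is flattened and the identifiers are hypothetical.

    // Hypothetical persistent fourship structure: reconfiguration is achieved
    // by adding/removing members from role containers, so subsequent team
    // plans rebind to the remaining performers without redesign.
    import java.util.ArrayList;
    import java.util.List;

    class Fourship {
        final List<String> leadPairContainer  = new ArrayList<>(List.of("uav1", "uav2"));
        final List<String> trailPairContainer = new ArrayList<>(List.of("uav3", "uav4"));

        // On loss of a vehicle, remove it from whichever container holds it.
        void onVehicleLost(String uav) {
            leadPairContainer.remove(uav);
            trailPairContainer.remove(uav);
        }
    }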


The Robust Teams Layer

Team Member Structure: A grounded team member refers to a physically separate entity and is hosted by a distinct execution environment known as a platform. Each grounded team member is represented as a collection of a behaviour agent, an embodiment and multiple proxy agents, as illustrated in Figure 4. The purpose of the proxy agents is to act as the team member's representatives on other platforms.

Figure 4: Team member architecture. (The figure shows behaviour agent 1 interacting with embodiment 1 on platform 1, represented by proxy agents on platforms 2 to n.)

In addition, the task team structure(s) that a particular grounded team member belongs to are instantiated on its host platform. Behaviour agents from other platforms are represented in these structures by proxy agent instances. The proxy agent instances and their corresponding behaviour agents communicate when it is possible and desirable to do so. The platform architecture is represented in Figure 5.

Figure 5: Architecture for platform i. (The figure shows that the team on platform i has behaviour agent i, embodiment i, and proxy agents for the team members on the other platforms.)

A system would involve several such platforms, as illustrated in Figure 6. As with JACK Teams, team behaviour is specified in the form of team plans. A team plan will execute concurrently on each platform that hosts a grounded team member.

Figure 6: System architecture. (The figure shows platforms 1, 2 and 3, each interacting with the others.)

Non-grounded team members do not have an embodiment or proxies. Rather, they are replicated on each platform and execute team plans concurrently. For example, suppose a fourship consists of two twoships and each twoship in turn consists of two UAVs. There would then be 4 grounded team members corresponding to the 4 UAVs and hosted by 4 platforms. Each UAV would have an embodiment and 3 proxies resident on the other platforms. However, the two twoship teams and the fourship team would be replicated on each of the 4 platforms.

Task Team Formation: In the robust teams model, team formation uses the same structures as the basic teams model. In particular, a role obligation structure is created from the available teams and the roles that they require/perform. This structure defines, for each role required by a team, all the teams that have declared that they are able to perform that role. Each execution of a team plan then requires a task team to be selected from this structure. This selection process occurs at run time and can involve negotiation between the role tenderer and role filler. Task team structure in the robust teams model extends the structure of the basic team model (Figure 2) by allowing task teams to incorporate proxy agents as well as teams and task teams. This is illustrated in Figure 7.


Note that the task team structure is dynamic and that the internal structure of the elements presented in Figure 7 is detailed in Figures 1 and 3.

Figure 7: Task team structure for the robust teams model. (The figure shows that a team has a task team comprising teams, proxy agents and a behaviour agent, whose embodiment may be virtual or physical.)
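The following sketch illustrates the proxy concept: the local team interacts with the proxy as if it were the remote member, while the proxy relays requests when communication holds and otherwise falls back on cached knowledge. The class, its fields and the relay policy are our illustration, not a prescribed implementation.

    // Hypothetical proxy agent standing in for a remote behaviour agent.
    class ProxyAgent {
        private final String remoteUavId;
        private volatile boolean linkUp = true;  // set by the communications layer
        private String lastKnownState = "idle";  // cached from prior status reports

        ProxyAgent(String remoteUavId) { this.remoteUavId = remoteUavId; }

        // The local team tasks the proxy exactly as it would the remote member.
        void requestActivity(String activity) {
            if (linkUp) {
                send(remoteUavId, activity);     // relay while communication holds
            }                                    // otherwise reason over lastKnownState
        }

        void onStatusReport(String state) { lastKnownState = state; }
        void onLinkChange(boolean up)     { linkUp = up; }

        private void send(String to, String msg) { /* platform messaging, elided */ }
    }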

In the basic model, the supported task team lifecycle mirrors that of the associated team plan. That is, the task team has no existence outside the team plan for which it was created. However, in the robust model, task teams can have a lifecycle independent of the lifecycle of a single team plan, providing an explicit separation between task team formation and team plan execution. This allows for future infrastructure to be developed that enables a team plan to monitor the state of its role performers and to fail the plan on role performer failure. With respect to the last point, note that in the basic model, delegation through a role is synchronous and behaviours either succeed or fail; there is no notion within the delegation model of monitored behaviour. Furthermore, no distinction is made between a behaviour failing because an action did not succeed (plan step failure) and a role performer no longer being able to perform its role (role performer failure).

Behaviour Specification: As in the basic teams model, team behaviour is specified explicitly in terms of team plans, which reference roles rather than team instances. As noted in the previous section, a task team that binds roles to team instances is created dynamically prior to team plan execution. Behaviour can be delegated to role performers and coordinated in various ways. In the robust model, coordination of behaviour is complicated by role performers that are unable to communicate and therefore unable to indicate task completion. A team may be able to tolerate lack of knowledge regarding the status of a delegated task for a greater or lesser period of time. However, when a behaviour is being executed by its sub-teams, there will be points at which the team will need to know that delegated tasks are in particular states before continuing. We call these points checkpoints and, in the robust model, team plans are decorated with them as appropriate. Checkpoints are similar to the preconditions in TOP team plans and the context method in a JACK plan; but whereas those constructs control plan activation, checkpoints can occur throughout a plan.

We also introduce the notion of a team contract. A team contract specifies the required behaviour of each task team member and the synchronisation that is required of that member with other members to achieve its specified behaviour. For example, task team members could share position information, thereby allowing individual members to adjust their speed and heading to retain formation. A contract also specifies any reporting requirements. Contract activities for a particular behaviour agent are mediated by its proxies; in particular, breaks in communication between the proxy and its behaviour agent will be managed by the proxy and not by the team that initiated the contract.
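A checkpoint decorating a team plan might be sketched as follows; the helper methods, role names and state labels are our hypothetical examples of the concept, not part of JACK Teams.

    // Illustrative checkpoint mid-plan: the plan blocks (or fails) until the
    // delegated tasks are known, or predicted by their proxies, to have
    // reached the required states, keeping platform copies of the plan in step.
    class IngressTeamPlan {
        void body() {
            delegate("scout", "surveyArea");           // delegated to a role performer
            delegate("strike", "transitToHoldPoint");

            // Checkpoint: both delegated tasks must be in the required state
            // before the plan continues.
            checkpoint("scout:surveyComplete", "strike:atHoldPoint");

            delegate("strike", "commenceAttackRun");
        }

        void delegate(String role, String task)   { /* post goal to bound performer */ }
        void checkpoint(String... requiredStates) { /* synchronise or fail the plan */ }
    }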


Behaviour Execution: As noted in the previous section, when a contract is being executed, the interaction between the team plan and its team members is as shown in Figure 8.

Figure 8: Contract execution. (The figure shows the team plan passing activities to the proxy agent and receiving status; the proxy agent in turn exchanges actions and status with the behaviour agent.)

A proxy agent might also be monitoring the status of other behaviour agents (either directly or via their proxies) as necessitated by the contract. During execution, loss of communication between the proxy agent and its behaviour agent may occur. This loss may be permanent or temporary. We assume that if permanent loss has occurred (i.e. the vehicle or its communications capability has been destroyed), the remaining vehicles will be notified by the ground station.

In order for a proxy agent to monitor the status of its behaviour agent, a richer framework for the modelling of action than is afforded by mainstream BDI architectures is required. In this regard, the robust model provides the concept of a durational action. In a durational action, we distinguish action execution from action monitoring. Thus, when a durational action is performed, the triggering task becomes a monitoring task for a parallel, asynchronous execution task. Furthermore, durational actions are performed within distinct action groups. While an action group can contain multiple actions, no more than one action can be in progress at a particular time. When an action completes or a new action is added to a group, the agent briefly reviews the action group to decide which action in the group to pursue next.
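The essence of a durational action – the triggering task becoming a monitor for a parallel execution task – can be sketched as below. The class name, threading choice and state labels are ours; an action group would additionally serialise several such actions so that at most one is in progress, bookkeeping we omit here.

    // Illustrative durational action separating execution from monitoring.
    import java.util.concurrent.CompletableFuture;

    class DurationalAction {
        private final String name;
        private volatile String state = "pending";

        DurationalAction(String name) { this.name = name; }

        // Start execution asynchronously and return at once; the caller
        // continues as a monitoring task observing the reported state.
        CompletableFuture<Void> start() {
            state = "executing";
            return CompletableFuture.runAsync(() -> {
                // ... perform the action (e.g. transit to a waypoint) ...
                state = "complete";
            });
        }

        String state() { return state; }  // polled by the monitoring task / proxy
    }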

Team plan execution involves both plan execution and the monitoring of the status of the task team members. Loss of a task team member will result in plan failure, and the (BDI) goal associated with that plan will then be reposted. Meta-level reasoning can then be used to determine whether the goal should be pursued, perhaps with a different team structure, or whether the mission should be replanned. To assist in replanning/plan resumption, we assume that there is a mechanism available to track plan completion, such as the recording of checkpoint achievement. Loss of the member currently performing the C2 role is handled no differently from the manner just described – the C2 plan will fail because the monitor for that plan detects that the C2 role has failed, and meta-level reasoning will then be invoked to determine which team member should assume the C2 role.

If temporary loss of communications with a task team member occurs, its proxy agents will act on the behaviour agent's behalf. As the proxy agents have monitored the play-out of the activities requested of the behaviour agent, they will be aware of the state that the behaviour agent was in prior to the loss of communications, and they may choose to predict when an activity completes and notify the team plan of completion at that predicted time. It is conceivable that this prediction proves to be incorrect, in which case the team plan should fail and the situation be reviewed. Alternatively, the proxy agents may choose to wait until communications are restored before notifying the team plan of activity completion. An example of the latter behaviour would be when activity completion is part of a team plan checkpoint. Checkpoints thus provide a means of ensuring that the copies of a team plan instance that are executing concurrently on different platforms are synchronised.

Note that a behaviour agent will have been tasked multiple times to perform the same activity – directly by the team plan executing on the behaviour agent's platform and indirectly by its proxy agents on other platforms. It is useful for the behaviour agent to log these requests, as this can give an indication as to whether another team is missing, out of communication etc.


Temporal Reasoning

In the robust model, temporal reasoning is central to the behaviour of proxy agents. Within the BDI model, time is not explicitly represented. Rather, time is seen as a property of the environment and as such is external to the agent. An agent thus perceives time and may utilise time explicitly in its reasoning processes – for example, as an index for object references or for the scheduling of actions. Extensions of the BDI model to accommodate such reasoning have been proposed (e.g. the use of CTL* in LORA [20]), but the concept has not been supported in mainstream BDI architectures. The philosophy has been that temporal reasoning is an application-specific issue, and as such the role of a BDI architecture is to accommodate the incorporation of appropriate reasoning models rather than to support a specific model. Furthermore, depending on the particular function that an agent is required to perform, different temporal reasoning models may be preferred. We believe that when dealing with unfolding situations, existing logics are inadequate, as they do not allow an agent to focus on how objects are changing in the world. However, when dealing with the planning and scheduling of sub-team actions, it is our expectation that existing interval-based logics, such as [21], would suffice. A key issue here is that the robust model should support multiple temporal reasoning models. This has been argued for BDI architectures in general in [22].
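As a flavour of the interval-based reasoning cited in [21], the sketch below encodes two of Allen's standard interval relations; the code is our illustration of how a proxy might reason over a schedule, not part of the robust model itself.

    // Two of Allen's interval relations [21], as might be used when
    // scheduling sub-team actions.
    record Interval(long start, long end) {
        // "before": this interval ends strictly before the other begins.
        boolean before(Interval other) { return end < other.start; }

        // "during": this interval lies strictly inside the other.
        boolean during(Interval other) {
            return start > other.start && end < other.end;
        }
    }

    // e.g. refuel.before(attackRun) could gate a checkpoint on the schedule.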

Future Work

Our intent is to implement the robust model using the existing JACK Teams infrastructure. JACK capabilities will be developed to support the additional reasoning and modelling of action required by the robust model – no enhancements will be made at the language level. All agents will be implemented as JACK teams; their 'robust' behaviour will be provided by their use of the appropriate capabilities. A scenario has been developed that is sufficiently rich to evaluate the potential of the robust model. The scenario will be implemented using both the JACK Teams model and the robust model. Experimentation will then be conducted following the methodology described in [23], in order to evaluate the robust model and to identify directions for future research.

Conclusion

A conceptual model has been presented to underpin the provision of robust team behaviour in situations when communication links between team members cannot be assumed to be always available. This model extends an existing teams infrastructure, namely JACK Teams, and would be implemented as a set of capabilities available for use by a JACK team.

Acknowledgements

The work reported in this paper was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.

References

[1] Karim, S. and Heinze, C. "Experiences with the design and implementation of an agent-based autonomous UAV controller". In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems, Utrecht, The Netherlands, 2005.
[2] Agent Oriented Software Limited, www.agentsoftware.co.uk, 2006.
[3] Searle, J. The Construction of Social Reality, Allen Lane, 1995.
[4] Bratman, M. Faces of Intention: Selected Essays on Intention and Agency, Cambridge University Press, 1999.
[5] Cohen, P. and Levesque, H. "Teamwork", Noûs, Vol. 25, No. 4, pp. 487-512, 1991.
[6] Grosz, B. and Kraus, S. "Collaborative Plans for Complex Group Action", Artificial Intelligence, Vol. 86, pp. 269-357, 1996.
[7] Pynadath, D., Tambe, M., Chauvat, N. and Cavedon, L. "Toward Team-Oriented Programming", LNCS, Vol. 1757, pp. 233-247, 1999.
[8] Tamura, S., Seki, T. and Hasegawa, T. "HMS Development and Implementation Environments". In Deen, M. (Ed.), Agent Based Manufacturing: Advances in the Holonic Approach, Springer, 2003.
[9] The Foundation for Intelligent Physical Agents, www.fipa.org, 2006.
[10] Jarvis, D., Fletcher, M., Rönnquist, R., Howden, N. and Lucas, A. "Human Variability in Computer Generated Forces – Application of a Cognitive Architecture for Intelligent Agents". In Proceedings of SimTecT 2005, 2005.
[11] Heinze, C., Goss, S., Josefsson, T., Bennett, K., Waugh, S., Lloyd, I., Murray, G. and Oldfield, J. "Interchanging Agents and Humans in Military Simulation", AI Magazine, Vol. 23, No. 2, pp. 37-48, 2002.
[12] Koestler, A. The Ghost in the Machine, Arkana, 1967.
[13] Bratman, M.E. Intention, Plans, and Practical Reasoning, Harvard University Press, Cambridge, MA, 1987.
[14] Evertsz, R., Fletcher, M., Jones, R., Jarvis, J., Brusey, J. and Dance, S. "Implementing Industrial Multi-agent Systems using JACK™", LNAI, Vol. 3067, pp. 18-48, Springer-Verlag, 2004.
[15] Rao, A. and Georgeff, M. "BDI Agents: from theory to practice". In Proceedings of the 1st International Conference on Multi-Agent Systems (ICMAS'95), 1995.
[16] Pokahr, A., Braubach, L. and Lamersdorf, W. "A Flexible BDI Architecture Supporting Extensibility". In Proceedings of the 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT-2005), 2005.
[17] Georgeff, M.P. and Lansky, A.L. "Procedural Knowledge", Proceedings of the IEEE (Special Issue on Knowledge Representation), Vol. 74, pp. 1383-1398, 1986.
[18] d'Inverno, M., Kinny, D., Luck, M. and Wooldridge, M. "A formal specification of dMARS", LNAI, Vol. 1365, pp. 155-176, Springer-Verlag, 1998.
[19] Jarvis, J., Rönnquist, R., McFarlane, D. and Jain, L. "A Team-Based Holonic Approach to Robotic Assembly Cell Control", Journal of Network and Computer Applications, Vol. 29, No. 2, pp. 160-176, 2005.
[20] Wooldridge, M. Reasoning about Rational Agents, The MIT Press, 2000.
[21] Allen, J.F. "Maintaining Knowledge about Temporal Intervals", CACM, Vol. 26, No. 11, pp. 832-843, 1983.
[22] Jarvis, B., Corbett, D. and Jain, L. "Reasoning about Time in a BDI Architecture", LNAI, Vol. 3682, pp. 851-857, Springer-Verlag, 2005.
[23] Cohen, P. Empirical Methods for Artificial Intelligence, MIT Press, 1995.
