A DISTRIBUTED CONTROL ARCHITECTURE FOR AUTONOMOUS ROBOT SYSTEMS

Thomas Laengle, Tim C. Lueth, Ulrich Rembold
Institute for Real-Time Computer Systems and Robotics (IPR)
University of Karlsruhe, D-76128 Karlsruhe, F.R. Germany

ABSTRACT

The main advantage of robots and subsystems under distributed control is the decentralized task execution by the system components. In this way, properties desirable in flexible control architectures, such as modularity, fault-tolerance, integrability, and extendibility, are easily obtained, and the concepts of distributed knowledge and decentralized world representation can be used. On the other hand, coordination between the components, for example path planning for collision avoidance between the two manipulators of a two-arm system, is very difficult to guarantee. To illustrate these concepts, the Karlsruhe Autonomous Mobile Robot KAMRO, which is being developed at the IPR, is considered. The robot system consists of several subcomponents: two manipulators, hand-eye cameras, an overhead camera, and a mobile platform. Extensions to the distributed control architecture KAMARA (KAMRO's Multi-Agent Robot Architecture) are described that overcome coordination problems, for example those caused by the independent task execution of the two manipulator systems. Further, it is explained how the decentralized world representation can be used for parallel task execution. The described intelligent control architecture will replace the former control architecture of the autonomous robot KAMRO.

1. Introduction

During the past few years, the need for large-scale and complex systems has become obvious. They are necessary to carry out tasks intelligently in the areas of transportation, manufacturing, and maintenance. Examples are automatically guided transport systems containing many vehicles, complex flexible manufacturing cells, and, eventually, mobile manipulator systems, which could be used for autonomous service applications in an industrial setting [1, 2]. The main problem is usually the design of the intelligent control structure (ICS) for such complex systems, and also for the system components if the overall system consists of several separate systems. Up to now, the control structures of such systems have usually been designed as hierarchical and centralized structures with a top-down process for planning and decision making. The number and complexity of the hierarchical layers determine the time the system requires for a reaction and the quality of a chosen action. In most cases, additional actuators or sensors have to be added during the development cycle to improve the capability of the overall system. In this case, and if the integration of component capabilities is required, it is easy to see the disadvantages of the hierarchical and centralized approach in comparison to its advantages during the initial system design. In contrast, distributed or decentralized control architectures [3-12] reveal their main advantages when it becomes necessary to enhance the system, to integrate components, and to maintain the system. Most of the following properties are well known from the area of computer architectures:

• Modularity: With a predefined framework for information exchange, single parts of the system can be developed and tested independently (e.g., the drive and navigation module of a mobile platform).
• Decentralized knowledge bases: Each subsystem of the overall system may use a local knowledge base that stores the information relevant to the subsystem and can exchange this information with other subsystems if required (e.g., for mobile manipulators, information about obstacles on the platform's path is not important for the manipulator itself).
• Fault tolerance and redundancy: If system-inherent redundancy exists, it should be usable without any error model in case of a broken-down subsystem or another error situation (e.g., if one of several vehicles in a transportation system is damaged, nothing worse should happen than a slower task completion).
• Integrability: Cooperation of subsystems is possible without any change in the control architecture, and all synergetic effects can be used (e.g., any kinematic chain in a multi-manipulator system, or support of a manipulator by an existing mobile platform).
• Extendibility: New system components can be added to the original system, and any improvements (such as reduced task-completion time) come about without any change in the system architecture. New components inform other components about their existence and capabilities (e.g., extension of a sensor system).

The main disadvantage of non-centralized architectures is the difficulty of ensuring that the system fulfils an overall or global goal. Moreover, independent task execution by the system components causes coordination problems between the system agents; in this area, centralized control architectures show their main advantage. It will take quite a while to integrate all these properties into an intelligent control architecture, but distributed and decentralized concepts will be the main approach to this goal. In the second section, a taxonomy for intelligent control architectures is given. In Section 3, the distributed control architecture KAMARA for the mobile assembly robot KAMRO is briefly presented. Here, a new agent model with local world representation is used. This model, together with the communication forms, is explained in Section 4. In distributed systems, task negotiation between the system components is necessary (Section 5). In Section 6, an example is discussed which explains the concepts in detail.


Section 7 deals with coordination between system components. The article closes with an evaluation of the advantages of this new approach and conclusions for future work.

2. Intelligent Control Architectures for Multi-Agent Systems

In principle, complex systems consisting of several executive subsystems (agents) can be divided into three design classes:

• Centralized systems: A decision about the system goal is made by a central mechanism and transmitted to the executive components (Fig. 1a).
• Distributed systems: The decision is made by a negotiation process among the executive components and executed by them (Fig. 1b). In this way, it is possible to fulfil a global goal [13].
• Decentralized systems: Each executive component makes its own decisions to fulfil its individual goal and executes only these decisions (Fig. 1c). Independent of the individual goals of the components, a common goal of the whole system can still be derived if it is obtained by combining the individual ones.

Fig. 1: Execution view of multi-agent systems: a) centralized, b) distributed, c) decentralized

From the definition of an agent, it is possible to describe and explain hierarchical systems (Fig. 2). An agent consists of three parts: communicator, head (for planning and action selection), and body (for action execution) [14].

Fig. 2: Elements of an agent


The communicator connects the head to other agents on the same or a higher communication level. The head is responsible for the action-selection process for the body (centralized approach), organizes the communication among the body's agents actively or passively (distributed approach), or is only a frame that imposes a principle of order in the decentralized approach. The body itself consists of one or more executive components, which can themselves be considered as agents. The executive components can be divided into three classes, as can the components of the process for planning and action selection, i.e., the head of an agent (Fig. 3).

Fig. 3: Three different ways to make decisions, plan, or select actions

The classification is as follows:

• Centralized action selection: Available information is centrally processed by a decision-making component and transformed into an action for the agent's body (linear planner).
• Distributed action selection: Available information is processed by several decision-making components, which communicate and negotiate to reach a decision. Afterwards, the information is transformed locally and globally into an action for the agent's body (Blackboard [15], Whiteboard [16]).
• Decentralized action selection: The available information is processed independently by several decision-making components and transformed locally into their own action decisions for the agent's body (motor schema [17]).

From an execution-oriented point of view, the presented taxonomy not only allows the classification of previously described ICSs for planning and control; it also describes multi-agent systems in the same way as single systems.

3. Two Architectures for One Intelligent System

This concept for describing an intelligent control architecture can be used to explain two different approaches toward controlling the Karlsruhe Autonomous Mobile Robot KAMRO (Fig. 4). A centralized and a distributed architecture for this two-arm robot are explained in the following sections, and their advantages and disadvantages will become clearer.


The robot receives assembly tasks from a cell controller; the tasks are represented by assembly precedence graphs. The robot then travels to a workstation, searches for the necessary assembly parts with its camera system, and performs the assembly autonomously. The control architecture FATE [18] used so far consists of a blackboard planning level that generates situation-dependent, manipulator-specific elementary operations.

Fig. 4: The mobile robot KAMRO

The real-time robot control system RT-RCS executes the elementary operations. The real-time controller is able to control the manipulators independently or in a closed kinematic chain. Therefore, the overall system can be described as a centralized execution architecture with distributed action selection (Fig. 5a). For the independent movement of the manipulators and for the kinematic chain, two different kinds of elementary operations are necessary. It has already been shown that this control architecture is in principle suitable for autonomous assembly tasks by robots. On the other hand, many difficulties have arisen from the extension of the system with miniature hand-eye cameras and from the integration of the mobile platform and the manipulators for mobile manipulation. Similarly, the automatic takeover of a damaged manipulator's task through cooperation of the platform and the functioning manipulator is only realizable by completely redesigning the centralized control architecture. These difficulties are avoidable if a new architecture is implemented which has not only one executive agent (as FATE uses RT-RCS), but multiple agents: the image processing system, the two manipulators, and the mobile platform (Fig. 5b).

Fig. 5: a) Centralized control architecture FATE; b) distributed control architecture KAMARA

These agents have to communicate and negotiate among each other to collect the missing information that is required for an autonomous assembly and to perform the desired task. The new control architecture KAMARA (KAMRO's Multi-Agent Robot Architecture) for distributed intelligent robot systems and their components allows easier control in many respects and also easier component integration. Different types of cooperation for coupled agents, such as closed kinematic chains or camera-manipulator coupling, are also considered in this architecture. The main topic of the following sections is the problem of task distribution between the executive agents.

4. Agent Model and Communication Mechanism

As mentioned before, an agent A consists of a communicator, a head, and a body. In our system description, an agent such as a manipulator is only capable of performing one task at a time, because its body is implemented as a single procedure. On the other hand, a head with a communicator not only has to control the body, but also has to communicate and negotiate with other agents or heads. An important reason for communication is determining the agent that will execute an elementary operation. This means the head (and the communicator) have to deal with several different tasks at one time. Therefore, head and communicator are implemented as variable sets H and C of equal, independent processes: H (head) processes for planning or action selection, and C (communicator) processes for communication and negotiation (Fig. 6):

A = (C, H, B)
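As a minimal sketch of this agent model (all class and method names are illustrative assumptions; the paper gives no implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Head:
    """One planning/action-selection process; an agent may run several."""
    mission_id: int
    state: str = "ready"  # states from Section 5.3: not existing, ready, calculate, mediate

@dataclass
class Agent:
    """A = (C, H, B): communicator processes, head processes, single-procedure body."""
    name: str
    heads: list = field(default_factory=list)          # H: variable set of head processes
    communicators: list = field(default_factory=list)  # C: one communicator process per negotiation
    body_busy: bool = False                            # B: the body executes one task at a time

    def spawn_head(self, mission_id):
        """A communicator creates a new head process for each mission of interest."""
        h = Head(mission_id)
        self.heads.append(h)
        return h

arm = Agent("Mp1")
h = arm.spawn_head(mission_id=1)
```

Note that the body stays a single flag here, mirroring the paper's point that only the head and communicator are replicated.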

Fig. 6: Head and communicator can be several processes

The communication mechanism for all agents and for task distribution or task allocation is blackboard-like. The channel of the distributed system (Fig. 1) holds all executable missions m in a mission set:

M = {m_1, m_2, ..., m_n}

M receives new missions m_{n+1} from the cell planner P or from the agents of KAMRO:

P, A: M = M ∪ {m_{n+1}}

Whether this communication mechanism (channel) is implemented as a blackboard, whiteboard, token ring, etc. is an implementation question (here, a blackboard architecture with a contract-net-like protocol is used). In principle, this multi-agent architecture is also useful on the cell level. In this case, the communication mechanism of one KAMRO robot is the head of a KAMRO agent (distributed action-selection architecture), and it is possible to use more than one KAMRO robot for complex tasks, such as carrying a large object with several robots or loaning one manipulator to a second robot (Fig. 7). Considering the distributed control architecture, it is easy to see that different cooperation forms are needed between the system components:

1. independent task execution,
2. asynchronous communication (teams), and
3. synchronous communication (special agents).
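The mission set and its two update operations might be sketched as follows (class and method names are assumptions, not the paper's actual interface):

```python
class MissionSet:
    """Blackboard-like channel holding the executable missions M = {m_1, ..., m_n}.
    Sketch only; KAMARA's real blackboard implementation is not described in detail."""

    def __init__(self):
        self.missions = {}  # mission identification number -> mission data
        self.next_id = 1

    def post(self, task):
        """P, A: M = M ∪ {m_{n+1}} -- the cell planner or any agent appends a mission."""
        n = self.next_id
        self.missions[n] = task
        self.next_id += 1
        return n

    def delete(self, n):
        """The last receiver to fetch a solution deletes the mission (Section 5.1)."""
        self.missions.pop(n, None)

M = MissionSet()
mid = M.post("PICK(sideplate)")
```

The dictionary keyed by mission number mirrors the paper's use of the identification number n as the reference to a mission and its answer.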


Fig. 7: Distributed cell-controller view

Teams, as sets of cooperating agents, are dynamic in nature: the number as well as the kind of agents may change during task execution. An example is the exchange of parts between the two manipulators; for a limited period of time, cooperation is necessary to reach this goal. The communication for this kind of cooperation is done on a high abstraction level by the agents' communicators. As an example, consider a team formed by the camera and a manipulator of the autonomous mobile assembly robot KAMRO. The cooperation of manipulator and camera system is important when grasping and joining operations are to be performed by the robot. In this case, the camera system must be able to correct the position of the manipulator if needed. Another example is the exchange of an object between the two manipulators, or a regrasping operation to change the gripping configuration of an object. In the first case, the camera and a manipulator must build a team to solve the described problem; in the second case, both manipulators build a team together. Because only brief information exchange is involved, which does not have to be synchronized within a specific time interval, team building (see Fig. 8) is sufficient. The communication between the team agents is asynchronous; for that reason, real-time constraints cannot be guaranteed.
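A team's asynchronous message exchange could be sketched with a simple queue model (the names and the message format are invented for illustration; real delivery timing would be unpredictable, which is exactly why no real-time guarantee holds):

```python
from collections import deque

class TeamMember:
    """A team agent with a mailbox. Messages are delivered to a queue and
    processed whenever the receiver polls -- asynchronous by design."""

    def __init__(self, name):
        self.name = name
        self.inbox = deque()

    def send(self, other, message):
        other.inbox.append((self.name, message))

    def poll(self):
        """Process the next message at the receiver's convenience, if any."""
        return self.inbox.popleft() if self.inbox else None

camera = TeamMember("hand-camera")
manip = TeamMember("Mp1")
# The camera corrects the manipulator's pose estimate during a grasp:
camera.send(manip, ("pose-correction", (0.2, -0.1, 0.0)))
sender, msg = manip.poll()
```

Since the manipulator decides when to poll, nothing bounds the delay between send and receipt, matching the paper's remark about real-time constraints.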

Fig. 8: Team: asynchronous communication

If two manipulators grasp a large or heavy part and thereby close a kinematic chain, asynchronous communication between these system components is not sufficient. In this case, depending on the desired control concept for the kinematic chain, a decentralized architecture (simple reflexive behaviour), a distributed architecture (master-slave tasks), or a centralized architecture is required. For the last case, it is necessary to extend the distributed control concept by introducing special executive agents (centralized approach). These special agents SA have, like all other agents, a head H and a communicator C. The body is allowed to allocate the bodies of other agents, if available, and to control them through special communication channels with high transfer rates (see Fig. 9). During this time, the normal agents have no access to their bodies, since they are being used by the special agent.
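The body-allocation rule for special agents might be sketched as follows (hypothetical names; the exact locking discipline is an assumption of this sketch):

```python
class Body:
    """An executive body, owned by its home agent unless taken over."""

    def __init__(self, home_agent):
        self.home = home_agent
        self.owner = home_agent  # who currently controls this body

class SpecialAgent:
    """A special agent allocates the bodies of other agents (e.g. to control a
    closed kinematic chain centrally) and releases them afterwards."""

    def __init__(self, name):
        self.name = name
        self.allocated = []

    def allocate(self, body):
        # Only a free body (still owned by its home agent) may be taken over.
        if body.owner is body.home:
            body.owner = self
            self.allocated.append(body)
            return True
        return False

    def release_all(self):
        # Normal agents regain access to their bodies.
        for b in self.allocated:
            b.owner = b.home
        self.allocated.clear()

left, right = Body("Mp1"), Body("Mp2")
chain = SpecialAgent("two-arm")
chain.allocate(left)
chain.allocate(right)
```

While allocated, a second attempt to take the same body fails, which models the paper's statement that normal agents have no access to bodies used by a special agent.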

Fig. 9: Special agent: centralized planning for other agents' bodies

In other words: if the information exchange between agents of the same team increases so dramatically that it creates a bottleneck in the communication channels, then these agents have the possibility of forming a closer internal relationship (see Fig. 10). Because special agents change the structure of the control architecture while they are active, they should only be used if no other type of cooperation is suitable.

Fig. 10: Communication for high transfer rates


5. Task Negotiation in KAMARA

If a new mission from the mission set M is to be assigned, many agents may be able to work on that task. As a consequence, an important problem to be solved is the negotiation among the agents that compete for the mission. This negotiation process can be performed by

• a centralized mediator (only one mediating agent exists in the system),
• a selected candidate (every agent has the ability to mediate), or
• all (or many) candidates.

The implementation of a centralized mediator conflicts with important goals of the new system architecture: the disadvantageously inflexible information flow would unavoidably pass through a slow centralized planning system, which would have to negotiate with many system components. Similar to human behavior, the selected-candidate approach is chosen here. In this concept, the mediating agent demands a mission-solving evaluation from all other competing candidates, compares these offers, and chooses the best one.

5.1 Task Description

Because all agents are able to negotiate with the competing agents, and the communication is managed by a blackboard architecture (the mission set), it is necessary to represent not only the task itself but other information as well. First of all, it is important to store the agent identification of the mediator and a list of all other candidates that compete for the mission. This way, the mediator can be identified, and the mediator knows which other agents are involved in the negotiation process. Since the blackboard structure controls the information exchange among the system components, the evaluations of the competing agents are also stored in the task information block. As mentioned before, all agents that work on subtasks have to send their solutions (or a signal) to the responsible initiators. Thus, the mission representation must also indicate the agent that appended the subtask to the mission set. To be specific, it is necessary to store the identification numbers of the desired head and communicator, since many heads and communicators may exist for one agent. Since an agent head that needs further information to perform its task is blocked until the desired information is present, a correspondence field informs this agent that the solution is stored on the blackboard structure. To identify the solution, the mission identification number n is used as a reference in the solution representation. Because the solution representation can be very complex and is not always known in advance, it should not be integrated into the mission description. A mission is therefore represented as a tuple

m = (n, I, R, P, t, A, V, E)

The field I contains a list of the mission initiators, t represents the task itself, R is a set of receivers which are interested in the mission solution, A is a list of the competing agents, and V their valuations. The first candidate in the candidate list A is the mediator between the competing agents; all other entries are the candidates. When the mediator selects the agent that has to perform the task, a corresponding message is sent to all agents through the execution set E. In this field, all competing agents can see whether they have been chosen to work on the mission or not. The set P contains signals to the initiators or receivers which indicate that the mission solution is presented on the blackboard structure. A blocked initiator or receiver head that is waiting for information has to examine this field until a signal is presented. In order to avoid problems with outdated messages on the blackboard while still holding the information available as long as necessary for other interested agents, the last receiver that fetches its information from the blackboard has to delete the mission and the corresponding answer from the mission set. This way, information is available as long as necessary and is deleted once all interested agents have received the solution.

5.2 Selection of the Mediator
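The mission tuple m = (n, I, R, P, t, A, V, E) introduced in Section 5.1 can be written down as a small data structure (the field names follow the paper; the class itself is only an illustrative sketch):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Mission:
    """m = (n, I, R, P, t, A, V, E) -- the mission representation of Section 5.1."""
    n: int    # mission identification number
    I: list   # initiators of the mission
    R: list   # receivers interested in the mission solution
    P: list   # signal field: solution is present on the blackboard
    t: Any    # the task itself
    A: list = field(default_factory=list)   # competing candidates; A[0] is the mediator
    V: dict = field(default_factory=dict)   # valuations of the candidates
    E: dict = field(default_factory=dict)   # execution flags set by the mediator

m1 = Mission(1, ["M"], ["M"], [], "PICK(sideplate)")
m1.A = ["Mp1", "Mp2"]  # Mp1 registered first, so it mediates
```

This ordering of fields matches the tuples used in the example of Section 6 (e.g. m1 = (1, {M}, {M}, nil, PICK(sideplate), ...)).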

The negotiation procedure starts when a new mission appears in the mission set. The communicator of every agent, whether the body of this agent is already performing another mission or not, searches the mission set for tasks it can solve. One of these competing agents should negotiate with the rest of them. If the candidate list A is empty, the first competing agent head acts as mediator and stores its identification number in the first position of the field A. When another agent becomes interested in solving the mission, this agent head evaluates its problem-solving ability and sends it to the mediator by writing the information into the corresponding position of the valuation field V. The mediator calculates its own ability to work on the mission, waits for an a priori defined time τ for the evaluations of all other agents a2, a3, ..., an, compares these evaluations, and chooses the best agent ai of the entire candidate list A = (a1, a2, ..., an) to work on the mission. This way, candidates that cannot calculate their evaluation fast enough, or that are disabled, are excluded from the negotiation process. Because all agents that have the ability to work on the mission are included in this selection procedure, it may occur that an agent which was previously working on a different task enters the competition later, when its body is free.

5.3 State Transition Diagram for the Agent Head

To describe the above-mentioned negotiation process briefly, an internal state transition diagram for the agent head is presented. The state transition diagram shown in Fig. 11 consists of four states: mediate, calculate, ready, and not existing. If a new mission appears in the mission set, the communicator of an interested agent initiates a new process copy of the agent's head (state transition from not existing to ready). The mediator head then changes its state to mediate, and all other candidates change their state to calculate.

In both states, calculate and mediate, the agent calculates its ability to solve the mission. All candidates in the state calculate store their calculated value in the corresponding position of the evaluation field V. When this field is complete or the defined time constant τ has elapsed, the mediator selects the candidate with the best ability to complete the mission and stores the execution information in the execution field E. Thereby, all candidates are informed whether they have been chosen to work on the mission or not. The heads of all candidates not selected to work on the mission change their state back to not existing; the chosen candidate changes its state to ready. If two (or more) agents send equivalent valuations that are also the highest in the negotiation process, the mediator randomly selects one of these competing agents. Note that the physical agent is only capable of performing one task at a time, since the agent body is implemented as a single procedure.

Fig. 11: State transition diagram for the agent head (states: not existing, ready, calculate, mediate)
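The mediator's selection rule from Sections 5.2 and 5.3 can be sketched as a single function (a simplification: ties are broken deterministically here by taking the first best candidate, whereas the paper breaks ties randomly):

```python
def mediate(candidates, valuations):
    """Select the best candidate after the waiting time tau has elapsed.

    candidates: the ordered candidate list A (candidates[0] is the mediator).
    valuations: the valuation field V; candidates that answered too late or
    are disabled simply have no entry and are excluded from the negotiation.
    Returns the chosen agent and the execution field E.
    """
    answered = [c for c in candidates if c in valuations]
    if not answered:
        return None, {}
    best = max(answered, key=lambda c: valuations[c])
    # Execution field E: 1 for the chosen agent, 0 for all others.
    E = {c: int(c == best) for c in candidates}
    return best, E

best, E = mediate(["Mp1", "Mp2"], {"Mp1": 0.30, "Mp2": 0.60})
```

With the valuations of the example in Section 6 (30% and 60%), Mp2 is chosen and the execution field becomes (0, 1).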

5.4 Evaluation

The main goal of the described protocol is to fulfil the properties of fault tolerance and system extendibility. For that reason, all information exchange has to be performed through the mission set (direct agent communication is not allowed). As a consequence, it is neither necessary to recognize a damaged subsystem nor to reconfigure the control architecture to work with damaged components. By using several agent heads, parallel planning is possible whenever changes in the environment information do not have to be integrated into the action-selection process. Furthermore, new components can easily be integrated into the system architecture. Comparing both control architectures (the centralized system FATE and the distributed concept KAMARA), it can be shown that the temporal resolution of a coordinated effort is better in the distributed approach, because independent task execution and task planning are often possible.

6. Agents and Tasks in KAMARA

In the KAMARA system, several agents exist that work together to perform the desired task. Consequently, a communication protocol between the agents is required. This language consists of operations that address an agent to perform a task and can be used to involve other agents in the solution process. In KAMARA, the agents perform the following operations:

• Manipulator: A manipulator is able to perform the implicit elementary operations PICK and PLACE.
• Two-arm manipulator: A two-arm manipulator is also able to perform the implicit elementary operations PICK and PLACE. Because this agent consists of two independent actuators which make up a superagent, its mission valuation is much higher than the calculated value of a single manipulator when picking up a heavy or large object.
• Manager: This agent is responsible for the interpretation of a complex mission the system has to perform. It decomposes a complex task into its executable parts.
• Database: The database is able to offer world-state information the agents need to perform their tasks.
• Overhead camera: This agent type is able to determine the positions of objects by examining a wide environmental area.
• Hand camera: This sensor type is able to determine exact relative object positions based on an inexact absolute position estimate. This information is necessary for a manipulator, for example, just before performing a grasping operation. It can also be used to extract object positions like the overhead camera.
• State controller: This agent is responsible for the blackboard structure and the state of the other system components. One important task is to monitor the period of time a mission is held on the blackboard for execution. If the time stamp exceeds a given threshold, the state controller can search the protocols to determine whether a system component has recently performed a similar mission. If so, this component may be damaged, overloaded with tasks, or blocked. Thus, the mission can be given to the cell controller, so that another system, independent of KAMARA's control, can perform the task.
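The agent capabilities listed above can be summarized in a small lookup table (a sketch of the description, not KAMARA's actual interface; the key and operation names are assumptions):

```python
# Which KAMARA agent types can compete for which operations, per the list above.
CAPABILITIES = {
    "manipulator":     {"PICK", "PLACE"},
    "two-arm":         {"PICK", "PLACE"},        # valued higher for heavy/large parts
    "overhead-camera": {"find_position"},
    "hand-camera":     {"find_position"},
    "database":        {"find_weight"},
}

def bidders(operation):
    """All agent types whose communicators would compete for such a mission."""
    return sorted(a for a, ops in CAPABILITIES.items() if operation in ops)
```

For a find_position mission, both camera types bid, which is exactly the competition (HK1, OK, HK2) shown in the example below.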
The state controller also monitors system-component evaluation: if a system component estimates its ability to perform a mission as very high, but execution of the mission often fails, the state controller informs the corresponding agent and in this way reduces its evaluation coefficient to zero. The communication and negotiation concept between the agents will be demonstrated with the following example of an assembly task, the Cranfield Assembly Benchmark. An assembly task is represented by a precedence graph whose nodes are the individual subtasks; it thus represents all possible sequences of subtasks by which a given task can be performed. For the Cranfield Benchmark shown in Fig. 12, the assembly description is given in Fig. 13. This precedence graph only describes the goals the system has to reach, whereas the executing agent has to decide how these goals can be achieved, depending on the environment at execution time. Therefore, the agent head uses the system's sensor information to expand this implicit representation to an explicit one.
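The expansion of the implicit precedence-graph representation can be sketched as follows (the edges shown are illustrative, not the complete Cranfield graph):

```python
# A precedence graph as predecessor sets: an operation becomes executable
# once all of its predecessors are done.
PRED = {
    "PICK(sideplate:1)":  set(),
    "PLACE(sideplate:1)": {"PICK(sideplate:1)"},
    "PICK(shaft)":        set(),
    "PLACE(shaft)":       {"PLACE(sideplate:1)", "PICK(shaft)"},
}

def executable(done):
    """Operations the manager may append to the mission set next."""
    return sorted(op for op, pred in PRED.items()
                  if op not in done and pred <= done)
```

The manager agent repeatedly appends the currently executable operations to the mission set and, as each Finished signal arrives, recomputes the executable frontier until the graph is exhausted.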

Fig. 12: Cranfield Assembly Benchmark

Fig. 13: Assembly precedence graph (PICK and PLACE operations for sideplate:1, sideplate:2, spacingpiece, shaft, and lever)

The interpretation of a complex mission is performed only by the manager agent, which competes for it. As a consequence, a new manager head process is started in the system. This head is responsible for the whole execution process of the assembly mission. Thus, it starts with the task decomposition, taking the precedence graph into account. In the example described above, only the operation PICK(sideplate) can be performed initially, and it is appended to the mission set:

m1 = (1, {M}, {M}, nil, PICK(sideplate), nil, nil, nil)

This mission, one implicit elementary operation, can be performed by many system components, for example all manipulators, perhaps also a special agent or a team of other agents. All these agents are involved in the solution process and have to calculate their ability to solve the mission. For example, both manipulators of KAMRO are interested in the mission, and the manipulator Mp1 is the first candidate that competes for it:

m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), nil, nil)


As a consequence, two new processes of the agent's heads are started, the state transition to mediate is performed by agent head 1, and the second agent head switches to the state calculate. In these states, both manipulators start to calculate their mission solution valuation. Therefore, these system components need further information, for example, the position and the weight of the sideplate. Because both manipulators are not able to determine the position information, other agent types have to be involved. If, for example, the agent Mp2 is the initiator of the command, a new mission is appended to the mission set: m2 = (2, {Mp2}, {Mp2}, nil, find_position(sideplate), nil, nil, nil) After a short time, manipulator Mp1 is also interested in the obstacle position: m2 = (2, {Mp1, Mp2},{Mp1, Mp2}, nil, find_position (sideplate), nil, nil, nil) Both agents also need weight information: m3 = (3, {Mp1, Mp2},{Mp1, Mp2}, nil, find_weight (sideplate), nil, nil, nil) In mission m 2, there are two types of system components which are able to calculate this position, and are therefore interested: the overhead camera OK, and both hand-eye-cameras (HK1, HK2). As described above, a higher problem solving valuation for OK is calculated: m2 = (2, {Mp1, Mp2},{Mp1, Mp2}, nil, find_position (sideplate), (HK 1 , OK, HK 2 ), (10%, 95%, 10%), nil) On that account, the mediator HK 1 negotiates between the candidates and modifies E to inform the competing agents whether they are chosen to work on the mission or not: m2 = (2, {Mp1, Mp2},{Mp1, Mp2}, nil, find_position (sideplate), (HK 1 , OK, HK 2 ), (10%, 95%, 10%), (0,1,0)) The component OK is able to calculate the position without further information. 
Because both Mp1 and Mp2 are waiting for the position and are registered in I and R, both agents are appended to P:

m4 = (4, {OK}, nil, {Mp1, Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)

Mp1 examines the blackboard, recognizes that the information is available, and deletes its identification number from P:

m4 = (4, {OK}, nil, {Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)

The initiating mission m2 stays on the blackboard so that another agent that demands this information can find it immediately by means of the mission identification number. When Mp2 fetches the information from the blackboard, the post list is empty, and Mp2 thereby deletes message m4 and mission m2 from the blackboard. The database DB calculates the object weight in an analogous way:

m5 = (5, {DB}, nil, {Mp1, Mp2}, Answer(3, (17g)), nil, nil, nil)

Now the competing manipulators are able to determine their ability to perform the desired PICK task, taking into account the distance to the object, the weight of the object, and perhaps other information. The mediator compares all offers and, by means of the execution field, starts the best qualified manipulator, here Mp2, to perform the task:

m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), (30%, 60%), (0,1))

Agent Mp2 receives the execution signal and starts the PICK operation by sending a sequence of executable operations to its agent body. Immediately before the grasping operation, the exact object position has to be determined. The integration of the hand camera to obtain this exact position is performed by the algorithm described above. This way, manipulator Mp2 holds all the information needed to execute the grasping operation:

m6 = (6, {Mp2}, nil, {M}, Finished(1), nil, nil, nil)

After execution of the PICK operation, the manager receives the signal that mission m1 is completed and appends the next executable implicit elementary operation to the mission set. When the precedence graph is finished, the manager sends a signal to the cell planning system and leaves the system.

7.

Coordination between system agents

In a distributed system, coordination between system components is not easy to obtain. For example, the manipulator agents cannot avoid collisions when they evaluate their action selection functions independently: only the positions in the environment can be perceived from the sensor information. Therefore, these agents need further information that cannot be derived from their sensor systems. This information can either be 1. information about where other system agents are positioned, or 2. information about the plans of other system agents. By using an action selection function that depends on the internal world representation, on sensor data, and on information processed and offered by other system components, it is possible to integrate the behaviour of other system components into an agent's own action selection process. For coordination between the components, the special agent concept presented above can be used. If, for example, the spatial distance between both manipulators is so small that asynchronous information exchange is not sufficient to guarantee collision avoidance, these agents can enter a closer internal relationship; the resulting special agent then acts as a single component in the system. This is the case if, for example, both manipulators need the same spatial area at the same time to carry out their missions. Accordingly, three steps of collision avoidance can be distinguished. First, both manipulators carry out their missions independently. For this, they use their internal world models, the sensor information, and the information offered by the other system components to calculate the next agent action. The information exchange is asynchronous, via the mission board (information channel). Normally, this exchange is sufficient for the action selection functions to avoid collisions. If the trajectories of both manipulators lead through a common space, the action selection function of the agent introduces new tasks t1 and t2 on the mission board which force both manipulators to form a special agent (SA):

t1 = (manipulator1:stop, SA)
t2 = (manipulator2:stop, SA)

This way, the agent bodies are blocked until the communicators of both agents have built the special agent. In the special agent, one planning component, and thereby one action selection function, calculates the operations for both manipulators synchronously:

t = (t1, t2)

As a consequence, it is also possible to stop one manipulator for a time interval or to reduce its velocity in order to avoid collisions. For example, one manipulator can wait at a rest position until the other has performed its operation. This approach is also used in the centralized control architecture FATE.
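The synchronous planning inside such a special agent can be sketched as follows; the scheduling function and the conflict predicate are illustrative assumptions, not part of the KAMARA or FATE implementations:

```python
def synchronize(plan1, plan2, conflict):
    """Single action selection for the merged special agent: emit
    (op1, op2) pairs; when the next operations of both manipulators
    need the same spatial area, manipulator 2 waits at a rest
    position until manipulator 1 has passed through."""
    joint, i, j = [], 0, 0
    while i < len(plan1) or j < len(plan2):
        a = plan1[i] if i < len(plan1) else "wait"
        b = plan2[j] if j < len(plan2) else "wait"
        if a != "wait" and b != "wait" and conflict(a, b):
            joint.append((a, "wait"))   # manipulator 2 waits this step
            i += 1
        else:
            joint.append((a, b))
            i += 1
            j += 1
    return joint

# Example: both arms want the common workspace in their first step.
in_common = lambda op: op.startswith("common:")
joint_plan = synchronize(
    ["common:PLACE(sideplate)", "retract"],
    ["common:PICK(crossbar)", "retract"],
    conflict=lambda a, b: in_common(a) and in_common(b))
print(joint_plan)
```

Because a single planner now produces both operation streams, stopping or delaying one arm is a local decision of one action selection function rather than a negotiation between two independent agents.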
When the critical spatial area has been passed, both agents (A) are reintroduced to carry out their individual plans:

t = ((manipulator1:stop, A), (manipulator2:stop, A))

This way, the special agent leaves the multi-agent system, and the assembly task can be finished by two independently working manipulators.

8.

Conclusion

In real complex robot systems, unexpected situations often occur that cannot be taken into consideration in advance. For example, a system component, or a subsystem of another system, may obstruct the scene to be examined by the overhead camera. In this case, a centralized control structure must start an error recovery procedure to determine which components block the system; but if a component of another robot covers the scene, a centralized structure cannot resolve the situation at all. In KAMARA, the overhead camera appends a mission to the mission set requesting the blocking component to leave the scene. If the state controller responsible for the blackboard recognizes that a mission cannot be performed by the system itself, it delivers the mission to the cell planning system, which involves other systems in the solution process.

It is also possible that a whole system or a system component stops working. Centralized control architectures do not have the fault tolerance to overcome such problems: first, the system is often unable to recognize a damaged component; furthermore, it is not easy to reconfigure the control architecture to continue working when components are damaged. In KAMARA, damaged agent heads need not be recognized explicitly, because such components simply no longer compete for missions. If an agent body is damaged, its head calculates a valuation that is too high for the real ability; the state controller then perceives that this agent fails frequently during task execution and sends a message to the agent head.

Finally, the integration of a new system component into a complex centralized system is often very difficult because it is not obvious where the system must be modified. In KAMARA, new system components can easily be added because the system structure does not have to be modified; the integration is performed by the negotiation process.

9.

Acknowledgement

This research work was performed at the Institute for Real-Time Computer Systems and Robotics (IPR), Prof. Dr.-Ing. U. Rembold and Prof. Dr.-Ing. R. Dillmann, Faculty for Computer Science, University of Karlsruhe, and was supported by the national research project on artificial intelligence (SFB 314), funded by the German Research Foundation (DFG).

10.

References

1. Rembold, U.; Lueth, T.C.; Hörmann, A. Advancement of Intelligent Machines. ICAM JSME Int'l Conf. on Advanced Mechatronics, Tokyo, Japan, August 2-4, 1993, pp. 1-7.
2. Schraft, R.D. Vom Industrieroboter zum Serviceroboter - Stand der Technik, Chancen und Grenzen. ISRR VDI/VDE-GMA Intelligente Steuerung und Regelung von Robotern, Langen, Nov. 9-10, 1993.
3. Beni, G.; Hackwood, S. Coherent swarm motion under distributed control. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992, pp. 39-52.
4. Camargo, R.F.; Chatila, R.; Alami, R. A Distributed Evolvable Control Architecture for Mobile Robots. ICAR Int. Conf. on Advanced Robotics, Pisa, Italy, June 4-6, 1991, pp. 1646-1649.
5. Drogoul, A. When Ants Play Chess. MAAMAW Europ. WS on Modelling Autonomous Agents in a Multi-Agent World, Neuchatel, Switzerland, August 24-27, 1993.

6. Kotosaka, S.; Asama, H.; Kaetsu, H.; Endo, I.; Sato, K.; Okada, S.; Nakayama, R. Fault Tolerance of a Functionally Adaptive and Robust Manipulator. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993, pp. 294-300.
7. Lueth, T.C.; Laengle, Th. Task Description, Decomposition and Allocation in a Distributed Autonomous Multi-Agent Robot System. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Munich, Germany, Sept. 12-16, 1994, pp. 1516-1523.
8. Noreils, F.R. An Architecture for Cooperative and Autonomous Mobile Robots. ICRA IEEE Int. Conf. on Robotics and Automation, Nice, France, 1992, pp. 2703-2710.
9. Ohmori, Y.; Nakauchi, Y.; Itoh, Y.; Anzai, Y. A Task Allocation Algorithm for Multiple Mobile Robot Environment. 2nd ICARV Int. Conf. on Automation, Robotics and Computer Vision, Singapore, September 16-18, 1992, pp. RO-15.3.
10. Ozaki, K.; Asama, H.; Ishida, Y.; Matsumoto, A.; Kaetsu, H.; Endo, I. The Cooperation among Multiple Mobile Robots using Communication System. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992.
11. Tigli, J.Y.; Occello, M.; Thomas, M.C. A Reactive Multi-Agents System As Mobile Robot Controller. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993, pp. 2008-2014.
12. Yuta, S.; Premvuti, S. Coordinating Autonomous and Centralized Decision Making to Achieve Cooperative Behaviors between Multiple Mobile Robots. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Raleigh, NC, USA, July 7-10, 1992, pp. 1566 ff.
13. Demazeau, Y.; Mueller, J.-P. Decentralized Artificial Intelligence. North-Holland, 1992.
14. Steiner, D. A Preliminary Agent Model: Cooperating Agents. Deliverable D-II.2.ii, IMAGINE, Esprit Project 5362, Siemens, 1991.
15. Hayes-Roth, B. A blackboard architecture for control. Artificial Intelligence 26 (1985), pp. 251-321.
16. Thorpe, C.E., Ed. Vision and Navigation - The Carnegie Mellon Navlab. Kluwer Academic Publishers, Boston, 1990.
17. Arkin, R.C. Motor Schema-Based Mobile Robot Navigation. The International Journal of Robotics Research, 8, 4 (1989), pp. 92-112.
18. Hörmann, A.; Meier, W.; Schloen, J. A control architecture for an advanced fault-tolerant robot system. Robotics and Autonomous Systems, 7 (1991), pp. 211-225.
