Flexible Wireless Communication Network for Mobile Robot Agents¹

Peter Wilke ⊗⊕, Thomas Bräunl ⊕

⊗ Lehrstuhl fuer Programmiersprachen, Institut fuer Informatik, Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Martensstrasse 3, 91058 Erlangen, Germany
⊕ Centre for Intelligent Information Processing Systems, Dept. of Electrical and Electronic Engineering, The University of Western Australia, Nedlands WA 6907, Australia
e-mail: {wilke,braunl}@ee.uwa.edu.au

Abstract

For intelligent robots in a multi-agent system, communication is essential for cooperative behavior. Here we describe the explicit communication between individual robots acting as team members of a RoboCup team playing soccer. The robots are based on the EyeBot platform. An overview of published communication systems and a discussion of their advantages and drawbacks is followed by an introduction to multi-agent systems and the problems we faced when applying them to the task of playing soccer. We then describe the wireless communication network in detail, including the EyeBot platform, message structures, self-configuration and error recovery. The communication system allows the transmission of messages between individual robots, broadcasts, and communication with a remote computer workstation. It forms a layer beneath the multi-robot console, which is the user interface, and above the EyeBot hardware.

1 Introduction

Coordinated multi-agent robotic systems have been studied for quite some time. Among them are Fukuda and Sekiyama [Fukuda94], who studied an early cellular robotic system, i.e. forming a larger robot by docking small robots guided by infra-red photodiodes, and Premvuti and Yuta [Premvuti90], who focused on multi-agent robotic systems using robots equipped with ultrasound emitters and sensors, a position estimation mechanism and a communication system to transmit the position. Expected benefits of multi-agent systems include:
• Improved system performance on tasks with a high degree of parallelism, by cooperating, dividing the task into smaller parts and performing them individually,
• Completion of tasks that individual robots cannot complete,
• Distributed sensing, i.e. extending the range of each robot's private sensors by sharing the information detected by team members,
• Increased system redundancy and reliability because of the reduced complexity of the individual team members.

¹ This work was partly supported by a grant from Lehrstuhl fuer Programmiersprachen, Institut fuer Informatik, Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Martensstrasse 3, 91058 Erlangen, Germany.

There are drawbacks as well, among them:
• Monotonically decreasing marginal benefit each time a new team member is introduced,
• Increased communication overhead,
• Crowding of the available space,
• Increased overall cost, i.e. costs for communication hardware, computation and energy,
• Decreased robustness through the introduction of additional parts,
• Handling of noisy channels,
• Increased uncertainty concerning team members' intentions.

Efficient, robust and reliable communication is one of the key features in the design and implementation of a robot system solving real-world problems. In the following sections we focus on the software implementation of the communication system for the EyeBot controller [Bräunl97] (see section 4.5) for general-purpose multi-agent tasks, one example being robot soccer [Bräunl98a, Kitano97, RoboCup97, RoboCup98]. The main issues to be discussed are the following:
• Taxonomy
  • Premvuti and Yuta [Premvuti90]: active or non-active cooperation, level of independence and types of communication used.
  • Dudek et al. [Dudek93]: team size, communications topology, communications bandwidth, team reconfigurability and team processing ability.
  • Cao et al. [Cao95]: architecture, communications structure and other agents' intentions, capabilities, states and beliefs.
• Social Organization and Structure
  • Mataric [Mataric92]: behavior bases: homing, aggregation, dispersion, following and safe wandering.
• Inter-Robot Communication
  • content of the message,
  • guaranteeing communication,
  • practical systems:
    • Fung [Fung94]: UHF wireless transmitter and receiver to communicate position data,
    • Rude et al. [Rude97]: the wireless communication network IRoN supports both explicit and implicit communication,
    • Wang and Premvuti [Wang94]: carrier sense multiple access protocol using a second microcontroller to control the transfer of message frames across the network.
• Network Layers
  • Electrical Layer: composed of the wireless module and the RS-232 port on the EyeBot controller.
  • Link Control Layer: initializes the physical link, controls the normal data interchange, terminates the link at the end of transmission and handles abnormal conditions such as invalid responses, loss of synchronization, and faults arising from anomalies in the communications link.
  • Network Layer: responsible for supporting communication between the link layer and any user application.

2 Multi-Agent System

In this context we assume that cooperative behavior in an intelligent² multi-agent robotic system requires individual agents to communicate with one another through either explicit or implicit means. To decide which means are appropriate, the important features of the communication system must be identified (see [Fukuda94]), among them:
• Synchronization: Are there tasks which require coordinated behavior of the robots involved?
• Information exchange: Is there information which should be exchanged to enhance the performance?
• Negotiations: Are there constraints (e.g. time, resources) whose fulfillment requires decisions which have to be negotiated at runtime?

Numerous studies [Werner90, MacLennan91, Franklin95] have been made to determine if and to what extent communication is important for cooperation. All these studies come to the conclusion that some level of communication is required in cooperating robotic teams:
• Communication range: The performance of a communication system is not proportional to its range [Agah95]. The communication range can be calculated by minimizing the communication delay time between the moving robots when the average required bandwidth for each agent is given [Yoshida95].
• Communication content: The task efficiency is influenced by the communication content [Balch95] and the form in which the content is transmitted. Three forms can be distinguished: no communication at all, transmission of state information and transmission of goal information. The bottom line is:
  • For tasks with little implicit communication, introducing explicit communication can improve the performance significantly.
  • In tasks with implicit communication, no additional communication is necessary.
  • More complex communication strategies (goal) offer little benefit over simpler communication methods (state).
• Guaranteeing communication: The communications architecture should - in the optimal case - be adaptive, reliable, without undue delays and resistant to faults. Fault tolerance and the elimination of faults were the aim of a deadlock-free cooperative protocol solving a sorting task [Lin94]. In this case broadcast communication is used after a deadlock is detected. Other approaches use point-to-point communication and/or introduce levels of abstraction.

² In the absence of a widely accepted definition of this term we rely on the common-sense interpretation of this expression.

3 Wireless Communication Network

3.1 Related Work

3.1.1 Simple Position Sensing Networks

In the approach of Fung et al. [Fung94] the position of a robot is obtained from infra-red sensor data and is then transmitted to the other robots via a two-way radio link. Robots are identified by a unique code which precedes the transmitted position data. The received information is decoded and processed by the robot's on-board microprocessor. The radio link uses a UHF transmitter/receiver and an encoding IC which checks whether the address matches the robot's ID and then transfers the data directly to the microprocessor. The communication network suggested by Fung et al. is tailor-made to communicate very specific data between agents. The system is therefore inflexible to any changes of the communication system.

3.1.2 Inter-Robot Network IRoN

IRoN is a robot communication network designed for modest cooperation, yet flexible enough to support a range of tasks [Rude97]. Cooperation is achieved by exchanging state variables using implicit communication; in addition, explicit communication routines are available. The signal-to-noise ratio is increased at the expense of bandwidth by using a spread spectrum technique at 19200 bits/second over a commercial radio modem, i.e. the energy of each single bit is spread over the entire frequency band. Each team member stores a set of every other agent's state variables and is able to transmit them. The Netserver module is a process in the multi-tasking environment of each robot and handles all transmissions. This process has two interfaces: one for the modem and one for the processes exchanging data. The modem is connected to the RS232 interface on the host robot processor. Implicit communication is realized by updating the robot's state variable copy in each team member at regular intervals. Overflow and deadlock are avoided by giving implicit communication priority over explicit communication. IRoN uses token ring communication to perform time division multiplexing, which is relatively easy to implement.
IRoN is a high performance robot network able to perform both implicit and explicit communication. The drawback is that the implicitly exchanged state variables may not be suitable and/or sufficient for all robot applications, i.e. the complexity of the user processes increases while the Netserver module is kept simple. Implicit communication is based on explicit communication, i.e. it can be implemented on every system that offers explicit communication. IRoN can easily be modified to maintain additional state variables and/or to add new processes that handle the communication on each host.

3.1.3 Carrier Sense Multiple Access with Collision Detection

The CSMA protocol shares the channel between the various agents. This sophisticated technique is more difficult to implement than token ring communication but has a smaller overhead in each message [Wang94]. It is based on the CSMA/CD protocol and allows wireless inter-robot communication among multiple autonomous mobile robots without the need for a centralized supervisor. Two difficult problems have to be solved to use CSMA: first, CSMA/CD traditionally relies on a centralized mechanism to detect collisions, and second, detecting a collision is difficult for a unit while it is itself sending or receiving.

These problems are solved by two changes to the protocol. First, the network software ensures that every node transmits messages of an individual, node-unique length. Second, the network software monitors the channel before and after an attempt to send. This avoids starting to send while the channel is in use and detects that one or more nodes started sending at the same time. Therefore, all nodes involved are able to detect a collision, except the one transmitting the longest message, which must be notified by the others. CSMA requires a transceiver, a modem, an HDLC controller and a micro-controller to implement the protocol. The processes running on the micro-controller receive the message and deliver the message frames to the robot system, i.e. frames are identified within the continuous data stream and those addressed to the current node are forwarded to the robot system. The nodes are in receive mode except when they are sending. The transmission mechanism is responsible for the transmission of messages from the robot and for the detection of collisions.
This modified CSMA protocol is quite complicated and difficult to implement, and it requires a significant amount of processing capacity on the robot system. Its major advantages are the smaller overhead in each message and the decentralized collision detection.
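The core of the scheme from [Wang94] can be sketched as follows: listen before sending, transmit a frame of a node-unique length, and keep listening right after the own transmission ends; if the channel is still busy, a longer colliding frame is on the air. This is only an illustrative outline under those assumptions; the helper functions channel_busy(), send_frame() and msleep() are hypothetical platform hooks, not the implementation described in the paper.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical platform hooks (carrier detect, raw send, delay). */
extern bool channel_busy(void);
extern void send_frame(const void *buf, size_t n);
extern void msleep(int ms);

/* Returns true if the frame is assumed delivered, false on a detected collision. */
bool csma_send(int my_unique_len, const void *frame)
{
    while (channel_busy())                     /* defer while someone else sends */
        msleep(1);

    send_frame(frame, (size_t)my_unique_len);  /* transmit the node-unique-length frame */

    /* Because every node uses a different frame length, a collision means that
     * at least one other (longer) frame is still occupying the channel now. */
    if (channel_busy())
        return false;                          /* collision detected locally */

    return true;
}
```

The one node whose frame was the longest sees a free channel after sending and must be told about the collision by the shorter senders, as described above.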

3.2 The EyeNet Approach

3.2.1 Motivation

Our EyeBot robots [Bräunl97] act as autonomous agents, independent of any global sensor systems and without the need for remote computing on a host system. All sensor data evaluation, control and planning is done locally on board the robot. There are several reasons to have a communication network in this scenario:
1. To allow robots to communicate with each other, e.g. to share sensor data, cooperate on a common task or devise a shared plan.
2. To remotely control one or several robots, e.g. giving low level driving commands or specifying high level goals to be achieved.
3. To monitor robot sensor data, e.g. display camera data from one or more robots or record a robot's distance sensor data over time.
4. To run a robot with off-line processing, e.g. a combination of the two previous points: all sensor data is sent to a host where the computation takes place, and the resulting driving commands are relayed back to the robot.
5. To create a (multi-) robot monitoring console, e.g. to monitor each robot's position, orientation and status in a multi-robot scenario in a common environment. This allows a post-mortem analysis of a robot team's performance and effectiveness for a particular task.

3.2.2 Electrical Interface

The EyeBot serial port interfaces to the wireless transceiver module. The operation of several wireless modules at the same frequency is organized by time division multiple access. One of the three serial ports of the EyeBot platform is on the main CPU board and complies with the RS232 data transmission standard. The other two are low-voltage serial ports controlled by an ST16C552 Dual UART chip and can be used to drive the wireless modules.

3.2.3 Wireless Module

The data is transmitted half-duplex at speeds of up to 40 kbit/s over distances of up to 30 m, using an RF Solutions FM Radiometrix transceiver as the UHF radio module. The data is modulated onto an FM carrier wave at 418 MHz or 433 MHz, and the radiated power from a quarter-wave antenna is 0.25 mW.

The EyeBot controller does the data coding and decoding and switches between send and receive mode, thereby establishing a data link between the serial ports of the communicating controllers. For a reliable transfer the following conditions must be satisfied:
• Pulse width time: Due to the receiver base-band bandwidth, the frequency f of data transitions must satisfy 0.25 kHz < f < 40 kHz, which is ensured by bit-coding.
• RX settling time: The transceiver modules require at least 3 ms of preamble to be transmitted before any data stream, to ensure that the data at the receiving module is sliced correctly.
• Mark-space ratio: To prevent DC wandering in the transceiver, the mark-space ratio of the data must be approximately 50:50 averaged over any 4 ms period. The data slicer in the receiving circuit will tolerate sustained asymmetry up to 70:30; however, this increases transmission pulse distortion and results in decreased noise tolerance. The required mark-space ratio can be achieved in several ways, among them Manchester coding, byte coding, FEC coding and enhanced FEC coding (see the sketch below).
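Manchester coding satisfies the 50:50 mark-space constraint by expanding every data bit into a two-bit symbol (01 or 10), so the coded stream is always DC balanced at the cost of half the effective data rate. The following is a minimal sketch of such a coder; the symbol convention and function names are our own illustration, not the actual coding routines used on the EyeBot.

```c
#include <stdint.h>

/* Expand one data byte into two Manchester-coded bytes (MSB first):
 * bit 0 -> symbol "01", bit 1 -> symbol "10".
 * The coded stream is DC balanced (50:50 mark-space ratio). */
void manchester_encode(uint8_t data, uint8_t coded[2])
{
    uint16_t out = 0;
    for (int i = 7; i >= 0; i--) {
        out <<= 2;
        out |= (data & (1 << i)) ? 0x2 : 0x1;   /* 10 or 01 */
    }
    coded[0] = (uint8_t)(out >> 8);
    coded[1] = (uint8_t)(out & 0xFF);
}

/* Decode two coded bytes back into one data byte; returns -1 on an
 * illegal symbol (00 or 11), which indicates a transmission error. */
int manchester_decode(const uint8_t coded[2], uint8_t *data)
{
    uint16_t in = ((uint16_t)coded[0] << 8) | coded[1];
    uint8_t out = 0;
    for (int i = 7; i >= 0; i--) {
        uint8_t sym = (in >> (2 * i)) & 0x3;
        if (sym != 0x1 && sym != 0x2)
            return -1;                          /* invalid Manchester symbol */
        out = (uint8_t)((out << 1) | (sym == 0x2));
    }
    *data = out;
    return 0;
}
```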

3.2.4 Connecting the Wireless Module to the EyeBot Serial Port

Figure 1 shows the fully buffered CMOS digital interface. The important signals sent from the controller to the transceiver are:
• receive/transmit mode signal,
• binary data stream for transmission,
• power supply.

Figure 1 CMOS digital interface

The signals received from the transceiver are:
• carrier detect signal,
• binary data stream.

The transceiver is directly connected to the EyeBot serial port. The data streams are connected to the appropriate send or receive pin of the UART, respectively, while the carrier detect line is connected to the carrier detect input of the UART.

3.2.5 Network of Wireless Modules

Several robots can form a network using a TDMA (time division multiple access) protocol to share the available bandwidth. In a TDMA network, only one transceiver may transmit data at a time, otherwise a signal clash will occur and all transmitted data will be lost. Our network system implements TDMA by using a “virtual token ring” structure.

3.2.6 Link Control Layer

The EyeBot link control protocol is character oriented. It utilizes a packet data structure to frame the information that is to be communicated; this packet data structure is composed of individual bytes. A byte-oriented protocol was chosen over a bit-oriented protocol because the UART is designed for the serial transmission and reception of individual characters.

3.2.7 Functions

The link control protocol facilitates the interchange of information between sender and receivers over a given interconnecting link. Ideally the data link operates over an error-free point-to-point facility, but wireless data communications environments are far from ideal and exhibit characteristics that must be accommodated by the protocol. In addition to the requirements imposed by the link itself, the protocol must contend with requirements that derive from the application, among others:
• Frame control: Delimiting frames and using a character count to differentiate frames.
• Error control: Detection of errors and acknowledgement of correctly received messages. The error detection methods used are a cyclic redundancy check [Peterson61] and sequence control.
• Initialization control: Establishment of an active data link over a communications facility.
• Flow control: Acceptance or rejection of information transfers.
• Link management: Controlling the transmission direction and identification of the sender station.
• Transparency: Transmission of arbitrary data streams without interference between information and link control functions.
• Abnormal recovery: Handling of illegal sequences, loss of response etc.
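For error control, each frame carries a cyclic redundancy checksum [Peterson61]. As an illustration, the following is a standard bitwise CRC-16/CCITT routine; the exact polynomial and initial value used by EyeNet are not stated here, so this particular choice is an assumption.

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF).
 * The sender appends crc16() of the frame bytes; the receiver
 * recomputes it and rejects the frame on a mismatch. */
uint16_t crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```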

3.2.8 Frame Structure

Messages are transmitted in a frame structure, i.e. a number of fields, each carrying information and usually fixed in length. Among them are:
• Start Byte,
• Destination ID,
• Source ID,
• Frame Length,
• Message Contents and
• CRC Checksum.

Special frames are used for control information. A token frame passes transmission control from one communications node to the next.
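As an illustration, such a byte-oriented frame could be laid out in C as below. The field widths, the start marker value and the maximum payload size are assumptions for the sketch; the actual EyeNet wire format may differ.

```c
#include <stdint.h>

#define FRAME_START  0x7E    /* assumed start-of-frame marker value */
#define MAX_PAYLOAD  64      /* assumed maximum message length      */

/* Illustrative frame layout for the byte-oriented link protocol (3.2.6). */
typedef struct {
    uint8_t  start;                /* start byte                          */
    uint8_t  dest_id;              /* destination node ID                 */
    uint8_t  src_id;               /* source node ID                      */
    uint8_t  length;               /* number of payload bytes that follow */
    uint8_t  payload[MAX_PAYLOAD]; /* message contents                    */
    uint16_t crc;                  /* CRC checksum over the fields above  */
} EyeNetFrame;
```

A token frame, for example, would then simply be a control frame with an empty payload addressed to the successor node.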

3.2.9 Network Layer

The network layer in the EyeBot communications network routes the messages associated with higher level user applications.

3.2.10 Topology

The wireless network can be considered as a logical fully connected network, with every node able to access every other node in one hop, i.e. data is always transmitted directly without the need of a third party.

3.2.11 Network Control

The control token is passed from one network node to another according to a defined set of rules, allowing the node holding the token to access the network medium. At initialization, a token is generated by the master station and passed around the network once to ensure that all stations are functioning correctly.

3.2.12 Data Transfer

A node may only transmit data when it is in possession of the control token, and it must pass the token on after it has transmitted an allotted number of frames. The procedure is as follows (see the sketch below):
1. A logical ring is established that links all nodes connected to the wireless network, and the master control station creates a single token.
2. The token is passed from node to node around the ring until it reaches a node that is waiting to send a frame.
3. The waiting node sends the waiting frame over the physical medium, after which it passes the token to the next node in the logical ring.
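From the viewpoint of a single node, this behavior amounts to a simple loop: wait for the token addressed to this node, send up to a fixed quota of queued frames, then pass the token to the successor. The sketch below illustrates that idea; the frame quota and the helper functions are placeholders, not the actual RoBIOS routines.

```c
#include <stdbool.h>

#define FRAME_QUOTA 3                    /* assumed per-turn frame limit   */

/* Hypothetical platform hooks. */
extern bool token_received(void);        /* token addressed to this node?  */
extern bool have_queued_frame(void);
extern void send_next_queued_frame(void);
extern void pass_token_to_successor(void);

/* One turn of the virtual token ring for a (non-master) node. */
void token_ring_turn(void)
{
    if (!token_received())
        return;                          /* not our turn yet               */

    int sent = 0;
    while (have_queued_frame() && sent < FRAME_QUOTA) {
        send_next_queued_frame();        /* transmit while holding the token */
        sent++;
    }
    pass_token_to_successor();           /* hand over medium access        */
}
```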

3.2.13 Recovery from Error

The major error recovery tool in the network layer is the timer in the master station that was mentioned earlier. The timer monitors the network to ensure that the token is not lost. If the token is not passed across the network for a certain period of time, a new token is generated at the master station and the control loop is effectively reset.

3.3 Technical Setup

We use one of the robot's serial ports (RS232) for wireless communication. The port is used in half-duplex mode and connected to a radio transceiver module with a carrier frequency of 433 MHz. Switching the module between sending and receiving is done by a single signal line, linked to one of the serial control lines. We assume a network of about 10 mobile robots, which should be able to talk directly to each other without the need for a base station. However, it should be possible to include a base station (PC or workstation) as one of the network nodes. Using different frequencies for each robot is not viable for a number of reasons:
1. Not enough different channels are available in commercial modules.
2. Even with different channels available, cross-talk and reflections would create a very high noise level that results in frequent drop-outs and makes effective communication impossible [we conducted experiments with different channels which indicated that three or more transceivers active at a time are not possible with this technology].

For the reasons mentioned above, we opted for a solution where all robots and optional base stations share the same type of transceiver module on the same frequency. It follows that at no time may two transceiver modules send data at once. There are basically two techniques available to satisfy this condition:

Figure 2 The two message collision avoidance techniques: polling and token ring





• Polling: One robot (or a base station) is defined as the “master” node. In a “round-robin” fashion, the master talks to each of the robots in turn to let it send one message, while each robot listens to all messages and filters out only those messages that apply to it.
• Token Ring: As before, one robot becomes “master”, because it initiates the ring mechanism and also takes over if a problem occurs, such as the loss of a token (see the section on fault tolerance). The master sends the token to the next robot, which may send its messages and then passes the token on to the next robot in the list, and so on.

Figure 2 shows polling on the left and the token ring on the right. While both approaches are rather similar, the second approach has the advantage of less overhead while providing the same functionality and safety. Therefore, we implemented the token ring approach (see the sketch below):
1. The master has to create a list of all active robot nodes in the network by an initialization polling process. This list also has to be maintained during the operation of the network (see below) and has to be broadcast to all robots in the network (each robot has to know its successor for passing on the token).
2. The master has to monitor the data communication traffic with a timeout watchdog. If no data exchange happens during a fixed amount of time, the master has to assume that the token got lost (e.g. the robot that happened to have the token “died”) and creates a new one.
3. If the token is lost, e.g., three times by the same robot, the master decides that this robot is malfunctioning and takes it off the list of active nodes.
4. The master periodically checks for new robots that have become available (e.g. just switched on or recovered from an earlier malfunction) by sending a “wild card” message. The master then updates the list of active robots, so new robots will participate in the token ring network.
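The master duties listed above (watchdog, dropping a node after repeated token loss, wild-card polling for newcomers) might be organized as in the following sketch. The timeout, the loss threshold and all helper names are assumptions made for illustration only.

```c
#include <stdbool.h>

#define TOKEN_TIMEOUT_MS  500    /* assumed watchdog timeout              */
#define MAX_TOKEN_LOSSES  3      /* losses before a node is dropped       */

/* Hypothetical state and hooks maintained elsewhere in the network code. */
extern long millis(void);                       /* monotonic clock        */
extern long last_traffic_time;                  /* updated on every frame */
extern int  token_holder;                       /* node that last held the token */
extern int  loss_count[];                       /* token losses per node  */
extern void generate_new_token(void);
extern void remove_node_and_broadcast_sync(int id);
extern void send_wildcard_poll(void);

/* Called periodically by the (temporary) master node. */
void master_watchdog(void)
{
    if (millis() - last_traffic_time > TOKEN_TIMEOUT_MS) {
        /* token appears lost at the current holder */
        if (++loss_count[token_holder] >= MAX_TOKEN_LOSSES)
            remove_node_and_broadcast_sync(token_holder);
        generate_new_token();                   /* restart the ring       */
    }
    send_wildcard_poll();    /* let new robots join (in practice at a lower rate) */
}
```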

All robots (and base stations) are identified by a unique ID number, which is used for addressing a robot. A specific ID number is used for broadcasts, which follow exactly the same communication pattern. In the same way, subgroups of nodes can be implemented, so that a group of robots receives a particular message.

3.4 Message Structure

Since we have at least two different application purposes, the exchange of data between robots and the remote control of a robot from a base station, we distinguish five types of messages (see the sketch below):
• USER: user messages,
• OS: operating system messages,
• TOKEN: empty message, enables the successor node to send,
• WILD CARD: asks for replies from nodes not yet included in the network and
• SYNC: broadcasts the current network structure to all nodes.

Three different buffers are maintained: a single common send buffer, and two separate receive buffers for system and user messages. That way it can be guaranteed that both application purposes (remote control and data communication between robots) can be executed transparently at the same time. Each message contains the following data:
• USER/OS flag:
  • user: user defined,
  • os: several message types exist, e.g. for pressing a button on the robot or displaying characters and images on the robot's LCD,
• receiver ID: who is this message for? A special ID can be used for broadcasts to all robots or to a subgroup of robots,
• sender ID: who is sending this message? (This information is supplied by the RoBIOS operating system for each robot.)
• message length: number of bytes to follow,
• message contents: the actual contents of the message,
• CRC check sum: the cyclic redundancy check sum is treated transparently, i.e. the user program is not aware of it and does not have to deal with it - the same applies to the header or preamble bytes necessary to start the data exchange. If the CRC is bad, an error message is returned and the sender node will re-send the message.
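In C, the message types and the per-message header could be captured roughly as follows. The enum values, the broadcast ID and the field widths are illustrative assumptions, not the actual RoBIOS definitions; this is the network-layer view, while the struct in section 3.2.8 sketched the link-layer frame around it.

```c
#include <stdint.h>

/* The five EyeNet message types (numeric values are illustrative). */
typedef enum {
    MSG_USER     = 0,   /* user message                                  */
    MSG_OS       = 1,   /* operating system message (buttons, LCD, ...)  */
    MSG_TOKEN    = 2,   /* empty message, enables the successor to send  */
    MSG_WILDCARD = 3,   /* asks nodes not yet in the network to reply    */
    MSG_SYNC     = 4    /* broadcast of the current network structure    */
} MsgType;

#define ID_BROADCAST 0xFF     /* assumed special receiver ID             */

typedef struct {
    MsgType  type;            /* USER/OS flag plus the control types      */
    uint8_t  receiver_id;     /* destination robot, subgroup or broadcast */
    uint8_t  sender_id;       /* filled in by the operating system        */
    uint8_t  length;          /* number of content bytes to follow        */
    uint8_t  contents[64];    /* message contents (size is an assumption) */
    /* CRC and preamble are handled transparently below this layer.       */
} EyeNetMessage;
```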

4 Self-Configuration of EyeNet

4.1 Ring Master

The token ring network is a symmetric network, so during normal operation there is no need for a master ring node. However, a master is required to generate the very first token and, in case of a token loss due to a communication problem or a robot malfunction, to restart the token ring. There is no need to fix the master role to any particular node; in fact, the role of master can be assigned temporarily to one of the nodes whenever the situation requires the functions mentioned above.

4.2 Network Startup

Since there is no designated master node, the network has to self-configure at initialization time and whenever problems arise. When a robot is switched on, it only knows its own ID number, but neither the total number nor the IDs of the other robots, nor who the master is. Therefore, the first round of communication is performed only to find out:
• how many robots are communicating,
• what their ID numbers are and
• who will be temporary master for starting the ring or for recovery.

Robot ID numbers are used to trigger the initial communication. When the wireless network initialization is called for a robot, it first listens for any messages currently being sent. If no messages are picked up within a certain time interval multiplied by the node's own ID number, the robot assumes it has the lowest ID number and therefore becomes master (see the sketch below). The master will keep sending tokens to the “wild card” address at regular time intervals, allowing a new incoming robot to enter and participate in the ring. The master will receive the new robot's ID and assign it a position in the ring. All other robots will be notified about the change in the total robot number and any changes in the virtual ring neighborhood structure via a broadcast SYNC message.
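The ID-based start-up election thus amounts to: listen for a time proportional to the own ID and become master only if the network stays silent. The following sketch assumes a hypothetical slot length and helper functions; it is not the actual RoBIOS initialization code.

```c
#include <stdbool.h>

#define SLOT_MS 100                          /* assumed listening slot per ID */

extern int  my_id;
extern bool message_heard_within(long ms);   /* hypothetical receive check    */
extern void become_master(void);             /* create token, poll wild cards */
extern void wait_for_wildcard_and_join(void);

/* Self-configuration at power-up: the lowest active ID becomes master. */
void network_startup(void)
{
    /* A robot with ID n stays silent for n listening slots.  If nothing is
     * heard in that time, no robot with a lower ID is active, so this robot
     * takes the master role and starts the ring. */
    if (!message_heard_within((long)my_id * SLOT_MS))
        become_master();
    else
        wait_for_wildcard_and_join();        /* answer the master's wild card */
}
```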

4.3 Fault-Tolerant Network Recovery

Each robot has an internal timer process. If no communication has taken place for a certain time, it is assumed that the token has been lost, e.g. due to a communication error, a robot malfunction or simply because a robot has been switched off. In the former case, the message can be repeated, but in the latter case (if the token is repeatedly lost by the same robot), that particular robot has to be taken out of the ring. If the master node (assigned on a previous occasion) is still active, that robot recreates a token and monitors the next round for repeated token loss at the same node. If that is the case, the faulty robot is decommissioned and a broadcast message updates all robots about the reduced total number and the change in the virtual ring neighbor structure. If there is no reply after a certain time period, each robot has to assume that it is the previous master itself which has become faulty. In this case, the previous master's successor in the virtual ring structure takes over the role of master. However, if this process does not succeed after an additional time period, the network startup process is repeated and a new master is negotiated.

4.4 Multi-Robot Console

4.4.1 Purpose

As mentioned before, the wireless robot communication network EyeNet serves multiple purposes, among others:
• message exchange between robots for application-specific purposes under user program control and
• monitoring and remote control of one or several robots from a host workstation.

Figure 3 Remote EyeBot Screen

Both communication protocols are implemented as different message types using the same token ring structure, transparent to both the user program and the higher levels of the RoBIOS operating system itself. We discuss here the design and implementation of a multi-robot console that allows monitoring of robot behavior, sensor readings, internal states, and current position and orientation in a shared environment. Remote control of individual robots, groups of robots or all robots is also possible by transmitting input button data, which is interpreted by each robot's application program.

4.4.2 Host System

The host system is a workstation running Unix or Windows, accessing a serial port linked to a wireless module identical to the ones on the robots. The workstation behaves logically exactly like one of the robots, which constitute the other network nodes, i.e. the workstation has a unique ID number and also receives and sends the token. System messages from a robot to the host are transparent to the user program and update the robot display window (figure 3) and the robot environment window (figure 6). All messages sent in the entire network are monitored by the host without the need to explicitly send them to the host (see figure 4). The EyeBot Master Control Panel (see figure 5) has entries for all active robots. The host actively sends a message only if a user intervention occurs, e.g. by pressing an input button in one of the robot's windows. This information is then sent to the robot in question once the token is passed to the host. The message is handled via low-level system routines on the robot, so the upper levels of the operating system cannot distinguish whether the robot's physical button has been pressed or whether it was a remote command from the host console.

Implementation of the host system largely re-uses communication routines from the EyeBot and only makes a few system-dependent changes where necessary.

Figure 4 Data Monitor

Figure 5 Master Remote Panel

Figure 6 Remote Robot Environment

4.5 EyeBot Project

4.5.1 Introduction

The EyeBot controller is based on a Motorola M68332 micro-controller with interfaces to a number of color and grayscale cameras and a 64x128 pixel black/white graphics LCD. We are currently operating on images of size 60x80, which we found to have sufficient resolution for typical navigation tasks of small mobile robot systems. The controller is powerful enough to perform real-time on-line image processing, depending on the complexity of the operation. The Univ. of Stuttgart (Germany), The University of Western Australia (Australia), the Univ. of Kaiserslautern (Germany) and the Rochester Institute of Technology (USA) have contributed to the system software.

4.5.2 EyeBot Controller Hardware

Although the controller runs at moderate speed (35 MHz), it is fast enough to compute basic image operations on a low resolution image in real time; e.g. image acquisition, Sobel edge detection and display on the LCD can be performed at a rate of about 10 frames per second. EyeBot's graphics LCD is essential for interaction between the robot and the programmer, who needs to see the robot's view in order to set camera parameters or orientation. Although the camera provides gray scale images at 6 bit and color images at 24 bit at a higher resolution, the display can only show low resolution black/white images. This is sufficient as feedback to the programmer when running the robot, but not for program development, which is done on a workstation [Bräunl97].

Figure 7 EyeBot Controller MK3

A version of the gnu C compiler and library has been adapted for the EyeBot, so program development can be done in a high level language, using assembly routines for time-critical passages. In addition, a multi-threading scheduler has been developed, which is essential for robotics applications. The microcontroller's time processor unit (TPU) is used for servo control with pulse width modulation (PWM), for sound synthesis and playback, as well as for the control of infrared distance sensors. A more detailed description is available on the web: http://www.ee.uwa.edu.au/braunl/eyebot.

4.5.3 Operating System RoBIOS

The operating system RoBIOS (robot basic input output system) has been written in C plus m68k assembly language, using the gnu C compiler and assembler tools. RoBIOS comprises a small real-time system with multi-threading, libraries for various I/O functions, and a number of demonstration applications. The C low-level text input and output routines have been adapted for the EyeBot. This enables us to use the standard C I/O library “clib” together with the EyeBot system for user application programs; e.g., a user can call getchar() in order to read a key input and printf(..) in order to write text on the screen.
Special care has been taken to keep the RoBIOS operating system flexible across several different hardware configurations, because the same system is to be used for wheeled robots and for legged robots. Therefore, a hardware description table (HDT) has been included in the system design. RoBIOS relies on the HDT in order to find out which hardware components are currently connected to the system. These hardware components can be sensors or actuators (motors or servos), whose control routines are already available in the general RoBIOS system. The HDT allows easy detection, initialization, and use of hardware components. Entries in the HDT define the current hardware configuration of the system, and RoBIOS contains access routines to find and use the device drivers corresponding to the table entries. The HDT contains a list of all entries, together with their semantics. Any program which needs to access a sensor or actuator object defined in the HDT can do so by defining a handle; each group entry has to provide the handle data type and an initialization function.
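To illustrate the idea only, with purely hypothetical names and data structures (the real RoBIOS/HDT API differs), a hardware description table and handle-based access could look like this:

```c
#include <string.h>

/* Hypothetical illustration of a hardware description table (HDT). */
typedef enum { DEV_MOTOR, DEV_SERVO, DEV_PSD, DEV_CAMERA } DeviceClass;

typedef struct {
    DeviceClass  cls;               /* kind of hardware component            */
    const char  *name;              /* semantic name, e.g. "motor_left"      */
    int        (*init)(int port);   /* per-group initialization function     */
    int          port;              /* where the device is connected         */
} HDTEntry;

typedef int DeviceHandle;           /* handle returned to user programs      */

/* Example table describing one particular (hypothetical) robot configuration. */
extern int motor_init(int port);
extern int psd_init(int port);

static const HDTEntry hdt[] = {
    { DEV_MOTOR, "motor_left",  motor_init, 0 },
    { DEV_MOTOR, "motor_right", motor_init, 1 },
    { DEV_PSD,   "psd_front",   psd_init,   2 },
};

/* Look up a device by name, initialize it and hand back a handle. */
DeviceHandle hdt_open(const char *name)
{
    for (int i = 0; i < (int)(sizeof hdt / sizeof hdt[0]); i++) {
        if (strcmp(hdt[i].name, name) == 0) {
            hdt[i].init(hdt[i].port);
            return (DeviceHandle)i;  /* table index serves as the handle     */
        }
    }
    return -1;                       /* device not present on this robot     */
}
```

The point of the table-driven design is that the same RoBIOS binary can serve different robot bodies: only the table entries change, not the driver code.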

5 Sample Application

5.1 RoboCup Philosophy

”The Robot World Cup Initiative (RoboCup) is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and examined. For this purpose, RoboCup chose to use a soccer game, and organize RoboCup: The Robot World Cup Soccer Games and Conferences. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor-fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. RoboCup also offers a software platform for research on the software aspects of RoboCup.” (from the RoboCup web site www.robocup.org)

Figure 8 The SoccerBot

Figure 9 CIIPS Glory Robot Soccer Team

5.1.1 Communication for Robot Soccer

Our robot team “CIIPS Glory” plays in the small size league, which uses a field the size of a ping-pong table. The league allows the use of external sensors, especially an overhead camera, which allows absolute positioning of robots (own and opponent), ball and goals. Most of the teams so far follow this approach, which requires a broadcast wireless transmission from the image processing workstation to the robots. This has led to the development of remote controlled units (one team actually used remote controlled toy cars) rather than mobile robots with local sensors. Most teams have no sensors at all on the robots, and communication is only a one-way broadcast from the workstation to the robots. In that respect our robot team is an exception [Bräunl98a], since we exclusively use local sensors (vision and infrared) with on-board computation. Since our robots are true autonomous mobile agents, this naturally requires a much more sophisticated communication protocol: each of our robots has to be able to send and receive messages individually, to and from other robots, without the need for a central host station. We are interested in developing locally intelligent, autonomous mobile agents which can be programmed for a number of tasks and are not restricted to a single specific application like the soccer game. The special challenges of robot soccer are:
• real-time: the ball has to be tracked quickly enough and reached before the opponent,
• cooperation: five robots have to cooperate on the task of soccer with a common plan,
• opponent: each team has an opponent team with competing aims,
• fault tolerance: each team has to operate if it loses a player, and each player has to operate if it loses part of its functionality (e.g. a sensor) or has faulty or incomplete data.

The main problem for each robot is to identify the ball (an orange golf ball) and the opponent's goal (color coded in yellow or blue). If possible, locating walls, team mates and opponent players is an advantage. From the robot's color image, the ball position is determined and then translated via a table into x,y-coordinates (see the sketch below). Since we are using local vision with on-board image processing only, we achieve about 4 frames per second at 24 bit color and 60x80 pixel resolution [Bräunl98a].
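A minimal sketch of this color-based ball detection: classify pixels by their ball color, take the centroid of the matching pixels, and convert that image position into field coordinates via a precomputed lookup table. The thresholds, the table and the helper names are assumptions made for illustration; the actual CIIPS Glory vision code is more elaborate.

```c
#include <stdint.h>
#include <stdbool.h>

#define IMG_W 80
#define IMG_H 60

typedef struct { uint8_t r, g, b; } Pixel;
typedef struct { float x, y; } FieldPos;

/* Precomputed table mapping an image pixel to field coordinates,
 * derived from the camera geometry (assumed to exist elsewhere). */
extern FieldPos image_to_field[IMG_H][IMG_W];

/* Very rough "orange" classifier on 8-bit RGB (thresholds are assumptions). */
static bool is_ball_color(Pixel p)
{
    return p.r > 150 && p.g > 60 && p.g < 140 && p.b < 80;
}

/* Find the ball in one frame; returns false if no ball-colored pixels exist. */
bool find_ball(const Pixel img[IMG_H][IMG_W], FieldPos *ball)
{
    long sum_x = 0, sum_y = 0, count = 0;
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            if (is_ball_color(img[y][x])) {
                sum_x += x; sum_y += y; count++;
            }
    if (count == 0)
        return false;
    /* centroid of ball-colored pixels, then table lookup to x,y on the field */
    *ball = image_to_field[sum_y / count][sum_x / count];
    return true;
}
```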

6 Conclusion

We presented a new wireless communication scheme which can be applied to groups of autonomous mobile robot agents. The network is completely homogeneous, without the need for a fixed master; individual nodes negotiate and take on the master role whenever necessary. Incoming new nodes are automatically integrated into the network via “wild card” and “sync” messages, while exiting or faulty nodes are automatically eliminated. The physical communication port is a serial interface; the wireless transceiver operates on a single frequency using time division multiplexing. A graphical user interface enables remote robot control, remote debugging, and monitoring of experiments with groups of robots.

7 Acknowledgement

The authors would like to acknowledge the work of Daniel Storey, Tysun Chan, Thomas Lampart, Petter Reinholdtsen and Klaus Schmitt, who participated in this project.

8 References

[ACR99] Australian Conference on Robotics and Automation ACRA'99, Mar./Apr. 1999.
[Agah95] Agah, A.; Bekey, G. In a Team of Robots the Loudest is not Necessarily the Best. published in [Sys95], pp. 584-592.
[Age97] Proc. of The First International Conference on Autonomous Agents (Agents-97), Marina del Rey, 1997. ACM Press.
[Balch95] Balch, T.; Arkin, R. Communication in Reactive Multiagent Robotic Systems. Autonomous Robots, 1:27-52, 1995.
[Bräunl97] Bräunl, Thomas. Improv and EyeBot - Real-Time Vision on-board Mobile Robots. published in [M2V97], pp. 131-135.
[Bräunl98a] Bräunl, Thomas. CIIPS Glory - Autonomous Robot Soccer Players with Local Intelligence. published in [Rob98], pp. 497-502.
[Bräunl98b] Bräunl, Thomas; Graf, Birgit. Robot Soccer with Local Vision. published in [PRI98], pp. 14-23.
[Cao95] Cao, Y.; Fukunaga, A.; Kahng, A.; Meng, F. Cooperative Mobile Robotics: Antecedents and Directions. published in [IEE95a], pp. 226-234.
[Conard89] Conard, J. Character-Oriented Link Control, chapter 2, pp. 83-107. published in [Sunshine89], 1989.
[Dudek93] Dudek, G.; Jenkin, M.; Milios, E.; Wilkes, D. A Taxonomy for Swarm Robots. published in [IEE93], pp. 441-447.
[Farmer91] Farmer, D. et al. (Eds.). Artificial Life II. Addison-Wesley, MA, USA, 1991.
[Franklin95] Franklin, R.; Harmon, L. Elements of Cooperative Behavior. Technical Report ERIM TR 655404-1-F, Environmental Research Institute of Michigan, USA, 1995, pp. 698-709.
[Fukuda94a] Fukuda, T.; Sekiyama, K. Communication Reduction with Risk Estimate for Multiple Robotic Systems. published in [IEE94a], pp. 2864-2869.

[Fukuda94b] Fukuda, T.; Ueyama, T. Cellular Robotics and Micro Robotic Systems. World Scientific, Singapore, 1994.
[Fung94] Fung, C.; Eren, H.; Nakazato, Y. Position Sensing of Mobile Robots for Team Operations. published in [IMT94], pp. 185-188.
[IEE90] IEEE Workshop on Intelligent Robots and Systems, 1990.
[IEE93] IEEE, RSJ. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1993.
[IEE94a] IEEE. Proc. of the IEEE Conference on Robotics and Automation, 1994.
[IEE94b] IEEE, RSJ, GI. Proceedings of the IEEE/RSJ/GI, 1994.
[IEE95a] IEEE Workshop on Intelligent Robots and Systems, 1995.
[IEE95b] IEEE. Proc. of the IEEE Conference on Robotics and Automation, 1995.
[IEE97] IEEE Workshop on Intelligent Robots and Systems, 1997.
[IJC95] IJCAI. Proceedings IJCAI-95 Workshop on Entertainment and AI/Alife, 1995.
[IJC97a] Proc. of IJCAI-97, 1997.
[IJC97b] Proc. of the Fifteenth Intl. Joint Conf. on Artificial Intelligence, San Francisco, CA, USA, 1997.
[IMT94] Proceedings of IMTC, 1994.
[IRE61] Proceedings of the IRE, 1961.
[Jablonski85] Jablonski, J.; Posey, J. Robotics Terminology, pp. 1271-1303. published in [Nof85], 1985.
[Kitano97] Kitano, H.; Tambe, M.; Stone, P.; Veloso, M.; Coradeschi, S.; Osawa, E.; Matsubara, H.; Noda, I.; Asada, M. The RoboCup Synthetic Agent Challenge 97. published in [IJC97a], pp. 24-29.
[Lin94] Lin, F.; Hsu, J. Cooperation and Deadlock-Handling for an Object Sorting Task in a Multi-Agent Robotic System. published in [IEE94a], pp. 2580-2585.
[M2V97] 4th Annual Conference on Mechatronics and Machine Vision in Practice (M2VIP), Toowoomba QLD, Australia, Sep. 1997.
[MacLennan91] MacLennan, B. Synthetic Ecology: An Approach to the Study of Communication. published in [Farmer91], 1991.
[Mataric92] Mataric, M. Integration of Representation into Goal-Driven Behavior-Based Robots. IEEE Transactions on Robotics and Automation, pp. 304-312, 1992.
[Nof85] Nof, S. (Ed.). Handbook of Industrial Robotics. J. Wiley, NJ, USA, 1985.
[Peterson61] Peterson, W.; Brown, T. Cyclic Codes for Error Detection. published in [IRE61], pp. 228-235.
[Premvuti90] Premvuti, S.; Yuta, S. Consideration on the Cooperation of Multiple Autonomous Mobile Robots. published in [IEE90], pp. 59-63.
[PRI98] Pacific Rim International Conference on Artificial Intelligence, Singapore, Nov. 1998.
[Res93] Proceedings Workshop on Needs for Research in Cooperating Robots, Atlanta, USA, 1993.
[RF Solutions97] RF Solutions. FM R.F. Transceiver Module. Technical Report FMBiMXXX-F REG No 277 4001, RF Solutions, 1997, pp. 1-17.
[Rob97] RoboCup as a Research Program, IROS-97, 1997.
[Rob98] Proceedings RoboCup Workshop, 2nd Robot World Cup Soccer Conference, Paris, 1998.
[RoboCup97] RoboCup. RoboCup: A Challenge AI Problem. AI Magazine, Spring 1997.
[RoboCup98] RoboCup. The RoboCup Physical Agent Challenge Phase-I RoboCup-97. Springer, 1998.
[Rude97] Rude, M.; Rupp, T.; Matsumoto, K.; Sutedjo, S.; Yuta, S. IRoN: An Inter Robot Network and Three Examples on Multiple Mobile Robots' Motion Coordination. published in [IEE97], pp. 1437-1444.

[Startech96] Startech. ST16C552 Universal Asynchronous Receiver/Transmitter with FIFO and Parallel Printer Port with Power Down Capability. Tech. Rep. ST16C552, Startech, August 1996, pp. 331-360.
[Storey98] Storey, Daniel. Wireless Communication for Intelligent Robotic Agents. Honours Thesis, Dept. of Electrical and Electronic Engineering, The University of Western Australia, October 1998.
[Sunshine89] Sunshine, C. (Ed.). Computer Network Architectures and Protocols. Plenum Press, Santa Monica, CA, USA, 1989.
[Sys95] Proceedings of the International Conference on Systems, Man and Cybernetics, 1995.
[Wang94] Wang, J.; Premvuti, S. Resource Sharing in Distributed Robotic Systems based on a Wireless Medium Access Protocol. published in [IEE94b], pp. 784-791.
[Werner90] Werner, G.; Dyer, M. Evolution of Communication in Artificial Organisms. Tech. Rep. UCLA-AI-90-06, University of California at Los Angeles, CA, USA, June 1990.
[Whiting75] Whiting, J. An Efficient Software Method for Implementing Polynomial Error Detection Codes. Computer Design, pp. 73-77, 1975.
[Yoshida95] Yoshida, E.; Yamamoto, M.; Arai, T.; Ota, J.; Kurabayashi, D. A Design Method of Local Communication Area in Multiple Mobile Robot Systems. published in [IEE95b], pp. 2567-2572.
[Yuta93] Yuta, S. Cooperative Behavior of Multiple Autonomous Mobile Robots. published in [Res93].