PREPRINT: Sen Gupta, G., Messom, C. H., Demidenko, S., "Real-Time Identification and Predictive Control of Fast Mobile Robots using Global Vision Sensing", IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 1, pp. 200-214, 2005.

Real-Time Identification and Predictive Control of Fast Mobile Robots using Global Vision Sensing

Gourab Sen Gupta¹,⁴, C. H. Messom², S. Demidenko³

¹ IIS&T, Massey University, Palmerston North, New Zealand
² II&MS, Massey University, Albany, New Zealand
³ School of Engineering & Science, Monash University, Kuala Lumpur, Malaysia
⁴ School of EEE, Singapore Polytechnic, Singapore
Email: [email protected], [email protected], [email protected]

Abstract – This paper presents a predictive controller for intercepting mobile targets. A global vision system identifies fast moving objects and uses a colour threshold technique to calculate their position and orientation. The inherent systemic noise in the raw sensor data, as well as the vision quantization noise, is smoothed using Kalman filtering before being fed to the controller, and it is shown that this leads to superior controller accuracy. The predictive controller is based on the State Transition Based Control (STBC) technique. As a case study, STBC has been applied to a goalkeeper's behaviour in robot soccer, which includes interception and clearance of the ball. The controller has been further evaluated for shooting the ball towards a target position. The system is examined for both stationary and moving objects. It is shown that predictive filtering of raw sensor data is essential to increase the reliability and accuracy of detection, and thus interception, of fast moving objects.

Keywords – Vision Sensing, Real-Time Image Processing, Kalman Filtering, State Transition Based Control, Prediction, Interception, Mobile Robots

I. INTRODUCTION

This paper deals with a robotic system comprising three major components: the global-vision based image analysis subsystem acting as a prime source of raw data, the Kalman filter which is applied to the data obtained from the vision system and the predictive controller which uses the filtered data. The role of each component and their impact on the overall system functionality and performance are discussed. The major focus and significance of the presented work is that it shows the importance of filtering raw sensor data to improve the accuracy of a vision-based intelligent controller for fast mobile robotic applications. When developing an intelligent control system, the filter design cannot be left as an afterthought but must be integrated into the system. Many automated learning based intelligent controllers employ neural networks, genetic algorithms and genetic programming techniques on raw sensor data in the expectation that the control algorithm will learn the noise characteristics of the data [1]. This study shows that explicitly developing a filter and evolving it along with the controller is preferable, rather than relying purely on the intelligent control algorithm.

Most commodity vision systems use video signals, from a CCD camera, as input to the image capture and analysis subsystems. Such systems provide frame rates of 30 Hz or field rates of 60 Hz for interlaced images. Processing these images at a useful resolution (320 x 240 and above) within the 33.3 ms and 16.67 ms sample times respectively is a significant challenge. Interlaced images introduce the 'image scattering' problem for a moving object because of the time delay between the two fields of the image. To overcome this problem, the odd and even scan fields are processed separately. However, because of quantization errors in each field, a stationary object may appear to be in two different locations. Filtering techniques are required to minimize this problem.

Kalman filtering [2, 3] has proved its usefulness in autonomous and assisted navigation, guidance systems and radar tracking of moving objects by virtue of its ability to generate an optimal estimate from a sequence of noisy observations. Alternatives to the Kalman filter that have been investigated for mobile robots include fuzzy filters [4], particle filtering [5] and probabilistic data association filters [6]. Extensions to Kalman filtering, such as distributed Kalman filtering, have also been considered [7]. Many studies present results in environments that have a significantly higher level of noise (as is the case when a sonar is the main sensor) than in our experiment. Particle filtering [5] techniques applied to mobile robot vision problems have been shown to have errors of up to 30 cm; scaled from a field of view of approximately 10 m to that of our system, this is equivalent to an error of 6 cm, which is much larger than the errors observed on our unfiltered system. Probabilistic data association filters [6] have been shown to be better than Kalman filters and particle filters in scenarios where a Gaussian does not correctly model the noise characteristics of the system or occlusions lead to degenerate cases. This approach is likely to improve the performance of our system in scenarios where the robot moves around an obstacle (another robot), since the predictive model of the robot's position should then have bimodal characteristics (the robot cannot move through the obstacle; it has to pass on one of two sides). The distributed Kalman filter has been used to improve the localization of a group of robots [7]; however, this technique is unlikely to improve our system, as the positions of all the robots are available from the single global vision system rather than from a set of distributed sensors.

Robotic interception of moving objects in industrial settings has gained much attention in recent years. Among the most efficient approaches is the Active Prediction, Planning and Execution (APPE) system [8]. The key feature of this system is the ability of a robot to perform a task autonomously without complete a priori information. The APPE system has been tested for an object moving at a maximum velocity of 4.5 cm/s. While APPE is effective, the resulting speed of the robot system is limited. This led to the investigation of a predictive control system based on STBC, which is shown to reliably control a mobile robot at significantly higher speeds, in excess of 70 cm/s.

The experimental part of the research presented in this paper is based on a robot soccer system. This choice of experimental platform is justified by the fact that, over the last decade, robot soccer has been widely used as a test bed for adaptive control of dynamic systems in a multi-agent collaborative environment [9]. Soccer robots must exhibit actions such as positioning at a designated location, moving to the ball, blocking an opponent, turning to an advantageous angle, circling the ball and shooting it into a goal. Highly complex game-play, combined with the agility and speed of the robots, requires accurate path planning and prediction of moving targets. The path plans must be updated to reflect the changing game situations, meaning that they must be created in real time [10]. This makes robot soccer an ideal test bed for sensor systems and robotic control algorithms.

Several techniques have been tested for robot path planning, including genetic algorithms [11], fuzzy logic [12, 13], potential fields [14] and State Transition Based Control (STBC) [15]. The genetic algorithm approach is limited by the need to evolve the robotic control on a real system; this is time consuming and subjects the robots to significant wear and tear. The fuzzy logic approach suffers because a large number of rules must be specified for the large number of variables, which makes it impractical unless tools are provided for automatically generating the control rules. The potential field approach, though simple to implement, often results in systems with undesirable limit cycles and unpredictable behaviour due to zeros in the potential field. An STBC approach, however, can be designed quickly and intuitively, enabling rapid development and testing of a controller for new robot behaviours. This approach forms the basis of the predictive control system presented in this paper.

To evaluate the performance of the STBC system this paper employs a case study of a robot goalkeeper which intercepts the ball, clears the ball from the goal area or sits idle when the ball is in the opponent’s half of the field. This implementation also demonstrates robotic targeting (a shoot function in robot soccer). It is shown that filtering substantially improves shooting accuracy on both a stationary and moving object, in this case, the ball. Several studies have implemented real-time image processing [16, 17], ball interception [18] and robot localisation [19] in the robot soccer domain. However, they have not managed to achieve the target interception success rate reported in this paper. The proposed technique has been developed and successfully tested for a fast moving target at speeds of up to 1.3 m/s. The system successfully predicts the trajectory of the target through the robot’s workspace, plans the robot’s motion to reach the target at a given angle and executes that plan. The controller is robust in environments with substantial noise from variation in ambient light, shadows and reflections. Section II details the experimental setup and briefly explains the software hierarchy of the robotic controller. The global vision system and the sources of error are explained in Section III. The proposed State Transition Based Control (STBC) to

drive the robot behaviour is presented in Section IV. Section V explains the algorithm for object interception while Section VI details the control algorithm for object targeting. The predictive filtering technique is explained in Sections VII and VIII. The experimental results for a stationary object, robot motion path and moving target interception are summarised in Section IX. Lastly, the system limitations and future work are highlighted in Section X.

II. SYSTEM SETUP

The experimental setup, shown in Figure 1, consists of a Pulnix 7EX NTSC camera and a FlashBus MV Pro image capture card [20]. The image is captured at a resolution of 320 x 480 at a sampling rate of 30 Hz. The odd and even fields are processed separately; hence the effective image resolution is 320 x 240 delivered at a sampling rate of 60 Hz. The image captured is processed on a 1.6 GHz Pentium 4 PC with 512 MB RAM. The image capture card was configured for off-screen capture, as shown in Figure 2, which facilitates fast processing of the image from the system RAM.

Figure 1. Experimental setup and software hierarchy of the robotic controller (image processing and filtering layer, strategy/behaviour layer, control layer for orientation and movement, communication software and wireless communication to the robot)

Figure 2. Image capture in off-screen capture mode (CCD camera, video capture card, VGA card RAM, PCI bus and system RAM)

The predictive controller uses a global vision system to identify the target's position and orientation. The vision data is filtered using a Kalman filter thereby reducing the effect of noise. The STBC controller uses the filtered system variables to calculate and execute the required robot behaviour.

III. GLOBAL VISION

The global vision system used has a single camera to detect and track several objects using incremental tracking [21]. Incremental tracking is the approach whereby a small region around the last known position or predicted position of the object is processed rather than the whole image, thereby decreasing the processing time required. By limiting this experiment to two objects it has been possible to increase the incremental tracking window size and yet keep the vision processing time within the sample period of 16.67 ms. Using larger incremental tracking window sizes, the object being tracked is nearly always identified. In the event that it is "lost" (i.e. the object is not inside the predicted tracking window), the fault tolerant software will revert to full-tracking, whereby the whole image is analyzed to 'recover' the object's position.
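The incremental-tracking logic can be summarised in the following sketch (Python is used for illustration only; the window size, the image representation and the helper colour_mask_fn are assumptions rather than the authors' implementation):

```python
import numpy as np

def find_object_in_region(image, y0, y1, x0, x1, colour_mask_fn):
    """Return the (y, x) centre of gravity of in-threshold pixels in a sub-image, or None."""
    mask = colour_mask_fn(image[y0:y1, x0:x1])   # boolean array of pixels inside the colour threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return ys.mean() + y0, xs.mean() + x0        # centre in full-image coordinates

def track(image, last_pos, colour_mask_fn, window=40):
    """Incremental tracking: search a small window around the last known (or predicted)
    position, falling back to a full-frame search if the object has been 'lost'."""
    h, w = image.shape[:2]
    if last_pos is not None:
        y, x = int(last_pos[0]), int(last_pos[1])
        hit = find_object_in_region(image,
                                    max(0, y - window), min(h, y + window),
                                    max(0, x - window), min(w, x + window),
                                    colour_mask_fn)
        if hit is not None:
            return hit
    # Fault-tolerant fall-back: analyse the whole image to recover the object's position.
    return find_object_in_region(image, 0, h, 0, w, colour_mask_fn)
```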

A computationally inexpensive vision processing algorithm using Run Length Encoding (RLE) has been discussed in [22] and [23]. RLE is an image compression technique that preserves the topological features of an image, allowing it to be used for object identification and location [24]. Though the RLE algorithm can be implemented on commodity hardware in robot soccer vision systems [25, 26], its significant processing speed advantage is realised only when there are many objects to track. In this paper the system is restricted to two objects, for which the proposed incremental tracking algorithm is equally efficient.

Vision is the robot's main sensor system. It captures and processes images at 60 scan fields per second [27]. The odd and even scan fields of the interlaced images are processed separately. The vision system has three sources of error:

(i) Separate processing of odd and even scan fields of an interlaced bit mapped image. A stationary object may be reported at different locations in each frame due to different quantization errors.


Figure 3. Bit-map of an interlaced image

In Figure 3, the object is of a square shape. To calculate the position, we use the zero-order moment and centre-of-gravity equations (1). For an m x n binary image, the coordinates are given by:

Area:  A = \sum_{i=1}^{n} \sum_{j=1}^{m} B[i,j]

COG:   \bar{x} = \frac{1}{A} \sum_{i=1}^{n} \sum_{j=1}^{m} j\,B[i,j],   \bar{y} = \frac{1}{A} \sum_{i=1}^{n} \sum_{j=1}^{m} i\,B[i,j]        (1)

For the illustrated example, the coordinates of the object are (7.5, 8.0) in the odd scan and (7.5, 9.0) in the even scan. The separate processing of odd and even scan fields does not affect the X coordinate; only the Y coordinate shifts by 1 pixel. For a physical area of 170 cm x 150 cm and an image resolution of 320 x 240 pixels, 1 pixel translates into a shift of ~0.62 cm in the Y coordinate of the object. Moreover, this offset is not constant for a moving object, since the shift can occur in either the positive or the negative direction.
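As a worked illustration of equation (1), the following sketch computes the area and centre of gravity of a small binary image (the example grid is hypothetical and uses 0-based indices rather than the 1-based indices of equation (1)):

```python
import numpy as np

def centre_of_gravity(B):
    """Zero-order moment (area) and centre of gravity of a binary image B[i, j],
    following equation (1): x averages the column index j, y averages the row index i."""
    B = np.asarray(B)
    i, j = np.indices(B.shape)           # i = row index, j = column index
    A = B.sum()                          # area: number of object pixels
    return A, (j * B).sum() / A, (i * B).sum() / A

# A 3x3 square object: the centre of gravity sits at the middle of the block.
img = np.zeros((16, 16), dtype=int)
img[7:10, 6:9] = 1
print(centre_of_gravity(img))            # (9, 7.0, 8.0)
```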

(ii) Variation of light intensity

While errors due to separate processing of odd and even scans may be predominant under controlled light conditions, these errors are further compounded by variation in light intensity from one frame to another. In real-world applications of collaborative robotics, it is often not possible to create ideal light conditions. To partially overcome this problem, YUV colour thresholding can be employed with the Y (intensity) range extended to its maximum limits. However, extending the colour boundaries too much results in the threshold values of two or more colours overlapping each other. Also,

extending the threshold values wider will make the vision system error prone as stray pixels from the background will be picked up too. This limits the colour tolerance.

(iii) Inherent sensor noise

Certain errors are inherent in the system. These originate from the camera, the frame grabber card, connecting cables, etc.

Errors in vision-generated data have a significant impact on targeting accuracy even when intercepting or striking a stationary object. The tests carried out on moving targets, however, are more significant as interception accuracy suffers when the target is moving. This is due to the fact that actions are initiated based on predicted future positions which are different from the current position.

Blob Identification using colour thresholding

To facilitate individual identification and separation of objects, a colour identifier comprising two colour patches is used on each robot, as shown in Figure 4. The centres of the two colour patches are first calculated from the image. The inclination of the line joining the two centres gives the orientation of the robot, while the coordinates of the centre of the line give its position. For a target such as a ball, only the centre of the colour patch is calculated as it has no orientation. The velocity of the target is used to add a direction vector to its position.

Figure 4. Identification of a robot (Cr: robot colour patch, Ct: team colour patch)
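The pose computation from the two patch centres can be sketched as follows (the choice of which patch marks the front of the robot, and the coordinate conventions, are assumptions):

```python
import math

def robot_pose(robot_patch_centre, team_patch_centre):
    """Robot position and orientation from the centres of its two colour patches (Figure 4):
    the position is the midpoint of the line joining the centres and the orientation is
    the inclination of that line."""
    (xr, yr), (xt, yt) = robot_patch_centre, team_patch_centre
    x = (xr + xt) / 2.0
    y = (yr + yt) / 2.0
    theta = math.degrees(math.atan2(yr - yt, xr - xt))   # assumes the robot-colour patch is at the front
    return x, y, theta
```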

The object tracking algorithm searches the image to determine if the pixels belong to one of the calibrated colours. The pixels are then grouped to create colour patches using a sequential component labelling algorithm. This algorithm uses a two-pass labelling technique [28] with identifiers that increment from the value of 1. Ideally, the number of labels is equal to the

number of objects in the image with the required colour. The procedure for checking the membership and grouping of pixels consists of 5 steps split into two passes.

A. FIRST PASS (Steps 1 to 4)

1. Process the image in the tracking window from left to right, top to bottom, analyzing each pixel.

2. If the pixel in the image is within the YUV threshold values of the colour of interest, then
   (a) If only one of its upper and left neighbours has a label, copy the label.
   (b) If both upper and left neighbours have the same label, copy that label.
   (c) If both upper and left neighbours have different labels, copy the upper pixel's label and enter the labels in an equivalence table as equivalent labels.
   (d) If not (a), (b) or (c), assign a new label to this pixel and enter it in the equivalence table.

3. If there are more pixels to consider, repeat step 2 for additional pixels, otherwise proceed to step 4.

4. Find the lowest label for each equivalent set in the equivalence table and add it to the equivalence table.

B. SECOND PASS (Step 5)

5. Process the picture by replacing each label with the lowest label in its equivalent set.

To illustrate the algorithm, a binary image (rather than a YUV image) is used in the following example; a code sketch is given after Figure 7. Figure 5 shows a representation of the binary image before the first pass of the algorithm. The 0's represent the background of the image and the 1's represent the objects of interest. It can be seen that there are two objects of interest in the image, one on the left and the other on the right.

Figure 5. The binary image

Figure 6 shows the image after the first pass of the algorithm. The objects now have multiple labels as the first pass of the algorithm was not able to correctly label all shapes (i.e. the object on the left has labels 3 and 1 and the object on the right has labels 2 and 4). Figure 7 shows the image after the second pass of the algorithm, which resolves the problem of multiple labels

for single objects. Having identified the separate objects, the centre of each object is calculated using a centre-of-gravity calculation.

Figure 6. The image after the first pass

Figure 7. Image after the second pass
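The two-pass procedure above can be sketched compactly as follows (a binary image stands in for the YUV threshold test, and a union-find structure plays the role of the equivalence table; this is an illustrative sketch, not the production code):

```python
def label_components(img):
    """Two-pass sequential component labelling with an equivalence table.
    img is a list of rows of 0/1 values; returns a same-shaped grid of labels."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                                   # equivalence table (union-find)

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    # First pass: propagate labels from the upper and left neighbours.
    for r in range(rows):
        for c in range(cols):
            if not img[r][c]:
                continue
            up = labels[r - 1][c] if r > 0 else 0
            left = labels[r][c - 1] if c > 0 else 0
            if up and left:
                labels[r][c] = up                 # copy the upper pixel's label
                ru, rl = find(up), find(left)
                if ru != rl:                      # record the two labels as equivalent;
                    parent[max(ru, rl)] = min(ru, rl)   # the lowest label is the representative
            elif up or left:
                labels[r][c] = up or left
            else:
                labels[r][c] = next_label         # new label, new entry in the table
                parent[next_label] = next_label
                next_label += 1
    # Second pass: replace each label with the lowest label of its equivalent set.
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels

# Example: two separate runs merge into two connected components.
print(label_components([[1, 1, 0, 1],
                        [0, 1, 0, 1]]))           # [[1, 1, 0, 2], [0, 1, 0, 2]]
```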

IV. STATE TRANSITION BASED CONTROL

The data generated by the vision processor, namely the ball position and the position and orientation of the robots, is passed to the control system. It uses an STBC approach to drive the robot behaviour. The state transition diagram for goalkeeper behaviour is shown in Figure 8.

Figure 8. State transition diagram for goalkeeper behaviour (states: S0 Initial State, S1 Clear Ball, S2 Track Ball, S3 Idle; transitions: S0→S1, S0→S2, S0→S3, S1→S2, S2→S1, S2→S3, S3→S2)

To show the robustness and accuracy of the controller, three different behaviours of the goalkeeper were tested based on the on-field ball position (Figure 9):

1. Clear the ball when the ball is within the goal area (G).

2. Track the ball position when the ball is between the goal area and the halfway line (region D), with a view to intercepting (blocking) it if it is deemed to be entering the goal.

3. Idle in the goal centre position when the ball is over the halfway line in the opponent area (O).

The position of the ball determines which behaviour the goalkeeper exhibits.

Figure 9. Goal area, defensive and offensive regions of the field (regions G, D and O; the marked distance is 60 cm)

To change state, the state transition conditions must be met. The state transitions from the initialisation state are determined by the position of the ball.

S0→S1 - Change state if the ball position is within the goal area
S0→S2 - Change state if the ball position is between the goal area and the centre line
S0→S3 - Change state if the ball position is beyond the centre line

The transitions between S1, S2 and S3 are similarly determined. However, a transition between S1 (clear the ball) and S3 (idle) is not possible. In practice, the ball travels at a maximum velocity of 1.5 m/s, so it takes 400 ms for the ball to move from region G to region O (shortest distance of 60 cm). Since the vision processor analyses one frame every 16.67 ms, it will never fail to locate the ball as it crosses region D. Therefore, while the ball is in region D, the robot will exhibit the behaviour of state S2 (track ball). A sketch of this state machine is given below.
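A minimal sketch of the goalkeeper state machine of Figure 8, assuming the regions of Figure 9 can be tested from the ball's x coordinate (the region boundaries and the field layout used here are placeholders):

```python
from enum import Enum

class State(Enum):
    INITIAL = "S0"
    CLEAR_BALL = "S1"
    TRACK_BALL = "S2"
    IDLE = "S3"

def next_state(ball_x, goal_area_x=30.0, halfway_x=85.0):
    """Goalkeeper state transitions driven purely by the ball position (Figure 8).
    Hypothetical layout: region G is x < goal_area_x, region D is
    goal_area_x <= x < halfway_x, region O is x >= halfway_x."""
    if ball_x < goal_area_x:
        return State.CLEAR_BALL        # ball within the goal area (G)
    if ball_x < halfway_x:
        return State.TRACK_BALL        # ball between the goal area and the centre line (D)
    return State.IDLE                  # ball beyond the centre line (O)

# A direct S1 -> S3 transition cannot occur in practice: the ball needs at least 400 ms to
# cross region D, while a new frame is processed every 16.67 ms, so S2 is always entered first.
```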

V. OBJECT INTERCEPTION

The robot must reach the position of the object as quickly as possible in order to intercept it. Unlike object targeting (discussed in Section VI), object interception does not require the robot to approach the object from a given direction. In robot soccer, one of the most important object interception tasks is to block the ball as it heads toward the goal. From the vision data, the velocity of the ball is calculated and updated each frame. One test of the accuracy of the system is the success rate of ball interception by the goalkeeper when the ball is moving towards the goal and is predicted to be entering the goal (Figure 10).

Figure 10. Calculating the ball interception position (labels: GOALSIZE, GOALIESTANDX, ballYEstimate, PhysicalY/2, robot, ball and ball velocity vector)

The flow chart, showing the algorithm to predict where the robot must be positioned to intercept the ball, is shown in Figure 11. If the ball position is not detected accurately, the robot is unlikely to intercept the ball. This is even more so when the ball is travelling at very high or very low speed. At a high speed the time available for the robot to reach the target position is very short. On the other hand, at low speeds the numerical accuracy of finding the angle of the direction of the ball is low, making the calculation of the target position noisy. To overcome this problem we propose to use predictive filtering which is explained below in Section VIII.

The flow chart comprises the following steps:

1. Calculate the physical velocity vector (Ballvel.x and Ballvel.y) of the ball from the present and past positions.
2. Calculate m, the gradient of the line of ball motion, using the velocity vector components.
3. Calculate the constant C in BallPos.y = m*BallPos.x + C using the ball position.
4. Estimate the Y position where the ball crosses the goal line: ballYEstimate = m*0.0 + C.
5. If the ball is travelling toward the goal, set the final Y position to m*GOALIESTANDX + C, set the final X position to GOALIESTANDX, and position the goalkeeper at the final position.

Figure 11. Flow chart for predicting the point of interception
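The flow chart of Figure 11 translates directly into a few lines of code (an illustrative sketch; the goal line is taken at x = 0 as in the figure, and the goal-extent test, default values and variable names are assumptions):

```python
def interception_position(ball_pos, ball_prev, goalie_stand_x=5.0,
                          goal_centre_y=65.0, goal_half_size=20.0):
    """Predict where the goalkeeper should stand to intercept the ball (Figure 11)."""
    vel_x = ball_pos[0] - ball_prev[0]          # physical velocity vector of the ball
    vel_y = ball_pos[1] - ball_prev[1]
    if vel_x >= 0:                              # ball not travelling toward the goal line at x = 0
        return None
    m = vel_y / vel_x                           # gradient of the line of ball motion
    c = ball_pos[1] - m * ball_pos[0]           # constant in BallPos.y = m * BallPos.x + C
    ball_y_estimate = m * 0.0 + c               # Y position where the ball crosses the goal line
    if abs(ball_y_estimate - goal_centre_y) > goal_half_size:
        return None                             # predicted to miss the goal (assumed test)
    return goalie_stand_x, m * goalie_stand_x + c   # final X = GOALIESTANDX, final Y on that line
```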

VI. CONTROL ALGORITHM FOR OBJECT TARGETING

The targeting (shoot) function is implemented by providing the robot with a target position (in line with the ball and the goal) from where it will progressively approach and then guide and/or hit the object (the ball) towards the target (the goal). This is position P in Figure 12, a point behind the stationary object (the ball), in line with the target. The velocities of the left and right wheels of the robot, VL and VR, which move it to position P and orient it to face the target, are calculated by the targeting (shoot) algorithm.

Figure 12. Control over a stationary object interception (dx = RobotBallVector.x, dy = RobotBallVector.y, dx1 = BallTargetVector.x, dy1 = BallTargetVector.y; P is the approach point behind the ball; the angles φ1, φ2, φ3, θe and θr are defined below)

The angles shown in Figure 12 are:

φ1 - TargetAngle (negative value in the case under consideration)
φ2 - BallAngle (positive value)
φ2−φ1 - BallTargetAngle
φ3 - DesiredDirectionAngle
θe - AngleError
θr - RobotAngle

The robot's desired direction angle φ3 is

φ3 = φ2 + (φ2 − φ1) = 2φ2 − φ1        (2)

The angle error θe between the required direction and the current robot direction is

θe = φ3 − θr        (3)

The wheel velocities are calculated using the following pair of equations:

VL = Vc − Kap·θe,   VR = Vc + Kap·θe        (4)

Kap is the constant proportional angular gain. When the robot is in line with the target the angle error θe is equal to zero. The robot then moves straight towards the target with a constant velocity Vc.
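Equations (2) to (4) in code form (a sketch; the angle wrapping to ±180° and the numerical values are assumptions):

```python
def wrap(angle_deg):
    """Wrap an angle in degrees to the range (-180, 180]."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def shoot_velocities(target_angle, ball_angle, robot_angle, vc=30.0, kap=0.85):
    """Wheel velocities for the targeting (shoot) behaviour:
    phi3 = 2*phi2 - phi1 (2), theta_e = phi3 - theta_r (3),
    VL = Vc - Kap*theta_e, VR = Vc + Kap*theta_e (4)."""
    phi3 = 2.0 * ball_angle - target_angle        # desired direction angle
    theta_e = wrap(phi3 - robot_angle)            # angle error
    return vc - kap * theta_e, vc + kap * theta_e

# When the robot is in line with the target the angle error is zero and both wheels run at Vc.
print(shoot_velocities(target_angle=-30.0, ball_angle=20.0, robot_angle=70.0))   # (30.0, 30.0)
```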

The stability of the system is greatly dependent on the correct choice of Kap. An under-damped system will result in an oscillating motion of the robot towards the target position, whereas an over-damped system will make the motion sluggish. The data captured from the vision software was analysed and used to choose the optimal value of Kap. The effect of proportional angular gain on the system stability has been studied in detail. Figure 13 depicts the system response for different gain constants.

Figure 13. Effect of changing the proportional gain Kap (robot angle in degrees versus frame count; curves for Kap = 1.25, 1.0, 0.85 and 0.8, showing under-damped, near-critically damped and over-damped responses)

In the experiments conducted, the robot, oriented towards 0°, was given a stimulus to turn to 90°. To study the turning performance of the robot, the linear velocity component (Vc) was zeroed out so that the robot turned on the spot. The proportional gain Kap was scaled up or down. The best response was achieved when Kap was 0.85, i.e., the system was close to being critically damped. For Kap of 0.8, the robot did not reach the steady state value of 90°, i.e., the system was over-damped. For values of Kap greater than 0.85, the system exhibited substantial overshoots, i.e., it was under-damped. Steady state was achieved well within 25 frames, i.e., in less than 0.5 s.

The control algorithm resulted in extremely reliable stationary targeting: the target (in our case a ball) was consistently struck. However, the level of directional accuracy was not high, so the ball's resulting direction of motion was not reliable. As the image was processed in two parts, the odd and even scan fields separately, the position of the stationary ball could shift by as much as 0.62 cm (approximately 1 pixel), as shown in Figure 14. This resulted in incorrect calculation of wheel velocities.

The problem is more severe when the ball is close to the target, because a small change in the ball position then alters the TargetAngle by a large amount. From one field to the next the difference in angle error δθe is large, resulting in wider variations of VL and VR as the robot takes the shot. As such, the trajectory followed by the robot is not smooth. To overcome this problem, in this work we explore the suitability and benefits of Kalman filtering and investigate how the filter parameters affect the system's accuracy.

Figure 14. Object position in odd and even scan fields (the ball positions Podd and Peven give different values of φ2, and hence of the angle error θe, for the same robot and target)

VII. APPLICATION OF KALMAN FILTERING

Kalman filtering of the data obtained from the vision system resulted in a significant reduction of the variation in the calculated position P. The filter can be described by the following equations:

\dot{x}_n = \hat{\dot{x}}_n + \frac{h}{T}\,(y_n - \hat{x}_n)        (5)

x_n = \hat{x}_n + g\,(y_n - \hat{x}_n)        (6)

where
\dot{x}_n is the filtered (smoothed) estimate of velocity at discrete time n,
\hat{\dot{x}}_n is the predicted estimate of velocity at time n based on the sensor values at time n−1 and all preceding times,
x_n is the filtered (smoothed) estimate of position at time n,
\hat{x}_n is the predicted estimate of position at time n based on the sensor values at time n−1 and all preceding times,
T is the sample time (scan-to-scan period) for the system,
y_n is the sensor reading at time n, and
h and g are the filter parameters.

The above equations provide an updated estimate of the present object velocity and position based on the present measurement of object position y_n as well as prior measurements. The predicted velocity and position at time n are calculated based on a constant velocity model:

\hat{\dot{x}}_n = \dot{x}_{n-1}        (7)

\hat{x}_n = x_{n-1} + T\,\dot{x}_{n-1} = x_{n-1} + T\,\hat{\dot{x}}_n        (8)

Equations (7) and (8) allow transition from the velocity and position at time n−1 to the velocity and position at time n. These transition equations together with (5) and (6) allow tracking of an object and are combined to give just two tracking filter equations:

\hat{\dot{x}}_n = \hat{\dot{x}}_{n-1} + \frac{h}{T}\,(y_{n-1} - \hat{x}_{n-1})        (9)

\hat{x}_n = \hat{x}_{n-1} + T\,\hat{\dot{x}}_n + g\,(y_{n-1} - \hat{x}_{n-1})        (10)

Equations (9) and (10) are used to obtain running estimates of target velocity and position. Kalman filtering addresses two major issues. It lessens the vision quantization error due to separate processing of odd and even scan fields and it significantly reduces the negative effect of the variation in the calculation of target position due to light intensity fluctuations.
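A direct transcription of the tracking-filter equations (9) and (10) for one axis (the same filter is run independently for x and y; the initialisation and the example data are assumptions):

```python
def gh_tracking_filter(measurements, g=0.8, h=0.2, T=1.0 / 60.0):
    """g-h tracking filter, equations (9) and (10): running predicted estimates of
    position and velocity from a sequence of noisy position measurements y_n."""
    x_hat = measurements[0]          # predicted position estimate (initialised to the first reading)
    v_hat = 0.0                      # predicted velocity estimate
    estimates = [(x_hat, v_hat)]
    for y_prev in measurements[:-1]:
        v_hat = v_hat + (h / T) * (y_prev - x_hat)            # equation (9)
        x_hat = x_hat + T * v_hat + g * (y_prev - x_hat)      # equation (10)
        estimates.append((x_hat, v_hat))
    return estimates

# Example: smoothing a noisy, slowly moving coordinate (values in cm, one sample per scan field).
print(gh_tracking_filter([10.0, 10.6, 10.1, 10.7, 10.2, 10.8]))
```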

VIII. FILTERED PREDICTION OF THE OBJECT'S POSITION

In order to approach a moving target (e.g., the ball in a robot soccer game) its future position must be predicted. Kalman filtering has been used to achieve much greater target prediction and interception accuracy. The ball and robot positions are predicted using a constant velocity model, (11) and (12):

\hat{x}^{b}_{n+t} = \hat{x}^{b}_{n} + t\,T\,\dot{x}^{b}_{n-1}        (11)

\hat{x}^{r}_{n+t} = \hat{x}^{r}_{n} + t\,T\,\dot{x}^{r}_{n-1}        (12)

The constant velocity model is realistic for a fast moving ball that does not decelerate significantly. However the constant velocity model only provides an approximation of the motion of an accelerating robot or a robot moving on a circular path.

The position and time of the intersection between the ball and robot can be found by solving the equations (11) and (12) simultaneously. In general, if the two paths intersect, there is one real valued solution. Positive t values are required for the collision points that will occur in the future while negative values indicate collision points that could have occurred in the past. When there is no common solution to the equations, the robot and ball paths are predicted to miss each other; that is, their paths are parallel.

The predicted intersection points are used to calculate the robot's desired angle (2) and angle error (3) which are required for the calculation of the control action (4).
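The simultaneous solution of (11) and (12) can be sketched as a small linear system: the two straight-line paths either intersect at a single point, with a time for each object to reach it, or they are parallel (an illustration only, not the production solver):

```python
import numpy as np

def path_intersection(ball_pos, ball_vel, robot_pos, robot_vel, eps=1e-9):
    """Intersection of the ball and robot constant-velocity paths, equations (11) and (12).
    Returns (point, t_ball, t_robot) with times in sample periods, or None if the paths
    are parallel (no common solution). Positive times are future collision points."""
    p_b, v_b = np.asarray(ball_pos, float), np.asarray(ball_vel, float)
    p_r, v_r = np.asarray(robot_pos, float), np.asarray(robot_vel, float)
    # Solve p_b + t_b * v_b = p_r + t_r * v_r for (t_b, t_r).
    A = np.column_stack((v_b, -v_r))
    if abs(np.linalg.det(A)) < eps:
        return None
    t_b, t_r = np.linalg.solve(A, p_r - p_b)
    return p_b + t_b * v_b, t_b, t_r
```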

IX. EXPERIMENTAL RESULTS

In the presented system, the vision processing was completed in 6 ms. This allowed a margin of over 10 ms to complete the strategy and control calculations. The STBC strategy implementation is computationally efficient even for sophisticated behaviours and completes well within the 16.67 ms time constraint. In this section we present detailed experimental results under three categories: stationary object, robot motion trajectory, and intercepting or blocking a moving target.

(i) Stationary Object

A stationary object (the ball) was measured over several hundred frames and the variation in position over time, calculated using equation (13), was recorded for different filter parameters.

\Delta x^{b} = x^{b}_{n} - x^{b}_{n-1},   \Delta y^{b} = y^{b}_{n} - y^{b}_{n-1}        (13)

Here (x^{b}_{n}, y^{b}_{n}) and (x^{b}_{n-1}, y^{b}_{n-1}) are the positions of the ball in the current and previous frames respectively. Figure 15 plots the ball position for unfiltered data and filtered data with filter parameters g=0.8 and h=0.2. The segment of data shown is the region in which the maximum unfiltered ∆x and ∆y values occurred. The maximum non-filtered ∆x was 0.19 cm, which was reduced to 0.13 cm, while the maximum non-filtered ∆y was 0.24 cm, which was reduced to 0.17 cm.

Figure 15. Plot of ball positions (∆x) and (∆y) for g=0.8, h=0.2

Figure 16 plots the ball position for unfiltered data and filtered data with filter parameters g=0.6 and h=0.4. The maximum non-filtered ∆x was 0.25 cm, which was reduced to 0.08 cm after filtering, while the maximum non-filtered ∆y was 0.23 cm, which was reduced to 0.12 cm. The higher h value dampens the response of the system, giving better performance in the case of a stationary object.

Figure 16. Plot of ball positions (∆x) and (∆y) for g=0.6, h=0.4

The vision sensor error significantly affects the performance of the object interception system because the predicted position of the object is used as the interception point. A typical situation is when the goalkeeper needs to respond very quickly as a penalty shot is taken by the opponent striker (Figure 17). To realise an accurate and fast responding interceptor robot, the interception algorithm must use the future position of the mobile object, by predicting it a few frames ahead, to calculate the robot’s position. The robot’s vertical position along the Y-axis is calculated from the object’s position and velocity (14) while its horizontal position along the X-axis is kept constant so that it moves in a straight line, parallel to the Y-axis, with minimal deviation.

Y^{g}_{n+1} = y^{b}_{n} + f_c\,\dot{y}^{b}_{n},   X^{g}_{n+1} = x^{g}_{n}        (14)

where (X^{g}_{n+1}, Y^{g}_{n+1}) is the target position for the goalkeeper, (x^{g}_{n}, y^{g}_{n}) is the current position of the goalkeeper, (x^{b}_{n}, y^{b}_{n}) is the current ball position, (\dot{x}^{b}_{n}, \dot{y}^{b}_{n}) is the current ball velocity and f_c is the frame correction factor.
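Equation (14) in code form (the frame correction factor fc is a tuning constant; the value shown is illustrative only):

```python
def goalkeeper_target(goalie_pos, ball_pos, ball_vel, fc=4.0):
    """Goalkeeper target position, equation (14): the Y coordinate tracks the ball fc frames
    ahead while the X coordinate stays constant, giving motion parallel to the Y axis."""
    y_next = ball_pos[1] + fc * ball_vel[1]     # predicted ball Y, fc frames ahead
    return goalie_pos[0], y_next                # X is unchanged
```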

For a stationary ball (as is the case during a penalty shot) the noise on the ball's position is interpreted as a velocity, resulting in large variations in the predicted position. The filters significantly improve the performance of the goalkeeper facing a penalty shot. The position of a static ball on the penalty spot was measured and the required interception position of the goalkeeper was calculated. The variation in the calculated position, ∆x and ∆y, for various filter parameters is presented in Figure 18.

Figure 17. Penalty shot taken by the opponent striker

Figure 18. Plot of calculated goalkeeper positions (∆x) and (∆y) for a penalty shot

The maximum non-filtered ∆x value was 0.44 cm, which was reduced to 0.19 cm on filtering, while the maximum non-filtered ∆y value was 1.40 cm, which was lowered to 0.94 cm. This significantly dampened the goalkeeper's movement when the ball was static, thereby improving its initial defensive position.

The filter performance was also evaluated by measuring the variation of angle of a stationary robot. Figure 19 shows the filtered and non-filtered angle variation for filter parameters of g=0.8, h=0.2. The improvements in the maximum angle deviations (for different filter parameters) are summarized in Figure 20. The Kalman filter equations (9) and (10) were used to obtain running estimates of the target angular velocity and orientation. The constant angular velocity model is suitable for control over the turning robot as it does not accelerate or decelerate significantly.

Figure 19. Angle variation of a stationary robot (g=0.8, h=0.2): non-filtered and filtered dθ in degrees versus frame count

Filter Parameters    Max Non-Filtered Angle Deviation    Max Filtered Angle Deviation    % Improvement
g=0.8, h=0.2         8.61°                               6.57°                           23.69
g=0.7, h=0.3         9.29°                               6.69°                           27.98
g=0.6, h=0.4         10.62°                              5.64°                           46.89

Figure 20. Filter evaluation and comparison for a stationary robot

(ii) Robot Motion Trajectory

The effect of filtering on the system's dynamic response was investigated. Figure 21 shows the performance of the system with and without filtering. In the first phase of the experiment, the robot was commanded to move from point A (15.0, 65.0) to point B (130.0, 65.0), which entails no angular motion, only linear motion, resulting in a change in the X coordinate only.

Figure 21. Improvement in system response with filtering (RobotPos.X and RobotPos.Y in cm versus frame count; without filter and with filter g=0.8, h=0.2; scaleFactor = 0.8)

At the 60th frame, the robot's X coordinate is 6.77 cm more than it was without filtering. The response is faster with filtering, yet the system shows no overshoot. In the second phase of the experiment, the robot was placed at location (20.0, 20.0) at an angle of 90°. It was commanded to move to point (130.0, 110.0) with a final orientation of 0°. Figure 22 shows the spatial movement of the robot for different values of filter parameters.

Figure 22. Robot trajectory with and without filtering, from (20.0, 20.0, 90°) to (130.0, 110.0, 0°); curves without filter, with filter g=0.8, h=0.2 and with filter g=0.6, h=0.4 (scaleFactor = 1.0)

As can be seen in Figure 22, the path taken by the robot to move from its initial position to the final position is unstable without any filtering. The robot performs best with filter parameters of g=0.8 and h=0.2.

(iii) Intercepting or Blocking a Moving Object

The prediction algorithm of the controller, as presented in Section VI, was validated by experiments conducted on a moving target (the ball in this case) that was intercepted by a robot (the goalkeeper). The target velocity, the initial distance between robot and target, and the target trajectory were varied and the system's performance recorded.

Figure 23. Goalkeeper intercepting a fast moving target (target speed = 1.318 m/s; X and Y coordinates in cm; target trajectory, robot trajectory, start positions and interception point shown)

In Figure 23, the target is moving at a high speed of 1.318 m/s. The robot is still able to intercept, though barely. The average robot speed is 25.44 cm/s and the time to intercept is 47 frames (783.5 ms). A slower moving target is intercepted better as shown in Figure 24. The target speed is 53.44 cm/s and the average speed of the robot is 9.4 cm/s. The time to intercept is 100 frames (1667 ms).

Figure 24. Goalkeeper intercepting a slow moving target (target speed = 0.5344 m/s; X and Y coordinates in cm; target trajectory, robot trajectory, start positions and interception point shown)

An experiment was conducted to evaluate the performance of the controller when the target rebounds from an edge, resulting in a sudden change of direction and a sharp drop in velocity. The performance is shown in Figure 25. The target was intercepted successfully. After the rebound, the target speed dropped drastically from 1.93 m/s to 1.0 m/s.

Figure 25. Goalkeeper intercepting a target on rebound (target speed drops from 1.93 m/s to 1.0 m/s at the rebound point; target trajectory, robot trajectory, start positions and interception point shown)

With uniform lighting conditions and the initial position of the robot in line with the goal line, the robot is able to intercept the ball on every shot (at low ball speeds and at speeds of up to 1.3 m/s), demonstrating the robustness of the vision tracking system and the predictive controller. In a game situation, however, the robot may not be able to block all shots when it is not in

line with the goal line while the shot is taken. This may occur when the ball is being cleared from the goal area or the robot has just blocked a shot.

Figure 26. Robot intercepting a moving target (target speed = 0.422 m/s, robot speed = 0.651 m/s; target trajectory, robot trajectory, start positions and interception point shown)

Figure 27. Robot intercepting a moving target (target speed = 0.492 m/s, robot speed = 0.702 m/s; target trajectory, robot trajectory, start positions and interception point shown)

Figures 26 and 27 show two more examples of target interception, with the target and robot starting at different locations and at varying distances from each other. For the experimental results of Figures 23 to 27, the filter parameters were set to g=0.8 and h=0.2. The performance of the controller was below par when the filtering and prediction were turned off. As can be seen in Figure 28, the robot failed to intercept the target. The positions of the target and the robot at the 80th and 110th frames are shown.

Figure 28. Failure to intercept a moving target without filtering/prediction (target speed = 0.509 m/s; the target and robot positions at the 80th and 110th frames are marked)

X. SYSTEM LIMITATIONS AND FUTURE WORK

The constant velocity model is suitable for the ball, moving on a smooth wooden surface, as it does not accelerate or decelerate significantly unless it hits an obstacle. However, tracking the ball rolling on a carpet (on which it decelerates significantly) using a constant velocity model results in a steady state tracking error. A similar problem exists when tracking the robots, because the robots regularly accelerate and decelerate or change direction.

To overcome this limitation, a constant acceleration model should be used to filter the robot’s position. The filter is described by (5), (6) and (15):

\ddot{x}_n = \hat{\ddot{x}}_n + \frac{2k}{T^2}\,(y_n - \hat{x}_n)        (15)

where k is the filter parameter, \ddot{x}_n is the filtered estimate of acceleration at discrete time n, and \hat{\ddot{x}}_n is the predicted estimate of acceleration at time n based on sensor values at time n−1 and all preceding times. Based on a constant acceleration model, the transition equations for the prediction of position, velocity and acceleration are:

\hat{\ddot{x}}_n = \ddot{x}_{n-1}        (16)

\hat{\dot{x}}_n = \dot{x}_{n-1} + T\,\ddot{x}_{n-1}        (17)

\hat{x}_n = x_{n-1} + T\,\dot{x}_{n-1} + \frac{T^2}{2}\,\ddot{x}_{n-1}        (18)

A constant-acceleration object will then be tracked with zero lag error in the steady state. The constant velocity model is realistic for a fast moving ball on a smooth wooden surface as it does not decelerate significantly, while the constant acceleration model is suitable for a robot. The ball position will be predicted using the constant velocity model (11), while the robot's position will be predicted using the constant acceleration model (19):

\hat{x}^{r}_{n+t} = \hat{x}^{r}_{n} + t\,T\,\dot{x}^{r}_{n-1} + \frac{(t\,T)^2}{2}\,\ddot{x}^{r}_{n-1}        (19)
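A sketch of the proposed constant-acceleration extension, combining the update equations (5), (6) and (15) with the transition equations (16) to (18) (the parameter values and initialisation are assumptions):

```python
def ghk_tracking_filter(measurements, g=0.8, h=0.2, k=0.1, T=1.0 / 60.0):
    """g-h-k tracking filter for a constant-acceleration target.
    Each step predicts with (16)-(18) and then corrects with (6), (5) and (15)."""
    x, v, a = measurements[0], 0.0, 0.0            # filtered position, velocity, acceleration
    history = []
    for y in measurements:
        # Transition (prediction) step, equations (16)-(18).
        a_pred = a
        v_pred = v + T * a
        x_pred = x + T * v + 0.5 * T * T * a
        # Update (smoothing) step, equations (6), (5) and (15).
        residual = y - x_pred
        x = x_pred + g * residual
        v = v_pred + (h / T) * residual
        a = a_pred + (2.0 * k / (T * T)) * residual
        history.append((x, v, a))
    return history
```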

Equations (11) and (19) will be solved simultaneously to calculate the future collision point. The current system successfully predicted the trajectory of a ball through the robot's workspace. Depending on the distance of the robot from the ball (i.e. the magnitude of the RobotBallVector in Figure 12), a ball prediction of 4 to 10 frames ahead was tested for shooting a moving ball with reasonable success. Failures occurred when the target (the ball) was near an obstacle (another robot) or the edge of the soccer field. In these scenarios a higher level behaviour selection module is required to select alternative strategies. Further, the velocity calculation (4) will be enhanced to incorporate a derivative of the angle error to further dampen sudden swings in the robot's angular position. The modified equations are:

V_L = V_c - K_{ap}\,\theta_e + K_{ad}\,\dot{\theta}_r,   V_R = V_c + K_{ap}\,\theta_e - K_{ad}\,\dot{\theta}_r        (20)

where Kap and Kad are the proportional and derivative gains, respectively. In each frame, the robot's present orientation is calculated and the angular velocity computed using equation (21):

\dot{\theta}_r = \theta_{r,n} - \theta_{r,n-1}        (21)
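Equations (20) and (21) in code form (a sketch; the gains are illustrative):

```python
def pd_wheel_velocities(theta_e, theta_r_now, theta_r_prev, vc=30.0, kap=0.85, kad=0.3):
    """Wheel velocities with the added derivative term, equations (20) and (21):
    the robot's angular velocity is the frame-to-frame change in its orientation."""
    theta_r_dot = theta_r_now - theta_r_prev          # equation (21)
    vl = vc - kap * theta_e + kad * theta_r_dot       # equation (20)
    vr = vc + kap * theta_e - kad * theta_r_dot
    return vl, vr
```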

XI. CONCLUSIONS

Tracking moving objects in real time with a high degree of accuracy is a significant technical challenge when using commodity hardware. The system presented in this paper can track objects in real time to within ±0.25 cm positional accuracy. Filtered vision data has been used to implement a very robust and reliable robot control system based on the State Transition Based

Control (STBC) approach. This system is applicable to environments where reliable tracking of fast mobile objects and/or object interception with high precision are of paramount importance.

Kalman filtering has been applied to motion tracking and trajectory prediction using a real-time video image processing subsystem on commodity hardware. The filter was evaluated on a targeting and interception application (i.e., shooting or blocking the ball in a robot soccer system). The experiments show that the filter significantly reduced measurement errors and variations. The interception rate was also improved once predictive filtering was applied to both the target and the interceptor.

Further research challenges include vision processing for multiple objects (without violating the real-time constraint) and improvement in prediction of target positions once collisions with obstacles, such as the walls of the soccer field, are taken into account.

XII. ACKNOWLEDGEMENT

The collaborative work was supported by a grant from ASIA 2000 HEEP funds and carried out jointly at Massey University and the Advanced Robotic and Intelligent Control Centre, Singapore Polytechnic.

REFERENCES

[1] Messom C.H. and Walker M.G., "Evolving Cooperative Robotic Behaviour using Distributed Genetic Programming", Seventh International Conference on Control, Automation, Robotics And Vision, Singapore, 2002, pp. 215-219.
[2] Barry Cipra, "Engineers Look to Kalman Filtering for Guidance", SIAM News, Vol. 26, No. 5, August 1993, p. 8.
[3] R. E. Kalman, "A new approach to linear filtering and prediction problems", Trans. ASME J. Basic Eng., Vol. 82D, 1960, pp. 35-45.
[4] M. Delgado, A.F. Gómez Skarmeta, H. Martínez Barberá, J. Gómez, "Fuzzy Range Sensor Filtering for Reactive Autonomous Robots", Fifth Intl. Conference on Soft Computing and Information / Intelligent Systems (IIZUKA'98), Fukuoka, Japan, October 1998, ISBN: 981-02-3632-8.
[5] N. Vlassis, B. Terwijn and B. Krose, "Auxiliary particle filter robot localization from high-dimensional sensor observations", Proc. IEEE Int. Conf. on Robotics and Automation, Washington D.C., USA, May 2002, pp. 7-12.
[6] D. Schulz, W. Burgard, D. Fox and A. B. Cremers, "Tracking Multiple Moving Objects with a Mobile Robot", Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Hawaii, USA, December 2001, pp. 371-377.
[7] Roumeliotis S.I. and Bekey G.A., "Collective Localization: A distributed Kalman filter approach to localization of groups of mobile robots", Proc. IEEE International Conference on Robotics and Automation, San Francisco, California, April 22-28, 2000, pp. 2958-2965.
[8] Damir Hujic et al., "The Robotic Interception of Moving Objects in Industrial Settings: Strategy Development and Experiment", IEEE/ASME Transactions on Mechatronics, Vol. 3, No. 3, September 1998, pp. 225-239.
[9] C. H. Messom, "Robot Soccer – Sensing, Planning, Strategy and Control, a distributed real time intelligent system approach", AROB, Oita, Japan, 1998, pp. 422-426.
[10] Jacky Baltes and Nicholas Hildreth, "Adaptive Path Planner for Highly Dynamic Environments", in P. Stone, T. Balch, G. Kraetzschmar (Eds.): RoboCup 2000: Robot Soccer World Cup IV, Springer Verlag, 2001, pp. 76-85.
[11] Piao Songhao, Hong Bingrong, "Robot path planning using genetic algorithms", Journal of Harbin Institute of Technology, Vol. 8, No. 3, Sept 2001, pp. 215-217.
[12] Meng Qingchun, Yin Bo, Xiaong Jianshe, "Intelligent learning technique based on fuzzy logic for multi-robot path planning", Journal of Harbin Institute of Technology, Vol. 8, No. 3, Sept 2001, pp. 222-227.

[13] Hamid Haidarian Shahri, "Towards Autonomous Decision Making in Multi-agent Environments Using Fuzzy Logic", Lecture Notes in Artificial Intelligence 2691, Springer Verlag, 2003, pp. 247-257.
[14] Stefan J. Johansson and Alessandro Saffiotti, "Using the Electric Field Approach in the RoboCup Domain", Lecture Notes in Artificial Intelligence 2377, Springer Verlag, 2002, pp. 399-404.
[15] G. Sen Gupta, C. H. Messom, H. L. Sng, "State Transition Based Supervisory Control for a Robot Soccer System", 1st IEEE International Workshop on Electronic Design, Test and Applications (DELTA 2002), January 29-31, 2002, Christchurch, New Zealand, pp. 338-342.
[16] Pieter Jonker, Jurjen Caarls, and Wouter Bokhove, "Fast and Accurate Robot Vision for Vision Based Motion", in P. Stone, T. Balch, G. Kraetzschmar (Eds.): RoboCup 2000: Robot Soccer World Cup IV, Springer Verlag, 2001, pp. 149-158.
[17] Mark Simon, Sven Behnke, and Raúl Rojas, "Robust Real Time Color Tracking", in P. Stone, T. Balch, G. Kraetzschmar (Eds.): RoboCup 2000: Robot Soccer World Cup IV, Springer Verlag, 2001, pp. 239-248.
[18] Frieder Stolzenburg, Oliver Obst, and Jan Murray, "Qualitative Velocity and Ball Interception", Lecture Notes in Artificial Intelligence 2479, Springer Verlag, 2002, pp. 283-298.
[19] Jens-Steffen Gutmann, Thilo Weigel, and Bernhard Nebel, "Fast, Accurate, and Robust Self-Localization in the RoboCup Environment", in Manuela Veloso, Enrico Pagello, and Hiroaki Kitano (Eds.): RoboCup-99: Robot Soccer World Cup III, Springer Verlag, 2000, pp. 304-317.
[20] Integral Technologies Inc, USA, FlashBus MV User's Guide, May 1998.
[21] C. H. Messom, G. Sen Gupta and H. L. Sng, "Distributed Real-time Image Processing for a Dual Camera System", CIRAS 2001, Singapore, 2001, pp. 53-59.
[22] C. H. Messom, S. Demidenko, K. Subramaniam and G. Sen Gupta, "Size/Position Identification in Real-Time Image Processing using Run Length Encoding", IMTC, Alaska, USA, 2002, pp. 1055-1059.
[23] J. Bruce, T. Balch and M. Veloso, "Fast and Inexpensive Color Image Segmentation for Interactive Robots", IROS 2000, San Francisco, 2000, pp. 2061-2066.
[24] F. Ercal, M. Allen, and F. Hao, "A Systolic Image Difference Algorithm for RLE-Compressed Images", IEEE Transactions on Parallel and Distributed Systems, Vol. 11, No. 5, May 2000, pp. 433-443.
[25] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa and H. Matsubara, "RoboCup: A Challenge Problem for AI", RoboCup-97: Robot Soccer World Cup I, Springer Verlag, London, 1998, pp. 1-19.

[26] J. Baltes, "Practical camera and colour calibration for large rooms", in Manuela Veloso, Enrico Pagello, and Hiroaki Kitano (Eds.): RoboCup-99: Robot Soccer World Cup III, Springer Verlag, New York, 2000, pp. 148-161.
[27] M. Jamzad, B.S. Sadjad, V.S. Mirrokni, M. Kazemi, H. Chitsaz, A. Heydarnoori, M.T. Hajiaghai, and E. Chiniforoosh, "A Fast Vision System for Middle Size Robots in RoboCup", in A. Birk, S. Coradeschi, S. Tadokoro (Eds.): RoboCup 2001: Robot Soccer World Cup V, Springer Verlag, 2002, pp. 71-80.
[28] R. Jain, R. Kasturi, and B.G. Schunck, Machine Vision, McGraw-Hill International Editions, Computer Science Series, International Edition, 1995.