
Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems San Diego, CA, USA, Oct 29 - Nov 2, 2007


The ANSER Project: Airport Nonstop Surveillance Expert Robot

Francesco Capezio, Fulvio Mastrogiovanni, Antonio Sgorbissa and Renato Zaccaria

Abstract— This paper describes ANSER, a system designed to perform surveillance in civilian airports and similar wide outdoor areas. While an intelligent system - possibly controlled by a human supervisor - integrates the information originating from different sources (i.e., fixed devices and sensors distributed throughout the environment) and coordinates their behaviors in case of anomalies, the mobile robot is a significant part of the overall system: its main subsystems, i.e., autonomous surveillance, localization (performed using only a non-differential GPS unit and a laser rangefinder), and navigation, are investigated in depth. Experimental results validate the robustness and reliability of the approach.

I. INTRODUCTION

The work described in this paper is part of ANSER(1), a project for surveillance in airports and similar wide outdoor areas. Within this framework, a system composed of two parts is foreseen: (i) an architecture designed for indoor and outdoor surveillance (possibly under the direct control of human supervisors), exploiting such devices as video cameras, microphones and Passive InfraRed (PIR) sensors, and (ii) an autonomous mobile robot (henceforth referred to as UGV - Unmanned Ground Vehicle), whose sensors and actuators have been especially crafted to successfully perform night patrols. In a sense, the traditional surveillance paradigm is extended by adding a mobile platform able to transport sensors where needed, thus increasing the autonomy and the range of action of the surveillance system and - consequently - reducing the need for human intervention in many situations.

During the past few years, several autonomous surveillance systems based on mobile platforms have been presented. An interesting example is MDARS, a joint US Army-Navy project [1]. The MDARS goal is to provide multiple mobile platforms performing random patrols within assigned areas of warehouses and storage sites, both indoors and in semi-structured outdoor environments. MDARS-E apparently meets the requirements of the ANSER domain. However, it is immediately apparent that high performance is obtained by over-equipping the system with a huge set of different sensory devices and - consequently - by providing adequate onboard computing power to process the huge amount of available data. For example, the localization and navigation subsystems require the joint use of a differential GPS, a fiber-optic gyro and the recognition of retroreflective landmarks via a laser-based proximity sensor.

F. Capezio, F. Mastrogiovanni, A. Sgorbissa and R. Zaccaria are with DIST, Department of Communication, Computer and System Sciences, University of Genova, Genova, Italy. Email: {francesco, fulvio, sgorbiss, renato}@dist.unige.it.
(1) ANSER is an acronym for Airport Night Surveillance Expert Robot, and the Latin name for "goose" (the Capitoline geese, according to tradition, foiled a sneak attack by the Gauls during the siege of Rome).


In [2], a network of mobile all-terrain vehicles and stationary sentries is exploited in an autonomous surveillance and reconnaissance system. The vehicles are equipped with video cameras, and are thus able to detect and track moving objects, which are classified using learning algorithms. Each robot relies for localization on both a differential GPS and an Inertial Measurement Unit (IMU); a PC/104 for locomotion and three networked PCs for planning, perception and communication are required. In [3], a team of UAVs (Unmanned Aerial Vehicles) and UGVs pursues a second team of evaders by adopting a probabilistic game-theoretic approach (see also [4]). Again, the robots need enough computational power to manage a differential GPS receiver, an IMU, video cameras and a color-tracking vision system. In [5], a multirobot surveillance system is presented, describing how a group of miniature robots (called Scouts) accomplishes simple surveillance tasks using an onboard video camera. Because of limitations on the space and power supply available onboard, Scouts rely on remote computers to manage all the resources, to compute decision processes and, finally, to provide them with control commands.

With the sole exception of [5], autonomous navigation and self-localization capabilities are fundamental prerequisites in all the previous systems. Starting from a minimal configuration including an IMU and a carrier-phase differential GPS receiver (e.g., see [6] [7] [8] [9]), a common approach is to equip the mobile platform with redundant sensors, thus requiring high computational power and complex data filtering techniques. In partial contrast with this "over-equipping" philosophy, self-localization in ANSER mostly relies on a standard (non-differential) GPS unit, together with an augmented state vector approach to estimate the low-frequency components of the GPS noise [10] [9] [11]. The overall surveillance task is performed by integrating the flexibility of laser rangefinder-based approaches to surveillance in open spaces with the reliability of presence detectors (such as, e.g., PIRs) either near walls or in cluttered areas.

Section II introduces the surveillance subsystem. Section III describes the adopted localization techniques, investigating the properties of the augmented state vector approach. Section IV discusses how robot navigation is affected by the design choices. Finally, experimental results - obtained with a realistic simulator and in a field set-up at the Albenga Airport - are discussed. Conclusions follow.

II. COORDINATED SURVEILLANCE


Fig. 1. Patrol rounds at Albenga Airport.

As reliability in mobile robot-based surveillance mainly depends on the localization capabilities of the robot itself, it seems reasonable to investigate how to distribute the surveillance tasks within an architecture integrating both distributed devices and mobile robots. In principle, distributed devices can be placed wherever they are needed; unfortunately, the network topology cannot change according to specific needs. Conversely, mobile robots are rather limited in their capabilities; nonetheless, they can move where they are needed at any given moment. Our work originates from the following consideration: in a complex surveillance scenario, different locations are characterized by different requirements. For example, in open spaces it is often unfeasible to place several fixed sensors: it would be preferable to have a single sensor that can be moved around. At the same time, in cluttered areas it is not a trivial task for the cognitive algorithms onboard mobile robots to detect intruders, due to possibly wrong data associations: in this case, it would be valuable to rely on a well-structured and specialized system, composed of a given number of fixed devices providing information with a low error probability. The philosophy of the ANSER project is to integrate heterogeneous sources of information, using different subsystems wherever the areas to be monitored allow for their use. Therefore, near walls and in cluttered areas, a fixed surveillance system based on PIR detectors is preferred to laser-based surveillance; in open spaces, on the contrary, the range and precision of the laser rangefinder are expected to produce more accurate results. In particular, an intelligent supervisor maintains a coherent world model by dividing the Workspace into different areas to be monitored [14]: whereas some areas are assigned to fixed distributed devices, other areas are monitored by mobile robots. Whenever needed (i.e., when something anomalous is detected by the distributed devices), the system is able to plan a course of action for the robot in order to investigate the situation: specifically, the robot can be asked to reach the area where the anomaly has been detected in order to provide camera images to a remote operator (see the sketch below). In the following, we specifically focus on the mobile robot's autonomous surveillance capabilities.
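As a rough illustration only (not the actual ANSER architecture), the supervisor's area assignment and anomaly dispatch could be represented as follows; all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Area:
    name: str
    monitor: str                 # "pir" (fixed devices) or "robot"
    anomaly: bool = False

@dataclass
class Supervisor:
    areas: dict = field(default_factory=dict)
    patrol_queue: list = field(default_factory=list)

    def report_anomaly(self, area_name: str):
        """A fixed device flags an area: put it in front of the
        robot's patrol queue so it is investigated first."""
        self.areas[area_name].anomaly = True
        self.patrol_queue.insert(0, area_name)

# Cluttered zones go to PIRs, open spaces to the laser-equipped robot.
sup = Supervisor(areas={
    "terminal": Area("terminal", monitor="pir"),
    "runway":   Area("runway",   monitor="robot"),
})
sup.report_anomaly("terminal")   # robot is dispatched to investigate
```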

A. Laser-based Surveillance

Despite the technological improvement they brought to mobile robotics, laser rangefinders were initially designed for a different purpose, i.e., improving safety and security in industry. The main idea is to continuously monitor an area of interest (e.g., the proximity of a machine tool, the space in front of an AGV, etc.) in which the presence of people or objects should cause an immediate stop of the system (in matters of safety and security, conservative approaches are often reasonable). In a similar spirit, a very conservative laser-based surveillance algorithm is used in ANSER to identify anomalies. First, the whole area to be monitored by the mobile robot (e.g., the airport terminal and runways) is split into n convex polygonal regions {a_i}, which define an interesting region (see Fig. 1). Next, when performing patrol rounds (according to an a priori defined sequence of areas to be visited, or as a result of an anomaly previously detected by fixed devices), line segments are periodically extracted from the raw measurements provided by an outdoor version of the SICK sensor (with a sensing range of about 80 meters). The system assumes that the monitored area must be empty, and every line segment found inside it is considered a potential intruder (line 4 in Algorithm 1): the ANSER main task consists in carrying the laser rangefinder around, thus allowing the system to cyclically monitor all the areas.

Algorithm 1 Laser-based Surveillance
Require: robot patrolling
Ensure: detection of unexpected objects or intruders
1: start patrol
2: while patrol is not completed do
3:   move & sense
4:   if l_j is found such that l_j ∈ {a_i} then
5:     stop moving
6:     ask potential intruder to exhibit RFID
7:     wait for RFID response
8:     if RFID response = authorized ID then
9:       do nothing
10:    else
11:      alert supervisor
12:      wait for command from supervisor
13:      while command ≠ resume do
14:        if command = remote control then
15:          supervisor moves the robot
16:        else if command = map object then
17:          map object shape and position
18:        end if
19:        wait for command from supervisor
20:      end while
21:    end if
22:  end if
23: end while

When a potential intruder is found, the robot stops (line 5). Next, if the intruder owns an RFID badge that classifies him as "authorized personnel", the robot simply resumes its patrol (lines 6 to 10); otherwise, it warns the intelligent supervisor that an unexpected object has been detected (lines 11 and 12).
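A minimal sketch of the containment test behind line 4 of Algorithm 1, assuming convex regions given as counter-clockwise vertex arrays (names are illustrative, not the ANSER code base):

```python
import numpy as np

def inside_convex(poly, p):
    """True if point p lies inside (or on the border of) the convex
    polygon poly, an (n, 2) array of counter-clockwise vertices:
    p must lie on the left of every edge."""
    poly = np.asarray(poly, dtype=float)
    p = np.asarray(p, dtype=float)
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        # z-component of the 2D cross product (b - a) x (p - a)
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if cross < 0.0:
            return False
    return True

def potential_intruders(segments, regions):
    """Flag every extracted line segment whose endpoints or midpoint
    fall inside some monitored region {a_i} (line 4 of Algorithm 1)."""
    hits = []
    for p1, p2 in segments:
        mid = (np.asarray(p1) + np.asarray(p2)) / 2.0
        if any(inside_convex(r, q) for r in regions for q in (p1, p2, mid)):
            hits.append((p1, p2))
    return hits
```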


A human supervisor can manually tele-operate the robot, using the video feedback provided by the pan/tilt/zoom video camera to further investigate the scene. During this operation, a number of data items are collected from the scene: estimated object position, shape, dimensions, etc. Finally, if the situation is classified by the supervisor as "normal" (e.g., an object left unattended, a vehicle parked in an unusual place), the robot resumes its patrol and ignores the same object in the future (lines 13 to 18), provided that its position, shape and dimensions match. Obviously, human intervention can be required to move the object or the vehicle to a safe place.

III. MOBILE ROBOT SELF-LOCALIZATION

In our scenario, ANSER is asked to continuously patrol wide areas and - in particular - to devote special care to selected regions, according to the surveillance priorities. Therefore, the localization process is the fundamental task upon which all the surveillance modules rely to fulfill their goals: as a matter of fact, the accuracy of the localization process greatly affects the reliability of the robot's surveillance capabilities. Thus, in order to increase robustness while maintaining - at the same time - a reasonably low number of sensors on the mobile base, the ANSER localization subsystem relies only on a single non-differential GPS receiver and a laser rangefinder. Furthermore, in order to improve outdoor localization accuracy, an augmented state vector approach is used to estimate the GPS low-frequency noise through an extended Kalman filter (referred to as EKF).

A. GPS-based Localization

A single non-differential GPS receiver provides the mobile robot with absolute position measurements, which are used to correct the pose estimate provided by odometry. Unfortunately, the measurement process is corrupted by different error sources [9], introducing into the GPS measure a strongly colored noise with a significant low-frequency component. Approximately, this can be modeled as a non-zero mean value in the GPS errors that varies slowly in time (in the following, it will be referred to as a bias in GPS measurements). The analysis of longitude and latitude data collected at a fixed location during 24 hours shows this effect: the Fast Fourier Transform (FFT) of GPS longitude and latitude data (the latter is shown in Fig. 2) exhibits low-frequency components, corresponding to a slow variation of the signal over time. By estimating this bias in GPS measurements, one can expect that - at least in theory - the precision of GPS data should improve, therefore making localization more accurate. To estimate the bias, an augmented state vector x is defined, comprising both the x, y, and θ components of the robot pose, and the x_GPS and y_GPS components of the low-frequency bias in GPS measurements. Notice that, by separating the colored components from the Additive White Gaussian components of the GPS noise, the system gets closer to the conditions under which the EKF is applicable.
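This kind of spectral check can be reproduced with a short script; a minimal sketch, assuming a 1 Hz log of latitude samples stored in a hypothetical latitude_log.npy file (not the actual ANSER data set):

```python
import numpy as np

fs = 1.0                            # sampling rate [Hz], assumed
lat = np.load("latitude_log.npy")   # 24 h of stationary latitude samples
lat = lat - lat.mean()              # remove the constant offset

spectrum = np.abs(np.fft.rfft(lat))
freqs = np.fft.rfftfreq(lat.size, d=1.0 / fs)

# Energy concentrated well below ~1e-3 Hz indicates a bias drifting
# over minutes/hours, as in Fig. 2, which motivates estimating it
# online instead of treating the GPS noise as white.
low = freqs < 1e-3
print("low-frequency energy fraction:",
      spectrum[low].sum() / spectrum.sum())
```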

B. Laser-based Localization

When navigating indoors, the robot is provided with an a priori map of the environment; the laser-based localization subsystem simply updates the robot pose by comparing this map with the features detected by the laser rangefinder. The robot is thus equipped with two laser rangefinders (see Figure 4, top left): the former - used for surveillance - is hidden within the chassis, about 50 cm above the ground, whereas the latter - used for self-localization - is located on top of a pole about 2 m high. The placement of the rangefinder used for localization has been formally motivated in [12], where the authors detail how line features are extracted from raw laser data and then compared with the a priori model: (i) line extraction produces a number of lines {l_j}; (ii) the Mahalanobis distance associated with each tuple (l_j, m_i) is computed (where {m_i} is the set of oriented line segments defining the a priori map); (iii) for each l_j, the line m_i for which such a distance is minimum is selected and then fed to the EKF. When moving outdoors, the lines within the a priori map mostly correspond to the external walls of buildings. In this situation, a smaller number of features is available (see Figure 1), since the robot mostly traverses areas where no buildings are present at all (especially in a typical airport scenario). However, when line features are available, they are sufficient to estimate the full state vector, and - under a number of assumptions - the pose estimate remains valid even when the rangefinder does not provide any further information.

C. An Augmented State Vector Approach to Localization

As anticipated, the proposed approach to localization relies on the idea of "guessing" the bias affecting GPS measurements at a given time, by including it within the state to be estimated. The resulting augmented state vector is x^T = [x, y, θ, x_GPS, y_GPS]. It includes both the robot pose and the two components of the bias in GPS measurements with respect to a fixed frame F_w. Once the dynamic equations of the system are integrated through a standard Euler approximation with step size Δt = 0.1 s, the system is described by the following finite difference equations:

$$
\begin{cases}
x_k = x_{k-1} + \Delta s_{k-1} \cos\theta_{k-1} \\
y_k = y_{k-1} + \Delta s_{k-1} \sin\theta_{k-1} \\
\theta_k = \theta_{k-1} + \Delta\omega_{k-1} \\
x_{GPS,k} = x_{GPS,k-1} \\
y_{GPS,k} = y_{GPS,k-1}
\end{cases}
\tag{1}
$$

where Δs = (δr + δl)/2 and Δω = (δr - δl)/D. The first three equations represent the discrete approximation of the robot kinematics, assuming a unicycle differential drive model. As usual, δr and δl indicate the linear displacements of the right and left driving wheels, while D is the distance between them. The last two equations model the dynamics of the GPS bias x_GPS^T = [x_GPS, y_GPS]. Notice that a constant dynamics is assumed for x_GPS, since no cues are available to make more accurate hypotheses. This means that, when predicting the new state in the time update phase of the EKF, x_GPS is left unchanged.
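For concreteness, the time-update step implied by Equation (1) can be written in a few lines; this is a minimal sketch under the paper's unicycle and constant-bias assumptions, with hypothetical names:

```python
import numpy as np

def predict(x, delta_r, delta_l, D):
    """EKF time update for the augmented state of Eq. (1).

    x: [x, y, theta, x_gps, y_gps] current estimate.
    delta_r, delta_l: wheel displacements since the last step [m].
    D: distance between the driving wheels [m].
    """
    ds = (delta_r + delta_l) / 2.0
    dw = (delta_r - delta_l) / D
    xk = np.array(x, dtype=float)
    xk[0] += ds * np.cos(x[2])
    xk[1] += ds * np.sin(x[2])
    xk[2] += dw
    # xk[3], xk[4]: GPS bias, deliberately left unchanged
    # (constant-dynamics assumption from the text).
    return xk
```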


Fig. 2. Fast Fourier Transform of GPS latitude data.
Fig. 3. The mobile robot approaching d_ij during navigation.

However, the predicted value of x_GPS is updated whenever new measurements are available (i.e., in the correction phase of the EKF), thus finally producing an estimate that changes over time and - hopefully - approximates the actual bias in GPS measurements. When considering the other noise sources that affect x, the process can be described as governed by a nonlinear stochastic difference equation of the form:

$$
x_k = f(x_{k-1}, u_{k-1}, w_{k-1})
\tag{2}
$$

where x ∈ R^5, u is the driving function, i.e., the 2D vector describing the current wheel displacements, and w ~ N(0, W) represents the process noise. By assuming that w has a zero-mean Gaussian distribution with covariance matrix W, systematic odometry errors - due to the approximate knowledge of the robot's geometric characteristics - are not explicitly considered: in principle, the geometric parameters should be included within the augmented state vector as well, as proposed in [11]. Observations are provided both by the GPS and - when available - by the laser rangefinder. Measurements z_GPS provided by the GPS are a nonlinear function of the state, because the relationship between georeferenced data (i.e., latitude and longitude) and the estimated x- and y-coordinates varies with the latitude itself, as a consequence of the non-planarity of the Earth's surface. In other words, z_GPS = f(x). Conversely, for each line l_j originating from laser data, the a priori line m_i that best matches l_j - according to the Mahalanobis distance - can be expressed using two parameters representing, respectively, the distance ρ between the line and the robot, and the angle α between the line and the robot heading; this provides a measurement that is a linear function of the state, i.e., z_laser = (ρ, α) = Hx. Equation (1) is then used to compute the a priori state estimate at time instant k. Next, whenever new GPS or laser measurements are available, they are fused with the a priori EKF estimate to produce a new estimate, thus reducing the errors which are inherently present in odometry and - consequently - providing a new estimate of the GPS bias. During normal operation, the laser rangefinder and the GPS return observations asynchronously, and each observation is used to update the state as soon as it is available.
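A minimal sketch of such an asynchronous correction step follows; the function and Jacobian names are assumptions, and the GPS Jacobian only illustrates the structure obtained after linearizing the geodetic conversion, where a fix observes position plus bias:

```python
import numpy as np

def ekf_correct(x, P, z, h, H, R):
    """Generic EKF measurement update, applied as soon as either a
    GPS fix or a matched laser line becomes available.

    x, P: predicted state (5,) and covariance (5, 5);
    z: measurement; h: predicted-measurement function z = h(x);
    H: its Jacobian at x; R: measurement covariance.
    """
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Linearized GPS model: z = [x + x_gps, y + y_gps], so the fix cannot
# separate position from bias on its own (cf. the observability
# discussion below).
H_gps = np.array([[1.0, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0, 1.0]])

# A matched laser line (rho, alpha) is linear in x and involves only
# the x, y, theta components; its 2x5 H has zeros in the bias columns.
```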

The observability analysis (not shown here) demonstrates that, even if each measurement corrects the state only partially, the state is fully observable when several measurements are considered in cascade. Unfortunately, in outdoor areas it often happens that the rangefinder is unable to detect any usable feature: in this case, the localization algorithm relies only on GPS data. The analysis of the Kalman observability matrix confirms the intuition that - when no laser data are available - only the subspace defined by x + x_GPS, y + y_GPS and θ is observable: when Δs_{k-1} ≠ 0 holds [13], the robot orientation is still corrected by GPS data, while the position is characterized by a permanent error that depends on the current estimate of the GPS bias. In this case, the innovation due to GPS measurements is distributed by the Kalman gain onto the x-, y-, x_GPS and y_GPS components of the state, according to the current value of the state covariance matrix. Since a constant dynamics is assumed for x_GPS and y_GPS, corrections are adequately distributed onto the state components only if the actual GPS bias changes slowly, and provided that an area where the laser rangefinder is able to guarantee full observability will soon be reached. This is reasonable when assuming a cyclic patrol path for autonomous surveillance, with periodic visits to outdoor areas that are mapped in the a priori model and observable by the rangefinder.

IV. MOBILE ROBOT NAVIGATION

As previously discussed, the ANSER mobile robot periodically visits a sequence of locations, which can be selected by either an automated or a human supervisor according to specific events occurring within the monitored environment. If no anomalies are discovered, patrolling simply consists of cyclically going around the assigned areas. In our approach, we define two main groups of areas: the former includes interesting areas (to be monitored carefully either by mobile robots or by fixed devices), whereas the latter consists of traversable areas (i.e., areas which are not relevant for the patrolling task). As a consequence, the robot navigation capabilities vary according to the current area: when moving through traversable areas, the mobile robot follows a selected direction, with the only purpose of reaching the next interesting region as soon as possible; once this region is reached, the robot switches to a more accurate navigation algorithm, e.g., to estimate the dimensions of an unexpected object detected by the laser rangefinder. When planning the robot mission, the supervising station provides the robot with a sequence of n connected via points p_i to be visited in order to reach the next area.


When traversing a given area from p_i(x_i, y_i) to p_j(x_j, y_j), the robot moves toward p_j while minimizing its distance d_ij from the line segment ij connecting p_i to p_j, computed as follows:

$$
d_{ij} = \frac{ax + by + c}{\sqrt{a^2 + b^2}}
\tag{3}
$$

where x and y correspond to the current estimated robot position, and a, b and c are the parameters of the implicit equation of ij, i.e., expressed in the form ax + by + c = 0. Notice that d_ij can be negative, depending on the relative position of the robot with respect to the line. To minimize d_ij, the navigation algorithm employs a typical PID controller to regulate v_p, the component of the robot velocity which is perpendicular to ij (Figure 3). In particular, since we want both to minimize d_ij and to follow a straight direction towards p_j, the desired approaching speed v_p,ref is set as a linear function of the inverse tangent of the distance, i.e., v_p,ref = -k tan^{-1}(d_ij). Notice that the sign of v_p,ref is negative, since the approaching speed is meant to reduce d_ij; as the robot gets closer to the line, the desired approaching speed v_p,ref tends to zero. Finally, the jog is used to regulate the error v_p - v_p,ref through the PID control loop, whereas the robot linear speed is arbitrarily chosen. It should be noted here that only the x and y components of the robot pose are used within the navigation control loop, whereas θ does not affect the algorithm. In spite of this, it is easy to demonstrate that - in the case of unicycle kinematics - the heading of the robot tends asymptotically to the direction of the line, whereas the distance between the robot and the line tends asymptotically to zero: asymptotic stability guarantees robustness in the presence of significant errors in outdoor positioning (when the robot's goal is to traverse an area, going straight is probably the best solution). The area is considered traversed (and a new target fed to the algorithm) once the robot has reached the perpendicular to ij passing through p_j; notice that, if d_ij is small, this also means reaching the target p_j. When visiting an interesting area, or when mapping an unexpected object, this navigation model is not adequate, as it generates only very simple trajectories. Traditional approaches from the trajectory generation and tracking literature are thus employed, at the price of a higher computational complexity.
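A minimal sketch of the cross-track computation of Equation (3) and of the reference speed law, with an assumed gain k (not the tuned ANSER value):

```python
import math

def cross_track_error(p_i, p_j, pos):
    """Signed distance d_ij of 'pos' from the line through p_i and
    p_j (Eq. 3), with the implicit coefficients a, b, c derived
    from the two via points."""
    a = p_j[1] - p_i[1]
    b = p_i[0] - p_j[0]
    c = p_j[0] * p_i[1] - p_i[0] * p_j[1]
    return (a * pos[0] + b * pos[1] + c) / math.hypot(a, b)

def vp_reference(d_ij, k=0.5):
    """Desired perpendicular approach speed: v_p,ref = -k atan(d_ij).
    Negative sign so the robot moves back toward the segment; the
    reference tends to zero as d_ij vanishes. A PID loop (not shown)
    would then regulate v_p toward this reference via the jog."""
    return -k * math.atan(d_ij)
```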

Fig. 4. Robot ANSER patrolling the Albenga Airport.
Fig. 5. Tests performed at 0.9 m/s.

V. EXPERIMENTAL RESULTS

Both the system in its entirety and - in particular - the self-localization subsystem have been tested. Fig. 4 shows a number of snapshots taken from a video recorded during the first public demonstration of ANSER in October 2006 (the video is available at www.laboratorium.dist.unige.it). The rover performs patrol rounds, both outdoors and indoors (Figure 4, top line). When either the robot or the distributed Passive InfraRed sensors detect a potential intruder, a question mark appears in the supervision GUI, meaning that further investigation is required (Figure 4, bottom left). The human supervisor - possibly operating in a room very far away - is alerted (see Figure 4, mid left), and can remotely control the robot to better investigate the area. Video feedback is provided through the pan/tilt/zoom camera on board the robot (see Figure 4, mid right); in this case, a piece of abandoned luggage is found, and the supervisor simply replaces the question mark in the GUI with a different symbol, meaning "object to be removed" (handling it with care...). The rover can then continue its patrol. In Figure 4, bottom, a different situation is shown: the rover has detected a moving object (i.e., a person), and asks him to exhibit his credentials in order to check whether he is allowed to be in that area. If the person is provided with an RFID badge (which happens to be true in this case), the robot simply resumes its patrol, without needing to alert the human supervisor.

As far as the localization subsystem performance is concerned, many experiments have been performed at the Albenga Airport (see Figure 4). During the tests, the robot is manually driven at 1.0 m/s along a pre-established cyclic path that is about 500 meters long.


Fig. 6. Estimated position in a real scenario.

Walls for laser-based localization are "visible" only in a very limited area of the Airport (see walls m1 and m2 in Fig. 5), which forces the rover to rely exclusively on GPS measurements most of the time. Under these conditions, tests have been performed in two modalities: A-tests correspond to localization without GPS bias estimation, while B-tests are performed with bias estimation. In Fig. 5, the rover's actual path during a cyclic B-test is plotted in light gray, whereas the rover's estimated path is plotted in dark gray. Different experimental runs have been performed (each lasting about 3 hours), recording the robot's estimated position at a number of selected locations along the path. Figure 6 shows the estimated robot pose in 8 different locations along the real path (Figure 6 can be superimposed onto Figure 5 to infer where features for laser-based localization are detected). The experiments show that the estimation of the GPS bias (B-tests) increases the localization accuracy by about 25%; however, in the real scenario, B-tests exhibit a smaller improvement in performance with respect to simulation. This is probably due to the fact that the state is fully observable (and hence the GPS bias can be correctly estimated) only when laser data are available. This happens in the proximity of buildings (e.g., walls m1 and m2 in Figure 5 correspond to a hangar); unfortunately, near buildings the GPS signal is less precise, as a consequence of multipath distortion and of the occlusion of low-altitude satellites. Finally, the navigation algorithm has proven to be well suited to the considered scenario, allowing smooth and stable trajectories even when moving at high speed.

VI. CONCLUSIONS

The paper describes the ANSER project (Airport Nonstop Surveillance Expert Robot), in which a network of surveillance devices cooperates with a mobile robot. In particular, an overview of the entire surveillance system has been presented, focusing on the specific capabilities of the robot: autonomous surveillance, localization and navigation. In ANSER, autonomous surveillance requires detecting differences between perceived and expected environmental conditions, on the basis of a simple laser scanner-based algorithm. To guarantee the accuracy necessary for navigation and surveillance, the ANSER localization subsystem plays a fundamental role.

Instead of equipping the robot with a large number of expensive sensors - and the computing power adequate to deal with them - a simple approach has been chosen, relying exclusively on a non-differential GPS unit and a laser rangefinder (i.e., inertial sensors are absent). Laser measurements are exploited only in specific areas of the outdoor patrol path of the robot, i.e., where it is possible to detect line features and to match them against an a priori model of the environment; wherever this is not possible, the network of surveillance devices is used. Along the rest of the path, the robot relies on GPS-based localization. An extended Kalman filter is employed to estimate an augmented state vector comprising both the robot pose and the low-frequency components (bias) of the GPS error. The experiments performed at the Albenga Airport have confirmed the expectations, showing that the approach reasonably improves the overall system capabilities.

REFERENCES

[1] T. Heath-Pastore, H.R. Everett, and K. Bonner. Mobile Robots for Outdoor Security Applications. In American Nuclear Society - 8th Int. Topical Meeting on Robotics and Remote Systems, Pittsburgh, PA, 1999.
[2] M. Saptharishi, C.S. Oliver, C.P. Diehl, K.S. Bhat, J.M. Dolan, A. Trebi-Ollennu, P.K. Khosla. Distributed Surveillance and Reconnaissance Using Multiple Autonomous ATVs: CyberScout. In IEEE Transactions on Robotics and Automation, vol. 18, no. 5, 2002.
[3] R. Vidal, O. Shakernia, H.J. Kim, D.H. Shim, S. Sastry. Probabilistic Pursuit-Evasion Games: Theory, Implementation, and Experimental Evaluation. In IEEE Transactions on Robotics and Automation, vol. 18, no. 5, 2002.
[4] I. Volkan, K. Sampath, K. Sanjeev. Randomized Pursuit-Evasion in a Polygonal Environment. In IEEE Transactions on Robotics, vol. 21, no. 5, 2005.
[5] P.E. Rybski, S.A. Stoeter, M. Gini, D.F. Hougen, N.P. Papanikolopoulos. Performance of a Distributed Robotic System Using Shared Communications Channels. In IEEE Transactions on Robotics and Automation, vol. 18, no. 5, 2002.
[6] S. Panzieri, F. Pascucci, G. Ulivi. An Outdoor Navigation System Using GPS and Inertial Platform. In IEEE Transactions on Mechatronics, vol. 7, no. 2, 2002.
[7] T. Schönberg, M. Ojala, J. Suomela, A. Torpo, A. Halme. Positioning an Autonomous Off-Road Vehicle by Using Fused DGPS and Inertial Navigation. In Proceedings of the 2nd IFAC Conference on Intelligent Autonomous Vehicles (IAV), Espoo, Finland, June 12-14, 1995.
[8] G. Dissanayake, S. Sukkarieh, E. Nebot, H. Durrant-Whyte. The Aiding of a Low-Cost Strapdown Inertial Measurement Unit Using Vehicle Model Constraints for Land Vehicle Applications. In IEEE Transactions on Robotics and Automation, vol. 17, no. 5, pp. 731-747, 2001.
[9] J.A. Farrell, T. Givargis, M. Barth. Real-time Differential Carrier Phase GPS-aided INS. In IEEE Transactions on Control Systems Technology, vol. 8, pp. 709-721, 2000.
[10] J.Z. Sasiadek, Q. Wang. Low Cost Automation Using INS/GPS Data Fusion for Accurate Positioning. In Robotica, vol. 21, pp. 255-260, 2003.
[11] A. Martinelli. The Odometry Error of a Mobile Robot with a Synchronous Drive System. In IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 399-405, 2002.
[12] F. Capezio, F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria. Fast Position Tracking of an Autonomous Vehicle in Cluttered and Dynamic Indoor Environments. In Proceedings of the 8th International IFAC Symposium on Robot Control (SYROCO-06), Bologna, Italy, September 2006.
[13] F. Capezio, A. Sgorbissa, R. Zaccaria.
GPS-based Localization for UGV Performing Surveillance Patrols in Wide Outdoor Areas. In Proceedings of the 2005 International Conference on Field and Service Robotics (FSR-05), Australia, July 2005.
[14] F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria. A Distributed Architecture for Symbolic Data Fusion. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 9-12, 2007.
