Università Politecnica delle Marche
Scuola di Dottorato di Ricerca in Scienze dell'Ingegneria
Curriculum in Ingegneria dell'Informazione

Localization and Navigation of Autonomous Systems in Complex Scenarios

Ph.D. Dissertation of: Alessandro Benini
Advisor: Prof. Sauro Longhi
Curriculum Supervisor: Prof. Sauro Longhi
XII edition - new series


Università Politecnica delle Marche
Scuola di Dottorato di Ricerca in Scienze dell'Ingegneria
Facoltà di Ingegneria
Via Brecce Bianche – 60131 Ancona (AN), Italy

To my Family and Vera

Acknowledgements

First, I would like to thank my advisor, Prof. Sauro Longhi, for his availability and continuous advice during the three years of my Ph.D. I also thank all the colleagues from Thales Italia, especially Riccardo Minutolo, Mauro Montanari, Francesco Barcio and Francesco Monai, for their continuous support during the development of the R3-COP Project. They gave me the opportunity to gain a great deal of experience, working in a pleasant and friendly team. I am also grateful to Adriano Mancini and all the other people from Thales Italia, and in particular Simone Di Nisio, for their wide technical support. Finally, a special thanks from my heart goes to my Family and Vera, without whom my life would not have been the same.


Abstract

Autonomous systems represent a promising and evolving area, in particular with respect to research in artificial and embedded systems. In European industry, many autonomous-system solutions focused on solving a very specific problem are available, but the majority of these solutions are proprietary. While keeping a particular technology proprietary secures a position in the market, it also becomes very costly, especially in the long term. Due to the proliferation of proprietary solutions, the robotics industry shows a high degree of fragmentation, which causes a slowdown in the development of new systems, especially nowadays, when an ever-increasing number of applications of robotic systems is emerging.

Most of the issues that must be addressed in the development of autonomous systems are common to all areas of application: these problems concern, for example, localization in indoor/outdoor environments, the ability to avoid obstacles, and the ability to make simple decisions based on the occurrence of certain events. It is therefore clear that a framework of tools that solves the problems common to all application areas, and from which the development of new task-specific systems can start, produces a remarkable advantage. Therefore, in order to become competitive in a highly dynamic market, it is necessary to fill some gaps that currently slow down the diffusion of autonomous systems: the lack of platforms for the integration of components from various technology suppliers, the unavailability of high-performance embedded platforms, and the absence of a framework of methodologies are just some of the obstacles to overcome.

In this context, the European project R3-COP (www.r3-cop.eu), made possible by funds of the ARTEMIS Joint Undertaking as well as different national funding authorities, and thanks to the collaboration of 27 partners from different European countries, aims to advance over the state of the art, providing a contribution from both perspectives: technology and methodology. In particular, it aims to propose new technologies and methodologies that enable the European industry to produce advanced, robust, autonomous and cooperating robotic systems at a reduced cost.

This Thesis has been developed in the context of the R3-COP Project, in order to overcome some of the addressed problems, and provides a contribution in both directions: technology and methodology.

Concerning technology, the developed algorithms, based on variants of the Kalman Filter, demonstrate the ability to exploit the latest radio technologies of the IEEE 802.15.4a standard (Ultra Wide Band and Chirp Spread Spectrum) for indoor localization. In more detail, a localization system for an Unmanned Ground Vehicle (UGV) based on the Nanotron Chirp Spread Spectrum (CSS) Real Time Localization System is proposed. The Nanotron RTLS kit provides a ranging system, mainly for outdoor environments, that uses a proprietary ranging technology called Symmetrical Double-Sided Two-Way Ranging (SDS-TWR). This technique tries to overcome the limitations of classical Received Signal Strength Indication (RSSI) approaches (e.g., Wi-Fi mapping), which do not ensure good performance, especially in structured environments. A set of these devices forms a Wireless Sensor Network (WSN) suitable for cooperative tasks, where the data link is fundamental to share data and support relative localization. The proposed algorithm models the bias of the ranging data, also considering faults in the measurements, in order to obtain a more reliable position estimate when ranging data are not available. The management of faulty measurements also reduces ranging errors when there is no line of sight between the anchors and the tag, a situation in which the performance of the system degrades. Furthermore, a localization algorithm for an Unmanned Aerial Vehicle (UAV) based on the UbiSense Ultra Wide Band (UWB) system is presented: the proposed solution uses a low-cost Inertial Measurement Unit (IMU) in the prediction step and integrates vision odometry for the detection of markers near the touchdown area. The ranging measurements reduce the errors of the inertial sensors due to the limited performance of accelerometers and gyros. The obtained results show that an accuracy of 15 cm can be achieved.

Concerning methodology, the second part of this Thesis presents a 3D Simulation Environment for the prototyping and validation of cooperative tasks for unmanned systems, as part of the framework of methodologies addressed by the R3-COP project. The proposed Simulation Environment also provides the possibility to convert the control software from the simulated scenario into a real application using the tools provided by MatLab/Simulink. Some examples of use of the developed Simulation Environment are explained, such as formation control of multiple UAVs using Networked Decentralized Model Predictive Control, and mission management using Finite State Machines and cooperation among autonomous agents realized through the exchange of information.

All the developed systems have been focused on the Final Air-Borne Demonstrator of the R3-COP Project, in which the autonomy of and the cooperation between the mobile agents have been proved. The results of the research have been successfully reviewed and published in international conferences and journals, and applied in the Final Air-Borne Demonstrator of the R3-COP Project.

Ancona, January 30, 2014
Alessandro Benini


Riassunto

Autonomous systems represent a very promising development area, particularly with regard to research on intelligent artificial and embedded systems. In industry, many solutions focused on solving a well-defined problem are available, but most of these solutions are proprietary. While keeping certain technologies proprietary guarantees a position in the market, it also entails costs, especially in the long term. Due to the proliferation of proprietary solutions, the robotics industry shows a strong degree of fragmentation, which translates into a slowdown in the development of new systems, especially in the present period, when an ever larger number of applications of robotic systems is coming to light. Consider, for example, the strong growth of robots dedicated to patient support in hospital environments, or robots for cleaning household floors.

Most of the problems that must be addressed in the development of autonomous systems are common to all their application domains: these problems concern, for example, localization in indoor/outdoor environments, the ability to avoid obstacles, and the possibility of making simple decisions based on the occurrence of certain events. It is therefore clear that having a basic framework of tools that solves the problems common to all application domains, and from which to start each time for the development of particular systems devoted to specific tasks, brings a remarkable advantage. Therefore, to become competitive in a highly dynamic scenario, it is necessary to fill some gaps that currently slow down the diffusion of autonomous systems: the lack of platforms for the integration of components from various technology suppliers, the unavailability of high-performance embedded platforms, and the absence of a framework of development methodologies are just some of the obstacles to overcome.

In this context, the European project R3-COP aims to advance the state of the art in the field of autonomous systems, providing a contribution from both the technological and the methodological point of view. The R3-COP project (www.r3-cop.eu), made possible by the funds allocated by the ARTEMIS Joint Undertaking consortium and by the collaboration of 27 partners from different European countries, aims to represent a milestone with respect to the state of the art in the development of autonomous systems, providing a framework of new technologies and methodologies that allow the European industry to produce advanced, robust, autonomous and cooperating robotic systems at an ever lower cost. From the technological point of view, the R3-COP project aims, on the one hand, at the development of high-performance embedded platforms based on fault-tolerant multi-core architectures and, on the other hand, at the definition of innovative technologies in the field of environment perception (also through Sensor Fusion techniques), as well as of cognitive and cooperation capabilities among the different types of autonomous systems. From the methodological point of view, R3-COP aims to provide a basic framework of development, validation and verification tools that can represent the starting point for future autonomous systems.

This Thesis has been developed within the R3-COP project, in order to provide an original contribution to the solution of some of the problems addressed by the project itself. In particular, it aims to contribute in both directions of the project: technology and methodology. Regarding technology, the work carried out in the first part of this Thesis describes the contribution offered in the development of localization algorithms for indoor/outdoor environments for autonomous systems. The developed algorithms, based on variants of the Kalman Filter, demonstrate the possibility of exploiting the recent radio technologies belonging to the IEEE 802.15.4a standard, namely Ultra Wide Band and Chirp Spread Spectrum, in robot localization, especially in indoor environments. In more detail, this Thesis presents a localization system for Unmanned Ground Vehicles based on the Nanotron Chirp Spread Spectrum system. The kit provided by Nanotron represents a possible choice for the development of a ranging system, especially thanks to the proprietary technology called Symmetrical Double-Sided Two-Way Ranging (SDS-TWR). This ranging technique makes it possible to overcome the limitations of classical ranging systems based on the received signal strength (RSSI), which do not ensure satisfactory performance, particularly in structured environments. The proposed algorithm also models the bias in the ranging data, additionally taking into account fault-affected measurements, in order to obtain a more reliable position estimate even when the ranging information of some anchors is not available.

Subsequently, a localization algorithm for Unmanned Aerial Vehicles (UAVs) is proposed, exploiting the UbiSense sensor kit based on the IEEE 802.15.4a Ultra Wide Band (UWB) standard. The proposed algorithm estimates the position of a mini UAV by exploiting the information coming from low-cost inertial sensors and from a vision system for detecting the marker in the landing zone. The ranging measurements provided by the Ultra Wide Band system, with an error of about 25 cm, reduce the errors of the inertial sensors. The results show that the combination of inertial and Ultra Wide Band sensors localizes the UAV with an accuracy of 15 cm.

From the methodological point of view, the second part of the Thesis presents the simulation software developed for the fast prototyping of cooperative control laws and for their validation, with the possibility of exploiting tools for automatic code generation and for its integration on real hardware platforms. In addition, some examples of use of the simulator are discussed, such as the test of an algorithm for the formation flight of multiple UAVs based on Networked Decentralized Model Predictive Control, and algorithms for mission management exploiting Finite State Machines and the cooperation of mobile agents realized through the exchange of information.

All the work carried out has been finalized towards the realization of the related aspects of the final Air-Borne demonstrator of the R3-COP project, in which the localization, navigation and cooperation capabilities of ground and aerial autonomous systems had to be demonstrated. The results of the research have been successfully published in various international journals and conference proceedings.

Ancona, January 30, 2014
Alessandro Benini


Contents

1. Introduction

2. State of the Art
   2.1. Localization Technologies and Algorithms for Autonomous Systems
   2.2. Software Systems for Robotic Simulations

3. Inertial Sensors and Inertial Navigation
   3.1. Introduction
   3.2. MEMS Sensors
        3.2.1. Source of Errors in Inertial MEMS Sensors
   3.3. Accelerometer
   3.4. Gyroscope
   3.5. Magnetometer
   3.6. Barometer
   3.7. Calibration Procedures for MEMS Accelerometer and Gyroscope
        3.7.1. Six-Static Position Test
        3.7.2. MPU-6050 Calibration
   3.8. Attitude Representation using Inertial Sensors
        3.8.1. Euler Angles
        3.8.2. Axis-Angle Representation
        3.8.3. Quaternions
   3.9. Dead-reckoning

4. Ranging Sensors
   4.1. Introduction
   4.2. The NanoLoc Localization System
   4.3. The UWB UbiSense Real-Time Localization System
        4.3.1. Strengths of UWB Technology

5. Sensor Fusion for Indoor/Outdoor Localization of UAVs and UGVs
   5.1. Introduction
   5.2. Adaptive Extended Kalman Filter for Indoor/Outdoor UGV Localization
        5.2.1. The Extended Kalman Filter
        5.2.2. Detection of Faulty Range Measurements
        5.2.3. The Extended Kalman Filter with Bias Modelling
        5.2.4. Experimental Results
        5.2.5. Configuration of the Experiments
        5.2.6. Experimental Results in Environment n. 1
        5.2.7. Experimental Results in Environment n. 2
   5.3. An IMU/UWB/Vision-Based EKF for Mini-UAV Localization
        5.3.1. The Parrot Ar Drone
        5.3.2. The Hardware Setup
        5.3.3. The Software Setup
        5.3.4. Testing of the Localization Algorithm in the Simulation Environment
        5.3.5. IMU Characterization
        5.3.6. The Kinematic Model of the Quadrotor
        5.3.7. The Mathematical Model
        5.3.8. The Extended Kalman Filter
        5.3.9. Experimental Results

6. 3D Simulator for Cooperation between UAV and UGV
   6.1. The Framework
        6.1.1. Management of Three-Dimensional Environment Actors Module
        6.1.2. Drone Module
        6.1.3. Avionic Instruments Module
        6.1.4. Network Services Module
        6.1.5. Vision Module
   6.2. Simulator Interface with Matlab
        6.2.1. The Quadrotor Model
   6.3. A PID Controller for Position Tracking
   6.4. Formation Control via Networked Decentralized Model Predictive Control
        6.4.1. The Kinematic Model
        6.4.2. The Predictive Model
        6.4.3. Physical Constraints
        6.4.4. The Leader-Follower Problem
        6.4.5. Decentralized MPC
        6.4.6. Main Results
        6.4.7. Simulation Results
   6.5. State Machines for Mission Management
        6.5.1. MatLab StateFlow for Mission Management

7. Conclusions and Future Works

A. Air Borne Demonstrator
   A.1. General Description
   A.2. Detailed Description
        A.2.1. Indoor Mission
        A.2.2. Transit between Indoor and Outdoor Environment
        A.2.3. Outdoor Mission
        A.2.4. Features to be Demonstrated and Innovation
        A.2.5. Pictures from the Real Demonstrator

List of Figures

3.1. General Inertial Measurement Unit
3.2. A surface micro-machined electro-statically-actuated micro-motor fabricated by the MNX. This device is an example of a MEMS-based micro-actuator. Source: [1]
3.3. General Accelerometer output in static conditions
3.4. Position Estimation in static conditions with bias
3.5. Non-orthogonality of Axes
3.6. Temperature Drift
3.7. General Accelerometer structure
3.8. MEMS Vibrating Gyroscope. Source: "Modeling the MEMS Gyroscope", K. Craig, http://www.designnews.com/
3.9. Different applications of Pressure Sensor
3.10. Bias Estimation on Z axis using Six Static Position Test
3.11. Comparison between uncalibrated and calibrated data - X axis accelerometer
3.12. Comparison between uncalibrated and calibrated data - Y axis accelerometer
3.13. Comparison between uncalibrated and calibrated data - Z axis accelerometer
3.14. Comparison between uncalibrated and calibrated data - X axis gyroscope
3.15. Comparison between uncalibrated and calibrated data - Y axis gyroscope
3.16. Comparison between uncalibrated and calibrated data - Z axis gyroscope
3.17. On the left, a normal situation: the three gimbals are independent. On the right, the gimbal lock phenomenon: two out of the three gimbals are on the same plane.
3.18. Axis-Angle representation
3.19. Dead Reckoning
3.20. Inertial Navigation System
4.1. Symmetrical Double-Sided Two-Way Ranging
5.1. Robotic platform
5.2. Map of the environment n. 1
5.3. Test n. 1 in environment n. 1
5.4. Test n. 2 in environment n. 1
5.5. Map of the environment n. 2
5.6. Test n. 1 in environment n. 2
5.7. The Parrot Ar Drone quadcopter with body frame axis orientation
5.8. The hardware architecture used during tests
5.9. The Software Architecture
5.10. Temperature trend for Ar Drone accelerometer
5.11. ADC output with z axis upward
5.12. ADC output with z axis downward
5.13. Comparison between calibrated data and phys_data - x axis
5.14. Comparison between calibrated data and phys_data - y axis
5.15. Euler angles
5.16. Sensor fusion algorithm flow-chart
5.17. An example of target detection using the bottom-side Ar.Drone webcam
5.18. An example of QRCode used to improve the localization task in the considered indoor environment
5.19. Map of the indoor environment
5.20. Experimental Results - Test n. 1
5.21. Experimental Results - Test n. 2
5.22. Experimental Results - Test n. 2 - x axis
5.23. Experimental Results - Test n. 2 - y axis
5.24. Experimental Results using also vision odometry
5.25. Experimental Results using also vision odometry - x axis
5.26. Experimental Results using also vision odometry - y axis
6.1. Framework structure diagram
6.2. Virtual Urban Environment
6.3. Vision Module
6.4. Simulink block of quadrotor
6.5. Data flow between simulator and Simulink control system
6.6. Control System for stabilization and position tracking of quadrotor
6.7. A formation of quadrotors in a leader-follower scheme
6.8. Relative configuration of aircraft Vj with respect to aircraft Vi for the tracking error system
6.9. A sample scenario where two UAVs have performed path following by an MPC control law
6.10. Example of path for virtual aircraft: units are in meters
6.11. Position error between leader and follower along the x axis. Graphs relating to Simulations n. 3 and 11
6.12. Position error between leader and follower along the y axis. Graphs relating to Simulations n. 3 and 11
6.13. Position error between leader and follower along the x axis. Graphs relating to Simulations n. 5 and 13
6.14. Position error between leader and follower along the y axis. Graphs relating to Simulations n. 5 and 13
6.15. Position error between leader and follower along the x axis. Graphs relating to Simulations n. 7 and 15
6.16. Position error between leader and follower along the y axis. Graphs relating to Simulations n. 7 and 15
6.17. Paths of Leader and Follower along the x-axis. Graphs relating to Simulation n. 13
6.18. Paths of Leader and Follower along the z-axis. Graphs relating to Simulation n. 13
A.1. Indoor Mission
A.2. Handover
A.3. Outdoor Mission
A.4. Indoor Mission - Final Demonstrator
A.5. UbiSense System - Final Demonstrator
A.6. Outdoor Mission - Final Demonstrator
A.7. Thermocamera Output - Final Demonstrator

List of Tables

3.1. MPU6050 Gyroscope Specifications
3.2. MPU6050 Accelerometer Specifications
5.1. Bosch BMA 150 Accelerometer Specifications
5.2. Comparison between the acceleration values provided by the Ar Drone and the calibrated acceleration values
6.1. Inputs and outputs list of the S-function
6.2. Simulation results of MPC with prediction horizon equal to 10
6.3. Simulation results of MPC with prediction horizon equal to 20

Chapter 1. Introduction

Autonomous systems represent a promising and evolving area, in particular with respect to research in artificial and embedded systems. In European industry, many autonomous-system solutions focused on solving a very specific problem are available, but the majority of these solutions are proprietary. While keeping a particular technology proprietary secures a position in the market, it also becomes very costly, especially in the long term. Due to the proliferation of proprietary solutions, the robotics industry shows a high degree of fragmentation, which causes a slowdown in the development of new systems, especially nowadays, when an ever-increasing number of applications of robotic systems is emerging.

Most of the issues that must be addressed in the development of autonomous systems are common to all areas of application: these problems concern, for example, localization in indoor/outdoor environments, the ability to avoid obstacles, and the ability to make simple decisions based on the occurrence of certain events. It is therefore clear that a framework of tools that solves the problems common to all application areas, and from which the development of new task-specific systems can start, produces a remarkable advantage. Therefore, in order to become competitive in a highly dynamic market, it is necessary to fill some gaps that currently slow down the diffusion of autonomous systems: the lack of platforms for the integration of components from various technology suppliers, the unavailability of high-performance embedded platforms, and the absence of a framework of methodologies are just some of the obstacles to overcome.

In this context, the European project R3-COP, made possible by funds of the ARTEMIS Joint Undertaking as well as different national funding authorities, and thanks to the collaboration of 27 partners from different European countries, aims to advance over the state of the art, providing a contribution from both perspectives: technology and methodology. In particular, it aims to propose new technologies and methodologies that enable the European industry to produce advanced, robust, autonomous and cooperating robotic systems at a reduced cost.

With respect to technology, R3-COP aims to develop:

• A new fault-tolerant, high-performance embedded hardware platform, based on a multi-core architecture and taking into account autonomous systems' functional and non-functional requirements;

• Innovative system components for robust perception of the environment (including sensor fusion), for reasoning and reliable action control, and for communication and cooperation among autonomous vehicles.

With respect to methodology, R3-COP aims to design a methodology-based development framework for autonomous systems. This will be built upon an extensible knowledge base, comprising all state-of-the-art information about the enabling technologies (hardware, algorithms, etc.) for robust autonomous systems. The application of this methodology will be supported by a tool platform covering design, implementation, and verification/testing.

This Thesis has been developed in the context of the R3-COP Project, in order to overcome some of the addressed problems, and provides a contribution in both directions: technology and methodology. In particular, the work carried out and presented in this Thesis aims to provide a genuine contribution to the robust perception of the environment, presenting sensor fusion algorithms for the indoor/outdoor localization and navigation of autonomous systems, and an advance in validation and test methods for autonomous systems, through the design of a 3D Simulation Environment for the prototyping and validation of cooperative tasks for unmanned systems, as part of the framework of methodologies addressed by the R3-COP project. In more detail, the activities carried out in this Thesis are:

• The investigation of the IEEE 802.15.4a technologies, UWB (Ultra Wide Band) and CSS (Chirp Spread Spectrum), in the context of robot localization, especially in indoor environments where the GPS signal is typically not available;

• The integration of MEMS inertial sensors and cameras with ranging sensors for robust localization through sensor fusion algorithms;

• The development of a 3D Simulation Environment for testing and validating the developed algorithms with regard to localization and navigation tasks;

• The testing of control and localization algorithms using the developed Simulation Environment: some examples of use, such as formation control of multiple UAVs using Networked Decentralized Model Predictive Control and mission management using Finite State Machines, are proposed and explained.

All the work carried out in this Thesis has been focused on the Final Air-Borne Demonstrator of the R3-COP Project, in which the autonomy of and the cooperation between the mobile agents have to be proved. A detailed description of the Air-Borne Demonstrator is provided in Appendix A.

Since this Thesis provides a contribution in both directions of the R3-COP project, technology and methodology, the work is divided into two distinct parts. The first part, comprising Chapters 3, 4 and 5, gives a complete overview of the work carried out on the localization of autonomous systems. Chapter 2 contains an overview of the current state of the art in both fields addressed by this Thesis: localization systems for robot navigation, and methodological frameworks for the validation and verification of autonomous systems.

Chapter 3 deals with a detailed analysis of the most important types of MEMS inertial sensors, with particular reference to the MEMS gyroscope and accelerometer as primary sources for inertial navigation systems. MEMS inertial sensors are becoming ever more attractive in robotics, thanks especially to their low cost and small dimensions, even though they are also characterized by poor performance that needs to be managed in order to make these devices reliable. Therefore, Chapter 3 investigates the most important sources of error of MEMS inertial sensors. With calibration procedures, MEMS sensors can be used in the development of inertial navigation systems based on dead reckoning, described in the last part of Chapter 3.

Chapter 4 provides an overview of the ranging techniques used in anchor-based localization systems and of the Ultra Wide Band technology for localization. Chapter 4, together with Chapter 3, provides the necessary background for the localization algorithms developed and described in Chapter 5.

Chapter 5 aims to provide a genuine contribution to the perception of the environment, presenting robust sensor fusion algorithms for the indoor/outdoor localization and navigation of autonomous systems. In this chapter, a localization system for UGVs based on Chirp Spread Spectrum is proposed. The algorithm models the bias of the ranging data, also considering faults in the measurements, in order to obtain a more reliable position estimate when ranging data are not available. Furthermore, a localization algorithm for mini-UAVs based on Ultra Wide Band is presented: the proposed solution uses a low-cost Inertial Measurement Unit (IMU) in the prediction step and integrates vision odometry for the detection of markers near the touchdown area. The ranging measurements reduce the errors of the inertial sensors due to the limited performance of accelerometers and gyros. The obtained results show that an accuracy of 10 cm can be achieved.

The second part of this Thesis deals with a detailed analysis of the developed 3D Simulation Environment. Chapter 6 describes the 3D Simulation Environment developed for testing some of the localization and navigation algorithms. This framework, based on the nVidia PhysX physics engine, also provides an interface with the MatLab/Simulink environment. The Simulation Environment is then used to test different cooperative scenarios, such as the navigation of multiple UAVs in formation using Networked Decentralized Model Predictive Control. Thanks to its interface with MatLab and Simulink, the framework has also been tested and used by other R3-COP partners for the development of mission management systems based on Finite State Machines, described in the last part of that chapter. The possibility to interface the virtual environment with MatLab/Simulink, to develop custom mobile robots by specifying their physical properties, and to simulate different kinds of sensors makes the developed simulator an interesting tool, especially for the development of cooperative robotic systems.

All the material presented in Chapters 5 and 6 has been published in international journals and conferences.

R3-COP Logo. Source: www.r3-cop.eu


Chapter 2. State of the Art

This Thesis provides a contribution in both directions of the R3-COP project, technology and methodology; therefore, the work is divided into two distinct parts. The first part of this chapter provides an analysis of the current state of the art in localization technologies and algorithms for autonomous systems. In the second part, a review of the current software systems for robotic simulations is carried out.

2.1. Localization Technologies and Algorithms for Autonomous Systems

Indoor and outdoor localization of mobile robots is very attractive in many applications. In the last few years, Unmanned Aerial Vehicles (UAVs) have attracted attention in different fields, both civilian and military. They are able to perform tasks in hostile environments where access for humans is impossible or dangerous. Precise localization and navigation for these vehicles is a critical aspect that needs to be analyzed in depth.

In outdoor environments, a UAV can localize itself by exploiting different space-based satellite navigation systems, such as the Global Positioning System (GPS). However, the precision of satellite navigation systems is very limited, especially in civilian applications. The position accuracy that can be achieved with GPS is ≤ 1 m in military applications, and around 10 m in civilian applications. Using a technique known as differential GPS (D-GPS), or EGNOS corrections, in which a separate base receiver is fixed at a known point, civilian accuracy may be improved to 5 m. Although this is not as good as can be achieved using high-frequency radar, it may still be adequate for some applications [2].

In indoor environments, wireless networks can be successfully used not only for communication between devices but also for localization. The main problems that affect range measurements using wireless networks are multipath and non-line-of-sight (NLOS) measurements. As an example, localization based on the Received Signal Strength (RSSI) can provide very poor localization information, especially in cluttered environments or when there are many obstacles between the transmitter and the receiver.

The recent standard IEEE 802.15.4a specifies two additional physical layers (PHYs): a PHY using ultra-wideband (UWB) and a PHY using chirp spread spectrum (CSS), both with a precision time-based ranging capability [3, 4, 5, 6, 7, 8]. The UWB PHY operates in three frequency bands: below 1 GHz, between 3 and 5 GHz, and between 6 and 10 GHz. The UWB physical layer channels have a large bandwidth and provide high ranging accuracy, up to 25 cm. The main advantages of UWB technology are high data transfer rates, low power consumption, high spatial capacity of wireless data transmission and sophisticated usage of radio frequencies. UWB technology is based on sending and receiving carrierless radio impulses using extremely accurate timing, and it can be used, for example, in applications where high-bandwidth signals must be transmitted between devices. In a nutshell, UWB offers the following characteristics:

• High immunity to the multipath effect;
• Very low power consumption;
• Capacity to penetrate obstacles (at the lowest frequencies);
• Very low probability of being detected and intercepted;
• Low interference to existing wireless systems.

These characteristics make UWB very attractive for the development of localization systems, especially in indoor environments or when high precision in position estimation (better than 25 cm) is required.

The CSS PHY operates in the 2.4 GHz ISM band. The chirp solution does not support ranging as part of the standard, but the first 802.15.4a CSS chip (nanoLOC), developed by Nanotron, offers ranging as an additional (proprietary) function. It provides a unique solution for devices moving at high speed, thanks to its immunity to the Doppler effect, and allows communication at longer ranges.

In real scenarios, the precision provided by the ranging sensors alone is very often not enough. In order to improve the accuracy, different methods based on the fusion of different sources of data have been developed. Some of these improve the ranging or satellite accuracy by means of data fusion with inertial navigation systems (INS) [9, 10]; other methods are based on vision [11, 12] or on a combination of INS and vision [13]. The basic idea of sensor fusion applied to the localization problem is to provide an estimate of the position with an accuracy greater than that reachable by the various devices used individually. Several techniques are available for sensor fusion, many of which are based on a probabilistic representation of the information. Such an approach is particularly interesting, since the measurements produced by the sensors are inevitably affected by noise and/or errors that do not allow a precise estimate of the position of an object. The most promising solutions are based on the theory of the Kalman Filter (and its variants) and of the Particle Filter. Such algorithms, widely studied in the literature, provide great help for the localization problem, also allowing the use of low-cost sensors by compensating the limitations and weaknesses of one type of sensor with the information provided by other devices.

In recent years, inertial sensors based on MEMS (MEMS IMUs) have found use in many areas, mainly due to their low cost and small size [14]. However, low-cost sensors are generally also characterized by poor performance. The most important problems of a MEMS IMU are [15]:

2.2. Software Systems for Robotic Simulations of sensor fusion several techniques are available, many of which are based on a probabilistic representation of the information. Such assumption is particularly interesting since the measurements produced by the sensors are inevitably affected by noise and / or errors that do not allow a precise estimation of the position of an object. The most promising solutions are based on the theory of the Kalman Filter (and variants) and the Particle Filter. Such algorithms, widely studied in the literature, provide a great help to the problem of localization, allowing also the use of low-cost sensors, integrating the limitations and weaknesses of a type of sensor with information provided by other devices. In recent years, inertial sensors based on MEMS (MEMS IMU) have found use in many areas, mainly due to the their low cost and small size [14]. But, generally low-cost sensors are characterized also by poor performance. The most important problems of a MEMS IMU are [15]: • Bias

• Scale factor

• Non-orthogonality

• Temperature Drift

Therefore, in order to achieve an effective performance improvement in sensor fusion algorithms, it is necessary to analyze in detail the behavior of such sensors and to implement appropriate calibration procedures, both under static and dynamic conditions. Similar considerations apply to the magnetometer and the pressure sensor. Such a set of sensors, suitably calibrated, is the basis for an Inertial Navigation System (INS).

In this context, the proposed solutions aim to provide a genuine contribution to the development of robust sensor fusion algorithms for the indoor/outdoor localization and navigation of autonomous systems, exploiting the advantages of the IEEE 802.15.4a technologies through:

• The investigation of the IEEE 802.15.4a technologies, UWB (Ultra Wide Band) and CSS (Chirp Spread Spectrum), in the context of robot localization, especially in indoor environments where the GPS signal is typically not available;

• The integration of MEMS inertial sensors and cameras with ranging sensors for robust localization through sensor fusion algorithms, whose basic mechanism is illustrated by the sketch below.
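To make the probabilistic-fusion idea concrete, the following minimal sketch fuses a dead-reckoned motion prediction with noisy range-derived position fixes in one dimension, using a scalar Kalman filter. It is an illustration only, not the filter developed in this Thesis: the time step, the noise levels and the 25 cm measurement standard deviation (chosen to echo the UWB ranging accuracy quoted above) are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                    # hypothetical sample time [s]
q = 0.05                    # process-noise variance of the prediction
r = 0.25 ** 2               # measurement variance (UWB-like 25 cm std)

x_true = 0.0
x_est, p_est = 0.0, 1.0     # state estimate and its variance

for k in range(100):
    v = 1.0                           # commanded velocity [m/s]
    x_true += v * dt                  # true motion
    # Prediction step (dead reckoning with inertial/odometric input)
    x_est += v * dt
    p_est += q
    # Update step with a noisy range-derived position fix
    z = x_true + rng.normal(0.0, 0.25)
    k_gain = p_est / (p_est + r)
    x_est += k_gain * (z - x_est)
    p_est *= (1.0 - k_gain)

print(f"final error: {abs(x_est - x_true):.3f} m, "
      f"estimated std: {p_est ** 0.5:.3f} m")
```

Even this scalar version shows the mechanism exploited throughout Chapter 5: the prediction keeps the estimate available between (or during faulty) range updates, while the updates bound the drift of the prediction.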

2.2. Software Systems for Robotic Simulations

Given the complexity of the control laws that regulate the dynamics of mobile robots, and the difficulty of designing robust control systems, a very good simulation platform is necessary. Simulation activities are essential, making it possible to study and compare different approaches to solving a particular problem and providing a drastic reduction not only in development time but also in costs. Matlab/Simulink is an excellent choice for the rapid prototyping of control systems. However, the Simulink environment requires the mathematical model of the system under simulation; these mathematical models are in some cases too difficult to manage, or not available at all. Moreover, it is difficult to model mathematically complex robot behaviors in realistic environments, as in the case of cooperative tasks. For such situations, a large number of three-dimensional simulators has been developed:

• Player/Stage/Gazebo [16]: the Player Project, based on the ODE physics engine (formerly the Player/Stage Project or Player/Stage/Gazebo Project), is a project to create free software for research into robotics and sensor systems. Its components include the Player network server and the Stage/Gazebo robot platform simulators.

• FlightGear [17]: FlightGear Flight Simulator (often shortened to FlightGear or FGFS) is a free, open-source and multi-platform flight simulator developed by the FlightGear project since 1997. The simulation engine in FlightGear is called SimGear. It is used both as an end-user application and in academic and research environments. Flight Dynamics Models (FDM) determine how the flight of an aircraft is simulated in the program. FlightGear uses a variety of internally written and imported flight model projects, and any aircraft must be programmed to use one of these models.

• Microsoft Robotics Developer Studio [18]: the Microsoft Robotics Developer Studio 2008 R3 (Microsoft RDS) is a Windows-based environment for academic, hobbyist, and commercial developers to easily create robotics applications across a wide variety of hardware. The integration of the NVidia PhysX technologies enables a very strong physics simulation.

• SimplySim SimplyCube [19]: the software integrates nicely with Microsoft Robotics Developer Studio, providing complete and detailed 3D environments and sensor models. In fact, any 3D environment created using one of the many tools included in the Simply Suite can be exported and used in a robotic simulation within MRDS. The SimplySim SimplyCube is currently implemented for two of the most accurate physics engines on the market: NVidia PhysX [20] and Newton Game Dynamics [21].

• X-Plane [22]: X-Plane is a flight simulator produced by Laminar Research for Android, iOS, Linux, Mac OS X and Windows. X-Plane can be bundled with other software to create and edit new aircraft and scenery. The flight model of X-Plane is based on the so-called "Blade Element Theory": in practice, the aircraft aerodynamic surfaces (wings, tail surfaces, etc.) are split into several parts, and the force acting on each of them is calculated by applying a fluid-dynamic model.

Among all the solutions outlined above, SimplySim provides good opportunities, and it can be programmed using the languages of the .NET framework. In this way, it is possible to interface it with Microsoft Robotics Developer Studio and other platforms. Furthermore, it includes a full suite of tools for the realization not only of three-dimensional models of robots (also in terms of physical characteristics) but also of complete three-dimensional scenes. This makes it quite interesting for the creation of highly realistic simulation systems.

The developed Simulation Environment combines the high realism of simulations carried out in a three-dimensional virtual environment (in which the most important laws of physics act) with the ease of Simulink for the fast prototyping of control systems. This work extends a previous framework [23] [24] that was based on modularity and stratification into different specialized layers; its limitation was the lack of a physics engine and of an advanced 3D visualization interface.


Chapter 3. Inertial Sensors and Inertial Navigation

This chapter deals with a detailed analysis of the most important types of MEMS inertial sensors, with particular reference to the MEMS gyroscope and accelerometer as primary sources for inertial navigation systems. MEMS inertial sensors are becoming ever more attractive in robotics, thanks especially to their low cost and small dimensions, even though they are also characterized by poor performance that needs to be managed in order to make these devices reliable. Therefore, this chapter investigates the most important sources of error of MEMS inertial sensors. With calibration procedures, MEMS sensors can be used in the development of inertial navigation systems based on dead reckoning, described in the last part of this chapter.

3.1. Introduction

An Inertial Measurement Unit (IMU) is typically composed of a sensor capable of measuring linear accelerations (the accelerometer) and one measuring angular velocities (the gyroscope). Both sensors measure the corresponding physical quantities in a three-dimensional space, and each is characterized by its own reference system. Ordinarily, the two reference systems are considered overlapped: the origin of the accelerometer reference system corresponds to the origin of the gyroscope reference system (Figure 3.1).

Figure 3.1.: General Inertial Measurement Unit

In recent years, inertial sensors based on MEMS (MEMS IMUs) have found use in many areas, mainly due to their low cost and small size [14]. However, low-cost sensors are generally also characterized by poor performance. The most important problems of a MEMS IMU are [15]:

• Bias
• Scale factor
• Non-orthogonality
• Temperature drift

In order to obtain good performance, it is necessary to analyze in detail the behavior of such sensors and to implement appropriate calibration procedures, both under static and dynamic conditions. Similar considerations apply to the magnetometer and the pressure sensor. Such a set of sensors, suitably calibrated, is the basis for an Inertial Navigation System (INS).

In this chapter, after a brief introduction to MEMS technology, the sources of error in MEMS inertial sensors are analyzed. Subsequently, the mathematical model of each MEMS sensor and the calibration procedures are explained. The calibration procedures described hereafter are offline procedures necessary to reduce the effect of systematic errors, such as the static bias and the scale factor. The considerations about error sources and calibration procedures provide a series of methods and techniques for managing sensor data in the development of inertial navigation systems based on dead-reckoning methods, described in the last part of this chapter.

3.2. MEMS Sensors

Micro-Electro-Mechanical Systems, or MEMS, are composed of a set of miniaturized mechanical and electro-mechanical elements realized using micro-fabrication techniques. These devices have been recognized as one of the most promising technologies of the 21st century, able to revolutionize both industrial and consumer products [1].

Electro-mechanical microsystems are essentially a set of devices of various kinds (mechanical, electrical and electronic) combined in a highly miniaturized integrated system on the same silicon substrate. These microsystems combine the electrical properties of integrated semiconductors with opto-mechanical properties. They are therefore "intelligent" systems that combine electronic functions, fluid management, and optical, biological, chemical and mechanical properties in a small space, integrating sensor and actuator technology with the most diverse process management functions. A technology for even more miniaturized systems, however, is emerging: these new systems, known as Nano-Electro-Mechanical Systems, or NEMS, reduce the size of the devices to nanometric dimensions [25].

Microsystem technology is adopted in many different application fields: most complex opto-electronic devices are based on microscopic mirrors or lenses, oscillating individually or in arrays, used to build devices such as laser switches, sensors for telescopes, deformable lenses, projectors and advanced displays. But inertial sensors, accelerometers, retinal scanners, digital shutters, interferometers and sensors for sophisticated measurements are also gaining an uncontested advantage from this promising technology. Within microwave electronics (1 GHz - 100 GHz), MEMS devices are used as single switches to realize more complex applications such as phase shifters, matching networks, resonant filters, feed networks for array antennas and, in general, reconfigurable systems. Even in chemical and bio-engineering technology, MEMS devices are used for new solutions. Among these applications, electric motors with a diameter of two millimeters and a length of ten, planetary gear included, are also available (Figure 3.2). Therefore, the integration of mechanical systems, sensors and electronic circuits on the same substrate opens new possibilities in various sectors.

Figure 3.2.: A surface micro-machined electro-statically-actuated micro-motor fabricated by the MNX. This device is an example of a MEMS-based micro-actuator. Source: [1]

3.2.1. Source of Errors in Inertial MEMS Sensors

MEMS-based inertial sensors such as MEMS gyroscopes and MEMS accelerometers are gaining ever more interest as aiding sensors to improve low-cost navigation systems. However, the sources of error in MEMS-based inertial sensors must be appropriately managed in order to turn the raw sensor measurements into reliable data for attitude and position determination [26]. The following paragraphs give a detailed explanation of the most important systematic errors in MEMS inertial sensors.

Bias

The bias is defined as a constant offset added to the sensor output and is undoubtedly the most important cause of performance degradation in an inertial sensor. For example, due to the bias, the acceleration measured by an accelerometer can be higher or lower (depending on the sign of the bias) than the actual one, and in static conditions the mean value of the sensor output is different from 0 g. The 0 g bias level therefore describes the DC output voltage level of the accelerometer when it is not in motion or being acted upon by the earth's gravity. From the mathematical point of view, the acceleration read at the sensor output (see Figure 3.3) can be described, in first approximation, by the following formula:

$$a_s(t) = a_b(t) + b \qquad (3.1)$$

where:

• $a_s(t)$ is the sensor output;
• $a_b(t)$ is the true acceleration;
• $b$ is the bias.

Following these considerations, it is evident that the bias on the acceleration needs to be appropriately treated. Simply by performing a double numerical integration to compute the position from the accelerometer measurements, it is easy to show that the position estimate becomes totally unreliable after a few seconds (Figure 3.4).
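The unreliability shown in Figure 3.4 follows directly from Eq. 3.1: a constant bias b, integrated twice, produces a position error growing as ½bt². The following minimal sketch reproduces this effect numerically; the bias value and the sample rate are hypothetical, not measured values.

```python
import numpy as np

dt = 0.005                 # hypothetical 200 Hz sample rate
t = np.arange(0.0, 10.0, dt)
bias = 0.05                # hypothetical uncompensated bias [m/s^2] (~5 mg)

a_meas = np.full_like(t, bias)     # static sensor: true acceleration is zero
v = np.cumsum(a_meas) * dt         # first integration  -> velocity
p = np.cumsum(v) * dt              # second integration -> position

# The numerical result matches the analytic drift p(t) = 0.5 * b * t^2
print(f"position error after 10 s: {p[-1]:.2f} m "
      f"(analytic: {0.5 * bias * 10.0 ** 2:.2f} m)")
```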

Figure 3.3.: General Accelerometer output in static conditions

Figure 3.4.: Position Estimation in static conditions with bias

Non-Orthogonality

The non-orthogonality error is caused by an imperfect alignment between the triad of physical axes of the sensor and the ideal triad taken as reference for the calculation of the output signal. This error depends essentially on the degree of accuracy attainable during the construction phase. Considering again a 3-axial accelerometer, the non-orthogonality of the axes is such that the accelerometer mounted on a single axis is also sensitive to forces applied along the other axes. Figure 3.5 shows a typical example of non-orthogonality: in black the ideal sensor axes (xb, yb, zb), in green the real axes (x, y, z).

Figure 3.5.: Non-orthogonality of Axes

Scale Factor

The sensitivity, or scale factor, is the ratio of the sensor's electrical output to the mechanical input applied. In an accelerometer, the scale factor is typically expressed in mV/g. Since most sensors are influenced by temperature, the sensitivity is also valid only over a narrow temperature range, typically 25 ± 5 °C. Sensitivity is sometimes specified with a tolerance, usually ±5% or ±10%; this assures that the sensitivity will remain within the stated tolerance deviation from the nominal sensitivity value [27].

Temperature Offset

The temperature offset consists essentially in the change of the bias value with the variation of the sensor's internal temperature. Most of the currently available MEMS sensors are equipped with an internal temperature sensor, so that temperature information can be used during the calibration procedures to enable reliable long-term measurements. In Figure 3.6, the typical output of a MEMS accelerometer recorded over a period of 7 hours (with a data rate of 200 Hz) shows the change in the sensor output while the temperature of the device increases.

Figure 3.6.: Temperature Drift
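In practice, the 0 g bias level and the sensitivity are what turn raw samples into physical units. The sketch below shows this conversion for a digital accelerometer; the 16384 LSB/g value is the datasheet sensitivity of an MPU-6050-class sensor at the ±2 g range, while the 0 g offset is a hypothetical value that would come from a static calibration such as the six-position test of Section 3.7.

```python
G = 9.80665                    # standard gravity [m/s^2]
SENSITIVITY_LSB_PER_G = 16384  # MPU-6050-style sensitivity at the ±2 g range
OFFSET_0G_LSB = 310            # hypothetical 0 g bias level from a static test

def raw_to_ms2(raw_lsb: int) -> float:
    """Convert a raw accelerometer sample to m/s^2, removing the 0 g offset
    and dividing by the sensitivity (i.e., multiplying by the scale factor)."""
    return (raw_lsb - OFFSET_0G_LSB) * G / SENSITIVITY_LSB_PER_G

# Example: a static z axis pointing upward should read close to +1 g
print(raw_to_ms2(16384 + 310))   # -> ~9.81
```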

3.3. Accelerometer

An accelerometer is a device capable of measuring proper accelerations. Accelerometers have multiple applications in many fields, such as industry and science. For example, high-performance accelerometers are embedded in the inertial navigation systems of aircraft or submarine vehicles, and most of the recently developed portable devices, such as smart-phones or tablets, integrate accelerometers in order to improve the user experience. Most accelerometers are Micro-Electro-Mechanical Sensors (MEMS). In a MEMS accelerometer, the basic principle of operation is the displacement of a small proof mass etched into the silicon surface of the integrated circuit and suspended by small support beams. The general structure of a MEMS accelerometer is shown in Figure 3.7.

Figure 3.7.: General Accelerometer structure Accelerometers are based on the Newton’s second law of motion: When an acceleration is applied to the device, the developed force displaces the proof mass. The support beams act as a spring, and the fluid (typically air) contained on the IC acts as a damper, resulting in a second order physical system. Both 17

Both the deformation of the suspension and the displacement of the proof mass can be exploited to obtain an electric signal, e.g. by means of capacitive effects. Consider a proof mass M, a suspension beam with spring constant K and a damping factor D. By using Newton's second law, the mechanical transfer function can be obtained as follows:

\[ H(s) = \frac{x(s)}{a(s)} = \frac{1}{s^2 + \frac{D}{M}s + \frac{K}{M}} \tag{3.2} \]

where a is the external acceleration and x is the displacement of the proof mass; ω_r = √(K/M) is the natural resonance frequency and Q is the quality factor. As can be seen from Eq. 3.2, the resonance frequency of the beam-mass structure can be increased by increasing the spring constant and decreasing the proof mass, while the quality factor decreases as the damping coefficient increases or the mass decreases [28]. Typically the proof mass has six degrees of freedom, but a mono-dimensional accelerometer is designed to have a dominant direction of motion. 3-axis accelerometers are generally built using three distinct mono-dimensional accelerometers. The typical parameters to consider when choosing a MEMS accelerometer can be summarized as follows:
• Sensitivity
• Maximum operation range
• Frequency response
• Resolution
• Offset
• Off-axis sensitivity
• Shock survival

Mathematical Model

In the following, a mathematical model that takes into consideration the sources of error described in the previous paragraphs, as proposed in [29, 14], is shown. Considering a tri-axial accelerometer, the acceleration measurement on each axis can be modeled as follows:

\[ a_z = a + b_a + s_a a + c_T (T - T_0) + v \tag{3.3} \]

where
• a_z is the acceleration measured at the output of the device
• a is the true acceleration
• b_a is the bias
• s_a is the scale factor
• c_T is the correction factor for thermal drift
• T is the current device temperature
• T_0 is the device's initial temperature
• v is the noise

A sketch of the corresponding correction is given below.
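As a complement to the model of Eq. 3.3, the following sketch shows how the deterministic part of the model can be inverted to correct a raw measurement once the calibration coefficients are known; the numerical values are hypothetical.

```python
def correct_accel(a_meas, b_a, s_a, c_T, T, T0):
    """Invert the deterministic part of Eq. 3.3:
    a_meas = a + b_a + s_a * a + c_T * (T - T0) + v.
    The noise term v cannot be removed deterministically."""
    return (a_meas - b_a - c_T * (T - T0)) / (1.0 + s_a)

# Illustrative coefficients, e.g. obtained from a calibration procedure.
a_true = correct_accel(a_meas=9.95, b_a=0.12, s_a=0.01,
                       c_T=0.002, T=31.0, T0=25.0)
print(f"corrected acceleration: {a_true:.3f} m/s^2")
```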

3.4. Gyroscope

A gyroscope is a mechanical device composed essentially of a rotating mass which, due to the conservation of angular momentum, tends to keep its axis of rotation fixed. This feature allows measuring, with a degree of reliability strongly dependent on the construction technology of the gyroscope, the rotation with respect to a reference axis. Gyroscope units used in the naval and aeronautical fields are typically composed of a triad of gyroscopes, each sensitive to the rotation around one of the three Cartesian axes. In recent years, gyroscopes have undergone an impressive process of miniaturization. This miniaturization, made possible by MEMS technology, has led to the realization of gyroscopes with a size of a few square millimeters. However, the principle of operation of MEMS gyroscopes is different from that of mechanical ones. Unlike mechanical gyros, which are based on the conservation of angular momentum, MEMS gyroscopes base their operation on the Coriolis effect. All vibratory gyroscopes rely on the transfer of energy between two vibration modes of a structure caused by the Coriolis acceleration [30] (Figure 3.8). The electrical signal produced is proportional to the angular rate.

Mathematical Model

Considering a tri-axial gyroscope, the angular velocity around each axis can be modeled as proposed in [29, 14]:

\[ \omega_a = \omega + \omega_e \sin\varphi + b_g + s_g\, \omega_e \sin\varphi + c_T (T - T_0) + v \tag{3.4} \]

where
• ω_a is the angular velocity measured at the output of the device
• ω is the true angular velocity
• ω_e is the Earth's rotation speed (≈ 4.1781 · 10⁻³ deg/s)
• φ is the latitude
• b_g is the bias
• s_g is the scale factor
• c_T is the correction factor for thermal drift
• T is the current device temperature
• T_0 is the device's initial temperature
• v is the noise

Figure 3.8.: MEMS Vibrating Gyroscope. Source: "Modeling the MEMS Gyroscope" K. Craig, http://www.designnews.com/

3.5. Magnetometer

The magnetometer is a device capable of measuring the intensity and direction of a magnetic field. Modern MEMS magnetometers base their operation on the Hall effect: in the presence of a magnetic field, the flow of electrons inside a conductor is deflected orthogonally to the direction of the current. This deflection produces a potential difference proportional to the intensity of the orthogonal magnetic field. A MEMS magnetometer has most of the problems which characterize other types of sensors made with the same technology:
• scale factor
• non-orthogonality of axes

In addition, MEMS magnetometers must also take into account other sources of uncertainty:
• Hard Iron Error: this error is caused by the presence of other magnetic fields within the environment in which the sensor operates. Such magnetic fields can be caused, for example, by operating electronic equipment.
• Soft Iron Error: when a non-magnetized material is immersed in a magnetic field, it produces a distortion of the magnetic field itself.

Most of the errors that occur in the measurements made by a magnetometer can be removed by appropriate calibration procedures.

Mathematical Model

Considering a tri-axial magnetometer, the magnetic field on each axis can be modeled as:

\[ \mu_a = \mu + b_g + s_g \mu + c_T (T - T_0) + \mu_H + \mu_S + v \tag{3.5} \]

where
• μ_a is the magnetic field intensity measured at the output of the device
• μ is the true magnetic field intensity
• b_g is the bias
• s_g is the scale factor
• c_T is the correction factor for thermal drift
• T is the current device temperature
• T_0 is the device's initial temperature
• μ_H is the contribution of the Hard Iron error
• μ_S is the contribution of the Soft Iron error
• v is the noise


3.6. Barometer

A pressure sensor, or barometer, measures the pressure exerted by the collisions of the molecules of a fluid against a sensitive element. This type of sensor is used in a multitude of different applications, from the medical field to the automotive industry. Pressure sensors can be divided into four basic categories, listed below:
• Relative pressure (gauge pressure): pressure measured relative to atmospheric pressure;
• Absolute pressure: pressure measured relative to a perfect vacuum;
• Differential pressure: pressure measured relative to a reference pressure;
• Vacuum gauge: degree of vacuum measured relative to atmospheric pressure.

Figure 3.9 shows a comparison of the different types of barometers as a function of the measurable pressure range, in relation to atmospheric pressure.

Figure 3.9.: Different applications of Pressure Sensor

Mathematical Model

Following a mathematical model that takes into consideration the sources of error described in the previous paragraphs, the pressure measurement can be modeled as follows:

\[ \psi_a = \psi + b_a + s_a \psi + c_T (T - T_0) + v \tag{3.6} \]

where:
• ψ_a is the pressure measured at the output of the device
• ψ is the true pressure
• b_a is the bias
• s_a is the scale factor
• c_T is the correction factor for thermal drift
• T is the current device temperature
• T_0 is the device's initial temperature
• v is the noise


3.7. Calibration Procedures for MEMS Accelerometer and Gyroscope

3.7.1. Six-Static Position Test

The calibration procedure widely used for obtaining a good estimate of bias, scale factor and non-orthogonality of the axes in MEMS accelerometers and gyroscopes is the Six Static Position Test [31, 32]. This procedure, initially developed only for gyroscopes, consists in placing the sensor on a leveled surface and acquiring measurements in six different positions: for each axis, data are acquired with that axis pointing first upward and then downward (see Figure 3.10). If we suppose to know with sufficient precision the Earth's rotation velocity, the gravity acceleration at the point where the measurements are performed, and the direction of North, it is possible to estimate, by means of linear combinations of the measurements, the approximate values of bias, scale factor and non-orthogonality of the axes.

Figure 3.10.: Bias Estimation on Z axis using Six Static Position Test

Keeping the model described in Eq. 3.3 without considering the noise, from the acceleration measurements gathered by putting the i-th axis vertically, first upward and then downward, the following relationships can be deduced:

\[ \ddot z_{up} = b_a + s_a(-g) + c_T (T - T_0) \tag{3.7} \]
\[ \ddot z_{down} = b_a + s_a(g) + c_T (T - T_0) \tag{3.8} \]

If we acquire measurements after the warm-up of the sensor and for about 60 minutes, we can neglect the temperature term and estimate the bias and scale factor using the following relationships:

\[ b_a = \frac{\ddot z_{up} + \ddot z_{down}}{2} \tag{3.9} \]

\[ s_a = \frac{\ddot z_{down} - \ddot z_{up}}{2g} \tag{3.10} \]

To obtain an estimate of the non-orthogonality of the axes, it is necessary to consider simultaneously all the measurements acquired in the six different positions, and then express the equations described above in the following form:

\[ \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix} = \begin{pmatrix} s_{a,x} + m_{a,xx} & m_{a,xy} & m_{a,xz} \\ m_{a,yx} & s_{a,y} + m_{a,yy} & m_{a,yz} \\ m_{a,zx} & m_{a,zy} & s_{a,z} + m_{a,zz} \end{pmatrix} \cdot \begin{pmatrix} \ddot z_x \\ \ddot z_y \\ \ddot z_z \end{pmatrix} + \begin{pmatrix} b_{a,x} \\ b_{a,y} \\ b_{a,z} \end{pmatrix} \tag{3.11} \]

that can also be written as:

\[ \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix} = \begin{pmatrix} s_{a,x} & m_{a,xy} & m_{a,xz} & b_{a,x} \\ m_{a,yx} & s_{a,y} & m_{a,yz} & b_{a,y} \\ m_{a,zx} & m_{a,zy} & s_{a,z} & b_{a,z} \end{pmatrix} \cdot \begin{pmatrix} \ddot z_x \\ \ddot z_y \\ \ddot z_z \\ 1 \end{pmatrix} = E \cdot a \tag{3.12} \]

where:

\[ E = \begin{pmatrix} s_{a,x} & m_{a,xy} & m_{a,xz} & b_{a,x} \\ m_{a,yx} & s_{a,y} & m_{a,yz} & b_{a,y} \\ m_{a,zx} & m_{a,zy} & s_{a,z} & b_{a,z} \end{pmatrix} \tag{3.13} \]

and

\[ a = \begin{pmatrix} \ddot z_x \\ \ddot z_y \\ \ddot z_z \\ 1 \end{pmatrix} \tag{3.14} \]

The elements along the main diagonal represent the scale factors, while the other elements represent the non-orthogonality of the axes. The last column represents the bias. The measurements carried out during the six static position test can be grouped in the following way. The vectors describing the gravity acceleration:

\[ \vec a_1 = \begin{pmatrix} g \\ 0 \\ 0 \end{pmatrix}, \quad \vec a_2 = \begin{pmatrix} -g \\ 0 \\ 0 \end{pmatrix} \tag{3.15} \]


\[ \vec a_3 = \begin{pmatrix} 0 \\ g \\ 0 \end{pmatrix}, \quad \vec a_4 = \begin{pmatrix} 0 \\ -g \\ 0 \end{pmatrix} \tag{3.16} \]

\[ \vec a_5 = \begin{pmatrix} 0 \\ 0 \\ g \end{pmatrix}, \quad \vec a_6 = \begin{pmatrix} 0 \\ 0 \\ -g \end{pmatrix} \tag{3.17} \]

can be grouped as follows:

\[ A = \begin{pmatrix} \vec a_1 & \vec a_2 & \vec a_3 & \vec a_4 & \vec a_5 & \vec a_6 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \tag{3.18} \]

while the measurements taken in each of the six static positions:

\[ u_1 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{X\,up}, \quad u_2 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{X\,down} \tag{3.19} \]

\[ u_3 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{Y\,up}, \quad u_4 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{Y\,down} \tag{3.20} \]

\[ u_5 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{Z\,up}, \quad u_6 = \begin{pmatrix} l_{ax} \\ l_{ay} \\ l_{az} \end{pmatrix}_{Z\,down} \tag{3.21} \]

become:

\[ U = \begin{pmatrix} u_1 & u_2 & u_3 & u_4 & u_5 & u_6 \end{pmatrix} \tag{3.22} \]

The objective is to estimate the parameters of the E matrix (Eq. 3.12). This can typically be achieved using a least-squares approach:

\[ E = U \cdot A^T \cdot (A \cdot A^T)^{-1} \tag{3.23} \]

A numerical sketch of this estimation is given below.
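A minimal numerical sketch of Eq. 3.23, assuming the measurements have already been arranged into the U and A matrices, is the following (the synthetic E used to generate U is only for illustration):

```python
import numpy as np

def estimate_calibration(U, A):
    """Least-squares estimate of the calibration matrix E (Eq. 3.23):
    U is 3x6 (one measured column per static position),
    A is 4x6 (reference gravity vectors augmented with a row of ones)."""
    return U @ A.T @ np.linalg.inv(A @ A.T)

g = 9.81
# Reference accelerations for the six static positions (Eqs. 3.15-3.18).
A = np.array([[ g, -g, 0,  0, 0,  0],
              [ 0,  0, g, -g, 0,  0],
              [ 0,  0, 0,  0, g, -g],
              [ 1,  1, 1,  1, 1,  1]], dtype=float)

# U would contain the six measured acceleration vectors; here a synthetic
# example with small scale-factor errors and biases on each axis.
E_true = np.hstack([np.diag([1.01, 0.99, 1.02]), [[0.1], [-0.05], [0.2]]])
U = E_true @ A
E = estimate_calibration(U, A)
print(np.round(E, 3))   # diagonal: scale factors, last column: biases
```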

3.7.2. MPU-6050 Calibration

The calibration procedure described above has been tested on the InvenSense MPU-6050 MEMS IMU. The MPU-6050 combines a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die, together with an onboard microcontroller providing motion fusion through a proprietary set of algorithms. In addition to the calibrated data, the MPU-6050 can also provide raw data from the accelerometers and gyroscopes, as well as the device temperature, over an I2C communication channel. The most important features are reported in Tables 3.1 and 3.2; more information can be found in [33].

Parameter                        Condition           Value   Unit
Cross Axis Sensitivity           25 °C               0.2     %
Non Linearity                    25 °C               ±2      %
Initial Zero Tolerance (ZRO)     25 °C               ±20     °/s
ZRO Variation Over Temperature   −40 °C to +85 °C    ±20     °/s
Output Data Rate                 Programmable        8000    Hz

Table 3.1.: MPU6050 Gyroscope Specifications

Parameter                        Condition                Value   Unit
Cross Axis Sensitivity           25 °C                    0.5     %
Non Linearity                    25 °C                    ±2      %
Initial Calibration Tolerance    X and Y axes             ±50     mg
Initial Calibration Tolerance    Z axis                   ±80     mg
Sensitivity vs Temperature       −40 °C to +85 °C, ±2g    0.02    %/°C
Output Data Rate                 Programmable             1000    Hz

Table 3.2.: MPU6050 Accelerometer Specifications

Figures 3.11-3.16 show the results of the six static position test carried out on the InvenSense MPU6050 MEMS IMU. The blue lines are the uncalibrated data provided by the sensor, while the red ones describe the calibrated data. The bias reduction on the Z axis of the accelerometer is more evident than on the X and Y axes, in agreement with the data reported in Table 3.2, where the typical value of the bias on the Z axis is ±80 mg, while the typical value on the X and Y axes is ±50 mg. Concerning the gyroscope, the slope of the uncalibrated data is compatible with the data reported in the datasheet and summarized in Table 3.1. The calibration test has been carried out with the sensor at a stationary temperature of around 25 °C.


Figure 3.11.: Comparison between uncalibrated and calibrated data - X axis accelerometer

Figure 3.12.: Comparison between uncalibrated and calibrated data - Y axis accelerometer

Figure 3.13.: Comparison between uncalibrated and calibrated data - Z axis accelerometer


Figure 3.14.: Comparison between uncalibrated and calibrated data - X axis gyroscope

Figure 3.15.: Comparison between uncalibrated and calibrated data - Y axis gyroscope

Figure 3.16.: Comparison between uncalibrated and calibrated data - Z axis gyroscope


3.8. Attitude Representation using Inertial Sensors

3.8.1. Euler Angles

A minimal representation of orientation can be obtained using a set of three angles ζ = [φ θ ψ]^T. Consider an elementary rotation of an angle α around one of the coordinate axes. A generic rotation in space can be obtained by composing three elementary rotations in an appropriate sequence, such that two consecutive rotations do not occur around parallel axes. This means that only 12 of the 27 possible combinations are feasible. Each of the 12 triads is a set of three Euler angles. In avionic systems it is common practice to use the ZYX representation, also called RPY or roll-pitch-yaw representation. The 3D rotation using RPY angles is defined as follows:

• The first rotation, of an angle φ, is around the x axis (roll): this rotation is described by the rotation matrix R_x(φ)
• The second rotation, of an angle θ, is around the y axis (pitch): this rotation is described by the rotation matrix R_y(θ)
• The third rotation, of an angle ψ, is around the z axis (yaw): this rotation is described by the rotation matrix R_z(ψ)

The global 3D rotation matrix is obtained by multiplying the elementary rotation matrices as follows:

\[ R(\zeta) = R_z(\psi) R_y(\theta) R_x(\varphi) \tag{3.24} \]

\[ R(\zeta) = \begin{pmatrix} c_\psi c_\theta & c_\psi s_\theta s_\varphi - s_\psi c_\varphi & c_\psi s_\theta c_\varphi + s_\psi s_\varphi \\ s_\psi c_\theta & s_\psi s_\theta s_\varphi + c_\psi c_\varphi & s_\psi s_\theta c_\varphi - c_\psi s_\varphi \\ -s_\theta & c_\theta s_\varphi & c_\theta c_\varphi \end{pmatrix} \tag{3.25} \]

For the inverse solution, given:

\[ R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \tag{3.26} \]

with θ ∈ (−π/2, π/2):

\[ \psi = \mathrm{Atan2}(r_{21}, r_{11}) \tag{3.27} \]
\[ \theta = \mathrm{Atan2}\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right) \tag{3.28} \]
\[ \varphi = \mathrm{Atan2}(r_{32}, r_{33}) \tag{3.29} \]

while with θ ∈ (π/2, 3π/2):

\[ \psi = \mathrm{Atan2}(-r_{21}, -r_{11}) \tag{3.30} \]
\[ \theta = \mathrm{Atan2}\left(-r_{31}, -\sqrt{r_{32}^2 + r_{33}^2}\right) \tag{3.31} \]
\[ \varphi = \mathrm{Atan2}(-r_{32}, -r_{33}) \tag{3.32} \]

A minimal implementation of the forward and inverse mapping follows.
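The following sketch implements the forward mapping of Eq. 3.25 and the inverse solution of Eqs. 3.27-3.29 for the first branch; the function names are illustrative.

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix of Eq. 3.25, R = Rz(yaw) Ry(pitch) Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
            [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
            [-sp,   cp*sr,            cp*cr]]

def rpy_from_matrix(R):
    """Inverse solution of Eqs. 3.27-3.29, valid for pitch in
    (-pi/2, pi/2); it degenerates when cos(pitch) = 0 (gimbal lock)."""
    yaw = math.atan2(R[1][0], R[0][0])                          # Eq. 3.27
    pitch = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))  # Eq. 3.28
    roll = math.atan2(R[2][1], R[2][2])                         # Eq. 3.29
    return roll, pitch, yaw

# Round-trip check: recovers (0.1, -0.4, 1.2).
print(rpy_from_matrix(rpy_to_matrix(0.1, -0.4, 1.2)))
```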

Both solutions degenerate when cos(θ) = 0. This problem is known in the literature as Gimbal Lock and is discussed in the next paragraph.

Drawbacks of the Euler Angle Representation

The most important drawback of the Euler angle representation is the Gimbal Lock phenomenon. Gimbal lock is the phenomenon for which two rotational axes of an object point in the same direction. Any system that uses Euler angles has this problem, because Euler angles evaluate each axis independently in a definite order. As an example, consider the ZXZ representation. In this case the rotation matrix R is described as follows:

\[ R = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3.33} \]

with α, γ ∈ [−π, π] and β ∈ [0, π]. If β = 0:

\[ R = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3.34} \]

the second matrix has no effect in the multiplication. Therefore, the 3D rotation matrix becomes:

\[ R = \begin{pmatrix} c_\alpha c_\gamma - s_\alpha s_\gamma & -c_\alpha s_\gamma - s_\alpha c_\gamma & 0 \\ s_\alpha c_\gamma + c_\alpha s_\gamma & c_\alpha c_\gamma - s_\alpha s_\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3.35} \]

and, using trigonometric formulas:

\[ R = \begin{pmatrix} c_{(\alpha+\gamma)} & -s_{(\alpha+\gamma)} & 0 \\ s_{(\alpha+\gamma)} & c_{(\alpha+\gamma)} & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3.36} \]

Note that only the sum α + γ appears in the above matrix: changing α and γ while keeping their sum constant produces the same rotation, so one degree of freedom is lost. The only way to exit from Gimbal Lock is to change β to some value other than 0. A similar problem appears when β equals π. Figure 3.17 (source: http://en.wikipedia.org/wiki/Gimbal_lock) shows the Gimbal Lock phenomenon, and a numerical check follows the figure.

Figure 3.17.: On the left a normal situation: the three gimbals are independent. On the right the Gimbal lock phenomenon: two out of the three gimbals are on the same plane.
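A quick numerical check of the degeneracy can be done as follows: with β = 0, the ZXZ sequence collapses to a single rotation around z that depends only on α + γ, as stated by Eq. 3.36.

```python
import numpy as np

def Rz(a):
    """Elementary rotation around the z axis."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# With beta = 0 the middle rotation of the ZXZ sequence is the identity,
# so two different (alpha, gamma) pairs with the same sum give the same R.
print(np.allclose(Rz(0.3) @ Rz(0.5), Rz(0.7) @ Rz(0.1)))  # True
```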

3.8.2. Axis-Angle representation

The axis-angle representation describes a rotation of a rigid body in three-dimensional space using two values:
• a unit vector r which indicates the direction of the axis of rotation;
• an angle θ describing the magnitude of the rotation around the r axis.

This representation comes from Euler's Rotation Theorem, which implies that any rotation or sequence of rotations of a rigid body in three-dimensional space is equivalent to a pure rotation around a single fixed axis. Consider a vector r = [r_x r_y r_z]^T describing an axis of rotation with respect to the reference system O-xyz. In order to obtain the rotation matrix R(θ, r) describing a rotation θ around the r axis, we proceed as follows:
• align r with the z axis, through a rotation of −α around z and a rotation of −β around y;
• rotate by an angle θ around z;
• restore the original orientation of r, through a rotation of β around y and a rotation of α around z.

Therefore, the rotation matrix R(θ, r) is built as:

\[ R(\theta, r) = R_z(\alpha) R_y(\beta) R_z(\theta) R_y(-\beta) R_z(-\alpha) \tag{3.37} \]

From the vector r we get:

\[ \sin\alpha = \frac{r_y}{\sqrt{r_x^2 + r_y^2}}, \quad \cos\alpha = \frac{r_x}{\sqrt{r_x^2 + r_y^2}} \tag{3.38} \]

\[ \sin\beta = \sqrt{r_x^2 + r_y^2}, \quad \cos\beta = r_z \tag{3.39} \]

The rotation matrix corresponding to the axis-angle notation is:

Figure 3.18.: Axis-Angle representation 

\[ R(\theta, r) = \begin{pmatrix} r_x^2 (1-c_\theta) + c_\theta & r_x r_y (1-c_\theta) - r_z s_\theta & r_x r_z (1-c_\theta) + r_y s_\theta \\ r_x r_y (1-c_\theta) + r_z s_\theta & r_y^2 (1-c_\theta) + c_\theta & r_y r_z (1-c_\theta) - r_x s_\theta \\ r_x r_z (1-c_\theta) - r_y s_\theta & r_y r_z (1-c_\theta) + r_x s_\theta & r_z^2 (1-c_\theta) + c_\theta \end{pmatrix} \tag{3.40} \]

For the matrix described in Eq. 3.40 the following property holds:

\[ R(-\theta, -r) = R(\theta, r) \tag{3.41} \]

which leads to a non-unique representation of the attitude: a rotation of −θ around −r is equal to a rotation of θ around r. The inverse of the mapping R(θ, r) can be obtained as follows. Given:

\[ R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \tag{3.42} \]

then

\[ \theta = \cos^{-1}\left(\frac{r_{11} + r_{22} + r_{33} - 1}{2}\right) \tag{3.43} \]

\[ r = \frac{1}{2\sin\theta} \begin{pmatrix} r_{32} - r_{23} \\ r_{13} - r_{31} \\ r_{21} - r_{12} \end{pmatrix} \tag{3.44} \]

when sin θ ≠ 0. It should be noticed that the three components of the vector r satisfy the following constraint:

\[ r_x^2 + r_y^2 + r_z^2 = 1 \tag{3.45} \]

If sin θ = 0, the relationships in Eq. 3.44 are not defined. A sketch of the forward and inverse conversion is given below.
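The following sketch implements the forward mapping of Eq. 3.40 and the inverse relationships of Eqs. 3.43 and 3.44; it is valid only when sin θ ≠ 0.

```python
import numpy as np

def axis_angle_to_matrix(theta, r):
    """Rotation matrix of Eq. 3.40 for a unit axis r and angle theta."""
    rx, ry, rz = r
    c, s, v = np.cos(theta), np.sin(theta), 1.0 - np.cos(theta)
    return np.array([[rx*rx*v + c,    rx*ry*v - rz*s, rx*rz*v + ry*s],
                     [rx*ry*v + rz*s, ry*ry*v + c,    ry*rz*v - rx*s],
                     [rx*rz*v - ry*s, ry*rz*v + rx*s, rz*rz*v + c]])

def matrix_to_axis_angle(R):
    """Inverse solution of Eqs. 3.43-3.44; valid when sin(theta) != 0."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    r = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta, r

r = np.array([1.0, 2.0, 2.0]) / 3.0      # unit axis (norm = 1)
theta, axis = matrix_to_axis_angle(axis_angle_to_matrix(0.8, r))
print(theta, axis)                        # ~0.8 and ~[0.333, 0.667, 0.667]
```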

3.8.3. Quaternions

The drawbacks of the Euler angle and axis-angle representations can be overcome using an attitude representation based on four parameters, called a quaternion. Quaternions have important applications in the study of the rotation group of three-dimensional space and in physics (the Theory of Relativity and Quantum Mechanics). Quaternions are also used in robotics to identify the spatial position of mechanical arms, in attitude control and in 3D computer graphics, since computations carried out with quaternions are numerically more stable. A more detailed theoretical analysis of quaternion algebra can be found in [34]; the sketch below recalls the basic operations.
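As a minimal illustration of quaternion algebra (using the standard Hamilton convention, not taken from [34]), the following sketch rotates a vector with a unit quaternion:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q * (0, v) * q^-1."""
    qc = np.array([q[0], -q[1], -q[2], -q[3]])  # conjugate = inverse for unit q
    return quat_mult(quat_mult(q, np.concatenate([[0.0], v])), qc)[1:]

# A 90-degree rotation around z maps the x axis onto the y axis.
q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(np.round(rotate(q, np.array([1.0, 0.0, 0.0])), 6))   # ~[0, 1, 0]
```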

3.9. Dead-reckoning

Navigation is the determination of the position and velocity of a moving object in a known reference system, while the methods used to obtain such information are called navigation techniques. As said before, an Inertial Navigation System is composed of a set of devices that automatically determines not only attitude information but also position and speed, using essentially accelerations and angular velocities, together with magnetometers and a barometer. Localization, unlike navigation, is the determination of the position of an object relative to a known reference system, but it does not provide information about linear and angular speed. When we study the problem of localization, we realize that it is closely related to the type of application considered. In particular, we have to distinguish the case in which the user wants to determine the position of a stationary object from the case in which the object moves in space; in the latter case it is necessary to know the positions over time, a problem known as tracking. Most navigation techniques, and therefore also localization techniques, are based on two fundamental methods: Position Fixing and Dead Reckoning. Position Fixing is based on ranging measurements: it basically consists in identifying the unknown location of an object from distance measurements to other objects whose positions are known, followed by techniques for the calculation of the position. Dead reckoning is the process of calculating the current position of an object by using a previously determined position, or fix, and advancing that position based on known or estimated speeds, and possibly accelerations, over the elapsed time (see Figure 3.19). A minimal sketch of a dead-reckoning update follows.
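A minimal sketch of a planar dead-reckoning update, assuming a simple unicycle model, is the following:

```python
import math

def dead_reckoning_step(x, y, heading, speed, yaw_rate, dt):
    """Advance a previously determined 2-D position (the fix) using
    speed and yaw rate over the elapsed time dt."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Starting from a known fix, integrate odometry for 100 steps of 0.1 s.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = dead_reckoning_step(*state, speed=0.8, yaw_rate=0.05, dt=0.1)
print(state)   # estimated pose after 10 s; errors accumulate step by step
```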

Figure 3.19.: Dead Reckoning

In a dead-reckoning system the sensing part is closely connected to the physical phenomenon to be measured and consists in most cases of three mutually orthogonal accelerometers aligned with three gyroscopes (see Sections 3.3 and 3.4). The navigation process integrates the outputs of the IMU to provide position, velocity and attitude of the object equipped with such sensors, as shown in Figure 3.20.

Figure 3.20.: Inertial Navigation System

The navigation performance may vary by several orders of magnitude and depends fundamentally on the quality of the inertial sensors. The systems used, for example, in airliners and in the military field employ high-precision sensors: they have higher costs, but the quality of the navigation is better, also because they are often integrated with global satellite navigation systems. The advantages of inertial navigation can be summarized in the following points:
• they are autonomous systems that need no external aid and no visibility conditions, and can also operate in environments hostile to humans, such as tunnels or underwater;
• they can be integrated with other navigation systems, in particular with GNSS;
• they are immune to jamming and cannot be detected by radar systems, because they neither transmit nor receive signals via an antenna;
• new sensors are manufactured with MEMS technology, so weights and sizes are reduced, and MEMS sensors are characterized by reduced costs.

Among the disadvantages of inertial sensors are:
• errors in navigation increase with time (see Figure 3.19);
• an initial calibration is required;
• energy consumption can be high if the sampling frequency of the measurements is high;

• temperature must be monitored when power consumption is high.


Chapter 4.

Ranging Sensors

This chapter provides an overview of the ranging techniques used in anchor-based localization systems and of the Ultra-Wide Band technology for localization. Together with Chapter 3, it provides the necessary background for the localization algorithms developed and described in Chapter 5.

4.1. Introduction

Ranging sensors are devices used for measuring the distance between two or more nodes. These nodes are tags and anchors. Tags are mobile nodes whose position needs to be determined. Anchors are more complex nodes used by the system to locate the tags. In an anchor-based system the positions of the anchors are known, and the position of the tag can be computed by estimating the distance of the tag from each anchor. The estimated distances can be exploited to determine the tag position with a precision that depends strongly on the number of anchors, on the technique used for estimating the anchor-tag distance, and on the distance from the devices. Different ranging techniques are available for estimating the distance between an anchor and a tag; the most important methodologies are listed below:

• Angle of Arrival (AoA): the Angle of Arrival is determined by measuring the angle between the line that runs from the anchor to the tag and a predefined direction, for instance towards a known point [35].
• Time of Arrival (ToA): the Time of Arrival, sometimes called Time of Flight (ToF), is a method based on the measurement of the propagation delay of the radio signal between a transmitter (tag) and one or more receivers (anchors).
• Time Difference of Arrival (TDoA): systems that use the TDoA method measure the difference in arrival times of the signals received from each of the transmitters.
• Received Signal Strength Indication (RSSI): the RSSI allows the localization of a device based on the strength of the signals sent from the anchors to the device to be localized [35].

In the remainder of this chapter, ranging sensors based on the techniques above and belonging to the IEEE 802.15.4a standard are presented. These ranging sensors have been used in the development of the localization systems described in the next chapter.

4.2. The NanoLoc Localization System

Nanotron Technologies produces a Real-Time Localization System (RTLS) based on Chirp Spread Spectrum technology. It is the first RTLS based on CSS, operating in the 2.4 GHz ISM band, able to perform communication and localization simultaneously. Concerning localization, the ranging technique adopted to estimate the distance between two nodes is known as Symmetrical Double Sided Two Way Ranging (SDS-TWR). SDS-TWR is similar to the Time of Flight ranging technique, but it avoids the need for a precise synchronization between the clocks of the nodes by exploiting two physical properties of the wireless transmission:
• the propagation delay of the signal between two devices;
• the time needed for signal processing inside the devices.

During an SDS-TWR measurement, a node A sends a packet to a second node B and starts a timer t1. When node B receives the packet, it starts a timer t2 in order to measure the time elapsed from the reception of the packet until the transmission of the acknowledgement back to node A. When node A receives the ACK from node B, it stops its time counting and stores the value (see Figure 4.1). The acknowledgement sent back to the first node includes in its header two delay values:
• the signal propagation delay;
• the processing delay.

The entire process is then repeated from node B to node A. In this way, four time measurements are used to calculate two ranging measurements, and a more accurate distance estimate is obtained by averaging the two ranging values; the sketch after Figure 4.1 makes the computation explicit. As reported in [36], SDS-TWR demonstrates superiority over other localization methods in that it removes the need for time synchronization between devices, which is a quite demanding requirement.

Figure 4.1.: Symmetrical Double Sided Two Way Ranging
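Under the assumption that each node reports its round-trip and processing times, the SDS-TWR distance can be computed as in the following sketch (the timestamp values are illustrative):

```python
C = 299_792_458.0   # speed of light [m/s]

def sds_twr_distance(t_round_a, t_reply_b, t_round_b, t_reply_a):
    """Symmetric double-sided two-way ranging: each side measures a
    round-trip time (t_round) and reports its processing delay (t_reply);
    averaging the two one-way estimates removes the need for clock sync."""
    tof = ((t_round_a - t_reply_b) + (t_round_b - t_reply_a)) / 4.0
    return C * tof

# Illustrative timestamps in seconds for a true distance of ~30 m
# (one-way flight time ~100 ns) and 200 us processing delays.
d = sds_twr_distance(t_round_a=200.2e-6, t_reply_b=200.0e-6,
                     t_round_b=200.2e-6, t_reply_a=200.0e-6)
print(f"estimated distance: {d:.2f} m")
```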

4.3. The UWB UbiSense Real-Time Localization System

The Ubisense Real-Time Localization System (RTLS) is a precision measuring instrument based on the IEEE 802.15.4a Ultra-Wide Band technology. The UbiSense system is an anchor-based localization system designed for indoor and outdoor environments. The basic system is composed of 4 anchors and several tags. Each anchor contains an array of 4 UWB antennas used for receiving the radio pulses from the tags (the devices to localize). Thanks to the array of 4 UWB antennas, the anchors are able to perform ranging measurements using two different methodologies:
• TDoA
• AoA

This combination of methods provides a flexible localization system for both indoor and outdoor environments, even in three dimensions, with an accuracy of 20 cm (with a 95% confidence level) and a maximum tag-sensor distance of 160 m. The tag update rate can vary from 10 updates/s to 1 update/hour [37]. Furthermore, an improved tag firmware, called High-Update Rate (HUR), is able to estimate the tag position up to 33 times per second. In addition, the tag includes features for easy identification, such as a motion detector that activates the tag only when it moves [38]. The sensors are connected to a controller PC (on which the localization server engine runs) through a PoE Ethernet switch, as shown in Fig. 5.8. A drawback of this Ultra-Wide Band system concerns the calibration routine, which needs to be carried out accurately in order to obtain the best performance. In particular, the use of a laser pointer or a total station to estimate the position of the anchors with millimetric resolution is strongly recommended. The anchors are connected together using a double set of Ethernet cables: the first set realizes a star topology in which each anchor is connected with the Localization Server (running on a PC); the second one realizes a ring network that connects all the anchors and is used to share the synchronization signal. Using a dedicated instrument, called Timing Combiner, it is possible to halve the number of Ethernet cables.

4.3.1. Strengths of UWB Technology

UWB is based on wide-band signal emission at very low power. The principal characteristics of the UWB waveform are the following:
• high immunity to multipath effects;
• very low power consumption;
• capacity to penetrate obstacles (at the lowest frequencies);
• very low probability of being detected and intercepted;
• low interference to existing wireless systems.

To better understand the potential of UWB technology, some of its most important features are briefly described in the following paragraphs. Where relevant, the differences and advantages that UWB systems have compared to the more common narrowband systems are also considered.

Power Spectral Density

The power spectral density (PSD) is defined as the ratio between the transmission power P, in watts, and the bandwidth of the signal B, expressed in hertz:

\[ PSD = \frac{P}{B} \tag{4.1} \]

A UWB system sends short pulses at low power over a very wide band. Conversely, narrowband systems send signals at relatively high power over a narrow band. Therefore, UWB systems have a power spectral density much lower than that of the more common narrowband systems. The low power spectral density used in transmission minimizes the potential interference with other communication systems; moreover, the low PSD of UWB signals makes their detection difficult, as they mingle with the background noise.

Channel Capacity

The channel capacity defines the maximum amount of information, expressed in bits, which can be transmitted over the channel in one second (bit/s). UWB systems have a high channel capacity, thus enabling high data-rate transmission over short distances. For example, a UWB system can reach a transmission rate of 500 Mbit/s up to 3 meters of distance between receiver and transmitter, or 200 Mbit/s at 10 m. The main reason for the large channel capacity of UWB systems is their high bandwidth.

Penetration Capacity

The penetration capacity defines the ability of a signal to penetrate materials and objects. A high penetration capability allows the signal to overcome obstacles such as walls, doors, trees and other objects without being attenuated to the point of preventing it from reaching the receiver. The first UWB systems had a high penetration capacity due to the low-frequency components of the spectrum. Indeed, given the known relationship:

\[ \lambda = \frac{c}{f} \tag{4.2} \]

where:
• λ is the wavelength;
• c is the speed of light;
• f is the signal frequency;

high-frequency signals have a short wavelength, while signals at low frequencies have a far greater wavelength. Signals at low frequency have a greater ability to pass through obstacles because their wavelength is usually greater than the thickness of the material they pass through. Conversely, high-frequency signals, which have very short wavelengths, suffer from material attenuation, caused by the fact that most of the energy of the wave cannot penetrate the obstacle and is reflected. UWB technology would have a high penetration ability if the band included the low-frequency components. However, since 2002, with the entry into force of the FCC rules, severe limitations have been imposed on the use of the low-frequency part of the spectrum. This resulted in a strong decrease of the penetration capacity, which is now lower than that of the conventional systems currently used for IEEE 802.11 WLANs.

Multipath Immunity

Due to the effects of reflection, diffraction and scattering of electromagnetic waves when they encounter obstacles in the environment, a wave can arrive at the receiver along different paths. In addition, if the delay between the direct wave and the reflected wave is very low, the receiver receives a wave which is the sum of the two, producing interference. This phenomenon is called multipath fading. UWB systems have the characteristic of being very resistant to the multipath problem and can therefore also be used in cluttered indoor environments, thanks to the short pulse duration.

UWB-based Localization

UWB technology is extremely suitable for applications of real-time localization or tracking, since, due to the bandwidth and to the very short duration of the pulses, it is possible to achieve extreme accuracy in positioning. UWB localization systems are commonly used in indoor environments of limited area since, with the current power limits imposed by the regulations and the current state of knowledge, the coverage distances are around 50 meters.


Chapter 5.

Sensor Fusion for Indoor/Outdoor Localization of UAVs and UGVs

This chapter aims to provide a contribution on the perception of the environment, presenting some robust sensor fusion algorithms for indoor/outdoor localization and navigation of autonomous systems. First, a localization system for UGVs based on Chirp Spread Spectrum is proposed. The algorithm models the bias of the ranging data, considering also faults in the measurements, in order to obtain a more reliable position estimate when ranging data are not available. Furthermore, a localization algorithm for mini-UAVs based on Ultra-Wide Band is presented: the proposed solution allows the use of a low-cost Inertial Measurement Unit (IMU) in the prediction step and the integration of vision-odometry for the detection of markers near the touchdown area. The ranging measurements allow reducing the errors of the inertial sensors due to the limited performance of the accelerometers and gyros. The obtained results show that an accuracy of 15 cm can be achieved. The results described in this chapter, including the images, have been published in [39, 40, 35].

5.1. Introduction

The inertial navigation system has two features that distinguish it and make it competitive with respect to other systems: autonomy and accuracy in short-path navigation. On the other hand, its greatest disadvantage is the growth of the error over time, caused by the integration of the errors that affect the sensor outputs (cfr. §3.2.1). For this reason, in long-term navigation the performance of an Inertial Navigation System is typically improved using other kinds of sensors, such as ranging sensors (see Chapter 4) or the Global Positioning System. In outdoor environments, satellite-based navigation systems are more suitable for localization purposes, while in indoor environments, where the GPS signal is not available, ranging sensors provide a more reliable position estimate over time; the position estimate, however, depends strongly on the number of ranging sensors, on the technique used for estimating the distance between devices, and on the distance from these devices. Therefore, in recent years there has been a growing interest in techniques for the integration of these kinds of sensors. This set of techniques, known as sensor fusion, is based on the principle that the combination of data provided by different sources offers a better estimate of the variable of interest than would be possible if these sources were used individually. In the development of such localization systems, there are two general approaches to integration:

• Ranging/GPS-aided INS: the additional measurements of the Ranging/GPS system are used to "help" the INS, reducing the drift errors of the gyroscopes and accelerometers;
• INS-aided GPS/Ranging: the INS measurements are used to improve the accuracy of the GPS/Ranging positioning.

The integration between the two systems is driven by the cost and by the requested performance; the cost is also closely connected with the way in which the sensor information is fused: we can distinguish loosely coupled from tightly coupled integration systems. The aiding measurements from ranging or GPS can be processed in a separate filter and included in the INS filter in the form of position and velocity observations, in a so-called "loosely coupled" integration. Alternatively, aiding measurements, e.g. GNSS range observations, may be included directly in the INS filter, in a so-called "tightly coupled" integration [41]. This chapter describes the localization algorithms developed for Unmanned Aerial Vehicles and Unmanned Ground Vehicles in the context of the R3-COP project. These localization algorithms, based on the Extended Kalman Filter, integrate inertial sensors, GPS and ranging sensors from the new IEEE 802.15.4a standard. Both technologies belonging to the IEEE 802.15.4a standard (Chirp Spread Spectrum and Ultra-Wide Band) have been used. The next section describes the localization system for UGVs based on CSS, while in the last section an IMU/UWB/Vision-based localization system for UAVs is presented.

5.2. Adaptive Extended Kalman Filter for Indoor/Outdoor UGV Localization

5.2.1. The Extended Kalman Filter

The Kalman Filter can be efficiently used for the estimation of the position of a mobile device (tag) on the basis of ranging measurements made between the tag and at least three known points (anchors), and of the inertial measurements provided by a strap-down IMU. Let us denote with (a_{x,i}, a_{y,i}) (i = 1, ..., n) the x and y coordinates of the anchors and with T = (t_x, t_y)^T the tag coordinates to be estimated. The distance between anchor i and the tag T is calculated in the following way:

\[ d_i = \sqrt{(t_x - a_{x,i})^2 + (t_y - a_{y,i})^2} \tag{5.1} \]

The tag position can be obtained by trilateration as follows:

\[ H \cdot \begin{pmatrix} t_x \\ t_y \end{pmatrix} = z \tag{5.2} \]

where

\[ H = \begin{pmatrix} 2 a_{x,1} - 2 a_{x,2} & 2 a_{y,1} - 2 a_{y,2} \\ \vdots & \vdots \\ 2 a_{x,1} - 2 a_{x,n} & 2 a_{y,1} - 2 a_{y,n} \end{pmatrix} \tag{5.3} \]

and

\[ z = \begin{pmatrix} d_2^2 - d_1^2 + a_{x,1}^2 - a_{x,2}^2 + a_{y,1}^2 - a_{y,2}^2 \\ \vdots \\ d_n^2 - d_1^2 + a_{x,1}^2 - a_{x,n}^2 + a_{y,1}^2 - a_{y,n}^2 \end{pmatrix} \tag{5.4} \]

An estimate of T can be obtained using the method of least squares:

\[ \begin{pmatrix} \hat t_x \\ \hat t_y \end{pmatrix} = (H^T H)^{-1} H^T \cdot z \tag{5.5} \]

A minimal numerical sketch of this trilateration step is given below.
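A minimal numerical sketch of Eqs. 5.1-5.5, with hypothetical anchor positions and noise-free ranges, is the following:

```python
import numpy as np

def trilaterate(anchors, d):
    """Linear least-squares tag position from Eqs. 5.2-5.5.
    anchors: (n, 2) array of anchor coordinates, d: n range measurements."""
    ax, ay = anchors[:, 0], anchors[:, 1]
    H = np.column_stack([2*ax[0] - 2*ax[1:], 2*ay[0] - 2*ay[1:]])  # Eq. 5.3
    z = (d[1:]**2 - d[0]**2 + ax[0]**2 - ax[1:]**2
         + ay[0]**2 - ay[1:]**2)                                   # Eq. 5.4
    return np.linalg.inv(H.T @ H) @ H.T @ z                        # Eq. 5.5

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tag = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - tag, axis=1)   # noise-free ranges (Eq. 5.1)
print(trilaterate(anchors, d))              # ~[3.0, 4.0]
```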

In the Extended Kalman Filter (EKF), the state transition and observation models need not be linear functions of the state, but may instead be differentiable functions:

\[ \tilde x_{k+1} = f(\hat x_k, u_k, w_k) \tag{5.6} \]

\[ \tilde y_{k+1} = h(\tilde x_{k+1}, v_{k+1}) \tag{5.7} \]

where \tilde x_k and \tilde y_k denote respectively the approximated a priori state and observation, and \hat x_k the a posteriori estimate of the previous step. The state vector contains the predicted tag coordinates and heading, as expressed in the following equation:

\[ x_k = \begin{pmatrix} t_x^k \\ t_y^k \\ \theta_k \end{pmatrix} \tag{5.8} \]


Referring to the state estimation, the process is characterized by the statistical variables w_k and v_k, which represent respectively the process noise and the measurement noise. w_k and v_k are supposed to be independent, white and normally distributed, with given covariance matrices Q_k and R_k. The observation vector y_k contains the ranging measurements made between the tag and the anchors, and is the input of the filter. Because in the analyzed system the predictor equation contains a linear relationship, the process function f can be expressed as a linear function:

\[ x_{k+1} = A x_k + B u_k + w_k \tag{5.9} \]

where the matrices A and B are defined as follows:

\[ A = \begin{pmatrix} 1 & 0 & -u_t \sin\theta_k \\ 0 & 1 & u_t \cos\theta_k \\ 0 & 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} \cos\theta_k & 0 \\ \sin\theta_k & 0 \\ 0 & 1 \end{pmatrix} \tag{5.10} \]

and T is the sampling time. The input control vector contains the linear (u_t) and angular (u_a) speeds of the robot:

\[ u_k = \begin{pmatrix} u_t \\ u_a \end{pmatrix} \tag{5.11} \]

The equation \tilde y_{k+1} = h(\tilde x_{k+1}, v_{k+1}) is calculated in the following way:

\[ \begin{pmatrix} \hat r_1 \\ \vdots \\ \hat r_n \end{pmatrix} = \begin{pmatrix} \sqrt{(\tilde t_x - a_{x,1})^2 + (\tilde t_y - a_{y,1})^2} \\ \vdots \\ \sqrt{(\tilde t_x - a_{x,n})^2 + (\tilde t_y - a_{y,n})^2} \end{pmatrix} + v_{k+1} \tag{5.12} \]

and the related Jacobian matrix C_{k+1} = \frac{\partial h}{\partial x}(\tilde x_k, 0) is given as:

\[ C_{k+1} = \begin{pmatrix} \frac{\partial \hat r_1}{\partial \tilde t_x} & \frac{\partial \hat r_1}{\partial \tilde t_y} \\ \vdots & \vdots \\ \frac{\partial \hat r_n}{\partial \tilde t_x} & \frac{\partial \hat r_n}{\partial \tilde t_y} \end{pmatrix} \tag{5.13} \]

where

\[ \frac{\partial \hat r_i}{\partial \tilde t_x} = \frac{\tilde t_x - a_{x,i}}{\sqrt{(\tilde t_x - a_{x,i})^2 + (\tilde t_y - a_{y,i})^2}} \tag{5.14} \]

\[ \frac{\partial \hat r_i}{\partial \tilde t_y} = \frac{\tilde t_y - a_{y,i}}{\sqrt{(\tilde t_x - a_{x,i})^2 + (\tilde t_y - a_{y,i})^2}} \tag{5.15} \]

A short sketch of the computation of h and C_{k+1} is given below.
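The predicted ranges of Eq. 5.12 and the Jacobian of Eqs. 5.13-5.15 can be computed compactly as in the following sketch:

```python
import numpy as np

def ranging_jacobian(t_pred, anchors):
    """Predicted ranges h(x) (Eq. 5.12) and Jacobian C_{k+1}
    (Eqs. 5.13-5.15): row i holds the partial derivatives of the
    predicted range to anchor i w.r.t. the predicted tag coordinates."""
    diff = t_pred - anchors                 # (n, 2): tag minus each anchor
    ranges = np.linalg.norm(diff, axis=1)   # predicted ranges r_i
    return diff / ranges[:, None], ranges   # C (n, 2) and h(x) (n,)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
C, h = ranging_jacobian(np.array([3.0, 4.0]), anchors)
# C and h feed the EKF update: K = P C^T (C P C^T + R)^-1, etc.
print(np.round(C, 3), np.round(h, 3))
```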

5.2.2. Detection of faulty range measurements

The measurements provided by the Nanotron sensors are accompanied by flags that indicate whether the measurement was made correctly or not [35]. Specifically, if the ranging measurement r_i is done correctly, the corresponding flag variable is set to 15; otherwise, the flag variable assumes a different value depending on the type of failure that occurred. If a fault on a measurement is detected, the corresponding element of the measurement covariance matrix R is increased:

\[ R = \begin{pmatrix} \sigma_{r,1}^2 & 0 & \cdots & 0 \\ 0 & \sigma_{r,2}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{r,n}^2 \end{pmatrix} \tag{5.16} \]

where \sigma_{r,i}^2 can be identified by experiments. In [42], [43], after a series of tests, the authors set \sigma_{r,1}^2 = 0.1328 m². Depending on the presence of a fault, the i-th element of the R matrix is set in the following way:

\[ \sigma_{r,i}^2 = \begin{cases} \beta \sigma_{r,i}^2 & \text{with fault} \\ \sigma_{r,i}^2 & \text{without fault} \end{cases} \tag{5.17} \]

where β is chosen by experiments to give a good localization performance; in the following tests, β is set to 10⁵. The management of faulty measurements also allows reducing the ranging errors when there is no Line of Sight between the anchors and the tag, a condition in which the performance of the system could decrease. A sketch of this covariance adjustment follows.
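A minimal sketch of this covariance adjustment, using the flag convention and the values reported above, is the following:

```python
import numpy as np

VALID_FLAG = 15   # Nanotron flag for a correctly performed measurement
BETA = 1e5        # inflation factor chosen by experiments
SIGMA2 = 0.1328   # nominal range variance [m^2]

def build_R(flags):
    """Measurement covariance of Eqs. 5.16-5.17: ranges whose flag
    signals a fault get their variance inflated by beta, so the EKF
    effectively ignores them in the update."""
    var = np.where(np.asarray(flags) == VALID_FLAG, SIGMA2, BETA * SIGMA2)
    return np.diag(var)

print(build_R([15, 15, 3, 15]))   # third anchor faulty -> huge variance
```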

5.2.3. The Extended Kalman Filter with bias modelling

The model discussed in the previous paragraphs is based on the assumption that the measured distances contain only noise. However, range measurements contain not only noise but also a non-zero average error. This non-zero average error can be treated as a bias. In this case, the distance measured at time k with respect to anchor i can be modeled more accurately in the following way:

\[ y_i^k = d_i^k + v_i^k + b_i^k \tag{5.18} \]

where y_i^k denotes the biased measurement and b_i^k the bias related to anchor i at time k. The bias vector is difficult to manage because it cannot be measured directly. A strategy could be to estimate the bias vector and subtract it from the measured distance vector [44]. The Nanotron CSS provides a calibration routine for reducing the bias; however, in most cases the bias vector depends on other factors, such as multipath or the presence of moving obstacles or metallic surfaces. To model the bias change, we introduce a scaling factor s_k which satisfies the following relationship:

\[ d_i^k + b_i^k \approx (1 + s_k) d_i^k \tag{5.19} \]

By using Eq. 5.19, we can derive a new formulation of the measurement model:

\[ y_i^k = (1 + s_k) d_i^k + v_i^k \tag{5.20} \]

We assume that the bias hardly changes during an iteration; thus, the scaling factor evolves as s_{k+1} = s_k, with process noise w_{rk}. Now we can extend the EKF by adding a new state variable describing the bias that affects the range measurements:

\[ \tilde x_{k+1} = f(\hat x_k, u_k, w_k) \tag{5.21} \]
\[ \tilde y_{k+1} = h(\tilde x_{k+1}, v_{k+1}) \tag{5.22} \]

where the augmented state, input and noise vectors are:

\[ x_k = \begin{pmatrix} x_k \\ s_k \end{pmatrix} \tag{5.23} \]
\[ u_k = \begin{pmatrix} u_k \\ 0 \end{pmatrix} \tag{5.24} \]
\[ w_k = \begin{pmatrix} w_k \\ w_{rk} \end{pmatrix} \tag{5.25} \]

A sketch of the scaled measurement model is given below.
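A minimal sketch of the scaled measurement model of Eq. 5.20, with hypothetical values, is the following:

```python
import numpy as np

def h_with_scale(t_pred, s_pred, anchors):
    """Measurement model of Eq. 5.20 with the augmented state
    [t_x, t_y, s]: each predicted range is stretched by (1 + s)."""
    ranges = np.linalg.norm(t_pred - anchors, axis=1)
    return (1.0 + s_pred) * ranges

# With s ~ 0.1 the model explains a roughly 10% positive range bias.
anchors = np.array([[0.0, 0.0], [10.0, 0.0]])
print(h_with_scale(np.array([3.0, 4.0]), 0.1, anchors))   # [5.5, ~8.87]
```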

5.2.4. Experimental Results

In this section, the localization experiments in two different environments are presented. The first environment, indoor, is critical due to the presence of walls with metallic surfaces that could degrade the distance estimation; furthermore, being an indoor environment, GPS is not available. The second test, in an outdoor environment, allows evaluating the performance in terms of accuracy and precision, considering the GPS trace as ground truth.

5.2.5. Configuration of the experiments

The architecture of the mobile robotic platform (see Fig. 5.1) is based on the Pioneer P3-AT produced by MobileRobots Inc. This platform is suitable for outdoor environments and can hold a maximum payload of 12 kg. The instrumentation payload is formed by:
• a Differential Global Positioning System (DGPS) Topcon GR3 receiver;
• an Inertial Measurement Unit (Microstrain 3DM-GX1);
• a Nanotron tag.

The DGPS receiver is coupled with a master station to increase the accuracy and precision of the positioning data; the antenna is mounted 0.5 m above the ground to avoid dangerous reflections and/or signal attenuation due to the presence of obstacles. In kinematic mode the obtained accuracy and precision is < 2 cm. A Real-Time Operating System (RTOS) provides access to the robot by means of a set of software tools included in the Robobuntu Linux distro [45, 46].

5.2.6. Experimental results in Environment n° 1

In this subsection, the localization results in an indoor environment are presented. The reason for this choice is the evaluation of the performance in the presence of multipath. The positions of the anchors are described by the following 2-D Cartesian coordinates (in meters):

\[ A_1 = [11.90 \quad 0.15]^T \tag{5.26} \]
\[ A_2 = [21.00 \quad 2.30]^T \tag{5.27} \]
\[ A_3 = [30.90 \quad 0.15]^T \tag{5.28} \]
\[ A_4 = [47.54 \quad 2.30]^T \tag{5.29} \]

Fig. 5.2 shows the indoor environment (environment n° 1). The maximum speed of the robot is set to 0.8 m/s. While the mobile robot moves along the path, the CSS tag node attached to the robot measures the ranges from the four anchors with a sampling rate of 10 Hz: first the tag measures the distance from anchor 1 (A1), then the distance from anchor 2 (A2), and so on.


Figure 5.1.: Robotic platform

Figure 5.2.: Map of the environment n° 1

Because the time required to obtain the four ranging measurements is short (0.05 s) compared to the speed of the robot, we assume that the four distances are measured at the same time. In Test n. 1 the true position of the robot at the beginning of the test is [1.70 1.25]^T m, while in Test n. 2 it is [2.75 2.25]^T m. The obtained results (see Figures 5.3 and 5.4) show that the filter achieves better localization results thanks to the estimation of the bias, whose amplitude is approximately 1 m. The filter is also able to track the position of the robot starting from a high uncertainty in the initial position (green line in Fig. 5.4), adjusting the error covariance matrix P iteratively. In this case the mean error is approximately 1 m.


Figure 5.3.: Test n◦ 1 in environment n◦ 1

Figure 5.4.: Test n◦ 2 in environment n◦ 1

5.2.7. Experimental results in Environment n° 2

For this second series of tests we chose an outdoor environment without obstacles. The reason for this choice is the evaluation of the performance in an open space. The 2-D Cartesian coordinates of the anchors (in meters) have the following values:

\[ A_1 = [0 \quad 0]^T \tag{5.30} \]
\[ A_2 = [17.1 \quad 36.5]^T \tag{5.31} \]
\[ A_3 = [-12.0 \quad 50.3]^T \tag{5.32} \]
\[ A_4 = [-29.4 \quad 13.8]^T \tag{5.33} \]

In Fig. 5.6 the obtained results are presented. The green dashed lines represent the estimate of the path obtained using the odometric information from the on-board encoders, integrated with a MEMS gyro and the Nanotron sensors. The blue line represents the true path (ground truth) calculated with the differential GPS (DGPS). The gyro is necessary to estimate the rotation speed of the robot due to the high


Figure 5.5.: Map of the environment n◦ 2

Figure 5.6.: Test n° 1 in environment n° 2

friction of the wheels on the rough terrain. The EKF allows reducing the localization error when compared with standard dead reckoning. The EKF is also able to manage the loss of an anchor due to a temporary or

continuous fault (e.g., anchor too far, low battery, high occlusion). Considering the total network traffic, 80% of the packets contain useful ranging data; the remaining 20% contain invalid data (e.g., out of range) that are managed by the EKF.

5.3. An IMU/UWB/Vision-Based EKF for mini-UAV Localization

5.3.1. The Parrot Ar Drone

The Ar Drone (Fig. 5.7) is a small and low-cost quadcopter developed by Parrot. This quadrotor, currently available to the general public, is equipped with two cameras (one facing forward, the other facing downwards), a sonar height sensor and a flight controller running proprietary Linux-based software for communication and command handling. Commands and images are exchanged via a WiFi ad-hoc connection between the host machine and the AR Drone. The bottom camera is characterized by the following properties:
• 64 degree diagonal lens;
• video frequency: 60 frames per second;
• resolution: 176x144 pixels (QCIF).

The inertial measurement unit is composed of:
• a 3-axis digital MEMS accelerometer (Bosch BMA150), positioned at the center of gravity of the AR Drone's body. The accelerometer is used in a ±2g range, and the acceleration measurements are acquired by an on-chip 10-bit ADC (see Table 5.1);
• a two-axis MEMS gyroscope (InvenSense IDG500);
• a precise piezoelectric gyroscope (Epson XV3700) for the yaw angle and heading control.

Both gyroscopes measure up to 500°/s. These are analog sensors whose outputs are acquired by a 12-bit ADC and sent to the flight controller. With particular settings, the Ar Drone is able to send all the information provided by the sensors to the host PC. These data (called "navdata") take more bandwidth but also contain the raw data from the ADC converter of the IMU. Each set of data is identified by a tag id comprised between 0 and 20. By default, only some data (identified by tag = 0) are provided in output. Other

data of interest in this work are identified by tag = 2 and tag = 3, which are defined as follows:
• tag=2 (NAVDATA_RAW_MEASURES_TAG): ADC output values from the IMU, called raw_data
• tag=3 (NAVDATA_PHYS_MEASURES_TAG): IMU data (calibrated with the internal calibration algorithm), called phys_data

Range                      ±2g/±4g/±8g
Bias                       ±50 mg
Non-orthogonality          ±2%
Bandwidth                  25 − 1500 Hz
Offset temperature drift   1 mg/K
Output Noise               0.5 mg/√Hz

Table 5.1.: Bosch BMA150 Accelerometer Specifications

Figure 5.7.: The Parrot Ar Drone quadcopter with body frame axis orientation

5.3.2. The Hardware Setup

The hardware architecture is composed of two PCs. The first PC hosts the UbiSense Location Engine and provides trilateration data through a UDP server. The second PC, on which the drone control program runs, is connected to the first computer by an Ethernet cable and to the Ar Drone through the wireless network. The UbiSense sensors and the computers are connected together on the same local area network, as reported in Fig. 5.8.


Figure 5.8.: The hardware architecture used during tests

5.3.3. The Software Setup

The software used in this work is mainly composed of three distinct modules:

• Ranging UDP Server: this application, developed using the UbiSense libraries, provides data from the Location Engine. At each time step the application generates a record containing trilateration data in double precision.
• Ar Drone Control Software: this module is an extension of the Ar Drone C# SDK available in [47], improved with routines for:
  – automatically storing frames from the bottom camera of the Ar Drone (this feature is used in the vision module);
  – retrieving raw data from the Ar Drone in real time (ADC output from the accelerometer and the gyroscope);
  – sending commands to the drone using the joystick/keyboard;
  – combining the trilateration data from the Ranging UDP Server with the data provided by the Ar Drone in a single data packet.
  This application is also responsible for sending commands to the drone, and execution delays in this software (which might originate from image elaboration) could interfere with the drone maneuverability. In order to avoid this drawback, the estimation of the drone position is performed by a dedicated Localization Algorithm Module (LAM). From the point of view of the data, this application can be seen as a data aggregator that provides, at each time step k, all the data to the LAM.
• Localization Algorithm Module: this module gets the data packet from the Ar Drone Control Software (NavData, frames and UbiSense trilateration) and elaborates it.

As mentioned above, communication between modules using UDP sockets is adequate when the delivery of every packet is not a crucial factor (otherwise TCP sockets should be used); in our application this is not critical. Figure 5.9 summarizes the software architecture.

Figure 5.9.: The Software Architecture

5.3.4. Testing of Localization Algorithm in Simulation Environment

The localization algorithm developed in this work has first been tested in the new version of the 3D Simulation Environment developed in [48, 49, 50]. This Simulation Environment, based on the SimplySim SimplyCube [19], is a modular framework mainly oriented to the development and fast prototyping of cooperative Unmanned Aerial Vehicles. The framework combines the high realism of a simulation carried out in a three-dimensional virtual environment (in which the most important laws of physics act) with the ease of Simulink for the fast prototyping of control systems. Furthermore, it now includes modules for vision analysis and path planning with state flow machines (see Chapter 6).

5.3.5. IMU Characterization

An IMU is generally composed of two orthogonal sensor triads: one triad consists of three mono-axial accelerometers, the other of three mono-axial gyroscopes. The two triads are nominally parallel, and the origin of the gyroscope triad is defined as the origin of the accelerometer triad. In recent years, inertial sensors based on MEMS (Micro Electro-Mechanical Systems) technology have found applications in many different fields, thanks especially to their low cost and very small size [14]. However, low-cost sensors are generally characterized by poor performance. The most important drawbacks of a MEMS IMU are [40]: bias, scale factor and cross-axis misalignments. In order to obtain good positioning accuracy it is necessary to deeply analyze the behavior of the sensors and carry out dedicated calibration tests, both in static and kinematic conditions [39].

The Accelerometer Mathematical Model

The acceleration along an axis can be expressed by the following relationship (cfr. §3.3), in which the thermal drift is not considered:

\[ \ddot z_a = \ddot z + g + \epsilon_a + S_a g \tag{5.34} \]

where • z¨a is the measured acceleration in output to the sensor; • z¨ is the true value of the acceleration (at the considered point); • g is the gravity acceleration; • �a is the bias; • Sa is the scale factor. The Bias Estimation The standard method used for calibration IMUs was traditionally a mechanical platform rotating the IMU into different pre-defined orientations and angular rates. But these tests often require the use of specialized and expensive equipment. A common not-expensive way to make an IMU calibration is the six-position static test [51, 14] (cfr. § 3.10). The six-position method requires the inertial system to be mounted on a leveled surface with each axis of each sensor pointing alternately up and down. The bias can be calculated using the following equations: z¨down − z¨up �a = (5.35) 2 where z¨down and z¨up are respectively two static measurements carried out holding the z axis of the accelerometer downward and upward. The Scale Factor Estimation The scale factor can be described by the following relationship (eq. 5.36): Sa =

z¨down − z¨up − 2g 2g

(5.36) 59
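As an illustration, the following C# fragment computes the bias and scale factor of one axis from the two stationary readings; it is a minimal sketch of eqs. (5.35)-(5.36), assuming the ADC outputs have already been converted to m/s² and averaged over the thermal steady state.

```csharp
using System;
using System.Linq;

static class SixPositionCalibration
{
    const double G = 9.81; // gravity acceleration (m/s^2)

    // zDown/zUp: stationary samples with the axis pointing down/up,
    // already converted to m/s^2 and collected at thermal steady state.
    public static (double bias, double scaleFactor) Calibrate(
        double[] zDown, double[] zUp)
    {
        double down = zDown.Average();
        double up = zUp.Average();
        double bias = (down + up) / 2.0;                   // eq. (5.35)
        double scale = (down - up - 2.0 * G) / (2.0 * G);  // eq. (5.36)
        return (bias, scale);
    }
}
```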

The Thermal Drift of the IMU
The IMU accelerometers and gyros are very sensitive to temperature, as shown by Nebot and Durrant-Whyte [52]. As the temperature of the IMU changes, the associated bias and drift change until the temperature reaches a steady value. Considering the acceleration along the z axis in static conditions, the IMU temperature increases following an exponential law (see Fig. 5.10).

Figure 5.10.: Temperature trend for Ar Drone accelerometer

The corresponding ADC outputs for the $z_{up}$ and $z_{down}$ accelerations are shown in Figures 5.11 and 5.12. As the temperature reaches a stationary value, the ADC output stops changing. The bias and scale factor can then be calculated using the stationary ADC output.

Figure 5.11.: ADC output with z axis upward


Figure 5.12.: ADC output with z axis downward

This is not critical in our application: data are considered only after the thermal steady state of the IMU is reached. After the warm-up of the sensor, the $\nabla_a$ and $S_a$ coefficients are assumed constant over a long period.

Comparison Between Calibrated Data and Ar Drone phys_data
In order to verify the calibration procedure described in the previous paragraphs, a flight test has been carried out. The experiment consisted in collecting data from the Ar Drone while the quadrotor was hovering. Because during take-off the quadcopter can show some drift along the x and y axes from the point of departure, data have been collected once the Ar Drone reached the default hovering altitude (1 m). In Figures 5.13 and 5.14 the red Gaussian describes the distribution of the acceleration values along the x and y axes of the phys_data provided by the Ar Drone, while the blue dashed Gaussian describes the distribution of the acceleration values obtained with the procedure described above. Table 5.2 summarizes the mean value (µ) and standard deviation (σ) of each series of data, comparing the calibrated data with the data provided by the Ar Drone.

Data                    | Mean µ | Std. σ
Acc x (phys_data)       | 0.3107 | 0.6884
Acc x (calibrated data) | 0.0427 | 0.7203
Acc y (phys_data)       | 0.3425 | 0.6186
Acc y (calibrated data) | 0.0114 | 0.6186

Table 5.2.: Comparison between acceleration values provided by the Ar Drone and the calibrated acceleration values

Figures 5.13 and 5.14 show that with the calibration algorithm the accelerometer performance is significantly improved (bias mean error on each axis < 0.05 m/s²).

Figure 5.13.: Comparison between calibrated data and phys_data - x axis

Figure 5.14.: Comparison between calibrated data and phys_data - y axis

5.3.6. The Kinematic Model of the Quadrotor

Axis Convention
To describe the motion of the UAV it is necessary to define a suitable coordinate system. For most problems dealing with aircraft motion, two coordinate systems are used. The first coordinate system is fixed to the Earth and is used as the navigation frame for the purpose of localization. The second coordinate system is fixed to the UAV and is referred to as the body coordinate system (in strap-down configuration). In order to translate the acceleration from the body frame to the navigation frame, the rotation matrix is used (eq. 5.37) (cfr. §3.8.1):

R_b^w = \begin{bmatrix} c\theta c\psi & s\phi s\theta c\psi - c\phi s\psi & s\phi s\psi + c\phi s\theta c\psi \\ c\theta s\psi & c\phi c\psi + s\phi s\theta s\psi & c\phi s\theta s\psi - s\phi c\psi \\ -s\theta & s\phi c\theta & c\phi c\theta \end{bmatrix} \qquad (5.37)

where $s$ represents $\sin$, $c$ represents $\cos$, and $\phi, \theta, \psi$, known as Euler angles, are named roll, pitch and yaw (see Fig. 5.15). Applying the rotation matrix $R_b^w$ (eq. 5.37), the accelerations in the world frame are (eq. 5.38):

a_{world} = R_b^w a_{body} + G \qquad (5.38)

where $G$ is the gravity vector in the world frame, expressed as (eq. 5.39):

G = [0 \;\; 0 \;\; -g]^T \qquad (5.39)

and $g$ is the gravity acceleration.

Figure 5.15.: Euler angles
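A minimal C# sketch of this frame transformation follows; the row-major construction of $R_b^w$ mirrors eq. (5.37), and the ordering of the input angles is an assumption of the sketch.

```csharp
using System;

static class BodyToWorld
{
    // Builds R_b^w from the Euler angles (roll phi, pitch theta, yaw psi), eq. (5.37).
    public static double[,] RotationMatrix(double phi, double theta, double psi)
    {
        double cf = Math.Cos(phi), sf = Math.Sin(phi);
        double ct = Math.Cos(theta), st = Math.Sin(theta);
        double cp = Math.Cos(psi), sp = Math.Sin(psi);
        return new[,]
        {
            { ct * cp, sf * st * cp - cf * sp, sf * sp + cf * st * cp },
            { ct * sp, cf * cp + sf * st * sp, cf * st * sp - sf * cp },
            { -st,     sf * ct,                cf * ct                }
        };
    }

    // Applies eq. (5.38): a_world = R_b^w a_body + G, with G = [0, 0, -g]^T.
    public static double[] Transform(double[] aBody, double phi, double theta,
                                     double psi, double g = 9.81)
    {
        var r = RotationMatrix(phi, theta, psi);
        var aWorld = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                aWorld[i] += r[i, j] * aBody[j];
        aWorld[2] -= g; // add the gravity vector G
        return aWorld;
    }
}
```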

5.3.7. The Mathematical Model
Starting from the physical law that describes the uniformly accelerated motion of a point $p$ in one dimension:

p(t) = p(t-1) + \Delta p(t) = p(t-1) + \iint_{t-1}^{t} \alpha(\tau)\, d\tau \qquad (5.40)

the kinematic model used in the Kalman Filter at discrete time can be expressed as:

x_{k+1} = x_k + \Delta T\, v_{x_k} + \frac{\Delta T^2}{2}\, a_{x_k} \qquad (5.41)
y_{k+1} = y_k + \Delta T\, v_{y_k} + \frac{\Delta T^2}{2}\, a_{y_k} \qquad (5.42)

where $\{v_{x_k}, v_{y_k}\}$ and $\{a_{x_k}, a_{y_k}\}$ are respectively the velocities and accelerations at step $k$.

5.3.8. The Extended Kalman Filter
The state vector used in this system contains the position and velocity of the mini-UAV in a 2D environment. The ranging data are obtained using a small UWB tag mounted on top of the mini-UAV. Therefore, the state vector contains the tag coordinates and velocities along the x and y axes:

x_k = [x_k \;\; y_k \;\; v_{x_k} \;\; v_{y_k}]^T \qquad (5.43)

Referring to the state estimation, the process is characterized by the stochastic variables $w_k$ and $v_k$, which represent the process noise and the measurement noise, respectively. $w_k$ and $v_k$ are assumed to be independent, white and normally distributed, with covariance matrices $Q_k$ and $R_k$. The observation vector $y_k$ represents the ranging measurements made between tag and anchors, and is the input of the filter. Because in the analyzed system the predictor equation contains a linear relationship, the process function $f$ can be expressed as a linear function:

x_{k+1} = A x_k + B u_k + w_k \qquad (5.44)

where the transition matrix A is defined as follows:

A = \begin{bmatrix} 1 & 0 & \Delta T & 0 \\ 0 & 1 & 0 & \Delta T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5.45)

and $\Delta T$ is the sample time. Consistently with (5.41)-(5.42), the input matrix is

B = \begin{bmatrix} \Delta T^2/2 & 0 \\ 0 & \Delta T^2/2 \\ \Delta T & 0 \\ 0 & \Delta T \end{bmatrix}

The input control vector contains the linear accelerations $u_k$ of the quadcopter:

u_k = \begin{bmatrix} u_{a_{x_k}} \\ u_{a_{y_k}} \end{bmatrix} \qquad (5.46)

Sensor measurements at time k are modeled in the Kalman Filter by the following equation (measurement model):

z_k = H x_k + v_k \qquad (5.47)

Because the UbiSense SDK provides only the (x, y, z) coordinates of the estimated position of the tag, the H matrix is:

H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \qquad (5.48)

The following flow chart shows how the sensor fusion algorithm works (Fig. 5.16):

Figure 5.16.: Sensor fusion algorithm flow-chart
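The following C# fragment is a minimal sketch of one cycle of this filter (prediction with eqs. (5.44)-(5.46), correction with (5.47)-(5.48)); the numeric values of Q and R are placeholders, not the tuned covariances used in the experiments.

```csharp
using System;

class PositionEkf
{
    const double Dt = 0.1;              // sample time (s), cfr. §5.3.9
    public double[] X = new double[4];  // state [x, y, vx, vy], eq. (5.43)
    public double[,] P = Eye(1.0);      // error covariance
    readonly double[,] Q = Eye(0.01);   // process noise (placeholder)
    public double Rxy = 0.05;           // measurement noise (placeholder)

    static double[,] Eye(double s)
    {
        var m = new double[4, 4];
        for (int i = 0; i < 4; i++) m[i, i] = s;
        return m;
    }

    // Prediction step: x = A x + B u, P = A P A^T + Q (A from eq. (5.45)).
    public void Predict(double ax, double ay)
    {
        X[0] += Dt * X[2] + 0.5 * Dt * Dt * ax;
        X[1] += Dt * X[3] + 0.5 * Dt * Dt * ay;
        X[2] += Dt * ax;
        X[3] += Dt * ay;

        var a = Eye(1.0); a[0, 2] = Dt; a[1, 3] = Dt;
        P = Mul(Mul(a, P), Transpose(a));
        for (int i = 0; i < 4; i++) P[i, i] += Q[i, i];
    }

    // Correction step with the UWB (x, y) trilateration, H from eq. (5.48).
    public void Correct(double zx, double zy)
    {
        // S = H P H^T + R is the 2x2 upper-left block of P plus R.
        double s00 = P[0, 0] + Rxy, s01 = P[0, 1],
               s10 = P[1, 0], s11 = P[1, 1] + Rxy;
        double det = s00 * s11 - s01 * s10;

        var k = new double[4, 2];                // K = P H^T S^-1
        for (int i = 0; i < 4; i++)
        {
            k[i, 0] = (P[i, 0] * s11 - P[i, 1] * s10) / det;
            k[i, 1] = (-P[i, 0] * s01 + P[i, 1] * s00) / det;
        }

        double iy0 = zx - X[0], iy1 = zy - X[1]; // innovation z - H x
        for (int i = 0; i < 4; i++) X[i] += k[i, 0] * iy0 + k[i, 1] * iy1;

        var pNew = new double[4, 4];             // P = (I - K H) P
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                pNew[i, j] = P[i, j] - k[i, 0] * P[0, j] - k[i, 1] * P[1, j];
        P = pNew;
    }

    static double[,] Mul(double[,] a, double[,] b)
    {
        var c = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int n = 0; n < 4; n++) c[i, j] += a[i, n] * b[n, j];
        return c;
    }

    static double[,] Transpose(double[,] a)
    {
        var t = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) t[i, j] = a[j, i];
        return t;
    }
}
```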

Improving the Extended Kalman Filter with Vision-Based Odometry
Today the vision-based odometry approach represents one of the most promising methodologies to improve the accuracy and precision of unmanned aerial vehicle localization. An interesting review of methods for visual odometry is [53]. The approach adopted here is based on artificial markers whose size and position in the considered environment are well known. This kind of approach is suitable for indoor localization; feature-based approaches based on SIFT/SURF and their variants are suggested for complex environments such as the outdoor one [54]. The first step (offline) is the calibration of the image sensor, to derive the intrinsic/extrinsic parameters of the camera. This step is performed using a standard chessboard pattern viewed from different points of view; the calibration is based on the Zhang approach [55]. In real time the artificial markers are detected and geo-referenced using the following approach:
• Get the current frame;
• Convert from RGB to gray-scale color space using Principal Component Analysis to obtain a high-contrast image;
• Smooth the image using a 3x3 Gaussian kernel;
• Apply adaptive thresholding;
• Extract polygons;
• Shape filtering (only polygons with four connected segments are considered);
• Extract the perspective transform from the 4 corners;
• Warp the image using the perspective transformation;
• Calculate the pose given the set of extracted object points, using the camera matrix and the distortion coefficients derived in the calibration stage;
• If a QR-code is present, decode the content of the marker.

Fig. 5.17 shows an example of marker detection from the bottom camera of the Ar.Drone. The marker is square, with a side of 22 cm and a 2 cm thick line. The idea is to use a set of well-known markers, optionally with coded information, as shown in Fig. 5.18 using a QR-code. The detection and recognition stages are performed online on a dedicated computer, due to the constraints of the computing unit installed on the Ar.Drone. The time required to detect and recognize the marker is approximately 100 ms on an Intel Core 2 Duo machine running the Windows 7 operating system. The Ar.Drone pose extracted with respect to the detected marker is then used in the EKF to correct the position estimate, changing the weight of the ranging and inertial measurements. The obtained results show that an accuracy ≤ 0.1 m can be achieved. The bottom camera of the Ar.Drone was used in view of a future extension focused on the capability of taking off from and landing on a mobile ground rover equipped with a well-known marker; this camera is useful for these tasks, which require an accuracy not achievable with the UWB sensors.


Figure 5.17.: An example of target detection using the bottom side Ar.Drone webcam

Figure 5.18.: An example of QRCode used to improve the localization task in the considered indoor environment

5.3.9. Experimental Results
In this section the results of some tests carried out in an indoor environment are presented (Fig. 5.19). This choice allows evaluating the performance of the algorithm even in the presence of multipath. The 2-D Cartesian coordinates of the anchors have the following values (expressed in meters):

A_1 = [0.18 \;\; 0.79]^T \qquad (5.49)
A_2 = [1.79 \;\; 8.107]^T \qquad (5.50)
A_3 = [7.31 \;\; 9.28]^T \qquad (5.51)
A_4 = [4.62 \;\; 0.23]^T \qquad (5.52)

The initial position of the mobile agent is $[3.3 \;\; 1.8]^T$ m. The speed of the UAV is set to the maximum Ar Drone velocity, 5 m/s. The step time used in the localization algorithm is 0.1 s, in order to exploit the maximum update rate of the UbiSense tag [37]. While the mobile agent moves along the path, the UWB tag node measures the ranges from the four anchors, and a trilateration of the (x, y) position is then provided to the Localization Algorithm Module by the UbiSense Location Engine through the Ranging UDP Server.


Figure 5.19.: Map of the indoor environment

Experimental Results without Vision-Based Odometry
Figure 5.20 shows the position estimation (red line) using calibrated IMU data and ranging measurements (blue dots). For comparison, the position estimation based on odometry alone (green dashed line) is also reported.

Fault Simulation on Ranging Measurements
Figure 5.21 shows a more complex test. The quadrotor is controlled to fly along a square path. During the flight, a fault of the ranging sensors is simulated: the localization algorithm is constrained to use the last available trilateration measure provided by the Ranging UDP Server through the Ar Drone Control Software. When the localization algorithm detects the fault (the same trilateration data over an interval T), it automatically increases the measurement noise covariance matrix R and the error estimation covariance matrix P. In this way the quadcopter is constrained to localize itself using mainly odometric data.


Figure 5.20.: Experimental Results - Test n◦ 1

In Figure 5.21, blue dots and black triangles represent the ranging measurements before and after the fault, respectively. A minimal sketch of this fault-handling logic is given below.
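The fragment below sketches the stale-measurement detector described above; the inflation factor, the number of repeats treated as a fault, and the epsilon used to compare consecutive trilaterations are illustrative assumptions.

```csharp
using System;

class RangingFaultMonitor
{
    const double Eps = 1e-9;        // two identical trilaterations => stale data (assumption)
    const int FaultSteps = 3;       // consecutive repeats treated as a fault (assumption)
    const double Inflation = 100.0; // factor applied to R (and P) during the fault (assumption)

    double lastX = double.NaN, lastY = double.NaN;
    int repeats;

    // Returns the measurement-noise scale to apply at this step.
    public double Update(double zx, double zy)
    {
        bool stale = Math.Abs(zx - lastX) < Eps && Math.Abs(zy - lastY) < Eps;
        repeats = stale ? repeats + 1 : 0;
        lastX = zx; lastY = zy;

        // While the fault persists, the EKF trusts odometry: R (and P) are inflated.
        return repeats >= FaultSteps ? Inflation : 1.0;
    }
}
```

In the filter loop the returned scale would multiply the measurement noise before each correction step, so that during the fault the correction has almost no effect and the prediction (odometry) dominates.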

Figure 5.21.: Experimental Results - Test n◦ 2

Figures 5.22 and 5.23 show the x and y axes individually. The fault occurs at T = [8 : 10] s. During the fault the quadcopter is able to localize itself with an error of about 1 m. When the ranging system is re-established, the filter correctly re-estimates the quadrotor position in less than 5 steps.

Experimental Results with Vision-Based Odometry
In this section an experimental result of the quadrotor position estimation with vision-based odometry is reported. The marker is positioned at (x, y) = [3.5 7.1] m. The initial position of the quadrotor is (x, y) = [3.15 2.9] m.


Figure 5.22.: Experimental Results - Test n◦ 2 - x axis

Figure 5.23.: Experimental Results - Test n◦ 2 - y axis

After take-off, the quadrotor moves toward the marker. Figure 5.24 shows a comparison of the position estimation with and without vision odometry. If the marker is recognized by the bottom camera, the localization algorithm increases the weight of the vision odometry in the position estimation: a boolean variable indicating the detection of the marker is associated with each frame. Figures 5.25 and 5.26 show the x and y axes individually. When the quadrotor reaches the marker position (t = 3.5 s), if the marker is detected the boolean variable is set to true and the EKF drastically reduces the localization error, calculating at each time step the ∆x and ∆y position from the marker, taking into account also the current body attitude.
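As an illustration of the last step, the following sketch computes the ground-plane offset of the drone from the marker given the pixel position of the marker centre, the calibrated focal lengths and the current attitude; the simple tilt compensation is an assumption of this sketch, not the exact routine used in the experiments.

```csharp
using System;

static class MarkerOffsetSketch
{
    // u, v: marker centre in pixels, measured from the principal point.
    // fx, fy: focal lengths in pixels (from the Zhang calibration stage).
    // pitch, roll: current body attitude (rad); h: height above the marker plane (m).
    public static (double dx, double dy) FromMarker(
        double u, double v, double fx, double fy,
        double pitch, double roll, double h)
    {
        // Angle subtended by the pixel offset, compensated by the body tilt
        // (assumption: pitch tilts the image along u, roll along v).
        double angX = Math.Atan2(u, fx) - pitch;
        double angY = Math.Atan2(v, fy) - roll;

        // Project onto the ground plane at height h.
        return (h * Math.Tan(angX), h * Math.Tan(angY));
    }
}
```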



Figure 5.24.: Experimental Results using also vision odometry

Figure 5.25.: Experimental Results using also vision odometry - x axis

Figure 5.26.: Experimental Results using also vision odometry - y axis


Chapter 6. 3D Simulator for cooperation between UAV and UGV

This chapter describes the 3D Simulation Environment developed for testing some of the Localization and Navigation algorithms as part of the framework of methodologies addressed by the R3-COP project. This framework, based on the nVidia PhysX physics engine, also provides an interface with the MatLab/Simulink environment. The Simulation Environment is then used for testing different cooperative scenarios, such as the navigation of several UAVs in formation using Networked Decentralized Model Predictive Control. Thanks to its interface with MatLab and Simulink, the framework has also been tested and used by other R3-COP partners for the development of Mission Management Systems based on Finite State Machines, described in the last part of this chapter. The possibility of interfacing the virtual environment with MatLab/Simulink, of developing custom mobile robots by specifying their physical properties, and of simulating different kinds of sensors makes the developed simulator an interesting tool, especially for the development of cooperative robotic systems. The results described in this chapter, including the images, have been published in [48, 50, 49].

6.1. The Framework
The developed framework provides a set of features mainly oriented to the simulation and control of autonomous aircraft in cooperative tasks. As mentioned in Chapter 2, the SimplyCube environment offers the opportunity to develop custom applications exploiting the power of languages based on the .NET platform (C#, Visual Basic and C++/CLI). Thanks to its ease of use and the considerable support available for it, C# is the ideal candidate for the development of applications based on SimplyCube. The framework [50, 48] consists of a series of modules, each specialized in a specific task. In the current version of the framework the available modules are:
• Management of three-dimensional environment actors (both static and dynamic) Module: This module implements the classes that allow 3D models to interact with the virtual world. The distinction between static and dynamic actors divides entities into two separate categories: items to which it is possible to apply forces and items for which this is not possible.
• Drone Module: This library provides a set of capabilities for the management of quadrotors. The implemented classes make it possible to obtain information about all aspects of the quadrotor (for example the yaw, pitch and roll angles) as well as methods for handling them. An XML configuration file is associated with each 3D model; in this file it is possible to edit any physical parameter of the aircraft (viscous friction of the air on the wings, maximum rpm for each engine, and so on).
• Avionic Instruments Module: This module plays a minor role. It allows the graphical display of data for each quadrotor; in this way real-time information can be gained concerning, for example, the position in space of an aircraft. In the current version of the framework, two tools are provided: an artificial horizon and an altimeter. Thanks to the modularity of the software, a new instrument can be added at any time simply by developing code that inherits the basic methods from the parent class.
• Network Services Module: This library provides methods for creating TCP/IP and UDP/IP connections.
• Vision Module: This module provides a simulated video stream from the points of view of the different mobile agents.
Together with these modules, the framework includes a specific library to interface the simulator with Matlab/Simulink. This open-source library, called PNET [56], can be used to set up TCP/IP and UDP/IP connections with Matlab, and can transmit data over the Intranet/Internet between Matlab processes or other applications. Figure 6.1 shows the structure of the framework just discussed. In order to make simulations more realistic, especially when dealing with cooperative tasks, the new version of the framework also provides a modular urban environment that can be enhanced by adding more models (see Figure 6.2). The urban scenario has been used during the testing of cooperative tasks based on Finite State Machines in MatLab/Simulink, described later in the chapter.


Figure 6.1.: Framework structure diagram

6.1.1. Management of three-dimensional environment actors Module
As mentioned before, this module implements the classes that allow 3D models to interact with the virtual world. A physical object is a rigid body that can collide with other objects. In the SimplyCube environment such objects are called actors and are mainly separated into two categories:
• Static Actors;
• Dynamic Actors.
Dynamic actors can move. Their properties are mass, velocity and inertia; moreover, it is possible to apply a force or a torque to them. When a collision is detected on a dynamic actor, the simulator applies a force and a torque to it in order to simulate a realistic reaction, taking into account the properties of mass and inertia. The forces and torques applied dynamically change the speed of the actor and therefore its position. Static actors are much simpler than dynamic actors: reaction forces and torques are not calculated for them, and they cannot move in space. As mentioned, a rigid body can collide with other objects. To make this happen, it is necessary to define a collision area, called a "collision shape". The SimplyCube environment provides three different collision shapes: parallelepiped, ellipsoid and capsule. By combining these three basic shapes it is possible to generate arbitrarily complex collision surfaces. These areas are included inside the actors (both dynamic and static).


Figure 6.2.: Virtual Urban Environment

6.1.2. Drone Module
This library provides a set of capabilities for the management of quadrotors. An XML configuration file is associated with each 3D model of a quadrotor. In order to create a drone, it is necessary to declare two files:
• A file that defines the complex object which makes up the drone: it is an assembly of several simple actors linked with joints;
• A file that defines the drone configuration: its body (main part), its rotors, and for each rotor the engine and blade configuration.
A drone is a complex object composed of a body, several rotors and blades. In order to make the drone fly, a force is applied on each rotor depending on its angular velocity (rotation speed). For each rotor the applied force equals:

F = w^2 k

where:
• $F$ is the applied force, in newtons;
• $w$ is the angular velocity of the blade, in radians per second;
• $k$ is a coefficient, calculated as follows:

k = \frac{MassLift \cdot Gravity}{RPMLift^2}

6.1. The Framework

6.1.3. Avionic Instruments Module This library contains all the classes used for the development and maintenance of avionic instruments. The tools developed include: • An artificial horizon; • An altimeter. A set of primitives for the creation of other types of instruments are provided, also to extend - in future development - the avionic library. All virtual instruments have been made following the same logic: objects are made with a series of bitmap images which are rotated, translated or scaled before being displayed. The methods for the operations of rotation, translation and scaling are implemented in a parent class. Subsequently, each class that will implement a different virtual instrument will inherit from this class the basic methods and use its parameters to manipulate the images. Specifically the rotation of the image is divided into two parts: • Effective Rotation around a fixed point which is user-defined; • Translation of the image at a given point (which is always defined by the user).

6.1.4. Network Services Module The library implements the communication layer based on TCP/IP and UDP/IP. The advantages of using these protocols for communication between the framework and the outside world are various; they allow for the possibility to create a communication protocol shared by all parts of the system and also allow for the option to sort the computational load on several computers which do not have to be located in the same network.

6.1.5. Vision Module Using the tools provided by the SimplySim libraries every simulated modeled agent (UAV and UGV) has been equipped with a pan-tilt virtual camera which provides a video stream over UDP socket at 15 frames per second. In order to keep the computation load light as possible, every frame is compressed and sent entirely in a UDP datagram. Thanks to this feature, is possible to retrieve the stream from simulated camera in third-part application for image and video elaboration. As example, a OpenCV-based application for the image analysis has been integrated within the framework. This application exploits the high performances of OpenCV libraries to analyze the frames retrieved by the simulated cameras. 77

Thanks to the possibility of receiving data and sending commands through UDP sockets, algorithms for autonomous landing based on markers have been successfully developed and tested (Figure 6.3).

Figure 6.3.: Vision Module

6.2. Simulator Interface with Matlab
In this section the methodology used in the design of the process control systems is described. After a brief discussion of the communication between simulator and control system, an overview of the PID control system for the attitude control and position tracking of a quadrotor is given.

6.2.1. The Quadrotor Model
The context of this work is the assumption of gray-box modelling. The process is a 3D model of the aircraft within the simulation environment. The system of equations describing the dynamics of the aircraft is replaced with an S-function (Fig. 6.4) that implements a UDP server to exchange data with the simulator; the quadrotor is considered a gray-box due to the limited availability of parameters. The inputs and outputs of the S-function are listed in the following table (Tab. 6.1):

Input  | Description
1      | ∆rpm for altitude
2      | ∆rpm for pitch angle
3      | ∆rpm for roll angle
4      | ∆rpm for yaw angle

Output | Description
1      | position along x
2      | position along y
3      | position along z
4      | linear speed along x
5      | linear speed along y
6      | linear speed along z
7      | pitch angle
8      | roll angle
9      | yaw angle
10     | angular speed around pitch
11     | angular speed around roll
12     | angular speed around yaw

Table 6.1.: Inputs and outputs of the S-function

The logic behind the S-function is as follows:
1. the simulator sends sensor data to the UDP server contained within the S-function;
2. the S-function transmits the data received from the simulator out of the Simulink block, providing the values for the calculation of the control laws;
3. the control system calculates the control laws and provides the results as input to the S-function;
4. the UDP server included in the S-function sends the control law values back to the simulator;

Figure 6.4.: Simulink block of quadrotor

5. the simulator uses the received data to set the rotation speed of the rotors.
The following figure shows the data flow (Fig. 6.5):

Figure 6.5.: Data flow between simulator and Simulink control system
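The following is a hedged C# sketch of the simulator side of the loop in Fig. 6.5; the real framework exchanges its own binary packets, so the CSV payload and the port numbers used here are illustrative assumptions.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ControlLoopEndpoint
{
    public static void Main()
    {
        using var udp = new UdpClient(9001);                       // local endpoint (assumed port)
        var simulinkEp = new IPEndPoint(IPAddress.Loopback, 9000); // S-function UDP server (assumed port)

        while (true)
        {
            // 1. send the 12 sensor outputs of Tab. 6.1 to the S-function
            byte[] sensors = Encoding.ASCII.GetBytes(string.Join(",", ReadSensorOutputs()));
            udp.Send(sensors, sensors.Length, simulinkEp);

            // 2-4. receive the four Δrpm control values computed in Simulink
            IPEndPoint remote = null;
            string[] drpm = Encoding.ASCII.GetString(udp.Receive(ref remote)).Split(',');

            // 5. apply them to the rotors
            ApplyRotorDeltas(Array.ConvertAll(drpm, double.Parse));
        }
    }

    static double[] ReadSensorOutputs() => new double[12]; // placeholder for the simulated sensors
    static void ApplyRotorDeltas(double[] d) { /* set the rotor speeds in the simulator */ }
}
```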

6.3. A PID controller for position tracking
For the position control of the quadrotor, a control system consisting of three nested control loops was realized. The inner loop is responsible for the stabilization of the aircraft, acting on the pitch, roll and yaw angles. The reference for the attitude control is provided by the speed controller, which in turn takes as reference the output of the position controller. Figure 6.6 shows the logical arrangement of the three controllers:

Figure 6.6.: Control System for stabilization and position tracking of quadrotor

Both the position and the speed controllers are implemented as PID algorithms. The parameters were tuned with heuristic techniques, just as for the attitude control.

Using these three nested control loops, it is possible to move the aircraft along the path and have it reach the waypoints with zero speed. For the generation of the references of the position control, the Simulink Signal Builder was used. This tool provides the ability to separately define the coordinate values of the positions the aircraft has to reach; in this way the generation of trajectories for the quadrotor can be handled with absolute simplicity. A minimal sketch of the nested-loop scheme is given below.
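The following C# fragment sketches the three nested loops of Fig. 6.6 for one axis; the gains and the class names are illustrative assumptions, not the heuristically tuned values used in the simulations.

```csharp
class Pid
{
    public double Kp, Ki, Kd;
    double integral, lastError;
    public Pid(double kp, double ki, double kd) { Kp = kp; Ki = ki; Kd = kd; }

    public double Step(double error, double dt)
    {
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return Kp * error + Ki * integral + Kd * derivative;
    }
}

class CascadedPositionController
{
    readonly Pid position = new Pid(1.0, 0.0, 0.2); // outer loop (assumed gains)
    readonly Pid speed    = new Pid(0.8, 0.1, 0.0); // middle loop (assumed gains)
    readonly Pid attitude = new Pid(2.0, 0.0, 0.3); // inner loop (assumed gains)

    // Returns the Δrpm command for one axis (e.g. pitch for x tracking).
    public double Step(double posRef, double pos, double vel, double angle, double dt)
    {
        double velRef   = position.Step(posRef - pos, dt);  // position error -> speed reference
        double angleRef = speed.Step(velRef - vel, dt);     // speed error -> attitude reference
        return attitude.Step(angleRef - angle, dt);         // attitude error -> Δrpm
    }
}
```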

6.4. Formation Control via Networked Decentralized Model Predictive Control
The structure of this framework makes the development of cooperative control laws possible. These control techniques include PID and Model Predictive Control. In this section, an algorithm for the MPC formation flight of two quadrotors is presented. Formation control has been addressed in [57], [58], [59], [60], [61], [62], [63], [64], [65], [66]. In the near future it will be possible to develop formation control techniques not only for UAVs but also for UGVs. In Control Engineering, when the process to be controlled is too large or too fast to be handled by a classical centralized control system, the control problem is often divided into several sub-problems, where the fulfilment of the specifications is guaranteed by a proper collaboration of the subsystems. This control technique, called Decentralized Control, has been widely studied in recent years, mainly because of the remarkable expansion of computer networks. The coordination of the subsystems can be obtained through the exchange of information between agents inside a communication network (Fig. 6.7); in this way both the coordination and the control strategy are completely decentralized. The features of Networked Decentralized Model Predictive Control (ND-MPC) have recently been presented and successfully tested in several real cases in which a strong interaction among a large number of subsystems was necessary. The advantages of using MPC are linked to the ability to generate control actions taking into account not only the information from each subsystem but also the non-linearities and any limitations of the system. In this section ND-MPC is used to solve the leader-follower problem for two quadrotors (Fig. 6.9), trying to minimize the control actions for each of the components. The basic idea is the following: the vehicles must stay at a constant distance from each other; each vehicle follows the leader, and only the leader knows the path. Each aircraft implements a Decentralized MPC algorithm based on the information collected by its sensors and on the information from the other aircraft on the network. An error model is defined to ensure that the aircraft remain in formation [67], [68]. These models are highly non-linear, coupled and dynamically constrained.

Figure 6.7.: A formation of quadrotors in a leader-follower scheme

6.4.1. The Kinematic Model
Let us consider a set of N aircraft whose configuration at the generic instant t is denoted by the following vector:

q^i(t) \triangleq [q_x^i(t) \;\; q_y^i(t) \;\; q_\theta^i(t)]^T \qquad (6.1)

Assuming that a low-level speed-maintaining controller is defined, the control problem is transformed into planning a desired route for the low-level controller, which should define the optimal values of the linear speed v and rotational speed ω and should ensure that the aircraft remain in formation while minimizing the efforts as much as possible. Therefore, each aircraft is guided through its linear and angular velocities, which define the vector of control actions:

u^i(t) = [v^i(t) \;\; \omega^i(t)]^T \qquad (6.2)

The kinematics of each single aircraft is described by the following continuous-time model:

\dot{q}^i(t) = \begin{bmatrix} \cos q_\theta^i(t) & 0 \\ \sin q_\theta^i(t) & 0 \\ 0 & 1 \end{bmatrix} u^i(t) \qquad (6.3)

By sampling (6.3) with a sample time $T_s$, the velocities v, w produce finite linear and angular displacements $v_k^i \triangleq v^i(kT_s)T_s$, $w_k^i \triangleq w^i(kT_s)T_s$ within each sampling interval. Defining $q_k^i \triangleq [q_{x,k}^i \; q_{y,k}^i \; q_{\theta,k}^i]^T \triangleq q^i(kT_s)$, $u_k^i \triangleq [v_k^i \; w_k^i]^T \triangleq u^i(kT_s)T_s$ and approximating the derivatives with a proper discretization method, the following discrete-time model is obtained:

q_{k+1}^i = q_k^i + H_k^i u_k^i \qquad (6.4)

where, in general,

H_k^i = H(q_k^i) = \begin{bmatrix} \cos q_{\theta,k}^i & 0 \\ \sin q_{\theta,k}^i & 0 \\ 0 & 1 \end{bmatrix} \qquad (6.5)

A minimal sketch of one step of this discrete model is given below.
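A minimal sketch of eqs. (6.4)-(6.5) in C#:

```csharp
using System;

static class UnicycleModel
{
    // One step of the discrete model (6.4)-(6.5): q = [qx, qy, qtheta],
    // u = [v_k, w_k], i.e. the linear/angular displacements over one T_s.
    public static double[] Step(double[] q, double[] u) => new[]
    {
        q[0] + Math.Cos(q[2]) * u[0], // qx  += cos(theta) * v_k
        q[1] + Math.Sin(q[2]) * u[0], // qy  += sin(theta) * v_k
        q[2] + u[1]                   // qth += w_k
    };
}
```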

When dealing with the formation control problem, the position of a leader aircraft with respect to the follower aircraft should be kept equal to a desired value. Let us define a rotation matrix operator which transforms fixed-frame coordinates into rotated-frame coordinates by a rotation α as follows:

T(\alpha) \triangleq \begin{bmatrix} \cos(\alpha) & \sin(\alpha) & 0 \\ -\sin(\alpha) & \cos(\alpha) & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (6.6)

Figure 6.8.: Relative configuration of aircraft V^j with respect to aircraft V^i for the tracking error system

With respect to Figure 6.8, denoting with $T_k^i \triangleq T(q_{\theta,k}^i)$ the rotation matrix which changes inertial coordinates into the reference frame $(O_i, x_i, y_i)$ fixed to aircraft $V^i$, the relative displacement of aircraft $V^j$ referred to $V^i$ is:

d_k^{ij} \triangleq [x_k^{ij} \;\; y_k^{ij} \;\; \theta_k^{ij}]^T = T_k^i (q_k^j - q_k^i) \qquad (6.7)

which by (6.4) implies the following relation:

d_{k+1}^{ij} = T_{k+1}^i [q_k^j + H_k^j u_k^j - q_k^i - H_k^i u_k^i] \qquad (6.8)

Defining

A_k^i \triangleq A(u_k^i) \triangleq T_{k+1}^i (T_k^i)^{-1} \qquad (6.9)
B_k^i \triangleq B(u_k^i) \triangleq -T_{k+1}^i H_k^i \qquad (6.10)
E_k^{ji} \triangleq E(d_k^{ji}, u_k^j, u_k^i) \triangleq T_{k+1}^i H_k^j \qquad (6.11)

equation (6.8), for $i, j = 1, \ldots, N$, $j \neq i$, gives the formation vector model:

d_{k+1}^{ji} = A_k^i d_k^{ji} + B_k^i u_k^i + E_k^{ji} u_k^j \qquad (6.12)

Note that the matrices $A_k^i$, $B_k^i$, $E_k^{ji}$ are, in general, functions of the current displacement $d_k^{ji}$, control action $u_k^i$ and interaction vector $u_k^j$. In this case, because there is no relative rotation between the quadrotors, the previous equation (6.12) simplifies as follows:

d_{k+1}^{ji} = d_k^{ji} + u_k^j - u_k^i \qquad (6.13)

6.4.2. The Predictive Model
The h-step-ahead state prediction for aircraft $V^i$ is computed by h iterations of model (6.12), which gives:

\hat{d}_{k+1|k}^{ji} = \hat{A}_{k|k}^i \hat{d}_{k|k}^{ji} + \hat{B}_{k|k}^i \hat{u}_{k|k}^i + \hat{E}_{k|k}^{ji} \hat{u}_{k|k}^j \qquad (6.14)

\hat{d}_{k+h|k}^{ji} = \hat{B}_{k+h-1|k}^i \hat{u}_{k+h-1|k}^i + \hat{E}_{k+h-1|k}^{ji} \hat{u}_{k+h-1|k}^j + \sum_{l=1}^{h-1} \prod_{n=1}^{h-l} \hat{A}_{k+h-n|k}^i \left[ \hat{B}_{k+l-1|k}^i \hat{u}_{k+l-1|k}^i + \hat{E}_{k+l-1|k}^{ji} \hat{u}_{k+l-1|k}^j \right] + \prod_{n=1}^{h} \hat{A}_{k+h-n|k}^i \, \hat{d}_{k|k}^{ji} \qquad (6.15)

6.4.3. Physical Constraints
Due to physical limits, the discrete-time model (6.12) is subject to a set of constraints on its velocities and accelerations. For each integer h ≥ 1, the discrete linear and angular velocities are constrained by the following inequalities:

\underline{v}^i \leq v_{k+h-1}^i \leq \bar{v}^i, \qquad \underline{w}^i \leq w_{k+h-1}^i \leq \bar{w}^i \qquad (6.16)
|\Delta v_{k+h-1}^i| \leq \Delta\bar{v}^i, \qquad |\Delta w_{k+h-1}^i| \leq \Delta\bar{w}^i \qquad (6.17)

where $\Delta$ is a difference operator such that $\Delta v_{k+h-1}^i \triangleq v_{k+h-1}^i - v_{k+h-2}^i$ and $\Delta w_{k+h-1}^i \triangleq w_{k+h-1}^i - w_{k+h-2}^i$ give the discrete accelerations.


6.4.4. The Leader-Follower Problem
Let us assume that the aircraft $V^i$ is controlled by a local independent controller $A^i$ which implements an MPC strategy [49]. The reference formation pattern is defined by the vectors

\bar{d}^{ji} \triangleq [\bar{x}^{ji} \;\; \bar{y}^{ji} \;\; \bar{\theta}^{ji}]^T \qquad (6.18)

which specify the desired displacement for the pair of aircraft $V^i$, $V^j$, where $V^j$ is the leader of $V^i$, $i = 0, \ldots, N$, $j \neq i$. In order to keep the desired formation, agent $A^i$ communicates with the other agents and iteratively computes the optimal control sequence $\hat{u}_{\cdot|k}^i \triangleq [(\hat{u}_{k|k}^i)^T \ldots (\hat{u}_{k+p-1|k}^i)^T]^T$ over the horizon p. The following framework is proposed here:
• Each control agent $A^i$, $i = 1, \ldots, N$ communicates with its neighboring agents.
• Each control agent $A^i$, $i = 1, \ldots, N$ knows its configuration $q^i$ and the configurations $q^j$ of the neighboring agents.
• The reference trajectory $T^*$, to be followed by the main leader aircraft $V^1$, is generated by a virtual reference aircraft $V^0$ which moves according to the considered model (6.4).
• Each $V^i$, $i = 2, \ldots, N$ follows one and only one leader $V^j$, $j \neq i$; $V^1$ follows the virtual vehicle $V^0$, which exactly tracks the reference trajectory $T^*$.
• Each $V^i$, $i = 1, \ldots, N$ should keep the formation vector $\bar{d}^{ji}$ from its leader $V^j$.
In order to evaluate the performance of a follower $A^i$, a measure of the difference between the actual or predicted formation vector $d_k^{ji}$ and the constant reference vector $\bar{d}^{ji}$ is needed. Given the actual formation vector $d_k^{ji} \triangleq [x_k^{ji} \; y_k^{ji} \; \theta_k^{ji}]^T$ of vehicle $V^i$, which follows its leader $V^j$, and the desired formation vector $\bar{d}^{ji} \triangleq [\bar{x}^{ji} \; \bar{y}^{ji} \; \bar{\theta}^{ji}]^T$, the following scalar is chosen as the measure of performance for control agent $A^i$:

\|d_k^{ji} - \bar{d}^{ji}\| \triangleq \rho_x (x_k^{ji} - \bar{x}^{ji})^2 + \rho_y (y_k^{ji} - \bar{y}^{ji})^2 + \rho_\theta \sin^2 \frac{\theta_k^{ji} - \bar{\theta}^{ji}}{2} \qquad (6.19)

A minimal sketch of this measure is given below.
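A minimal C# sketch of the performance measure (6.19), with d and dRef as [x, y, theta] vectors:

```csharp
using System;

static class FormationMetric
{
    // Performance measure (6.19); rho weights are the tuning parameters
    // explored in Tables 6.2-6.3.
    public static double Error(double[] d, double[] dRef,
                               double rhoX, double rhoY, double rhoTheta)
    {
        double ex = d[0] - dRef[0];
        double ey = d[1] - dRef[1];
        double st = Math.Sin((d[2] - dRef[2]) / 2.0);
        return rhoX * ex * ex + rhoY * ey * ey + rhoTheta * st * st;
    }
}
```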

6.4.5. Decentralized MPC
Given the tree of connections $g = [g^1, \ldots, g^N]$ of the aircraft, the reference trajectory $T^*$ and the prediction horizon p, the Networked Decentralized MPC problem at time k for the set of aircraft $\{V^1, \ldots, V^N\}$ with weighting coefficients $\mu^i \in \mathbb{R}$, $i = 1, \ldots, N$, consists in solving N independent non-linear optimization problems stated, for $i = 1, \ldots, N$, as:

\min_{\hat{u}_{\cdot|k}^i} J_k^i(d_k^{ji}, \hat{u}_{\cdot|k}^i, u_{\cdot|k-1}^{j*}) \qquad (6.20)

where

J_k^i(d_k^{ji}, \hat{u}_{\cdot|k}^i, u_{\cdot|k-1}^{j*}) \triangleq \sum_{h=1}^{p} \left\| \hat{d}_{k+h|k}^{g^i,i} - \bar{d}^{g^i,i} \right\|^2 + \mu^i \left| \hat{u}_{k+h-1|k}^i \right|^2 \qquad (6.21)

subject to:

\underline{v}^i \leq v_{k+h-1}^i \leq \bar{v}^i, \qquad \underline{w}^i \leq w_{k+h-1}^i \leq \bar{w}^i \qquad (6.22)
|\Delta v_{k+h-1}^i| \leq \Delta\bar{v}^i, \qquad |\Delta w_{k+h-1}^i| \leq \Delta\bar{w}^i \qquad (6.23)

The control actions thus determined act directly on the control of the pitch and roll angles of each of the N aircraft. In the following simulations, the absolute bound for the speed along the x and z axes is set to 3 m/s. The sketch below illustrates how the functional (6.21) is evaluated along the horizon.
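As an illustration, the following sketch evaluates the functional (6.21) for a candidate control sequence in the simplified translation-only case (6.13), reusing the FormationMetric helper from the previous sketch; treating each control sample as a per-axis displacement over one sampling interval is an assumption of the sketch.

```csharp
static class NdMpcCost
{
    // J for one follower over the horizon p = uSelf.Length, eq. (6.21),
    // propagating the formation vector with the simplified model (6.13).
    public static double J(double[] d0, double[][] uSelf, double[][] uLeader,
                           double[] dRef, double mu,
                           double rhoX, double rhoY, double rhoTheta)
    {
        var d = (double[])d0.Clone();
        double j = 0.0;
        for (int h = 0; h < uSelf.Length; h++)
        {
            // d_{k+1} = d_k + u^j - u^i (translation-only formation update)
            d[0] += uLeader[h][0] - uSelf[h][0];
            d[1] += uLeader[h][1] - uSelf[h][1];

            double effort = uSelf[h][0] * uSelf[h][0] + uSelf[h][1] * uSelf[h][1];
            j += FormationMetric.Error(d, dRef, rhoX, rhoY, rhoTheta) + mu * effort;
        }
        return j;
    }
}
```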

Figure 6.9.: A sample scenario where two UAVs perform path following under the MPC control law

6.4.6. Main Results
In this section the results about the formation control of two aircraft are presented. The leader aircraft follows a virtual trajectory generated by a virtual reference aircraft $V^0$. Fig. 6.10 shows the trajectory used for the simulations:


Figure 6.10.: Example of path for virtual aircraft: units are in meters

Simulations last 180 seconds, during which the leader aircraft takes off, flies the path and lands at the point of departure. The simulation time is divided into 10 intervals $T_i$, $i = 1, \ldots, 10$, described as follows:
• T1 = [0 - 20] s: the leader takes off and reaches waypoint 1 (WP1)
• T2 = [20 - 30] s: the leader remains at WP1
• T3 = [30 - 50] s: the leader flies the distance between WP1 and WP2
• T4 = [50 - 70] s: the leader remains at WP2
• T5 = [70 - 80] s: the leader flies the distance between WP2 and WP3
• T6 = [80 - 100] s: the leader remains at WP3
• T7 = [100 - 120] s: the leader flies the distance between WP3 and WP4
• T8 = [120 - 135] s: the leader remains at WP4
• T9 = [135 - 145] s: the leader flies the distance between WP4 and WP1
• T10 = [160 - 175] s: the leader lands.
In the following, the results of the ND-MPC parameter tuning are shown. With the developed framework it is possible to run all the simulations needed to calculate the optimal values of the parameters of the functional J contained in the ND-MPC algorithm. For the simulations, the trend of the relative positioning error along the x and z axes (pitch and roll, respectively) is shown.


6.4.7. Simulation Results
In these simulations the leader aircraft follows the rectangular path defined in the previous paragraph. The follower aircraft is constrained to maintain a relative distance from the leader of 2 m along the x axis and 2 m along the z axis. Tables 6.2 and 6.3 show the relative positioning errors of the follower with respect to the leader. In simulation 15, the values of the chosen parameters ensure a maximum error of 0.12 m on the two coordinates with a prediction horizon of 20. The values of $RMS_X$ and $RMS_Z$ refer to the entire flight sequence. The values of $maxErr_X$ and $maxErr_Z$ are expressed in meters.

Sim | ρx = ρz | µ   | h  | maxErrX (m) | RMS_X | maxErrZ (m) | RMS_Z
1   | 0.5     | 0.1 | 10 | 1.19        | 0.36  | 1.11        | 0.48
2   | 0.5     | 0.5 | 10 | 4.35        | 1.29  | 4.85        | 2.01
3   | 1.5     | 0.1 | 10 | 0.45        | 0.12  | 0.42        | 0.16
4   | 1.5     | 0.5 | 10 | 1.85        | 0.56  | 1.71        | 0.77
5   | 3.5     | 0.1 | 10 | 0.21        | 0.05  | 0.19        | 0.07
6   | 3.5     | 0.5 | 10 | 0.89        | 0.26  | 0.82        | 0.34
7   | 3.8     | 0.1 | 10 | 0.19        | 0.05  | 0.18        | 0.06
8   | 3.8     | 0.5 | 10 | 0.83        | 0.24  | 0.77        | 0.32

Table 6.2.: Simulation results of MPC with prediction horizon equal to 10

Sim | ρx = ρz | µ   | h  | maxErrX (m) | RMS_X | maxErrZ (m) | RMS_Z
9   | 0.5     | 0.1 | 20 | 0.61        | 0.18  | 0.57        | 0.23
10  | 0.5     | 0.5 | 20 | 2.52        | 0.74  | 2.38        | 1.05
11  | 1.5     | 0.1 | 20 | 0.23        | 0.06  | 0.21        | 0.08
12  | 1.5     | 0.5 | 20 | 0.95        | 0.26  | 0.88        | 0.35
13  | 3.5     | 0.1 | 20 | 0.12        | 0.03  | 0.11        | 0.04
14  | 3.5     | 0.5 | 20 | 0.45        | 0.13  | 0.42        | 0.16
15  | 3.8     | 0.1 | 20 | 0.12        | 0.03  | 0.11        | 0.03
16  | 3.8     | 0.5 | 20 | 0.42        | 0.12  | 0.39        | 0.15

Table 6.3.: Simulation results of MPC with prediction horizon equal to 20

The following graphs (Fig. 6.11, 6.12, 6.13, 6.14, 6.15, 6.16) show the relative position error along the x and z axes. Each graph represents the evolution of two simulations carried out using the same weight coefficients for the relative positioning errors along the two axes and two different values of the prediction horizon. The shift of the leader aircraft from one way-point to the next follows a ramp reference; clearly, the slope of the ramp determines the speed of the aircraft. After each shift, the leader remains at the new way-point for a time $T_i$, $i = 2, 4, 6, 8$. As mentioned before, during the entire simulation the follower quadrotor is forced to maintain a relative distance of 2 m from the leader along each of the two axes.


Figure 6.11.: Position Error between leader and follower along the x axis. Graphs relating to Simulations n◦ 3 and 11

Figure 6.12.: Position Error between leader and follower along the y axis. Graphs relating to Simulations n◦ 3 and 11

The graphs indicate a relative position error of approximately zero during the intervals in which the quadrotors remain at a way-point, and a non-zero relative position error (depending on the choice of the ρx, ρz, µ and h parameters) during the transfer from one way-point to the next. As an example, Figures 6.17 and 6.18 report the trajectories covered by the quadrotors during simulation 13. The position error at the generic instant t is calculated as follows:

error = posLeader_i(t) - posFollower_i(t) - \bar{d} \qquad (6.24)

where $\bar{d}$ is the desired relative displacement between the aircraft (in this case $\bar{d}$ = 2 m).


Figure 6.13.: Position Error between leader and follower along the x axis. Graphs relating to Simulations n◦ 5 and 13

Figure 6.14.: Position Error between leader and follower along the y axis. Graphs relating to Simulations n◦ 5 and 13

In accordance with equation (6.24), for the x axis the error at t = 38.012 s is:

error = posLeader_x(38.012) - posFollower_x(38.012) - \bar{d} = 151.9991 - 149.9978 - 2 \simeq 0 \; m \qquad (6.25)-(6.26)

while at t = 76.65 s the error is:

error = posLeader_x(76.65) - posFollower_x(76.65) - \bar{d} = 156.3859 - 154.2934 - 2 \simeq 0.09 \; m \qquad (6.27)-(6.28)


Figure 6.15.: Position Error between leader and follower along the x axis. Graphs relating to Simulations n◦ 7 and 15

Figure 6.16.: Position Error between leader and follower along the y axis. Graphs relating to Simulations n◦ 7 and 15

Such values are in keeping with the simulation results. The obtained results show that ND-MPC also gives good results for formation control in the case of flying robots. The simulations confirm that by increasing the prediction horizon while keeping the weight coefficients fixed, a substantial improvement in the relative position between leader and follower can be obtained. Clearly, the growth of the prediction horizon inevitably increases the value of the control effort, and with it the value of the functional.


Figure 6.17.: Paths of Leader and Follower along the x axis. Graphs relating to Simulation n◦ 13

Figure 6.18.: Paths of Leader and Follower along the z axis. Graphs relating to Simulation n◦ 13

6.5. State Machines for Mission Management
The Air Borne Demonstrator in the R3-COP project is oriented to demonstrating some examples of cooperation between a UAV and a UGV in order to accomplish one of the proposed scenarios. The developed framework has been used in the first stage of the development of the Air Borne Demonstrator, providing a set of tools for testing different cooperative control strategies in a safe virtual environment. In particular, the cooperation between the mobile agents, realized through the exchange of messages, has been evaluated. Among all the scenarios proposed in the R3-COP project, the Building Inspection was chosen. With respect to the Building Inspection Scenario, the system consists of two autonomous objects (AO) - RUAV and UGV - which are controlled from a Ground Control Station (GCS). For the RUAV and the UGV, generic models with virtual video cameras are used. All objects are able to communicate with each other through UDP sockets. In this task, a small RUAV equipped with different kinds of sensors has been used to analyze the surface of a building. Since the limited battery endurance of the RUAV produces a short flight time, the system is extended with an Unmanned Ground Vehicle: the UAV is able to land on the UGV, which is equipped with a system for battery recharging, and therefore extend the mission endurance. Going into more detail, a step-by-step description of the task can be summarized as follows:
• The demonstration starts with the UAV positioned on the UGV in the indoor environment
• The UAV takes off from the UGV
• The UAV flies a path using a pre-defined list of waypoints
• The UAV lands on the UGV; the mobile platform mounted on the UGV helps the UAV during the landing (mechanical compliance)
• The UAV starts recharging (fast recharging)
• After a time t, the UGV (with the UAV onboard) moves toward the outdoor environment, giving the UAV the possibility to switch from UWB ranging to GPS ranging (handover). During this phase the UAV remains on the UGV. The handover is managed by UAV and UGV, and for the UAV it is carried out with the UAV lying on the UGV and NOT during the flight.
• The UGV reaches a predefined waypoint (outdoor)
• The UAV takes off and starts a building inspection using a thermocamera or a visible camera, flying through a pre-defined path and sending images to the Ground Station.
• The UAV lands on the UGV or in a different place.
In order to make the testing of the cooperative strategies within the Simulation Environment more realistic, the virtual urban scenario shown in Figure 6.2 has been used. In our application, several separate subtasks have to be managed:
• Management of Autonomous Mobile Agents
• Development of sample control programs for RUAV, UGV, GCS
• Implementation of easily changeable missions
• Development of a testing platform for the framework
Therefore, the Mission Management System has been developed in MatLab StateFlow, exploiting also the possibility for the Framework to communicate with MatLab (cfr. §6.2).

6.5.1. MatLab StateFlow for Mission Management
Stateflow is generally used to specify the discrete controller in the model of a hybrid system where the continuous dynamics (i.e., the behavior of the plant and environment) are specified using Simulink. It suits our needs well, as we can consider our objects as discrete real-time systems, whose continuous dynamics take place in the Virtual Environment. Each object (RUAV, UGV and GCS) was developed as a StateFlow chart in one Simulink system. In particular, StateFlow was used to manage the following tasks:
• Main logic, providing the control code
• Hardware and software events
• Video image analysis
The control code is divided into hierarchical tasks, each corresponding to a different stage of the mission (Landing, Takeoff, Waypoint Navigation). Hardware and software events are used to switch between the different stages of the mission. Video image analysis provides an additional source of events for controlling the execution of the mission: an OpenCV application analyzes the virtual video stream gathered from the on-board camera of the UAV in order to detect the landing area and estimate the distance from it during the landing procedure. Our framework is similar to a Software-In-The-Loop (SIL) model, where the Systems Under Test (SUT) are, in our case, the RUAV and UGV high-level programs and algorithms. This simulated model is much better suited for performing automatic repeatable tests than real hardware. In developing the control code for the RUAV we can note the following advantages of using Finite State Machines:
• Graphic design of code
• Separation into sequential and parallel states, which is essential for developing real-time programs
• Easy debugging and testing in the Simulink environment
• Use of Simulink tools for the Mission Planning

• Mission Planning in diagram format is easy to read, change and expand
• Possibility of deploying the virtual reality on a separate PC from the Simulink model, thanks to the UDP connection between the Framework and MatLab
The framework is best suited for performing testing with visual online observation of the test results. In one single virtual reality the behavior of several objects is evaluated and their cooperation can be analyzed. Video camera simulation is very important, as autonomous robots nowadays are always equipped with cameras. The framework can also be adjusted to perform automatic tests: regression testing, load tests, stress tests, etc. Another important field for automatic testing is automatic test case generation with a certain coverage criterion for the framework [69].


Chapter 7. Conclusions and Future Works

In the first part of this Thesis some versions of the Extended Kalman Filter for the indoor localization of UAVs and UGVs, using an 802.15.4a wireless network and vision-based odometry, have been presented. Concerning the UGV, the localization algorithm based on the IEEE 802.15.4a Chirp Spread Spectrum (CSS) technology is able to manage faults on the measurements, isolating the corrupted data without shutting down the localization system. The Nanotron RTLS Kit provides a reliable ranging system, especially in outdoor environments, using a proprietary ranging technology called Symmetrical Double-Sided Two Way Ranging (SDS-TWR). This technique tries to overcome the limitations of the classical Received Signal Strength Indication (RSSI) (e.g., Wi-Fi mapping), which does not ensure good performance, especially in structured environments. This set of devices allows the creation of a Wireless Sensor Network (WSN) that is suitable for cooperative tasks, where the data link is fundamental to share data and support relative localization. The management of faulty measurements also reduces the ranging errors when there is no Line of Sight between the anchors and the tag, a condition in which the performance of the system decreases. However, the obtained results put in evidence the need for additional ranging data to obtain centimetric localization accuracy, which is currently rated at 1 m, especially in indoor environments. The UAV localization algorithm based on the UbiSense Ultra-Wide Band system provided good performance in indoor environments. The Ultra-Wide Band is a promising choice in the development of localization systems, thanks especially to its resolution and immunity to multi-path. The proposed solution allows the use of a low-cost Inertial Measurement Unit (IMU) in the prediction step and the integration of vision odometry for the detection of markers near the touchdown area. The ranging measurements reduce the errors of the inertial sensors due to the limited performance of accelerometers and gyros. The obtained results show that an accuracy of 15 cm can be achieved. However, a drawback of the UbiSense Ultra-Wide Band system, in the Author's

opinion, concerns the calibration routine of the system, which needs to be carried out accurately in order to obtain the best performance. In particular, the use of a laser pointer or a total station to estimate the position of the anchors with millimetric resolution is strongly recommended. Furthermore, the anchors are connected together using a double set of Ethernet cables: the first set realizes a star topology network in which each anchor is connected with the Localization Server (running on a PC); the second one realizes a ring network that connects all the anchors and is used to share the synchronization signal. Using a dedicated instrument, called Timing Combiner, it is possible to halve the number of Ethernet cables. This characteristic could limit the use of this system where it is not possible to modify the infrastructure of the environment. For both of the proposed versions of the Kalman Filter, the calibration procedure for the MEMS is a critical aspect that needs to be carried out in order to improve the a priori state estimation. The obtained results reinforce the necessity of integrating additional sensors to obtain better results in terms of accuracy and precision. In the case of indoor environments, the integration of a Laser Range Finder on the UGV could significantly improve the overall performance. Concerning the UAV localization system, the vision system is based on the detection and recognition of artificial landmarks, which can be installed in indoor environments, but the approach can be easily extended to natural landmarks. Future works will be steered to extend the set of sensors by integrating visual information based on a high-definition camera, and to optimize the code to improve the overall performance; in particular for the UAVs, an embedded platform with a dedicated GPU could exploit the high parallelism of the GPU to make on-board video elaboration possible, avoiding the delays that might occur when the video stream has to be transferred to a remote PC for elaboration. In the second part of this Thesis, a modular framework for fast prototyping of cooperative autonomous systems was presented. A non-linear MPC strategy was considered to evaluate the performance of the framework. Furthermore, the Simulator can also be used for testing localization algorithms for UAVs and UGVs (cfr. §5.3). The modularity of this framework allows new modules to be developed quickly without major changes to the already developed code. In addition, it allows gray-box development of control systems and the ability to perform highly realistic simulations based on the most important physics engines currently available (Newton, PhysX, and so on). The virtual environment can be enhanced simply by integrating the three-dimensional models of objects. The development of control systems directly in

Matlab/Simulink provides the ability to generate code that can be downloaded directly onto the hardware with tools like Real-Time Workshop. Thanks to the socket connection it is also possible to distribute the computational load across multiple computers in a network, providing an architecture formed by one computer on which the simulator runs and n computers, each specialized in the control of one modeled dynamic system. Future work will be steered to improve the quality of the simulation by providing the ability to model sensors with a higher degree of realism. This will make it possible to test new types of control algorithms based on a probabilistic representation of information.


Appendix A. Air Borne Demonstrator

A.1. General Description
The Air Borne Demonstrator in the R3-COP project is oriented to demonstrating some examples of cooperation between a UAV and a UGV in order to accomplish one of the proposed scenarios. The developed framework has been used in the first stage of the development of the Air Borne Demonstrator, providing a set of tools for testing different cooperative control strategies in a safe virtual environment. Among all the scenarios proposed in the R3-COP project, the Building Inspection was chosen. The hardware used for the demonstration is the following:
• UAV AscTec Pelican equipped with:
– Thermocamera (optional)
– ZigBee Module for short range communication
– Ranging Sensor (TAG) (for indoor localization)
– GPS (for outdoor localization)
– Visible Camera (optional)
• UGV Pioneer P3-AT equipped with a platform for battery recharging of the UAV
• PC as Ground Station
• Wireless Sensor Network based on UWB technology
In particular, the demonstration will be organized taking into consideration:
• Autonomous flight of the UAV in the indoor environment through waypoints. The autonomous flight will exploit localization algorithms based on sensor fusion between inertial sensors and UWB ranging sensors. The UAV will take off from the UGV; after the indoor inspection the drone will land on the UGV.

• Use of the UGV for battery recharging of the UAV. The UGV will be equipped with a landing platform that will be used for fast recharging of the UAV battery.
• Handover between the indoor Wireless Sensor Network and GPS.

• Autonomous flight in the outdoor environment exploiting inertial sensors and GPS in order to inspect a building using a thermocamera or visible camera.
A step-by-step description of the demonstration can be summarized as follows:
• The demonstration starts with the UAV positioned on the UGV in the indoor environment
• The UAV takes off from the UGV

• UAV flies a path using a pre-defined list of waypoints

• UAV lands on UGV. The mobile platform mounted on UGV will help the UAV during the landing (mechanical compliance) • UAV starts recharging (Fast Recharging)

• After a time t, UGV (with the UAV onboard) moves toward the outdoor environment giving the possibility to the UAV to change from UWB Ranging to the GPS Ranging (handover). During this phase the UAV will remain on the UGV. The handover will be managed by UAV and UGV. The handover for UAV will be carried out whit the UAV lying on the UGV and NOT during the flight. • The UGV reaches a predefined waypoint (outdoor)

• The UAV takes off and starts a building inspection using a thermocamera or visible camera, flying through a pre-defined path and sending images to the Ground Station. • The UAV lands on UGV or a different place.

A.2. Detailed Description

Based on the step-by-step description in the previous paragraph, three main tasks can be identified:

• Indoor Mission;
• Transit between Indoor and Outdoor Environment (Handover);
• Outdoor Mission.


A.2.1. Indoor Mission

The objective of the Indoor Mission is to demonstrate the possibility of managing autonomous navigation and cooperation between the UAV and the UGV without the use of GPS. The localization of the mobile agents (UAV and UGV) will be carried out using inertial and ranging sensors. The inertial sensor suite is composed of:

• Accelerometer
• Gyroscope
• Magnetometer
• Pressure Sensor
• Encoder (on the UGV)

The ranging sensors are based on the UWB technology (standard IEEE 802.15.4a). The data provided by the inertial and ranging sensors have to be combined using one of the localization algorithms based on sensor fusion (a minimal sketch is given below). The UGV will be equipped with a Landing Platform that will help the UAV during the landing.

The indoor phase starts with the UAV positioned on the UGV. After the boot of the system, the UAV takes off and flies along a square path (yellow path in Figure A.1). The localization of the UAV will be carried out using the inertial sensors (in strap-down configuration) and the UWB UbiSense ranging sensors. The UbiSense system will be composed of 4 anchors mounted on the corners of the room (yellow boxes in Figure A.1). Afterwards, the UAV comes back to the first waypoint and then lands on the UGV. Once landed on the UGV, the UAV starts recharging its battery. The indoor phase ends with the UAV landed on the UGV.
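The sensor-fusion step can be illustrated with a single predict/update cycle of a range-based Extended Kalman Filter. The sketch below is deliberately reduced to a 2D constant-velocity model with a simulated measurement; the anchor positions, noise levels and state layout are illustrative assumptions, while the filters developed in this Thesis use a richer state (attitude and sensor biases) in 3D.

```matlab
% One predict/update cycle of a range-based EKF, 2D toy version.
anchors = [0 0; 10 0; 10 8; 0 8];           % UWB anchor positions [m]
dt = 0.02;                                  % sample time [s]
F  = [eye(2), dt*eye(2); zeros(2), eye(2)]; % transition for [x; y; vx; vy]
B  = [0.5*dt^2*eye(2); dt*eye(2)];          % acceleration input matrix
Q  = 0.05 * (B*B');                         % process noise, accel-driven
R  = 0.10^2;                                % range measurement variance [m^2]

x = [1; 1; 0; 0];  P = eye(4);              % initial state and covariance
acc = [0.1; 0.0];                           % acceleration from the IMU [m/s^2]

% Prediction: a priori estimate driven by the inertial data
x = F*x + B*acc;
P = F*P*F' + Q;

% Correction: sequential update with each available UWB range
for i = 1:size(anchors, 1)
    d = x(1:2) - anchors(i, :)';            % vector from anchor i to the UAV
    r_pred = norm(d);                       % predicted range h(x)
    z = r_pred + sqrt(R)*randn;             % simulated noisy measurement
    H = [d'/r_pred, 0, 0];                  % Jacobian of h(x) w.r.t. the state
    K = P*H' / (H*P*H' + R);                % Kalman gain
    x = x + K*(z - r_pred);                 % a posteriori state
    P = (eye(4) - K*H)*P;                   % a posteriori covariance
end
```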

A.2.2. Transit between Indoor and Outdoor Environment

The objective of this phase is to autonomously manage the handover between the UWB sensors and the GPS system. This phase starts with the UAV landed on the UGV. During the recharging of the UAV, the UGV moves along the green path (Figure A.2), carrying the UAV from the indoor to the outdoor environment. In the red area the system will manage the handover between the UWB system and the GPS, so that the UAV will be ready to use the GPS in order to navigate in the outdoor environment. This phase ends when the UGV reaches a predefined waypoint outside the building.
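Since the handover is performed with the UAV lying on the UGV, the decision logic can be as simple as selecting the most reliable position source at each step. The function below is a hypothetical sketch; the criterion and the thresholds (satellite and anchor counts) are assumptions chosen for illustration, not the actual rule implemented in the demonstrator.

```matlab
function src = select_position_source(gps_fix, n_sats, n_uwb_anchors)
% Hypothetical UWB/GPS handover rule, e.g. saved as select_position_source.m.
    MIN_SATS    = 6;   % assumed minimum satellites for a trusted GPS solution
    MIN_ANCHORS = 3;   % assumed minimum anchors for a 2D UWB position fix
    if gps_fix && n_sats >= MIN_SATS
        src = 'GPS';            % outdoor: GPS solution available
    elseif n_uwb_anchors >= MIN_ANCHORS
        src = 'UWB';            % indoor: rely on the UWB network
    else
        src = 'INERTIAL_ONLY';  % red area: coast on the inertial prediction
    end
end
```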


Figure A.1.: Indoor Mission

Figure A.2.: Handover

A.2.3. Outdoor Mission

The objective of this phase is to demonstrate the use of the UAV and the UGV to accomplish the Building Inspection Scenario. This phase starts with the UGV standing on a pre-defined waypoint. Afterwards, the UAV takes off from the UGV and starts to inspect the facade of the building (red path in Figure A.3). During this task the UAV will use the thermocamera and/or the visible camera in order to identify problems (such as cracks) on the external walls of the building. In the meantime, the UGV moves towards the Rendez-Vous area (orange square in Figure A.3). In order to maximize the quality of the position estimation of the UGV, it will be equipped with a D-GPS (Differential GPS) receiver; the D-GPS base station will be installed at a well-known position near the building. After that, the UAV will reach the Rendez-Vous area flying along the green path in Figure A.3. The outdoor mission ends with the UAV landed on the UGV.

Figure A.3.: Outdoor Mission

A.2.4. Features to be demonstrated and Innovation

The features demonstrated in this demonstrator are:

• Localization algorithms for the UAV and the UGV based on sensor fusion, using different technologies
• Autonomous navigation of the UAV and the UGV in indoor/outdoor environments
• Cooperation between the UAV and the UGV in order to accomplish a complex task
• Use of different sensors to analyze the environment (Thermocamera, Visible Camera, LIDAR)
• Capacity to take some decisions autonomously, considering the current state of the system (e.g., the UAV lands when the battery is low)

The innovation of this work can be summarized as follows:

• Exploitation of IEEE 802.15.4a for providing communication and localization in GPS-denied environments
• Development of a Modular Simulation Environment based on MatLab for testing some of the missions (use cases), integrating different kinds of sensors within the Simulation System to provide a complete representation of the environment to the UAV and the UGV
• Exploitation of different sensors to provide simple reasoning abilities to the Mobile Agents
• Recharging of the UAV on the UGV (a challenging problem)

A.2.5. Pictures from the Real Demonstrator

In this paragraph some pictures from the official Air-Borne Demonstrator of the R3-COP project are shown. Figure A.4 shows the system while the UAV is approaching the UGV to land on it and restart the battery recharging. Figure A.5 shows a detail of the UbiSense system. Figure A.6 shows a moment during the outdoor mission: in the outdoor environment the UAV has been used to analyze the surface of a building using a thermocamera, whose output is shown in Figure A.7.

Figure A.4.: Indoor Mission - Final Demonstrator

Figure A.5.: UbiSense System - Final Demonstrator


Figure A.6.: Outdoor Mission - Final Demonstrator

Figure A.7.: Thermocamera Output - Final Demonstrator

