Autonomous Copter Landing on Marked Moving Platform

Victor Massagué Respall

March 1, 2019

1 Project Goals and Novelty

Considerable research has been done in the field of autonomous aerial vehicles such as copters. Copters are well suited to many different tasks: terrain reconnaissance, accessing difficult or dangerous areas, delivering objects from point A to point B, collecting data, or simply recreation. If a copter can land on a moving platform, it becomes much more useful and applicable to other fields such as rescue or military missions. All of these tasks need a landing system that works in different situations and environments, especially when the target platform on which the copter is supposed to land is moving. Landing a copter on a moving platform is challenging because of the unknown motion of the platform and the nonlinear copter dynamics. Another major problem is the flight autonomy of the copter, which is typically about 20 minutes; however, this problem is outside the scope of this project.

This project deals with the problem of landing an autonomous copter on a marked moving platform. It involves three goals:

• Detecting and tracking a marked platform using a simulator
• Designing the controller
• Implementing the landing system on a marked moving platform, first in a simulator and then in the real world with the Ar Drone 2.0

Since we assume that the shape of the platform is known (marked), vision and global positioning are typically used to recognize and track it.

2 Abstract

This project addresses the task of autonomously landing a copter on a marked moving platform. The copter needs to identify and track the marked moving platform using vision and global positioning. A Kalman filter is used to predict the position of the moving platform. The whole system is first tested in the Gazebo computer simulator and, after sufficiently successful results, a Parrot Ar Drone 2.0 is used to test it in the real world. The Ar Drone tracks the marker satisfactorily up to the point where it can no longer detect it because of the low height reached while landing. This produces a slight error in the accuracy of the final position.

3 Related Works

A similar project, landing a helicopter on a ship platform, was performed earlier. The problem was presented as a nonlinear tracking control problem for an underactuated system with a two-time-scale controller, and the proposed approach was demonstrated through simulations to be effective [1]. Another work presents a quadrotor system capable of autonomously landing on a moving platform using state-of-the-art computer vision algorithms and multi-sensor fusion [2]. A vision-based target following and landing system for a quadcopter using adaptive control was implemented in [3]. Another work addresses the marine environment using the same copter, the Ar Drone. The algorithm takes up the challenge of the tough landing conditions prevalent in the oceans due to winds and currents. Landing-pad sensing was achieved through image processing techniques in MATLAB; testing was carried out both indoors and outdoors over open water, with a success rate of over 75% [4]. An approach closer to this project uses a Kalman filter for target tracking: based on the output of the tracker, a simple trajectory controller is implemented which (within the given constraints) ensures that the helicopter is able to land on the target. The authors presented results from data collected in manual flights which validate their tracking algorithm [5]. A low-cost flight control system for a small outdoor helicopter, with all processing performed on-board, was described in [6], and [7] shows a vision-based road-following algorithm for an autonomous fixed-wing aircraft. Experiments with an external motion capture system were conducted in [8], achieving a maximum deviation of 10 cm from the desired position. [9] also used a field monitored by stationary cameras, presenting an assistant and training system for controlling RC helicopters during flight.

The idea of using LEDs as landmarks is not completely new. [10] described how a miniature helicopter can be visually tracked in six degrees of freedom with only three on-board LEDs and a single on-ground camera. The Wii remote camera can also be found in former projects: [11] presented an optical tracking system using the camera data of two Wii remotes, and a similar approach with Wii remotes was used in [12], where the system provides small errors and allows safe, autonomous indoor flights.


4 Methodology

PID Controller

The PID controller is a well-established classical controller and has been in practice for many years thanks to its simple design and good performance. As the name suggests, the PID controller consists of three elements: proportional, integral, and derivative. Its mathematical form is

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt

where Kp, Ki, and Kd are the proportional, integral, and derivative gains and e(t) is the error signal, i.e., the difference between the desired and measured value. The basic operating principle of PID control is that the controller takes the measured value from the output sensor, compares it with the desired value, and then sends commands to the plant accordingly. The plant can be thought of as the system on which the desired actions are performed to optimize its behaviour. The block diagram is shown in Figure 1.

Figure 1: PID Controller Schematic Diagram

The three terms of the PID controller, i.e., proportional, integral, and derivative, are responsible for different actions. Let us discuss each term one by one. The proportional gain simply multiplies the error by a constant value; it aggressively tries to match the output to the input. A higher proportional gain improves the rise time (the time in which the system reaches 90% of its steady-state value), but it also results in more oscillations around the steady-state value. Next is the derivative term: increasing it reduces the oscillations of the system, which can be very useful for designing stable systems. But it has a limit too: increasing the derivative term too much makes the system over-damped, i.e., it takes a long time to reach its steady-state value, and the rise time increases. The last term is the integral. It integrates the error between input and output over time, and in this manner it reduces the steady-state error of the system.

How do we choose optimized values of the proportional, integral, and derivative gains for the best performance? This depends on the performance required from the controller, for example a steady-state error of less than 2%, a rise time of 2 ms, and an overshoot of 10%. After deciding these design parameters, we can tune the gains in MATLAB or with different mathematical approaches such as the Ziegler-Nichols tuning method.

Before anything else, the bottom camera needs to be calibrated to remove possible distortions in the images. The next step is preparing the scene, placing the marker and the drone on it. Right after that, the drone starts its execution, taking off and following the marker autonomously. This is done using a tracking algorithm that detects the marker in the image provided by the bottom camera and returns the position of the marker with respect to the camera frame. To improve the position coordinates and to deal with the small communication delays between the laptop and the drone, a Kalman filter is applied; it gives us the estimated position of the marker, which is then used by the PID controller. If the drone does not receive a landing command, it continues following the marker. Finally, once the command is sent, the drone initiates the landing procedure on the moving platform. A simple scheme of this process can be seen in Figure 2.

5 System Setup

For our project we are using the following software:

• ROS libraries
  – Tum Simulator: contains the Gazebo simulator implementation of the Ar Drone 2.0
  – Ar Drone Autonomy: contains useful commands for controlling the drone
  – Ar Track Alvar: contains the tracking algorithm for a known tag/marker
  – Ros-Gazebo: allows us to launch Gazebo in the ROS environment
  – Camera-Calibration: allows us to calibrate the bottom camera of the drone
• Gazebo simulator
• Ar Drone 2.0
• Marker acting as a target platform

The repository of our project is hosted here: https://github.com/vicmassy/Landing-Drone


Figure 2: Diagram showing the methodology performed

6 Experimental Results

Simulations

First of all, the required libraries are loaded in the environment and the process is run by executing the proper commands in a terminal. Figure 3 depicts the scenario just after loading the environment. The top-right section shows the live feed of the quadcopter's bottom camera.

Figure 3: Initial State in Gazebo with bottom camera window on top right corner

Next, the quadcopter is commanded to take off from its initial position; here the camera and tracking libraries come into play. The bottom camera acquires an image of the AR tag, and its associated library helps find the exact coordinates of the tag. Once the coordinates are acquired, tracking the AR tag is simple, as shown in Figure 4. It is worth noting that the bottom camera image appears in the top-right window of Figure 4.

Now for the important task of tracking the object. The AR tag is displaced from its location such that it stays inside the field of view of the quadcopter's bottom camera; for now, moving the target is done manually. When the AR tag is moved in the environment, the quadcopter starts following it, as shown in Figure 5. At this point we have successfully achieved the goal of tracking the object; the next step is to design a proper controller for the quadcopter on real hardware.

Real Time Testing

For the flight test we decided to change the marker size from the one used in simulation. Reducing the marker size effectively enlarges the usable field of view of the Ar Drone's bottom camera, which is only 96°. So we reduced the marker to 6 cm by 6 cm. The design of the platform was also a main concern for the experiments.

Figure 4: Takeoff State in Gazebo with bottom camera window on top right corner

Figure 5: Tracking State in Gazebo with bottom camera window on top right corner

We built the moving platform out of hardboard and attached small masses to make it more stable and less influenced by the propeller airflow. We attached a small rope to the board to ease moving it. The system setup is shown in Figure 6.

Figure 6: System Setup for Real Time Testing

There was also a major obstacle while controlling the drone: the self-stabilization feature of the Ar Drone 2.0. It uses the bottom camera to stabilize itself automatically; if it detects anything moving in its field of view, the Ar Drone senses it as a disturbance from the steady-state value and tries to recalculate its position. However, we were able to turn off this auto-stabilization feature using a small trick in our code: we set the twist values along x and y to a very small value, not exactly zero. After this trick, though, we lost the built-in stability of the Ar Drone and needed to redesign the PID controller for it.

For the controller design we initially tried to implement Model Predictive Control, but due to lack of time we relied on a PID controller. In contrast to the previous work, we designed this PID controller in a more mathematical way. To get the best performance, we first intended to perform system identification of the Ar Drone and extract its dynamical model. However, we could not directly access the brushless motors of the Ar Drone because of its embedded on-board controller: we were restricted in the commands we could send to the motors, and current measurements were not possible. So we took the parameter values from a research paper on the Ar Drone and designed the new PID controller using MATLAB, using the PID auto-tuning application in MATLAB to get the best proportional, integral, and derivative gain values for our model. We also used the predicted values from the Kalman filter when applying the PID controller.

The altitude of the Ar Drone was also a problem during testing: in the take-off state the Ar Drone was not able to attain an appreciable height, and as a result it lost the marker very easily. So we decided to add a state to the system, one that launches just after take-off, in which we send a message to the Ar Drone to move in the z-direction.
After that we started the real testing. To land the Ar Drone properly on the moving platform, we implemented the following landing strategy: after taking off and attaining an appreciable height, the Ar Drone starts tracking the platform and waits for the user to enter the landing command. Once it is directed to land, the algorithm decreases the current height of the Ar Drone by 10% and searches again for the landing target. This process is repeated until the Ar Drone is at a height of 0.5 m from the ground, at which point the drone lands on the platform.

7 Results

After evaluating previous work in this field, we see that the main problem is dealing with the real environment, because there are too many factors to consider. In simulation, the drone performed ideally: it was stable, took off vertically without any disturbance, and followed the marker smoothly. With the real hardware, the drone had some difficulties. Firstly, when it takes off, the movement is not purely vertical; there is some displacement due to imbalance in the structure of the drone itself, which we cannot control. Because of this, the drone was sometimes unable to find the marker from its initial position. A slight improvement was achieved by balancing each propeller with tape, which is lightweight and easy to adjust. A big difference was noticed after applying this technique, but it was still not enough to perform as desired. Secondly, after many tests we can conclude that this drone causes some trouble when the self-stability feature is disabled, because that forces non-zero values on the roll and pitch axes. We can ensure that the drone tracks properly for at least 30 seconds. Finally, a simple landing system was developed to land while the platform is moving: the altitude of the drone is decreased while it keeps tracking the marker. The pressure sensor is used to determine the current altitude of the drone and make the landing completely autonomous. A remaining problem is that, below a certain altitude, the drone is sometimes no longer able to detect the marker; in other words, it loses the marker as the altitude decreases. A video of the final test can be seen here: https://youtu.be/xr5me2bS4HA

8 Conclusion

This paper presents research focused on a UAV able to land on a moving known marker acting as a platform. In the first stage, the drone takes off and starts tracking the known marker. For that, the image from the bottom camera is used to determine the position of the marker, which is corrected using a Kalman filter, and the drone is controlled with a PID controller. During the following stage, a landing command can be sent to the drone, and it then starts the landing procedure on the platform. For testing on real hardware, an Ar Drone 2.0 was used. The Ar Drone performed satisfactorily while tracking, but sometimes it does not land on the target platform: when it decreases its height to land, it sometimes loses the marker and uses the previous value as the target location. A possible solution is a more gradual landing, i.e., dynamically changing the coefficient or range that makes the drone decrease its altitude. Also, since we turned off the auto-stabilization feature of the Ar Drone, it was comparatively difficult to expect high performance from it. More research on this topic should be done: testing different controllers, evaluating their performance, and improving the platform with an autonomous ground robot.

Appendices

A Launch File of all necessary nodes and packages
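The launch file itself did not survive the document conversion. A minimal sketch of what such a file could look like is given below; the package names follow tum_simulator and ar_track_alvar conventions, but the file name, topic names, frame name, and numeric arguments here are all assumptions, not the project's actual configuration.

```xml
<launch>
  <!-- Gazebo world containing the simulated Ar Drone (tum_simulator) -->
  <include file="$(find cvg_sim_gazebo)/launch/ardrone_testworld.launch" />

  <!-- Marker tracking with ar_track_alvar on the bottom camera.
       Argument order: marker_size(cm) max_new_marker_error max_track_error
                       cam_image_topic cam_info_topic output_frame -->
  <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect"
        respawn="false" output="screen"
        args="6.0 0.08 0.2 /ardrone/bottom/image_raw /ardrone/bottom/camera_info /ardrone_base_bottomcam" />
</launch>
```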














B Source File for tracking marker with Ar Drone 2.0

#include "ros/ros.h"
// The two angle-bracket includes on the next lines were lost in the document
// conversion and are restored from usage (geometry_msgs::Twist, std::cin).
#include <geometry_msgs/Twist.h>
#include <iostream>
// TF
#include "tf/transform_listener.h"
// Ardrone control
#include "std_msgs/Empty.h"
#include "boost/thread.hpp"
#include "sensor_msgs/Range.h"
// Likewise restored from usage (ardrone_autonomy::Navdata, Eigen::MatrixXf).
#include <ardrone_autonomy/Navdata.h>
#include <Eigen/Dense>
#include "KalmanFilter.h"

using namespace boost;
using namespace boost::this_thread;
using namespace Eigen;

int state = 0;
double altd = 0.0;
double r = 0.0;
double initTime = 0;
bool land = false;
double limit = 1.0;
double sum_error_x = 0.0;
double sum_error_y = 0.0;
double last_error_x = 0.0;
double last_error_y = 0.0;
double diff_x = 0.0;
double diff_y = 0.0;
double vel_x = 0.0;
double vel_y = 0.0;

void checkCin() {
    char follow_marker;
    std::cin >> follow_marker;
    if (follow_marker == 'l') {
        state = 2;
    }
    char land;
    std::cin >> land;
    if (land == 'l') {
        state = 3;
    }
}

void navdataCallback(const ardrone_autonomy::NavdataConstPtr nav) {
    vel_x = nav->vx * 0.001;
    vel_y = nav->vy * 0.001;
    altd = nav->altd * 0.001;
}

int main(int argc, char **argv) {
    ros::init(argc, argv, "MarkerLanding");
    ROS_INFO("MarkerLanding: Started");
    // Define node handler for subs
    tf::TransformListener listener;
    ros::NodeHandle n;
    initTime = (double) ros::Time::now().toSec();
    // Loop rate
    ros::Rate rate(100);
    // Thread that waits user input
    boost::thread t(&checkCin);
    // Twist msgs init
    geometry_msgs::Twist twist_old;
    geometry_msgs::Twist twist;
    twist.linear.x = 0.0;
    twist.linear.y = 0.0;
    twist.linear.z = 0.0;
    twist.angular.x = 0.0001;
    twist.angular.y = 0.0001;
    twist.angular.z = 0.0;
    int size_n = 4;
    int size_m = 2;
    double dt = 0.01;
    double state_x = 0.0, state_y = 0.0, current_x = 0.0, current_y = 0.0;
    MatrixXf A;
    MatrixXf H;
    MatrixXf R;
    MatrixXf P;
    MatrixXf Q;
    MatrixXf X;
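The appendix listing breaks off at the matrix declarations in the available copy. For a constant-velocity model with size_n = 4 states (x, y, vx, vy) and size_m = 2 measured positions, the state-transition and measurement matrices would plausibly take the standard form below; this is a reconstruction based on the Kalman filter description in the Methodology, not the author's original values.

```latex
A = \begin{pmatrix}
  1 & 0 & \Delta t & 0 \\
  0 & 1 & 0 & \Delta t \\
  0 & 0 & 1 & 0 \\
  0 & 0 & 0 & 1
\end{pmatrix},
\qquad
H = \begin{pmatrix}
  1 & 0 & 0 & 0 \\
  0 & 1 & 0 & 0
\end{pmatrix},
\qquad \Delta t = 0.01
```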