Auto Takeoff and Precision Terminal-Phase Landing using an Experimental Optical Flow Model for GPS/INS Enhancement

Mohammad Al-Sharman, Mohammad Amin Al-Jarrah and Mamoun Abdel-Hafez

Abstract— The high estimated position error (EPE) of current commercial-off-the-shelf GPS/INS units impedes precise autonomous takeoff and landing operations. To overcome this problem, this paper proposes an integrated GPS/INS/Optical Flow (OF) solution in which the OF provides an accurate augmentation to the GPS/INS. To ensure accurate and robust OF augmentation, a robust modeling method is used to estimate the OF based on a set of real-time experiments conducted under various simulated helicopter-landing scenarios. Since the accuracy of the OF measurements depends on the accuracy of the height measurements, a real-time testing environment was developed to model and validate the obtained dynamic OF model at various heights. The obtained OF model matches the real OF sensor with 87.70% fitting accuracy, and a mean error of 0.006 m/s between the real OF sensor velocity and the velocity of the OF model is achieved. The velocity measurements of the obtained OF model and the position measurements of the GPS/INS are then fused in a dynamic model-based sensor fusion algorithm. In the proposed solution, the OF sensor is engaged when the vehicle approaches a landing spot equipped with a predefined landing pattern. The proposed solution performs helicopter auto takeoff and landing with a maximum position error of 27 cm.

Index Terms— Flybarless helicopter; Takeoff; Precision Landing; Sensor Fusion; OF Sensor; Sensor Modeling; System Identification; Model Validation.

I. INTRODUCTION

UNMANNED vertical takeoff and landing (VTOL) vehicles have proved to be ideal platforms for applications such as surveillance, aerial photography, and other military and civil missions [1, 2, 3]. For these vehicles to be used autonomously, the onboard navigation solutions, i.e., GPS/INS solutions, need to be improved to obtain more accurate state estimates during takeoff and landing [4, 5]. Several published works have concentrated primarily on the landing phase, as it is deemed the most difficult phase of an autonomous flight [2, 6, 7]. Different landing techniques have been proposed to perform landing on moving and fixed targets [8, 9, 10]. In this paper, a new approach for auto takeoff and precision landing on a stationary predefined target is addressed.

Mohammad Al-Sharman is with the Robotics Institute, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi, UAE (e-mail: [email protected]). Mohammad Amin Al-Jarrah is with the Department of Aeronautical Engineering, Higher Colleges of Technology, P.O. Box 15825, Dubai, UAE (e-mail: [email protected]). Mamoun Abdel-Hafez is with the Department of Mechanical Engineering, American University of Sharjah, P.O. Box 26666, Sharjah, UAE (e-mail: [email protected]).

Vision sensors have been employed in designing autonomous landing systems in recent years due to their precision and high measurement update rates. Existing visual landing systems can be classified into two groups: systems that use vision to detect a suitable place for landing [9], and systems that use vision for landing on a predefined target [8], [10]. A vision-based landing algorithm for an autonomous helicopter was proposed in [8]. The study achieved a precise landing with an average position error of 47 cm. This landing solution is computationally heavy and ill-suited for helicopters with smaller payloads. Moreover, the utilized AVATAR helicopter is equipped with a set of high-cost sensors such as the Novatel RT-20 DGPS and a CCD camera. In reference [10], a landing with 54 cm maximum position error was conducted for the Yamaha RMAX helicopter. The measurements of the vision sensor and the INS sensors were used to feed the controller with accurate state estimates. Besides the heavy computations of this algorithm, a helipad with special patterns is needed to achieve accurate pose estimation. Reference [11] addresses the problem of landing a helicopter in unknown areas using vision-based terrain recovery. A computer vision technique was used to find a safe landing area, avoid trees and obstacles, and navigate the vehicle to the landing site. This solution is also computationally heavy, and a Yamaha R-50 helicopter with a 20 kg payload is used to carry the equipped sensors. In [12], a Wii remote camera and a pattern of IR LEDs arranged in a T-shape were utilized to perform indoor auto takeoff, hovering and landing. The algorithm performs well at 60 cm height, but at higher altitudes the position error grows large and inaccurate takeoff and landing might occur. In [13], a vision system based on off-the-shelf hardware was used to provide real-time estimates of the position and orientation of the UAV relative to the landing pad. In [14], an OF sensor was used to estimate the position of a quadrotor, and an autonomous landing with 30 cm position error was performed. Low-cost OF sensors have attracted considerable attention in vision applications related to UAVs [15]. Optical flow sensors are used to avoid collisions, measure the altitude, and stabilize the position during the landing stage [16]. In [17], an optical flow mouse sensor was used for height approximation and terrain navigation. In [18], the optical flow was used as feedback information to regulate an automatic vertical landing on a moving platform. In [19], optical position stabilization was designed for a quadrotor before approaching the ground. In [20], a special type of OF CMOS camera called the PX4FLOW sensor was introduced. This sensor has no light constraint, which allows performing indoor and outdoor flights under various light conditions. The PX4FLOW was used to perform indoor and

outdoor flight operations for the Cheetah quadrotor. The OF components measured by the PX4FLOW sensor are inherently compensated for the 3D rotations and transformed to the metric scale. This reduces the transformation complexities and simplifies the control algorithm. Motivated by its significant performance in indoor and outdoor flight operations in [20], the PX4FLOW sensor has been selected for the OF modeling experiment in this work. An accurate multisensor data fusion algorithm is needed for the helicopter to obtain accurate estimates while an autonomous flight is being performed. However, obtaining accurate state estimates is challenging due to the large drifts, possible measurement biases and immense noise of the onboard commercial-off-the-shelf (COTS) sensors [21, 22, 23]. COTS sensors with such expected errors are usually used in VTOL UAVs because of their light weight, small size and low power consumption. By fusing the measurements of different sensors, an accurate estimate can be obtained. For example, in [3] and [24], high-accuracy estimates of the helicopter's attitude and flapping states were achieved using the dynamic model of a flybarless helicopter platform. In [25], a data fusion between the kinematic OF and the GPS/INS was used to estimate the position and velocity of an object moving in three-dimensional space. In this paper, the problem of auto takeoff and precision terminal-phase landing for a small-scale flybarless helicopter is solved using an integrated GPS/INS/OF solution. The proposed solution fuses the velocity measurements of an experimentally obtained dynamic model of the OF sensor with the position measurements of the GPS/INS unit. The main contributions of this research are as follows. First, a novel experimental dynamic OF model has been identified. System identification experiments were performed to model the dynamics of the OF sensor; the measurements of the obtained model have 87.70% fitting accuracy with the real OF measurements. Second, an experimental validation of the obtained OF model has been conducted at different heights to ensure an accurate response of the OF model while the height is changing. A mean error of 0.006 m/s between the real OF sensor velocity and the velocity of the OF model is achieved. Third, a consistent prediction capability of the OF model's accuracy at different heights has been formulated. The accuracy of the OF velocity estimate can be predicted at any vehicle height at which the moving features of the landing pattern can still be seen by the OF sensor. This is used to tune the error statistics of the OF sensor fusion algorithm at varying altitudes of the vehicle's trajectory. Finally, an accurate auto takeoff and precision landing have been performed for a flybarless helicopter with 27 cm maximum position error. To the best of the authors' knowledge, modeling the OF sensor and fusing the experimental dynamic model of the OF sensor with the GPS/INS solution has not been addressed before for precision landing applications. The rest of the paper is organized as follows. Section II describes the dynamic model of the helicopter and the ground effect model. Section III presents the basic motion field equations, the OF modeling experiment, the model validation and the formulation of the OF

accuracy prediction. In Section IV, the sensor fusion algorithm design is discussed. Section V presents the state estimation results. Finally, Section VI concludes the paper.

II. HELICOPTER MODEL

This section presents the nonlinear model of the Maxi Joker 3 helicopter which is used in this study. The ground effect model is also presented in this section.

A) Maxi Joker 3 dynamics

A top-down modeling approach is used to model the Maxi Joker 3 helicopter [26, 27]. This approach models the helicopter dynamics, in a Matlab Simulink environment, in four major blocks: the actuator dynamics, the flapping and thrust dynamics, the forces and torques, and the helicopter rigid body equations.

1. Reference Frames

To describe the position and orientation of the helicopter, two coordinate frames are used; the body frame and the inertial frame. The body frame (BF) has its origin at the center of gravity (CG) of the helicopter. This right-handed frame is shown in Figure 1. As seen from the figure, the BF axes are denoted as 𝑋𝐵 , 𝑌𝐵 , and 𝑍𝐵 . On the other hand, the earth frame (EF) is an inertial frame with axes denoted as 𝑋𝐸 , 𝑌𝐸 , and 𝑍𝐸 . The position and velocity of the helicopter are described using the EF. The EF is positioned at a fixed point on the Earth’s surface; usually at the location of the monitoring ground station as shown in Figure 1 [26].

Figure 1: Reference frames.

2. Matrix Transformation

The orthonormal rotation matrix $R_B^E$ is used to change the representation of a vector from the BF to the EF and vice versa. The matrix, shown in equation (1), is obtained by successive rotations about the x, y, and z-axes by the roll, pitch, and yaw Euler angles $\phi$, $\theta$, and $\psi$, respectively [27]:

\[
R_B^E =
\begin{bmatrix}
c(\theta)c(\psi) & s(\phi)s(\theta)c(\psi) - c(\phi)s(\psi) & c(\phi)s(\theta)c(\psi) + s(\phi)s(\psi) \\
c(\theta)s(\psi) & s(\phi)s(\theta)s(\psi) + c(\phi)c(\psi) & c(\phi)s(\theta)s(\psi) - s(\phi)c(\psi) \\
-s(\theta) & s(\phi)c(\theta) & c(\phi)c(\theta)
\end{bmatrix} \tag{1}
\]

where $c(\cdot)$ and $s(\cdot)$ denote the cosine and sine functions.
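As a quick illustration, and not part of the paper's original toolchain, the following Python sketch builds the rotation matrix of equation (1) from the Euler angles; the function name and NumPy usage are illustrative assumptions.

```python
import numpy as np

def rotation_body_to_earth(phi, theta, psi):
    """Rotation matrix R_B^E of equation (1): body frame -> earth frame,
    built from roll (phi), pitch (theta) and yaw (psi) in radians."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta)*c(psi), s(phi)*s(theta)*c(psi) - c(phi)*s(psi), c(phi)*s(theta)*c(psi) + s(phi)*s(psi)],
        [c(theta)*s(psi), s(phi)*s(theta)*s(psi) + c(phi)*c(psi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi)],
        [-s(theta),       s(phi)*c(theta),                        c(phi)*c(theta)]
    ])

# Example: express a body-frame velocity vector in the earth frame.
v_body = np.array([1.0, 0.0, 0.0])
v_earth = rotation_body_to_earth(0.0, np.deg2rad(5), np.deg2rad(30)) @ v_body
```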

3. State Space Model

The position and attitude of a rigid body at any instant in time in 3-D space are defined using the 6-DOF model. This model is needed to define the position and the attitude of the CG of the rigid body and the vehicle [5]. The model equations are written in the body frame. Equations (2)-(4) are the vehicle's rotational dynamics, and equations (5)-(7) are the vehicle's translational dynamics [5]:

\[
\dot{p} = \frac{L}{J_x} - \frac{J_z - J_y}{J_x}\,qr \tag{2}
\]
\[
\dot{q} = \frac{M}{J_y} - \frac{J_x - J_z}{J_y}\,pr \tag{3}
\]
\[
\dot{r} = \frac{N}{J_z} - \frac{J_y - J_x}{J_z}\,pq \tag{4}
\]
\[
\dot{u} = rv - qw + \frac{F_{X_B}}{m} \tag{5}
\]
\[
\dot{v} = -ru + pw + \frac{F_{Y_B}}{m} \tag{6}
\]
\[
\dot{w} = qu - pv + \frac{F_{Z_B}}{m} \tag{7}
\]

where p, q, and r are the body rates around the X_B, Y_B, and Z_B axes, respectively; m is the mass of the helicopter; J_X, J_Y, and J_Z are the moments of inertia about the X_B, Y_B, and Z_B axes, respectively; u, v, and w are the velocities in the body frame along the X_B, Y_B, and Z_B axes, respectively; F_{X_B}, F_{Y_B}, and F_{Z_B} denote the body forces along the X_B, Y_B, and Z_B axes, respectively; and L, M, and N denote the body moments about the X_B, Y_B, and Z_B axes. The attitude of the helicopter with respect to the earth frame is described using the Euler angles. These angles are computed based on the vehicle's 6-DOF model. The equations that define the Euler angles are given in equations (8)-(10) as:

\[
\dot{\theta} = q\cos(\phi) - r\sin(\phi) \tag{8}
\]
\[
\dot{\phi} = p + \big(q\sin(\phi) + r\cos(\phi)\big)\tan(\theta) \tag{9}
\]
\[
\dot{\psi} = \big(q\sin(\phi) + r\cos(\phi)\big)/\cos(\theta) \tag{10}
\]
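As a hedged illustration (not the authors' code), equations (2)-(10) can be collected into a single state-derivative function; the state ordering, function name and inertia tuple below are assumptions made for the sketch.

```python
import numpy as np

def six_dof_derivatives(state, forces, moments, m, J):
    """Right-hand side of equations (2)-(10).
    state   = [u, v, w, p, q, r, phi, theta, psi] (body velocities, body rates, Euler angles)
    forces  = [Fx, Fy, Fz] in the body frame, moments = [L, M, N] about the body axes,
    m = mass, J = (Jx, Jy, Jz) moments of inertia."""
    u, v, w, p, q, r, phi, theta, psi = state
    Fx, Fy, Fz = forces
    L, M, N = moments
    Jx, Jy, Jz = J

    # Rotational dynamics, equations (2)-(4)
    p_dot = L / Jx - (Jz - Jy) / Jx * q * r
    q_dot = M / Jy - (Jx - Jz) / Jy * p * r
    r_dot = N / Jz - (Jy - Jx) / Jz * p * q

    # Translational dynamics, equations (5)-(7)
    u_dot = r * v - q * w + Fx / m
    v_dot = -r * u + p * w + Fy / m
    w_dot = q * u - p * v + Fz / m

    # Euler angle kinematics, equations (8)-(10)
    theta_dot = q * np.cos(phi) - r * np.sin(phi)
    phi_dot = p + (q * np.sin(phi) + r * np.cos(phi)) * np.tan(theta)
    psi_dot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)

    return np.array([u_dot, v_dot, w_dot, p_dot, q_dot, r_dot,
                     phi_dot, theta_dot, psi_dot])
```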

B) Ground effect model

At altitudes within the range of the main rotor's diameter, the ground has an effect on the airflow under the helicopter. The ground prevents the airflow from being established uniformly. As a result, the velocity of the induced flow is reduced, which leads to a reduction in the induced drag and an increase in the vertical lift vector. The ground effect on the main rotor's thrust is modeled as follows [28]:

\[
\frac{T_{IGE}}{T_{OGE}} = \frac{1}{1 - \frac{1}{16}\left(\frac{R}{z}\right)^{2}\left(\dfrac{1}{1 + \left(\frac{V_i}{v_i}\right)^{2}}\right)} \tag{11}
\]

where T_IGE is the main rotor thrust in the ground effect zone, T_OGE is the main rotor thrust while flying out of the ground effect zone, R is the rotor's radius, z is the height at which the helicopter is hovering, V_i represents the rotor speed, and v_i is the induced velocity. Equation (11) illustrates the ratio between the thrust in and out of the ground effect zone. It can be shown from equation (11) that at a height of one half the rotor's diameter, the thrust is increased by 7%. However, at rotor heights above one rotor diameter, the increase in thrust is small, and it decreases to zero at heights greater than 1.25 rotor diameters.
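A minimal sketch of equation (11) follows; the function name and the default value of the V_i/v_i ratio are assumptions for illustration only. At z = R (half a rotor diameter) it reproduces the roughly 7% thrust increase quoted above.

```python
def ground_effect_thrust_ratio(z, R, Vi_over_vi=0.0):
    """Thrust ratio T_IGE / T_OGE of equation (11).
    z: rotor height above ground [m], R: rotor radius [m],
    Vi_over_vi: ratio of the rotor speed term V_i to the induced velocity v_i."""
    correction = (1.0 / 16.0) * (R / z) ** 2 * (1.0 / (1.0 + Vi_over_vi ** 2))
    return 1.0 / (1.0 - correction)

# At a height of half the rotor diameter (z = R) the ratio is ~1.067,
# i.e. roughly the 7% thrust increase mentioned in the text.
print(ground_effect_thrust_ratio(z=0.9, R=0.9))
```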

III. OPTICAL FLOW MODEL

This section introduces the OF modeling experiment. The system identification process and model validation are also described.

A. Motion field equations

The readout of the PX4FLOW sensor is computed based on the well-known motion field equations [20]. These equations consist of a combination of the translational and rotational components of motion. The translational velocity components depend on the depth of the scene Z, while the rotational components carry no information about Z. The depth of the scene is inversely proportional to the captured motion field value. This is clearly illustrated in Figure 2: objects that are close to the camera appear to move faster than objects that are relatively farther away. For example, the pixel motion of the landing pattern in Figure 2(a) is faster than in Figure 2(b). Therefore, in order to obtain an accurate OF measurement, the altitude has to be precisely measured. Similarly, the 3D rotational components of the camera must be accurately measured while computing the OF components.

Figure 2: The correlation between the depth of the scene and the OF values.

Figure 3 illustrates the effect of the 3D rotational motion of the helicopter on the OF readings. Figure 3(a) shows the helicopter when no 3D rotation is applied, while in Figure 3(b) the helicopter is rolled by a small angle. Clearly, in Figure 3(a) the pattern fills the view completely, while it is only partially seen in Figure 3(b) due to the roll motion. Therefore, in the latter case, and assuming that no translational velocities are applied in either scenario, the measured OF values have to be accurately compensated for the 3D rotations. In this study, the proposed modeling approach uses the PX4FLOW smart kit, which has 3D gyros and an ultrasonic sensor onboard to compensate for the 3D rotations and the altitude variations at each time instant.

Figure 3: The effect of the 3D rotations on the OF values.

B. Problem Description

In this research, the OF sensor is used to provide the translational velocities of the helicopter along the x and y axes as the helicopter approaches the landing spot. We assume that the helicopter hovers over the landing pattern for a while and then starts descending until it touches the landing pattern. To simulate this behavior, a pendulum setup with adjustable length is designed. The pendulum has the PX4FLOW attached to its lower end and a high-accuracy encoder attached to its upper end. The encoder is used to measure the angular position of the pendulum [29]. Figure 4 illustrates the geometric design of the experiment. As shown, the location of the PX4FLOW sensor along the x and z axes is given in equations (12) and (13) as:

\[
x = l\sin\theta \tag{12}
\]
\[
z = d + l - l\cos\theta \tag{13}
\]

where l is the length of the pendulum, θ is the angle from the z-axis, and d is the distance from the lower end of the pendulum to the ground. The velocity of the OF sensor along the x and z directions is given in equations (14) and (15) as:

\[
v_x = l\cos\theta\,\dot{\theta} \tag{14}
\]
\[
v_z = l\sin\theta\,\dot{\theta} \tag{15}
\]

Figure 4: Geometric design of the experiment.

Assuming that the height of the pivot above the ground is l_0 and the height of the OF sensor above the ground is h, the tip velocity of the pendulum at the point where the OF sensor is placed, for small oscillation magnitudes, is described by:

\[
v_{max} = \theta_0\sqrt{g\,l_0}\left(1 - \frac{h}{l_0}\right) \tag{16}
\]

where θ_0 is the initial release angle of the pendulum and g is the gravitational acceleration. This equation shows that the optical flow sensor's velocity is maximum when it is at its lowest position. Another issue to be considered is that the FOV of the OF sensor is a vital factor when interpreting the OF sensor measurements. For instance, if we consider the pattern used in this study to lie along the x-direction only, the number of pattern lines that the OF sensor sees is proportional to the FOV angle times the height h of the OF sensor. Therefore, given that the sampling frequency of the OF sensor is 250 Hz, and to avoid aliasing, many patterns with different spacings of alternating black and white lines were designed and tested. By conducting the experiment several times using different patterns, we found that the best spacing between two consecutive black lines is 7 mm.

C. The experimental design

A special pendulum test stand is designed to obtain the model of the optical flow sensor. The PX4FLOW optical flow sensor is attached at the lower end of the pendulum, pointing downwards. An incremental encoder [29], with 10,000 pulses per revolution (ppr), is attached to the upper end of the pendulum. This encoder is used to measure, with high accuracy, the velocity of the pendulum while moving. A landing pad with black horizontal lines is placed underneath the optical flow sensor (see Figures 5 and 6). This allows the sensor to recognize the moving features and obtain an accurate velocity measurement. The optical flow sensor is serially interfaced with the microcontroller, and its measurements are read using the MAVLink protocol at a 115200 baud rate. The measurements of the PX4FLOW and the Baumer encoder [29] are sent through another serial port to a dSPACE DS1104 R&D Controller Board, as shown in Figure 7. Using the measured position and the angular velocity of the encoder, the translational velocity in x is computed using equation (14). The compensated optical flow measurements, along the x and y directions, are then measured and sent to the dSPACE unit through the Real-Time Interface (RTI). The DS1104 R&D Controller Board was used to monitor the sensors' readings in a real-time environment. Knowing that the apparent motion is affected by varying light conditions, the PX4FLOW sensor has been tested under different light conditions in this work. As shown in Figure 8, the PX4FLOW is tested under low and intense light conditions. Under intense light, the apparent motion is clearly visible and many features are captured by the optical flow sensor. The PX4FLOW is also tested under low-light conditions. As shown, the PX4FLOW performs well under various light conditions.
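The pendulum kinematics above are simple enough to check numerically. The following hedged Python sketch evaluates equations (14) and (16); the numerical values used in the example are hypothetical and not taken from the experiment.

```python
import numpy as np

def encoder_velocity_x(l, theta, theta_dot):
    """Translational x-velocity of the OF sensor from the encoder, equation (14)."""
    return l * np.cos(theta) * theta_dot

def pendulum_peak_velocity(theta0, l0, h, g=9.81):
    """Peak OF-sensor velocity for small oscillations, equation (16).
    theta0: release angle [rad], l0: pivot height above ground [m],
    h: OF-sensor height above ground [m]."""
    return theta0 * np.sqrt(g * l0) * (1.0 - h / l0)

# Hypothetical example: a 10-degree release with the pivot 2 m above the ground
# and the sensor at 0.6 m gives the expected peak velocity seen by the OF sensor.
print(pendulum_peak_velocity(np.deg2rad(10), l0=2.0, h=0.6))
```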

D. OF sensor System Identification (ID)

In this section, a model of the OF sensor is derived. A system identification process was performed to obtain the most representative models of the PX4FLOW sensor. The MATLAB System Identification Toolbox (ident) is used to identify the dynamic models of the OF sensor. As displayed in Figure 9, the identification process takes the encoder measurements as the input to the OF model and the OF sensor's measurements as the output of the OF model.

Figure 5: Optical flow sensor's pendulum test stand.

Figure 9: System ID process for modelling the OF sensor.
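The paper uses MATLAB's ident toolbox for this step. As a rough, hedged Python equivalent (not the authors' implementation), a second-order transfer function a/(s² + bs + c) can be fitted to the logged encoder input and PX4FLOW output by least squares over simulated responses; the fit metric below only mimics the spirit of ident's NRMSE "fit" percentage.

```python
import numpy as np
from scipy import signal, optimize

def fit_second_order_tf(t, u, y, init=(300.0, 30.0, 300.0)):
    """Fit G(s) = a / (s^2 + b s + c) so that the simulated response to the
    encoder input u best matches the PX4FLOW output y (least squares)."""
    def residual(params):
        a, b, c = params
        sys = signal.TransferFunction([a], [1.0, b, c])
        _, y_sim, _ = signal.lsim(sys, U=u, T=t)
        return y_sim - y

    result = optimize.least_squares(residual, x0=np.asarray(init))
    a, b, c = result.x
    fit = 100.0 * (1.0 - np.linalg.norm(residual(result.x)) / np.linalg.norm(y - y.mean()))
    return (a, b, c), fit  # NRMSE-style fit percentage, similar in spirit to ident's "fit"

# usage sketch (t, u_encoder, y_px4flow are the logged signals as NumPy arrays):
# (a, b, c), fit_percent = fit_second_order_tf(t, u_encoder, y_px4flow)
```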

Figure 6: Landing pad

Figure 7: Real-time OF sensor modeling experiment.

The modeling experiment has been conducted repeatedly with the OF sensor placed at different heights, to ensure the robustness of the derived models as the helicopter approaches the ground. The length of the pendulum is designed to be adjustable so that the optical flow sensor can be tested at different heights. In each test, the experimental procedure is as follows. First, we make sure that the pattern is seen clearly by adjusting the lens thread; this is verified by monitoring the image in the QGroundControl software [30]. Second, we apply a small-magnitude torque to sway the pendulum manually. Third, we record the motion profile of the pendulum, as shown in Figure 10. Finally, we apply the system identification algorithm to the obtained data to generate the OF models that best match the performance of the real OF sensor. The OF sensor is tested at four heights from the ground: 60, 80, 100 and 150 cm. At each height, a set of high-fitting-accuracy models has been selected to characterize the change in the sensor performance from one height to another. Table 1 lists the obtained OF models, their corresponding percentage of matching with the real OF sensor and their second-order dynamics. As discerned, all models are represented by second-order dynamic models. Experimentally, we have found that second-order models represent the dynamics of the OF sensor best. An attempt to model the OF using first-, third- and higher-order models was made, and the resulting models were not as good a fit as the second-order models.

Figure 8: PX4FLOW performance under low and high light conditions.

1) 60 cm height experiment

A number of tests have been performed on the PX4FLOW sensor at this height. Figure 10 shows the agreement between the measurements of the PX4FLOW sensor and the encoder. Table 1 shows the transfer functions obtained from the tests conducted at this height. As shown in Table 1, the best obtained model matches the PX4FLOW with 87.70% fitting accuracy.

Figure 10: Optical flow vs encoder at 60 cm height.

2) 80 cm height experiment

The PX4FLOW sensor is placed and tested at a height of 80 cm. Figure 11 depicts the performance of the PX4FLOW and the encoder. The optical flow sensor reads a small velocity while the pendulum is at rest; this is due to the apparent motion of the image seen by the OF sensor, which is never exactly zero. According to Table 1, the characteristics of the identified second-order models match the optical flow sensor's behavior with about 79.60% fitting accuracy.

Figure 11: Optical flow vs encoder at 80 cm height.

3) 100 cm height experiment

This test shows the performance of the OF sensor as the height from the ground increases to 1 m. Figure 12 shows the translational velocity measured by the OF sensor and the encoder. As described in Table 1, it is clear that the percentage of fitting decreases as the altitude from the ground increases.

Figure 12: Optical flow vs encoder at 1 m height.

4) 150 cm height experiment

As shown in Figure 13, the performance of the PX4FLOW sensor degrades at this height. The optical flow sensor fails to recognize the small velocities of the pendulum, which are below 0.4 m/s. A new set of transfer functions has been obtained for this altitude. Coarse fitting is observed at this altitude, as seen in Table 1.

Figure 13: Optical flow vs encoder at 1.5 m height.

E. OF Model Validation

After acquiring the models that best match the optical flow sensor's measurements, these models are validated at the altitudes at which they were obtained. As observed from the modeling experiment, the performance of the optical flow sensor differs from one height to another. Therefore, the identified OF models have to be validated at different heights. The validation is accomplished by performing a set of real-time experiments at each height. As shown in Figure 14, three velocities are plotted: the first comes from the OF sensor, which is read serially; the second is the translational velocity computed from the measured angular position and angular velocity of the encoder; and the third is the velocity computed using the identified OF model.

Figure 14: Block diagram of the OF model validation.

A validation experiment was performed to test each model at its corresponding height. At each height, the seven previously obtained transfer functions were considered in the validation experiment. Figure 15 illustrates the validation results for the selected models at 60 cm height. The model validation results show that the obtained optical flow models are accurate. This allows the utilization of the identified models in the design, testing and validation of sensor fusion applications. In Figure 15, a good match between the OF sensor reading and the first OF model (TF1) reading is demonstrated. The mean error between the OF reading and the transfer function reading is 0.0048 m/s with an error standard deviation of 0.0256 m/s.
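A hedged Python sketch of this validation step follows: an identified transfer function is driven with the encoder-derived velocity and compared against the real PX4FLOW reading via the mean error and standard deviation error; the function name and signal variable names are assumptions.

```python
import numpy as np
from scipy import signal

def validate_of_model(t, v_encoder, v_px4flow, a, b, c):
    """Drive the identified model G(s) = a/(s^2 + b s + c) with the encoder
    velocity and compare its output with the real PX4FLOW velocity."""
    sys = signal.TransferFunction([a], [1.0, b, c])
    _, v_model, _ = signal.lsim(sys, U=v_encoder, T=t)
    error = v_px4flow - v_model
    return np.mean(error), np.std(error)  # mean error (ME), standard deviation error (SDE)

# usage sketch with TF1 at 60 cm from Table 1:
# me, sde = validate_of_model(t, v_encoder, v_px4flow, a=260.30, b=17.70, c=250.00)
```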

Figure 15: OF model validation at 60 cm.

Similarly, the second model (TF2) achieves a significant match, with a mean error of 0.0054 m/s and an error standard deviation of 0.0319 m/s. The third model shows an excellent match between the OF sensor measurement and the sensor model reading; comparatively, the third model has a smaller mean error of 0.0046 m/s, and its error standard deviation is about 0.0313 m/s. The performance of the fourth model is also validated: TF4 achieves a significant match between the OF sensor reading and the fourth OF model reading, with a mean error of 0.0048 m/s and an error standard deviation of 0.041 m/s. The response of the fifth model to the encoder input shows a significant match with the performance of the real PX4FLOW sensor; the mean error is 0.00042 m/s with an error standard deviation of 0.0262 m/s. The validation of the sixth model is also shown in Figure 15; the mean error is 0.0029 m/s with an error standard deviation of 0.0626 m/s. The seventh model matches the real sensor reading with high accuracy; as displayed, the mean error between the OF reading and the transfer function reading is 0.0018 m/s with an error standard deviation of 0.0356 m/s. Hence, as per the previous validation tests, it can be concluded that all models show a high-accuracy match against the real signal of the optical flow sensor at 60 cm. Table 2 summarizes the achieved error means and error standard deviations for all identified transfer functions at each height. The small errors shown in Table 2 at 60 cm demonstrate highly accurate modeling of the OF sensor's measurement at this specific height. Subsequently, the validation of the optical flow sensor was carried out at greater heights. We adjusted the pendulum length to keep the distance between the pattern and the PX4FLOW at 80, 100, and 150 cm, and the validation process was repeated at each of these altitudes.

Table 1: Obtained OF models at different altitudes. Each model has second-order dynamics of the form a / (s^2 + b s + c).

At 60 cm
Model | Best Fitting Accuracy | a      | b      | c
1     | 87.70%                | 260.30 | 17.70  | 250.00
2     | 86.60%                | 196.90 | 14.80  | 191.90
3     | 84.20%                | 355.70 | 36.30  | 333.30
4     | 83.00%                | 298.60 | 33.20  | 277.10
5     | 84.20%                | 312.50 | 31.80  | 289.90
6     | 83.30%                | 699.40 | 56.80  | 644.40
7     | 84.00%                | 721.40 | 57.10  | 645.00

At 80 cm
Model | Best Fitting Accuracy | a      | b      | c
1     | 79.60%                | 549.70 | 45.20  | 522.00
2     | 78.00%                | 309.00 | 25.70  | 297.30
3     | 75.00%                | 355.70 | 43.60  | 475.00
4     | 78.10%                | 302.00 | 33.20  | 277.10
5     | 73.10%                | 194.00 | 27.20  | 184.90
6     | 73.60%                | 694.40 | 56.90  | 644.40
7     | 77.60%                | 373.50 | 36.00  | 325.80

At 1 m
Model | Best Fitting Accuracy | a       | b      | c
1     | 64.00%                | 395.20  | 41.90  | 440.00
2     | 72.00%                | 224.50  | 30.90  | 237.80
3     | 65.80%                | 702.30  | 59.00  | 756.30
4     | 68.60%                | 465.00  | 45.70  | 522.40
5     | 67.40%                | 268.70  | 35.00  | 307.00
6     | 61.60%                | 1269.00 | 195.80 | 1359.00
7     | 62.50%                | 751.00  | 81.00  | 898.90

At 1.5 m
Model | Best Fitting Accuracy | a       | b     | c
1     | 57.70%                | 1299.40 | 87.34 | 1119.30
2     | 57.20%                | 226.00  | 26.63 | 177.30
3     | 51.00%                | 314.15  | 32.35 | 261.66
4     | 53.00%                | 807.16  | 66.50 | 619.00
5     | 53.00%                | 255.50  | 42.90 | 199.80
6     | 51.40%                | 209.58  | 38.70 | 169.75
7     | 57.00%                | 205.64  | 26.40 | 174.80

Table 2 shows the model mean error and standard deviation error for the various identified models at heights of 80 cm, 100 cm, and 150 cm. Similarly high-accuracy OF models are identified at these heights. As expected, the mean error (ME) and standard deviation error (SDE) values of the transfer functions obtained at 1.5 m are the highest among all heights. This is due to the OF sensor inaccuracies at large heights.

Table 2: OF model validation statistics.

At 60 cm
Model | Mean error (ME) (m/s) | Standard Deviation Error (SDE) (m/s)
1     | 0.0048                | 0.0256
2     | 0.0054                | 0.0319
3     | 0.0046                | 0.0313
4     | 0.00071               | 0.0415
5     | 0.00042               | 0.0262
6     | 0.0029                | 0.0626
7     | 0.0018                | 0.0356

At 80 cm
Model | Mean error (ME) (m/s) | Standard Deviation Error (SDE) (m/s)
1     | 0.0061                | 0.0586
2     | 0.0038                | 0.0555
3     | 0.0039                | 0.0620
4     | 0.0054                | 0.0844
5     | 0.0041                | 0.0660
6     | 0.0046                | 0.0642
7     | 0.0061                | 0.0619

At 100 cm
Model | Mean error (ME) (m/s) | Standard Deviation Error (SDE) (m/s)
1     | 0.0010                | 0.0750
2     | 0.0055                | 0.0604
3     | 0.0080                | 0.0564
4     | 0.0107                | 0.1114
5     | 0.0047                | 0.1240
6     | 0.0176                | 0.0995
7     | 0.0063                | 0.0949

At 150 cm
Model | Mean error (ME) (m/s) | Standard Deviation Error (SDE) (m/s)
1     | 0.0183                | 0.2964
2     | 0.0396                | 0.2377
3     | 0.0243                | 0.2797
4     | 0.0212                | 0.2511
5     | 0.0139                | 0.1289
6     | 0.0046                | 0.2583
7     | 0.0099                | 0.2711

Comparing the results at the heights presented in this work, and taking into account the designed experimental setup, we find that the image resolution is affected by the FOV of the sensor as the height changes. To make this clear, if the resolution of the image at a height of 0.6 m is used to normalize the resolution at the other three heights, the image resolution at 0.8 m, 1 m and 1.5 m is reduced by ratios of 1.33, 1.667 and 2.5 with respect to that at 0.6 m, respectively. However, considering the velocity at which the optical flow sensor moves at the various heights and normalizing by the OF velocity at 0.6 m, the OF velocity at 0.8 m, 1.0 m and 1.5 m is 0.89, 0.77 and 0.6325 of that obtained at 0.6 m, respectively (see equation (16)). Therefore, the combined effect of the FOV and the sensor velocity at 0.8 m, 1.0 m and 1.5 m, with respect to that at 0.6 m, is a scaling of the velocity prediction accuracy by the ratios 1.33*0.89, 1.667*0.77 and 2.5*0.633, respectively. Therefore, the accuracy of the estimated OF velocity for the various heights is governed by the following relations:

1) At 0.6 m height: 87% * 1 * 1 = 87%.
2) At 0.8 m height: 78% * 1.33 * 0.89 = 92%.
3) At 1 m height:   72% * 1.667 * 0.77 = 92%.
4) At 1.5 m height: 57% * 2.5 * 0.633 = 90%.
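The relations above are plain arithmetic and can be checked with a few lines of Python; the function name below is illustrative only.

```python
def predicted_accuracy(fit_percent, resolution_ratio, velocity_ratio):
    """Scale the raw fitting accuracy by the resolution and velocity ratios
    relative to the 0.6 m reference height (relations 1-4 above)."""
    return fit_percent * resolution_ratio * velocity_ratio

for h, fit, res, vel in [(0.6, 87, 1.0, 1.0), (0.8, 78, 1.33, 0.89),
                         (1.0, 72, 1.667, 0.77), (1.5, 57, 2.5, 0.633)]:
    print(f"{h} m: {predicted_accuracy(fit, res, vel):.0f}%")  # ~87%, 92%, 92%, 90%
```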

This analysis clearly shows that the sensor model performs at all heights with a consistent accuracy prediction capability. Approximating the right-hand side of the above relations by 90%, one can predict the accuracy of the OF velocity estimate at any vehicle height at which the moving features of the landing pattern can still be captured by the OF sensor. This is used to tune the error statistics of the OF sensor fusion algorithm at varying altitudes of the vehicle's trajectory. In this study, we have obtained an accurate OF sensor model that provides accurate flow readings at various heights. As shown, the optical flow dynamics are represented by selecting a second-order model with the following characteristics: a natural frequency of 13.8528 rad/s and a damping ratio of 0.5353. The selected model has been tested at different heights by feeding the encoder reading to the selected model and comparing its output with the real PX4FLOW reading. As shown in Table 3, the selected model has small mean errors and small standard deviation errors at the various heights in comparison with the actual reading of the PX4FLOW sensor.

Table 3: Mean error between the output of the selected model and the actual reading at each height.

Height (cm) | Mean error (ME) (m/s) | Standard deviation error (SDE) (m/s)
60          | 0.0064                | 0.0354
80          | 0.0186                | 0.0642
100         | 0.0613                | 0.1815
150         | 0.0950                | 0.2815
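As a hedged sketch, the selected model can be reconstructed from the quoted natural frequency and damping ratio: the denominator s² + 2ζωₙs + ωₙ² evaluates to roughly s² + 14.83 s + 191.9, which is consistent with the b and c coefficients of one of the 60 cm models in Table 1. The numerator gain chosen below is an assumption made for illustration only.

```python
from scipy import signal

wn = 13.8528   # natural frequency [rad/s] of the selected model
zeta = 0.5353  # damping ratio of the selected model

# Second-order denominator s^2 + 2*zeta*wn*s + wn^2; the numerator gain is
# taken as wn^2 here (unit DC gain), an illustrative assumption only.
selected_model = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])

# Step response of the selected OF model (illustrative check of its dynamics).
t, y = signal.step(selected_model)
print(f"denominator: s^2 + {2*zeta*wn:.2f} s + {wn**2:.1f}")
```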

IV. SENSOR FUSION ALGORITHM

The COTS GPS/INS unit provides an estimate of the vehicle's position with an error of a few meters [31]. This estimation error is due to GPS/INS measurement characteristics such as the IMU quality, IMU bias errors, the quality of the GPS receiver, multipath errors, and the number of satellites in view. The resulting estimation error does not allow the implementation of auto takeoff and precision landing for small-scale helicopters. Hence, the OF sensor is used to enhance the estimation accuracy of the GPS/INS system. In this study, a sensor fusion between the obtained OF sensor model and the GPS/INS is performed. The sensor fusion is utilized to obtain high-accuracy estimates of the position and the velocity of the helicopter (see Figure 16). The estimator uses the OF sensor velocity measurements in m/s and the position measurements from the GPS/INS unit. A dynamic, Kalman-based sensor fusion technique is used to estimate the position and velocity states of the helicopter.


Figure 16: The sensor fusion block diagram.

To design a linear estimator, a linear system model of the Maxi Joker 3 helicopter is needed. Therefore, a linearization process was performed [32]. The linearization was calculated at a near-hovering point where the attitude angles are constant and the attitude rates and body velocities are equal to zero. The linearized model is given in equation (17) as:

\[
\dot{X} = AX + BU + w, \qquad Y = CX + DU + v \tag{17}
\]

where X is the state vector, U is the input vector, A, B, C, D are the state-space matrices, Y is the output vector, w is the dynamic noise, and v is the measurement noise. The latter two are assumed to be Gaussian white noise processes. The state vector is reduced to contain only the position and velocity states:

\[
X_{reduced} = [\,x \;\; y \;\; z \;\; u \;\; v \;\; w\,]^{T} \tag{18}
\]

and the input is given as:

\[
U = [\,u_{lon} \;\; u_{lat} \;\; u_{col} \;\; u_{ped}\,]^{T} \tag{19}
\]

where u_lon is the longitudinal stick input, u_lat is the lateral stick input, u_col is the collective lever input, and u_ped is the rudder pedal input. The reduced-order estimator has the following dynamics matrix A_red:

\[
A_{red} =
\begin{bmatrix}
0 & 0 & 0 & 1.0000 & -0.0008 & -0.0016 \\
0 & 0 & 0 & 0.0007 & 0.9948 & -0.1018 \\
0 & 0 & 0 & 0.0017 & 0.1018 & 0.9948 \\
0 & 0 & 0 & -0.0190 & 0.1250 & 0.0699 \\
0 & 0 & 0 & -0.1256 & -0.0268 & 0.5550 \\
0 & 0 & 0 & -0.0687 & -0.5378 & -3.2826
\end{bmatrix} \tag{20}
\]

The input matrix B_red is similarly calculated as:

\[
B_{red} =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0.0374 & 0 \\
0 & 0 & 1.5680 & -17.3314 \\
0 & 0 & -294.6074 & 0
\end{bmatrix} \tag{21}
\]

The C matrix is a 6×6 identity matrix, and the D matrix is a 6×4 zero matrix.

A model of a low-cost MIDG GPS/INS unit has been considered for obtaining the position measurements at 50 Hz [33]. The measurement specifications of this unit are listed in Table 4. The variances of R_K and Q_K were chosen carefully while designing the Kalman estimator. Since the optical flow sensor is more precise than the GPS/INS, the variances of the velocity states were selected to be smaller than the variances of the position states.

Table 4: MIDG GPS/INS measurement specifications.

Angular rates: Range ±300°/s; Non-linearity 0.1% of FS; Noise density 0.05°/s/√Hz; 3 dB bandwidth 20 Hz.
Acceleration:  Range ±6 g; Non-linearity 0.3% of FS; Noise density 150 µg/√Hz; 3 dB bandwidth 20 Hz.
Attitude accuracy (tilt): 0.4° (1σ).
Position accuracy: 2 m CEP, WAAS/EGNOS.

The measurement noise covariance matrix R_K is a diagonal matrix given as:

\[
R_K = \mathrm{diag}(0.2236,\; 0.2236,\; 0.2236,\; 0.09,\; 0.09,\; 0.09) \tag{22}
\]

The process noise covariance matrix Q_K is also a diagonal matrix, represented as:

\[
Q_K = \mathrm{diag}(0.0022,\; 0.0022,\; 0.0022,\; 0.0022,\; 0.0022,\; 0.0022) \tag{23}
\]
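A hedged Python sketch assembling the matrices above as NumPy arrays follows; per the paper's description these are used directly as the discretized dynamics at the 50 Hz update rate, and the variable names are illustrative assumptions.

```python
import numpy as np

# Reduced-order model matrices of equations (20)-(21); the paper uses these
# as the discretized dynamics A_red,K and input B_red,K at the 50 Hz rate.
A_red = np.array([
    [0, 0, 0,  1.0000, -0.0008, -0.0016],
    [0, 0, 0,  0.0007,  0.9948, -0.1018],
    [0, 0, 0,  0.0017,  0.1018,  0.9948],
    [0, 0, 0, -0.0190,  0.1250,  0.0699],
    [0, 0, 0, -0.1256, -0.0268,  0.5550],
    [0, 0, 0, -0.0687, -0.5378, -3.2826],
])
B_red = np.array([
    [0, 0,    0.0000,       0],
    [0, 0,    0.0000,       0],
    [0, 0,    0.0000,       0],
    [0, 0,    0.0374,       0],
    [0, 0,    1.5680, -17.3314],
    [0, 0, -294.6074,       0],
])
H = np.eye(6)                                                # observation model (C is identity)
R_K = np.diag([0.2236, 0.2236, 0.2236, 0.09, 0.09, 0.09])    # measurement noise, eq. (22)
Q_K = np.diag([0.0022] * 6)                                  # process noise, eq. (23)
```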

After establishing the system dynamics and the measurement models R_K and Q_K, the Kalman-based sensor fusion technique starts its recurrent estimation cycle. Each estimation cycle comprises two main phases: the prediction phase and the state-updating phase. Equations (24)-(25) represent the prediction phase:

\[
\hat{x}_{K+1|K} = A_{red,K}\,\hat{x}_{K|K} + B_{red,K}\,U_K \tag{24}
\]
\[
P_{K+1|K} = A_{red,K}\,P_{K|K}\,A_{red,K}^{T} + Q_K \tag{25}
\]

where \hat{x}_{K+1|K} and P_{K+1|K} represent the a priori state estimate and the a priori estimate covariance, respectively; A_{red,K} and B_{red,K} are the discretized dynamics and input matrices of the helicopter system, respectively. Equations (26)-(30) illustrate the updating phase of the estimator. The state estimates are updated recursively at 50 Hz [34, 3]:

\[
v_{K+1} = z_{K+1} - H_{K+1}\,\hat{x}_{K+1|K} \tag{26}
\]
\[
S_{K+1} = R_{K+1} + H_{K+1}\,P_{K+1|K}\,H_{K+1}^{T} \tag{27}
\]
\[
W_{K+1} = P_{K+1|K}\,H_{K+1}^{T}\,S_{K+1}^{-1} \tag{28}
\]
\[
\hat{x}_{K+1|K+1} = \hat{x}_{K+1|K} + W_{K+1}\,v_{K+1} \tag{29}
\]
\[
P_{K+1|K+1} = P_{K+1|K} - W_{K+1}\,S_{K+1}\,W_{K+1}^{T} \tag{30}
\]

where H_{K+1} defines the observation model of the estimator; v_{K+1} and S_{K+1} represent the measurement innovation and the innovation covariance, respectively; and W_{K+1} denotes the optimal Kalman gain. As new observations become available, v_{K+1} is updated, and then the a posteriori state estimate \hat{x}_{K+1|K+1} and its covariance P_{K+1|K+1} are updated as shown in equations (28)-(30).
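A minimal, hedged Python sketch of one predict/update cycle of equations (24)-(30) is given below, reusing the matrices defined earlier; the function and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

def kalman_cycle(x_est, P_est, u, z, A_red, B_red, H, Q_K, R_K):
    """One prediction + update cycle of equations (24)-(30).
    x_est, P_est: previous a posteriori estimate and covariance;
    u: control input; z: stacked GPS/INS position and OF velocity measurement."""
    # Prediction, equations (24)-(25)
    x_pred = A_red @ x_est + B_red @ u
    P_pred = A_red @ P_est @ A_red.T + Q_K

    # Update, equations (26)-(30)
    innovation = z - H @ x_pred                      # (26)
    S = R_K + H @ P_pred @ H.T                       # (27)
    W = P_pred @ H.T @ np.linalg.inv(S)              # (28)
    x_new = x_pred + W @ innovation                  # (29)
    P_new = P_pred - W @ S @ W.T                     # (30)
    return x_new, P_new

# usage sketch at the 50 Hz update rate:
# x_est, P_est = kalman_cycle(x_est, P_est, u_k, z_k, A_red, B_red, H, Q_K, R_K)
```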

V. RESULTS

A simulation environment was used to test the performance of the proposed sensor fusion method. Simulated data were acquired based on a linear dynamic model of the Maxi Joker 3 helicopter platform. The Kalman filter uses the modeled measurements to obtain an estimate of the position and velocity states. Auto takeoff and precision landing tests were performed to validate the proposed estimation method. In the takeoff test, the helicopter is commanded to start climbing until it reaches -3 m in the Z-body frame with a -1 m/s slope. In the landing test, the helicopter is commanded to start descending from a 3 m altitude until it reaches the ground with a slower slope of 0.3 m/s. The helicopter took approximately 10 seconds to land safely and precisely.

A) Takeoff test

The position and velocity states are estimated during the takeoff test in this section. The position and velocity are controlled and estimated in the helicopter's body frame, where the Z-axis is positive down. The mean error between the Kalman signal and the truth signal is computed for each state.

1. Position estimation

As an auto takeoff test is being performed, high-accuracy estimation of the position is very critical. Figure 17 shows the estimation performance for the position states. Although the GPS/INS signal is noisy and has a lot of distortion, the Kalman estimator succeeds in estimating the position states. As seen in Figure 17, the Y and X position states are estimated with a small error. Additionally, the estimator achieves an accurate Z-position estimate during takeoff. The helicopter was commanded to climb to -3 m in the body Z-axis with a slope of -1 m/s velocity in the body frame. After 10 seconds, a PID controller is used to maintain the helicopter's altitude at -3 m.


Figure 17: Position estimation during takeoff.

The estimation of the translational velocities in the body frame is shown in Figures 18-21. Based on the implemented algorithm, the translational velocities of the vehicle are measured by the OF sensor in the body frame; therefore, a high-precision estimate of the vehicle's velocity is expected.

2. Velocity estimation

The X velocity estimation is presented in Figure 19. The figure exhibits the filter's ability to reject the measurement noise, and the estimated velocity states match the real velocity states. The estimation of the Y velocity is illustrated in Figure 20. The figure demonstrates the performance of the proposed estimator in rejecting the noise of the PX4FLOW and obtaining an accurate Y velocity estimate. The Z velocity is also precisely estimated, as shown in Figure 21. The Z velocity is about -1 m/s during takeoff and returns to zero when the helicopter reaches the desired Z position, which is -3 m in this case.

Figure 18: Velocity estimation during takeoff.
Figure 19: X velocity estimation (zoomed).
Figure 20: Y velocity estimation (zoomed).
Figure 21: Z velocity estimation (zoomed).

B) Landing test

1. Position estimation

In this section, the performance of the estimator is demonstrated during the landing phase of the helicopter. As shown in Figure 22, the X and Y position states are accurately estimated while landing. The mean error between the estimated signal and the truth signal is less than 2 cm. Similarly, the estimation algorithm also behaves well in estimating the Z position. The helicopter is controlled to start descending from -3 m in the Z-body frame until it reaches the ground with a slower slope of 0.3 m/s. The helicopter took approximately 10 seconds to land safely and precisely.

Figure 22: Position estimation during landing.

2. Velocity estimation

As the filter uses precise PX4FLOW measurements in m/s as velocity measurements, precise velocity estimates are expected. Figures 23-25 are zoomed to demonstrate the velocity estimation while the helicopter is approaching the ground. The helicopter exhibits slight X and Y velocities while landing, and these small velocities are estimated accurately. Figure 25 shows the Z velocity estimation. The helicopter starts descending with a 0.3 m/s Z velocity until it reaches the ground.

Figure 23: X velocity estimation (zoomed).
Figure 24: Y velocity estimation (zoomed).
Figure 25: Z velocity estimation (zoomed).

Figures 26 and 27 demonstrate the position and velocity estimation error, respectively. As seen, the x, y and z states are

estimated during takeoff with a maximum error of 26 cm. Comparatively, the velocity states in Figure 28 are also estimated accurately during the takeoff. Figure 28 presents a small estimation error of less than 0.1 m/s in the x and y velocities during the landing. As discerned, the maximum estimation error in the z velocity happens at the moment the helicopter leaves the ground. This error is caused by the vibration of the optical flow sensor during the thrust change (input force), which results in a transient response that affects the sensor reading. This is clearly illustrated in Figure 28, which shows the effect of the main rotor thrust change on the estimation accuracy during takeoff. While ascending, the error becomes less than 0.1 m/s. Figures 29 and 30 exhibit small estimation errors of the position and the velocity while landing. As shown, the maximum position error is less than 0.3 m.

Figure 26: Position estimation error during takeoff.
Figure 27: Velocity estimation error during takeoff.
Figure 28: Thrust change effect on the vertical velocity estimation.
Figure 29: Position estimation error during landing.
Figure 30: Velocity estimation error during landing.

Table 5 includes the error statistics of the estimation algorithm during the takeoff test. The mean error and the difference in the standard deviation are computed between the estimated and the actual data for the position and velocity states of the Maxi Joker 3 helicopter. It is noticed that the velocity states have smaller mean errors than the position states. This reflects our use of precise OF measurements to augment the GPS/INS measurements. As shown in Table 5, the mean error between the estimated and the actual state is calculated for each position state. The highest position mean error (ME) is for the altitude, which is less than 2.5 cm. The maximum difference between the estimated Z position and the ground-truth Z position is about 25.83 cm.

Table 6 presents the statistics of the proposed estimation algorithm during the landing test. Clearly, the mean errors of the position states in the landing test are similar to the mean errors during takeoff. The maximum Z-position difference between the estimator signal and the actual signal is about 27 cm. The mean errors of the velocities in both tests are also similar, which proves the robustness of the proposed estimator. Tables 5 and 6 show that the sensor fusion algorithm obtains accurate estimates during the auto takeoff and landing tests. The presented study shows that the estimator's performance is greatly enhanced by fusing the OF measurements with the GPS/INS measurements. As a result, the radius of uncertainty (ROU) of the helicopter's position and velocity is substantially reduced, as demonstrated in Figure 31. This enables accurate auto takeoff and landing flight phases.

Table 5: Estimation error statistics during takeoff.

Helicopter state | Mean error (ME) | Difference in Standard Deviation (SDE)
x                | 0.0133          | 0.0653
y                | 0.0161          | 0.1044
z                | 0.0235          | 0.0834
u                | 0.0016          | 0.0224
v                | 0.0068          | 0.0279
w                | 0.0112          | 0.0346
Maximum difference (takeoff) = 0.2583 m

Table 6: Estimation error statistics during landing.

Helicopter state | Mean error (ME) | Difference in Standard Deviation (SDE)
x                | 0.0133          | 0.0750
y                | 0.0165          | 0.1380
z                | 0.0401          | 0.1994
u                | 0.0016          | 0.047
v                | 0.0069          | 0.0446
w                | 0.0117          | 0.2939
Maximum difference (landing) = 0.2723 m

Figure 31: Landing position error reduction.

VI. CONCLUSION

In this paper, the problem of auto takeoff and precision terminal-phase landing of a small-scale flybarless helicopter using an OF-aided GPS/INS solution was addressed. Under simulated landing scenarios, a dynamic OF sensor model was first obtained using a special experimental setup. It has been demonstrated that the performance of the identified OF model matches the performance of the reference OF sensor. It is also observed that the OF sensor performs well at low altitudes; as the altitude increases, the performance of the PX4FLOW sensor degrades due to the degradation of the image resolution. At a given altitude, the reduction in accuracy is formulated as a function of the change in the image resolution and the maximum velocity of the OF sensor. This model is used to tune the noise statistics of the GPS/INS/OF sensor fusion algorithm. The estimator exhibited a rigorous performance during the takeoff and landing flight phases. The helicopter performed a precise takeoff and landing with a maximum position error not exceeding 0.27 m. The accuracy of the obtained results allows the utilization of the proposed algorithm in the auto takeoff and landing of helicopter platforms.

VII. REFERENCES

[1] J. S. Agte and N. K. Borer, "Nested Multistate Design for Maximizing Probabilistic Performance in Persistent Observation Campaigns," ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, vol. 2, no. 1, 2016.
[2] H. Lee, S. Jung and D. H. Shim, "Vision-based UAV landing on the moving vehicle," in International Conference on Unmanned Aircraft Systems (ICUAS), 2016.
[3] M. Al-Sharman, M. F. Abdel-Hafez and M. Al-Omari, "Attitude and Flapping Angles Estimation for a Small-Scale Flybarless Helicopter Using a Kalman Filter," IEEE Sensors Journal, vol. 15, no. 4, pp. 2114-2122, 2015.
[4] M. F. Abdel-Hafez, "The Autocovariance Least Squares Technique for GPS Measurement Noise Estimation," IEEE Transactions on Vehicular Technology, vol. 59, no. 2, pp. 574-588, 2010.
[5] M. F. Abdel-Hafez, "Detection of Bias in GPS Satellites' Measurements: A Probability Ratio Test Formulation," IEEE Transactions on Control Systems Technology, vol. 22, no. 3, pp. 1166-1173, 2014.
[6] K. A. Ghamry, Y. Dong and M. A. Kamel, "Real-time autonomous take-off, tracking and landing of UAV on a moving UGV platform," in 24th Mediterranean Conference on Control and Automation (MED), Athens, 2016.

[7] J. M. Daly, Y. Ma and S. L. Waslander, "Coordinated landing of a quadrotor on a skid-steered ground vehicle in the presence of time delays," Autonomous Robots, vol. 38, no. 2, pp. 179-191, 2015.
[8] S. Saripalli, J. Montgomery and G. Sukhatme, "Visually-Guided Landing of an Unmanned Aerial Vehicle," IEEE Transactions on Robotics and Automation, vol. 19, no. 3, pp. 371-381, Jun. 2003.
[9] A. Cesetti, E. Frontoni and A. Mancini, "A Vision-Based Guidance System for UAV Navigation and Safe Landing using Natural Landmarks," Journal of Intelligent & Robotic Systems, vol. 57, no. 1-4, pp. 233-257, 2010.
[10] T. Merz, S. Duranti and G. Conte, "Autonomous Landing of an Unmanned Helicopter based on Vision and Inertial Sensing," in Proceedings of the 9th International Symposium on Experimental Robotics (ISER-2004), Singapore, 2006.
[11] M. Meingast, C. Geyer and S. Sastry, "Vision Based Terrain Recovery for Landing Unmanned Aerial Vehicles," in Decision and Control (CDC), 43rd IEEE Conference on, 2004.

[12] K. E. Wenzel and A. Zell, "Automatic Take Off, Hovering and Landing Control for Miniature Helicopters with Low-Cost Onboard Hardware," in Proceedings of the AMS'09, Autonome Mobile Systeme, Karlsruhe, Germany, December 3-4, 2009.
[13] C. Sharp, O. Shakernia and S. S. Sastry, "A Vision System for Landing an Unmanned Aerial Vehicle," in IEEE International Conference on Robotics and Automation, Seoul, Korea, 2001.
[14] N. Gageik, M. Strohmeier and S. Montenegro, "An Autonomous UAV with an Optical Flow Sensor for Positioning and Navigation," International Journal of Advanced Robotic Systems, vol. 10, 2013.
[15] W. E. Green, P. Y. Oh, K. Sevcik and G. Barrows, "Autonomous Landing for Indoor Flying Robots Using Optic Flow," in Proceedings of IMECE'03, ASME International Mechanical Engineering Congress, Washington, D.C., 2003.
[16] H. Lim, H. Lee and H. J. Kim, "Onboard Flight Control of a Micro Quadrotor Using Single Strapdown Optical Flow Sensor," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, 2012.
[17] S. Griffiths, J. Saunders, A. Curtis, B. Barber, T. McLain and R. Beard, "Maximizing miniature aerial vehicles," IEEE Robotics & Automation Magazine, vol. 13, no. 3, pp. 34-43, 2006.
[18] B. Hérissé, T. Hamel, R. Mahony and F.-X. Russotto, "Landing a VTOL Unmanned Aerial Vehicle on a Moving Platform Using Optical Flow," IEEE Transactions on Robotics, vol. 28, no. 1, pp. 77-89, 2012.
[19] M. Ax, S. Thamke, L. Kuhnert, J. Schlemper and K.-D. Kuhnert, "Optical Position Stabilization of an UAV for Autonomous Landing," in 7th German Conference on Robotics, 2012.
[20] D. Honegger, L. Meier, P. Tanskanen and M. Pollefeys, "An Open Source and Open Hardware Embedded Metric Optical Flow CMOS Camera for Indoor and Outdoor Applications," in Robotics and Automation (ICRA), 2013 IEEE International Conference on, 2013.
[21] W. R. Williamson, M. F. Abdel-Hafez, I. Rhee, E.-J. Song, J. D. Wolfe, D. F. Chichka and J. L. Speyer, "An Instrumentation System Applied to Formation Flight," IEEE Transactions on Control Systems Technology, vol. 15, no. 1, pp. 75-85, 2007.
[22] B. Emran, M. Al-Omari, M. Abdel-Hafez and M. Jaradat, "Hybrid Low-Cost Approach for Quadrotor Attitude Estimation," ASME Journal of Computational and Nonlinear Dynamics, vol. 10, no. 3, pp. 1-9, 2015.
[23] M. S. Chowdhury and M. F. Abdel-Hafez, "Pipeline Inspection Gauge Position Estimation Using Inertial Measurement Unit, Odometer, and a Set of Reference Stations," ASCE-ASME Journal of Risk and Uncertainty

in Engineering Systems, Part B: Mechanical Engineering, vol. 2, no. 2, 2016.

[24] M. Al-Sharman, M. Abdel-Hafez and M. Al-Omari, "State Estimation for a Small Scale Flybar-less Helicopter," in 2nd International Conference on System-Integrated Intelligence: Challenges for Product, Bremen, Germany, 2014.
[25] D. A. Mercado, G. Flores, P. Castillo, J. Escareno and R. Lozano, "GPS/INS/Optic Flow Data Fusion for Position and Velocity estimation," in International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, 2013.
[26] M. Al-Sharman, "Attitude Estimation for a Small-Scale Flybarless Helicopter," in Multisensor Attitude Estimation: Fundamental Concepts and Applications, CRC Press, 2016.
[27] A. Alshoubaki, M. Al-Sharman and M. Aljarrah, "Flybarless helicopter autopilot rapid prototyping using hardware in the loop simulation," in International Symposium on Mechatronics and its Applications, ISMA'15, Sharjah, UAE, 2015.
[28] J. S. Light, "Tip vortex geometry of a hovering helicopter rotor in ground effect," Journal of the American Helicopter Society, vol. 38, no. 2, pp. 34-42, 1993.
[29] "Baumer encoders," Baumer, [Online]. Available: http://www.baumer.com/us-en/products/rotary-encoders-angle-measurement/absolute-encoders/.
[30] "PX4FLOW Smart Camera," 2014. [Online]. Available: https://pixhawk.org/modules/px4flow.
[31] J. Fang and X. Gong, "Predictive Iterated Kalman Filter for INS/GPS Integration and Its Application to SAR Motion Compensation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 909-915, 2010.
[32] A. Alshoubaki, M. Al-Sharman and M. A. Aljarrah, "Flybarless helicopter autopilot rapid prototyping using hardware in the loop simulation," in 10th International Symposium on Mechatronics and its Applications (ISMA), Sharjah, 2015.
[33] "Omni Instruments," 2013. [Online]. Available: http://www.omniinstruments.co.uk/gyro/MIDGII.htm.
[34] Y. Bar-Shalom, X. R. Li and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software, New York: John Wiley & Sons, 2001.