Fuzzy logic controllers for mobile robot navigation in unknown environment using Kinect sensor

Nemra Abdelkrim, Kecir Issam, Khrouf Lyes, Chaib Khaoula
BP 17, Ecole Militaire Polytechnique, Bordj El-Bahri, Algiers, Algeria

karim [email protected]

Abstract — 3D vision is becoming more and more common in many applications such as localization, autonomous navigation, map construction, path following, inspection, monitoring or risky-situation detection. Depth cameras are new sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for navigation, recognition, and building dense 3D maps of indoor environments. A reactive navigation system for indoor environments using the Kinect sensor is developed to allow the mobile robot to move to a goal or follow a moving target while avoiding obstacles. Fuzzy logic controllers are used for real-time navigation. The proposed system also works in the dark, which is a great advantage for surveillance systems. Experiments were performed with the mobile robot Pioneer P3-AT equipped with a Kinect sensor in order to validate and evaluate the proposed navigation algorithm.

Keywords — RGBD camera, Kinect sensor, autonomous navigation, fuzzy controller, 3D mapping

I. INTRODUCTION

Mobile robots are widely used in safety, security and rescue applications, and autonomous mobile robots have been assuming an important role in many of them. A pertinent class of autonomous mobile robots is directed at surveillance, reconnaissance and indoor safety tasks. Sensors are used to collect information about the robot and the environment. The use of the Kinect sensor may represent a significant reduction in robot cost, as this sensor has been shown to replace, with large advantage, other sensors, including very expensive laser range finders and stereo cameras. The objective of our paper is to develop fuzzy logic controllers which allow the mobile robot to navigate autonomously using the Kinect sensor. First, a visual perception system based on the segmentation of Kinect images (RGB and depth) is developed. Second, three fuzzy logic controllers for going to a goal, obstacle avoidance and map building are implemented. Finally, the proposed algorithms are validated using the mobile robot Pioneer 3-AT. The rest of this paper is organized as follows. In Section 2, related work on visual perception systems and autonomous navigation is presented. In Section 3, the visual perception algorithm based on Kinect images is developed and explained step by step. In Section 4, three fuzzy controllers for reactive navigation are proposed; a 3D dense map is also constructed using the color and depth images from the Kinect sensor. The proposed algorithms are validated experimentally on the mobile robot P3-AT in Section 5. Finally, conclusions and perspectives are presented in Section 6.

II. RELATED WORK

A. Visual perception system

Although the creation of surveillance robots for indoor environments is not a totally new application [1], this area of research has been growing, allowing an increase in security applications for indoor robots [2] [3] [4]. The objective of this work is the development of an intelligent system which allows the robot to navigate autonomously and safely in an indoor environment during day and night. The use of the Kinect sensor in robotic navigation is very recent. In 2010, P. Henry and M. Krainin proposed an algorithm to construct a 3D map using an RGBD camera [14]; the same authors developed a SLAM algorithm using Kinect images for unmanned aerial vehicle localization and mapping in 2011. In 2012, Diogo Santos et al. used the Kinect sensor for mobile robot navigation using an artificial neural network [12]. In the same year, Gyorgy Csaba [13] proposed a fuzzy logic controller for obstacle avoidance using Kinect images; however, the latter uses a Mamdani controller from the Matlab toolbox, which is not suitable for real-time applications. In our work, a robust navigation system is proposed, based on Kinect images for robust perception and a Sugeno fuzzy controller for real-time decisions.


B. Mobile robot navigation

In robotics, many tasks require the robot to navigate in indoor environments. Traditionally the environment is assumed to be static and all objects rigid [6]. A deliberative approach in this type of environment is, for example, cell decomposition [7] with A* for shortest-path computation. However, an indoor environment has an intrinsic uncertainty and complexity that cannot be overcome with this approach. As an alternative, a reactive approach responds to sensor stimuli to generate an adequate behavior. An example is the use of potential field methods (PFM) for local planning such as obstacle avoidance [8]. Nevertheless, local minima are a well-known problem of reactive navigation methods such as PFM [9]. The local-minima problem can be solved by detecting the trap situation and recalculating the navigation plan with the new information [9]. Another popular solution for reactive navigation is the Fuzzy Inference System (FIS), which allows the robot to navigate appropriately using fuzzy reasoning. Fuzzy Logic Controllers (FLC) have many advantages: they can generate complex behaviors (with uncertain observations) using simple fuzzy rules, prior knowledge can be integrated in the FIS easily, and the Sugeno controller is very suitable for real-time applications [10]. Another common practice is to combine global path planning with local path planning, as in [11], for fast incremental path planning. In our work, the mobile robot will navigate autonomously to reach a predefined area with obstacle avoidance in real time. During navigation, and using the Kinect sensor, the mobile robot will construct a 3D dense map of the environment, which will later be used for global navigation.

IWSSIP 2014, 21st International Conference on Systems, Signals and Image Processing, 12-15 May 2014, Dubrovnik, Croatia

III. PERCEPTION SYSTEM BASED ON KINECT IMAGES

The Microsoft Kinect sensor is categorized as a depth camera. Depth cameras are changing robot perception and machine vision all over the globe, replacing mono-vision cameras, stereovision and other types of range finders such as laser, ultrasonic and radar sensors. Stereovision, produced by bifocal rigs requiring two or more cameras, and mono-vision cameras are being replaced by Microsoft's Kinect sensor in many research projects, due to the Kinect's capability to produce decent-quality images and depth information of the ambient environment at an even more competitive price. The main objective of our research is to develop an autonomous navigation system by integrating the Microsoft Kinect sensor on the mobile robot Pioneer 3-AT, Fig. 1.

A. Image filtering: to remove nonlinear and impulsive noise and the disturbances caused by illumination conditions, a Gaussian filter is applied to the depth image, Fig. 3 (left).

B. Background extraction: in this step the objects are extracted from the background using an adaptive threshold given by:


S = (1 / (n·m)) Σ_{i=0..n−1} Σ_{j=0..m−1} p(i, j)    (1)

p(i, j): depth value of the pixel (i, j).
(n × m): size of the depth image.

As can be seen from Fig. 3 (right): if p(i, j) > S, then p(i, j) ∈ background.

Figure 3. Filtered depth image (left), depth image after background extraction (right)
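The filtering and thresholding steps above can be sketched as follows. This is a minimal NumPy illustration: the 3×3 box smoothing merely stands in for the Gaussian filter, and the scene (a near object in front of a far wall) is invented for the example.

```python
import numpy as np

def extract_foreground(depth, smooth=True):
    """Separate objects from background with the adaptive threshold of Eq. (1).

    depth : (n, m) array of per-pixel depth values (e.g. millimetres).
    Returns a boolean mask that is True for foreground (object) pixels.
    """
    d = depth.astype(float)
    if smooth:
        # Cheap 3x3 box smoothing, standing in for the Gaussian filter
        # applied to the depth image before thresholding.
        pad = np.pad(d, 1, mode="edge")
        d = sum(pad[i:i + d.shape[0], j:j + d.shape[1]]
                for i in range(3) for j in range(3)) / 9.0

    s = d.mean()          # Eq. (1): S = (1/(n*m)) * sum of p(i, j)
    return d <= s         # p(i, j) > S  ->  background, else foreground

# Example scene: a near object (800 mm) in front of a far wall (3000 mm).
depth = np.full((120, 160), 3000.0)
depth[40:80, 60:100] = 800.0
mask = extract_foreground(depth)   # True inside the object region
```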

C. Ground extraction: an important step of the proposed perception system is to separate obstacles from the ground. First, from the depth information and using the camera model, the 3D coordinates (x, y, z) of each point are calculated in the Kinect frame (Fig. 1); then the points with y < −h_r represent the ground and are removed, where h_r is the height of the Kinect above the ground (Fig. 1). Fig. 4 (left) shows the result of ground and background extraction; Fig. 4 (right) shows the depth image after binarisation.
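Under a standard pinhole camera model, this step can be sketched as below. The intrinsics (FX, FY, CX, CY) and the Kinect height H_R are assumed illustrative values, not taken from the paper.

```python
import numpy as np

# Hypothetical intrinsics; real values come from the Kinect calibration.
FX = FY = 525.0          # focal lengths (pixels)
CX, CY = 319.5, 239.5    # principal point
H_R = 0.40               # Kinect height above the ground h_r (metres), assumed

def backproject(depth_m):
    """Pinhole back-projection of a depth image (metres) to camera-frame
    (x, y, z) points, with y pointing up as in Fig. 1."""
    n, m = depth_m.shape
    u, v = np.meshgrid(np.arange(m), np.arange(n))
    z = depth_m
    x = (u - CX) * z / FX
    y = -(v - CY) * z / FY   # image v grows downward, so flip the sign
    return x, y, z

def remove_ground(depth_m):
    """Zero out points with y < -h_r, i.e. at or below the floor plane."""
    _, y, z = backproject(depth_m)
    keep = (z > 0) & (y >= -H_R)
    out = depth_m.copy()
    out[~keep] = 0.0
    return out

depth = np.full((480, 640), 2.0)   # flat scene 2 m away
filtered = remove_ground(depth)    # bottom rows (floor) are removed
```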

Figure 1. Mobile robot with Kinect sensor

For search and rescue applications, the mobile robot should detect, localize and recognize the different objects around it. The Kinect has the advantage of combining visual and depth information for good perception. Obstacles in the indoor environment must be detected and extracted in real time, otherwise the mobile robot cannot accomplish its mission. Fig. 2 gives an example of two images (RGB left, depth right) acquired by the Kinect. The left image is very useful for dense map construction, while the right one is indispensable for safe navigation and 3D vision. It is not easy to extract obstacles from the depth image: first, the background depth is unknown; second, the ground depth spans a large range and obstacle depths lie within this range. To detect and localize obstacles efficiently during navigation, the following steps are applied:

Figure 4. Depth image after ground extraction (left), image binarisation (right)

D. Object edge detection and localization: the last step of our perception system is object localization. The edges of the binarised image are detected (Fig. 5, left), then a moments-based approach is used in the image frame (O, u, v) to determine the center of each blob, Fig. 5 (right):

u_c = M(1,0) / M(0,0),   v_c = M(0,1) / M(0,0)    (2)

M(1,0): first-order moment along the u axis.
M(0,1): first-order moment along the v axis.
M(0,0): zero-order moment (object surface).
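A minimal sketch of the moments-based center computation of Eq. (2), using raw spatial moments over a binary mask (the blob and image sizes below are invented for illustration):

```python
import numpy as np

def blob_center(mask):
    """Center (u_c, v_c) of a binary blob from image moments, Eq. (2):
    u_c = M(1,0)/M(0,0),  v_c = M(0,1)/M(0,0)."""
    v_idx, u_idx = np.nonzero(mask)     # rows are v, columns are u
    m00 = v_idx.size                    # zero-order moment (blob area)
    if m00 == 0:
        return None                     # no blob in the mask
    u_c = u_idx.sum() / m00             # M(1,0) / M(0,0)
    v_c = v_idx.sum() / m00             # M(0,1) / M(0,0)
    return u_c, v_c

mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 50:70] = True               # square blob
center = blob_center(mask)              # -> (59.5, 29.5)
```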

Figure 2. RGB image (left) and Depth image (right) acquired by the Kinect



With:
(x_b, y_b): goal coordinates.
(x_r, y_r, θ_r): robot pose, given by an accurate sensor (odometry combined with an accurate gyro).

A = x_b − x_r,   B = y_b − y_r    (3)

θ = arctan(B / A)    (4)
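Eqs. (3)-(4) can be implemented as below. Note that atan2 replaces arctan(B/A) so that all four quadrants are handled, and the position error is taken here as the Euclidean distance to the goal — an assumption, since the paper does not spell out its definition.

```python
import math

def goal_errors(xb, yb, xr, yr, theta_r):
    """Angular and position errors toward the goal, from Eqs. (3)-(4)."""
    A = xb - xr                       # Eq. (3)
    B = yb - yr
    theta = math.atan2(B, A)          # Eq. (4), quadrant-safe arctan(B/A)
    # Wrap the angular error to [-pi, pi] so the controller never
    # commands a turn longer than a half revolution.
    e_ang = math.atan2(math.sin(theta - theta_r),
                       math.cos(theta - theta_r))
    e_pos = math.hypot(A, B)          # assumed: Euclidean distance to goal
    return e_ang, e_pos

e_ang, e_pos = goal_errors(xb=2.0, yb=2.0, xr=0.0, yr=0.0, theta_r=0.0)
# goal at 45 degrees, 2*sqrt(2) away
```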

Figure 5. Objects edge detection (left), Objects localization (right)


IV. FUZZY LOGIC CONTROLLER FOR REACTIVE NAVIGATION

Mobile robot navigation in uncertain and complex environments has received considerable attention lately. Efficient algorithms such as adaptive control and behavior-based control are the bedrock of research in mobile robot navigation. Because of the difficulty of building a precise path-generating equation for an unknown and complex environment, an alternative to adaptive control is indispensable. Fuzzy logic systems represent knowledge in linguistic form, which permits the designer to define a highly abstract behavior in an intuitive manner [10]. Using the fuzzy logic framework, the attributes of human reasoning and decision making can be formulated by a set of simple and intuitive IF (antecedent) THEN (consequent) rules, coupled with easily understandable and natural linguistic representations. In this paper, three fuzzy logic controllers are implemented to establish the behaviors of reactive navigation (go to a goal, obstacle avoidance, and their combination). These controllers take as input the results of our perception system, and their outputs are the translation and rotation velocities of the mobile robot.

B. Obstacle avoidance: in this case the fuzzy inference system (FIS) has three inputs, Fig. 7 (a): the left distance [−28.5°, −11°], the frontal distance [−11°, 11°] and the right distance [11°, 28°]. Each distance is represented by three membership functions, Near (P), Medium (M) and Far (L), Fig. 7 (b). These three distances are given by the perception system using the depth image of the Kinect sensor. The rule base contains 27 fuzzy rules. The obstacle-avoidance FIS has two outputs, the translation and rotation velocities (Vt and Vr) of the robot.


Figure 7. a) FIS for obstacle avoidance, b) membership functions for each distance (breakpoints near 700, 1300, 2000, 2500 and 4000 mm)
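A zero-order Sugeno controller of the kind described can be sketched as follows. The membership breakpoints are read off the Fig. 7(b) axis, while the abridged rule subset and the singleton outputs are illustrative assumptions, not the paper's actual 27-rule base.

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership with feet a, d and plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x > c:
        return (d - x) / (d - c)
    return 1.0

def fuzzify(dist):
    """Degrees of Near (P), Medium (M), Far (L) for one sector distance (mm)."""
    return {"P": trap(dist, -1, 0, 700, 1300),
            "M": trap(dist, 700, 1300, 2000, 2500),
            "L": trap(dist, 2000, 2500, 4000, 1e9)}

# Sugeno singletons (translation mm/s, rotation deg/s) for a few of the
# 27 (left, front, right) rules -- values are invented for illustration.
RULES = {
    ("P", "P", "P"): (0,    40),   # boxed in: stop and turn hard
    ("L", "P", "P"): (100,  30),   # opening on the left: turn left
    ("P", "P", "L"): (100, -30),   # opening on the right: turn right
    ("L", "M", "L"): (250,   0),
    ("L", "L", "L"): (400,   0),   # clear ahead: full speed
}

def infer(left, front, right):
    """Zero-order Sugeno inference: weighted average of rule singletons."""
    mu_l, mu_f, mu_r = fuzzify(left), fuzzify(front), fuzzify(right)
    num_v = num_w = den = 0.0
    for (ll, ff, rr), (v, w) in RULES.items():
        firing = min(mu_l[ll], mu_f[ff], mu_r[rr])   # AND as min
        num_v += firing * v
        num_w += firing * w
        den += firing
    return (num_v / den, num_w / den) if den > 0 else (0.0, 0.0)

vt, vr = infer(left=3000, front=3000, right=3000)   # open space
```

The weighted-average defuzzification is what makes the Sugeno form cheap enough for the real-time use the paper argues for, compared with a Mamdani controller's output-set aggregation.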

A. Go to a goal behavior: in this case a Sugeno fuzzy inference system is used with two inputs, the angular error (E_ang, covered by 7 membership functions) and the position error (E_pos, covered by 5 membership functions), and two outputs, the rotation and translation velocities of the mobile robot (Vr and Vt), Fig. 6 (b). Thus, the go-to-a-goal fuzzy controller uses 35 fuzzy rules.

C. Go to a goal with obstacle avoidance: the two previous behaviors (go to a goal location and obstacle avoidance) are combined to allow the robot to go to a goal and avoid obstacles at the same time. To guarantee a smooth transition from one behavior to the other, a linear combination is used:

C = β·C_gg + (1 − β)·C_oa    (5)

C_gg and C_oa are the commands given by the go-to-a-goal and obstacle-avoidance FLCs respectively.

β = 0                                    if d_obs ≤ d_min and d_obs ≤ d_r
β = (d_obs − d_min) / (d_max − d_min)    if d_min ≤ d_obs ≤ d_max    (6)
β = 1                                    if d_obs ≥ d_max

d_obs: distance between the robot and the nearest obstacle.
d_min: distance below which the obstacle is considered.
d_r: distance between the robot and the goal.

Figure 6. a) Robot Go to goal behaviour, b) Go to goal FIS
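The blending of Eqs. (5)-(6) can be sketched as below. The d_min and d_max thresholds are assumed values, and β is clamped to [0, 1] to cover the corner case d_r < d_obs ≤ d_min, which Eq. (6) as printed leaves open.

```python
def beta(d_obs, d_r, d_min=600.0, d_max=2000.0):
    """Blending weight of Eq. (6). d_min and d_max (mm) are assumed values."""
    if d_obs <= d_min and d_obs <= d_r:
        return 0.0                     # obstacle dominates: pure avoidance
    if d_obs >= d_max:
        return 1.0                     # obstacle far away: pure go-to-goal
    # Linear ramp between the two behaviors, clamped for the corner case
    # d_r < d_obs <= d_min that the piecewise definition leaves open.
    return min(1.0, max(0.0, (d_obs - d_min) / (d_max - d_min)))

def blend(c_gg, c_oa, d_obs, d_r):
    """Eq. (5): C = beta*C_gg + (1 - beta)*C_oa, applied per command
    component (translation velocity, rotation velocity)."""
    b = beta(d_obs, d_r)
    return tuple(b * g + (1.0 - b) * o for g, o in zip(c_gg, c_oa))

# Obstacle mid-range, goal far: an even mix of the two behaviors.
cmd = blend(c_gg=(400.0, 0.0), c_oa=(100.0, 30.0), d_obs=1300.0, d_r=5000.0)
```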

V. EXPERIMENTAL RESULTS

A. Go to goal with obstacle avoidance results



Fig. 8 shows the results obtained for the two combined behaviors (go to goal with obstacle avoidance). The two top plots show the evolution of the angular and position errors. As can be seen, when the robot navigates in a free environment (without obstacles) it converges directly to the goal (blue curve). When the robot encounters obstacles, it first avoids them before converging to the goal, Fig. 9.


VI. CONCLUSION

In this paper a fuzzy navigation system using the Kinect sensor for surveillance and inspection applications is proposed. The objective of this work was to allow the mobile robot to reach full autonomy: go to a goal, avoid obstacles, follow a moving object and construct a dense 3D map, during day and night, without any human intervention. The work is divided into two parts: first, a robust visual perception system based on the Kinect sensor is developed to detect and localize static and moving objects during day and night; second, fuzzy logic controllers for go to a goal and obstacle avoidance are proposed. The proposed algorithms are validated experimentally using the mobile robot P3-AT, and good results are obtained. A 3D color map is constructed during navigation, which is very important for surveillance and inspection applications.

VII. REFERENCES

[1] J. L. Crowley, "Coordination of action and perception in a surveillance robot," IEEE Expert, vol. 2, no. 4, 1987, pp. 32-43.
[2] D. Carnegie, D. Loughnane and S. Hurd, "The design of a mobile autonomous robot for indoor security applications," Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 218, no. 5, 2004, p. 533.

Figure 8. Position and angular error with controller outputs evolution


[3] F. Monteiro, "Sistema de localização remota de robôs móveis baseado em câmaras de vigilância," Doctoral Thesis, Universidade do Porto, 2011.
[4] F. Osorio, D. Wolf, K. C. Branco, J. Ueyama, G. Pessin, L. Fernandes, M. Dias, L. Couto, D. Sales, D. Correa, M. Nin, S. L. Lourenço, L. Bonetti, L. Facchinetti and F. Hessel, "Pesquisa e Desenvolvimento de Robôs Táticos para Ambientes Internos," Internal Workshop of INCT-SEC, São Carlos, SP, 2011.
[6] L. Kavraki, P. Svestka, J.-C. Latombe and M. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, Aug 1996, pp. 566-580.

Figure 9. Go to a goal, obstacle avoidance and map construction

Fig. 9 shows the results of mobile robot reactive navigation with 3D map construction; the map will be used for global navigation in future works. A 3D dense map of two rooms, constructed by the mobile robot during navigation, is presented in Fig. 10.

[7] J. Latombe, "Robot Motion Planning," Springer Verlag, 1990.
[8] Y. Koren and J. Borenstein, "Potential field methods and their inherent limitations for mobile robot navigation," IEEE International Conference on Robotics and Automation, vol. 2, Apr 1991, pp. 1398-1404.
[9] R. Tilove, "Local obstacle avoidance for mobile robots based on the method of artificial potentials," IEEE International Conference on Robotics and Automation, vol. 1, May 1990, pp. 566-571.
[10] Nemra Abdelkrim and Hacen Rezine, "Genetic reinforcement algorithm for mobile robotic application," ICINCO'07, Angers, France, 21-23 May 2007.
[11] S. Koenig and M. Likhachev, "Improved fast replanning for robot navigation in unknown terrain," IEEE International Conference on Robotics and Automation, ICRA '02, vol. 1, 2002, pp. 968-975.
[12] Diogo Santos Ortiz Correa, Diego Fernando Sciotti, Marcos Gomes Prado, Daniel Oliva Sales, Denis Fernando Wolf and Fernando Santos Osorio, "Mobile Robots Navigation in Indoor Environments Using Kinect Sensor," Second Brazilian Conference on Critical Embedded Systems, IEEE, 2012.
[13] Gyorgy Csaba and Zoltan Vamossy, "Fuzzy Based Obstacle Avoidance for Mobile Robots with Kinect Sensor," LINDI 2012, 4th IEEE International Symposium on Logistics and Industrial Informatics, September 5-7, Smolenice, Slovakia, 2012.
[14] P. Henry, M. Krainin, E. Herbst, X. Ren and D. Fox, "RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments," International Symposium on Experimental Robotics (ISER), 2010.

Figure 10. 3D dense map construction during navigation
