Multi-robot Hunting Behavior - IEEE Xplore

Multi-robot Hunting Behavior

F. Belkhouche, B. Belkhouche and P. Rastgoufard
Electrical Engineering and Computer Science Department
Tulane University, New Orleans, LA 70118
[email protected]

Abstract — In this paper we consider the problem of hunting an unpredictably moving prey using a group of robots. We elaborate a mathematical model for the tracking-navigation problem based on geometric rules. This model consists of systems of two differential equations describing the relative motion of the prey with respect to the robots. The control laws are decentralized, and the robots move in different modes, namely: tracking-navigation mode, obstacle avoidance mode, cooperative collision avoidance mode, and circle formation mode. In the tracking-navigation mode, we use the deviated pursuit strategy, which consists of a closed-loop control law based on geometric rules. The properties of this strategy are explored briefly. For obstacle and cooperative collision avoidance, a collision cone approach is used. Our method is illustrated using simulation.

Keywords: Multi-robot navigation, target tracking, hunting behavior, cooperative collision avoidance.

1 Introduction

Cooperative robotics has seen important developments during the last decade. Multi-robot systems are used for various applications to accomplish tasks that are difficult or time consuming for a single robot. Different aspects of robotic cooperation are considered in the literature. Multiple robot navigation is discussed in ([1, 2, 3]), where different techniques are used. An algorithm for cooperative collision avoidance is suggested in [4], which is based on geometrical aspects of the robots' paths. Different approaches ranging from Lyapunov theory to structural complexity theory were suggested.

Robot formation control in the presence and absence of obstacles is widely discussed. The literature on robot formation is substantial, and solutions based on both behavioral and model-based controls are suggested. Behavior-based strategies are discussed in ([5, 6, 7]). Behavior-based methods enhance real-time response and reduce the complexity that accompanies exact models. However, these methods

do not provide means to predict the exact achievements of the robots. Geometrical approaches were also used for formation control ([8, 9, 10]). These methods are known for their robustness. Modeling and controlling robot formations using graph theory was discussed in ([11, 12]), where decentralized control laws are suggested. In [13], graph theory was used to switch between formations. Algorithms based on graph theory may be computationally expensive. For this reason it is recommended to use decentralized control laws.

Tracking a moving target using a mobile robot was considered by many researchers. Recently, multi-robot systems were introduced to accomplish the same task. This problem is seen from different points of view and solved using various methods and techniques. Multi-robot tracking of a moving object using directional sensors was carried out in [14], where a sensor fusion scheme based on inter-robot communications is proposed in order to obtain real-time information on the target position and orientation. Similar problems are considered in ([15, 16]). In [15], both aerial and ground vehicles are used. In [16], the problem is considered for multiple moving targets. Yamaguchi [17] discussed the problem of multi-robot cooperative hunting behavior for both holonomic and nonholonomic vehicles. The method suggested by the author is model-based, where each robot has a vector defined as the formation vector. The robots move in formations controlled through the formation vectors. This approach is a combination of feedback control laws and a reactive control framework. Visual tracking of a target using a team of robots was addressed in [18], where a supervisory control system keeps track of the relative locations of the robots, and utilizes this information to coordinate the cooperative tasks between robots. Cooperative capture of a target by multiple robots is demonstrated in [19] using simulation.
It is shown that the capture task is accomplished successfully by the robots repeating simple behaviors. Experimental demonstration of robot cooperative target interception is carried out in [20]. In this paper we consider the problem of multi-robot hunting behavior. The goal is to derive control strategies that allow the robots to reach and enclose an unpredictably moving

prey. The robots move in several modes, namely: tracking-navigation mode, obstacle avoidance mode, cooperative collision avoidance mode, and circle formation mode. In the tracking-navigation mode, the robots do not move in a formation, and their motion depends only on the motion of the prey. Techniques suggested for navigation towards an unpredictably moving goal ([21, 22, 23]) are used to reach the prey. The control laws are decentralized for all navigation modes.

2 Statement of the problem

Consider a target (or prey) moving in the horizontal plane according to the following kinematics equations

x˙p = vp cos θp
y˙p = vp sin θp    (1)

where (xp, yp) are the target coordinates in the Cartesian reference frame, vp is the target's velocity and θp is its orientation angle with respect to the positive x-axis. Consider N wheeled mobile robots, which are engaged in a hunting behavior against the target (the prey). The motion of the prey is not a priori known to the robots; however, it is assumed that the orientation angle and the linear velocity of the target can be measured by the sensory system of the robots. The robots are holonomic. The motion of robot Ri is given by the following system

x˙i = vi cos θi
y˙i = vi sin θi,    i = 1, ..., N    (2)

where (xi, yi) denotes the position of Ri in the Cartesian frame of reference, vi is the linear velocity of robot Ri and θi is its orientation angle. Our aim in this paper is to elaborate a decentralized control law for each robot in order to reach the prey while avoiding collisions with obstacles and other robots. Our control law is based on the deviated pursuit ([21, 22]), where the robots use different values of the control parameters to track the prey. The prey can perform three types of motion:

1. Non-accelerating motion: The prey moves with a constant orientation angle and speed.
2. Accelerating motion: The orientation angle of the prey is time-varying.
3. Smart motion: The orientation angle of the prey is time-varying, but in a smart way in order to escape from the predators. Smart prey require more elaborate strategies.

We make the following assumptions:

1. Each robot has a sensory system which allows it to detect obstacles and estimate the position of the prey with respect to the robot.
2. The robots' velocities satisfy vi > vp, for i = 1, ..., N.
3. The robots and the prey move with constant speed.
4. The path traveled by the prey is continuous.

3 Geometry and kinematics equations

Figure 1 represents a scenario with three predator robots and the prey. The lines of sight are the imaginary straight lines that start from each robot and are directed towards the target. These lines are called L1, L2 and L3 in figure 1. The angles formed by the positive x-axis and the lines of sight are called the angles of the lines of sight. These angles are denoted by β1, β2 and β3 in figure 1. The relative velocity in the Cartesian frame of reference between robot Ri and the prey is given by the following system

x˙pi = vp cos θp − vi cos θi
y˙pi = vp sin θp − vi sin θi,    i = 1, ..., N    (3)

This model can be written in polar coordinates as follows

r˙pi = vp cos (θp − βi) − vi cos (θi − βi)
rpi β˙i = vp sin (θp − βi) − vi sin (θi − βi),    i = 1, ..., N    (4)

Systems (3) and (4) give the relative motion of the prey as seen by robot Ri. The first component of equation (4) represents the relative velocity along the line of sight Li; the second component represents the relative velocity across the line of sight Li, and also gives the rate of turn of the prey with respect to robot Ri.
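As an illustration, the relative polar model can be evaluated numerically. The following Python sketch (the function name and argument layout are our own) computes the range, the line-of-sight angle, and the two relative-velocity components for one robot-prey pair:

```python
import math

def relative_polar_state(xp, yp, theta_p, vp, xi, yi, theta_i, vi):
    """Relative motion of the prey with respect to robot Ri in polar form.

    Returns the range r_pi, the line-of-sight angle beta_i, the relative
    velocity along the line of sight (range rate), and the line-of-sight
    angle rate, following equation (4).
    """
    # Line of sight: straight line from robot Ri directed towards the prey
    beta_i = math.atan2(yp - yi, xp - xi)
    r_pi = math.hypot(xp - xi, yp - yi)
    # Relative velocity along the line of sight
    r_dot = vp * math.cos(theta_p - beta_i) - vi * math.cos(theta_i - beta_i)
    # Relative velocity across the line of sight, divided by range,
    # gives the rate of turn of the line of sight
    beta_dot = (vp * math.sin(theta_p - beta_i)
                - vi * math.sin(theta_i - beta_i)) / r_pi
    return r_pi, beta_i, r_dot, beta_dot
```

For instance, a robot at the origin heading straight at a prey 10 m away along the x-axis, with vp = 2 and vi = 3, yields a range rate of −1 and a zero line-of-sight rate, as expected for a tail chase.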

4 Navigation modes

Each robot moves in four navigation modes as follows:

1. Tracking-navigation mode: During this mode the robot is controlled to track the prey.
2. Obstacle collision avoidance mode: This mode is activated when an obstacle appears in the robot's path.
3. Cooperative collision avoidance mode: This mode is activated when a collision between robots is detected.
4. Circle formation mode: This is the final mode, where the robots form a circle to enclose the prey.

These modes are discussed separately in the next sections.
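The mode selection above can be sketched as a simple priority rule. The enum, function name, and the particular priority ordering and trigger flags below are our own assumptions; the paper only states when each mode is activated, not how the checks are ordered:

```python
from enum import Enum, auto

class Mode(Enum):
    TRACKING = auto()               # deviated pursuit towards the prey
    OBSTACLE_AVOIDANCE = auto()     # static obstacle in the robot's path
    COOPERATIVE_AVOIDANCE = auto()  # collision with another robot detected
    CIRCLE_FORMATION = auto()       # final mode: enclose the prey

def select_mode(dist_to_prey, obstacle_ahead, robot_collision_detected, l0):
    """Pick the active navigation mode for one robot.

    l0 is the activation distance for circle formation; the priority
    ordering here is an assumption for illustration.
    """
    if dist_to_prey <= l0:
        return Mode.CIRCLE_FORMATION
    if robot_collision_detected:
        return Mode.COOPERATIVE_AVOIDANCE
    if obstacle_ahead:
        return Mode.OBSTACLE_AVOIDANCE
    return Mode.TRACKING
```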

Figure 1. An illustration of the geometry of the tracking-navigation problem

5 Tracking-navigation mode

This mode is probably the most important mode in the hunting problem. The robots start tracking the prey from their initial positions. The control law is based on the deviated pursuit [21, 22]. The orientation angle for robot Ri is given by

θi = βi + α0i    (5)

where α0i is a constant deviation angle. It represents the angle between the velocity vectors of robot Ri and the prey. The relative kinematics model under the deviated pursuit is given by

r˙pi = vp cos (θp − βi) − vi cos α0i
rpi β˙i = vp sin (θp − βi) − vi sin α0i    (6)

Under the deviated pursuit, the robots' kinematics equations are given by

x˙i = vi cos (βi + α0i)
y˙i = vi sin (βi + α0i),    i = 1, ..., N    (7)

The deviation angle α0i must be chosen to satisfy r˙pi < 0 for the given values of θp, vp and vi. Using the Cartesian coordinates to derive the conditions under which robot Ri reaches the prey is a difficult problem. For this reason, we use the polar kinematics model to prove the properties of the control law.

Proposition 1 Under the deviated pursuit control law, the line of sight angle between robot Ri and the prey tracks the prey's orientation angle with a constant deviation angle.

Due to space limitations, the detailed proof is not given here; we give only a sketch. The proof can be achieved by using the equation for the line of sight angle rate given in (6). The equilibrium position for βi is given by

βi∗ = θp − sin−1 (vi sin α0i / vp)    (8)

This equilibrium point is asymptotically stable, as can be proven using linearization. Thus βi tracks βi∗, which is a deviation of the prey's orientation angle.

Proposition 2 Under the deviated pursuit control law, the orientation angle of robot Ri tracks the prey's orientation angle with a constant deviation angle.

The proof of this proposition is simple and can be obtained by using equation (5) and the result of proposition 1. Since βi → βi∗, we get

θi → βi∗ + α0i    (9)

from which it results that

θi → θp − sin−1 (vi sin α0i / vp) + α0i    (10)

Proposition 3 Robot Ri reaches the prey when vi > vp and α0i ∈ (−π/2, +π/2).

The proof can be obtained by writing the equation for the relative range rate when βi → βi∗, from which it is possible to prove that r˙pi < 0 when the conditions in proposition 3 are satisfied.

5.1 Special case: The pure pursuit

This particular case is characterized by a special value for α0i, namely

α0i = 0    (11)

This value can be seen as an optimal value for α0i, which results in a minimum time for robot Ri to reach the prey. The kinematics equations for this case are similar to the general case. In the case of the pure pursuit, propositions 1 and 2 state that the line of sight angle and the robot orientation angle track the prey's orientation angle. The proof that the robot reaches the prey is obvious in this case.
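The convergence behavior of the deviated pursuit can be checked numerically. The sketch below (function name, step size, and capture radius are our own choices) integrates the closed loop of equations (5) and (7) for one robot against a non-accelerating prey, using the speeds and prey start position from the simulation section:

```python
import math

def simulate_deviated_pursuit(alpha0=math.radians(15), vp=2.0, vi=3.0,
                              theta_p=0.0, dt=0.01, steps=4000):
    """Euler integration of the deviated pursuit against a constant-heading prey.

    The robot heading is theta_i = beta_i + alpha0 (equation (5)).
    Returns the final range and final line-of-sight angle.
    """
    xp, yp = 12.0, 12.0   # prey start, as in the simulation section
    xi, yi = 0.0, 0.0     # robot start (our choice)
    for _ in range(steps):
        beta = math.atan2(yp - yi, xp - xi)  # line-of-sight angle
        theta_i = beta + alpha0              # deviated-pursuit law (5)
        xi += vi * math.cos(theta_i) * dt
        yi += vi * math.sin(theta_i) * dt
        xp += vp * math.cos(theta_p) * dt
        yp += vp * math.sin(theta_p) * dt
        if math.hypot(xp - xi, yp - yi) < 0.05:  # capture radius (assumed)
            break
    return math.hypot(xp - xi, yp - yi), math.atan2(yp - yi, xp - xi)
```

Running this shows the robot captures the prey and that the line-of-sight angle settles near the equilibrium βi∗ = θp − sin⁻¹(vi sin α0i / vp) of proposition 1, consistent with the stability claim.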

6 Obstacle avoidance

Static obstacles are modeled as circles. The obstacle avoidance mode is activated when an obstacle is detected within a given distance from the robot. We use a collision cone approach, as shown in figure 2. The robot deviates from its nominal path by changing the value of α0i, but at the same time the robot keeps α0i ∈ (−π/2, π/2) in order to accomplish the pursuit.
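For a static circular obstacle, the collision cone test reduces to checking whether the robot's heading falls within the angular aperture subtended by the obstacle (grown by the robot's own radius). This sketch is our own rendering of that geometric test, for the static case only; names and signature are assumptions:

```python
import math

def in_collision_cone(xr, yr, theta, xo, yo, obstacle_radius, robot_radius):
    """True if heading theta points inside the collision cone of a
    static circular obstacle centred at (xo, yo)."""
    dx, dy = xo - xr, yo - yr
    dist = math.hypot(dx, dy)
    R = obstacle_radius + robot_radius   # obstacle grown by robot radius
    if dist <= R:
        return True                      # already overlapping
    bearing = math.atan2(dy, dx)         # direction to obstacle centre
    half_angle = math.asin(R / dist)     # half-aperture of the cone
    # Heading error relative to the bearing, wrapped to (-pi, pi]
    err = math.atan2(math.sin(theta - bearing), math.cos(theta - bearing))
    return abs(err) < half_angle
```

In the obstacle avoidance mode, α0i would then be adjusted until the resulting heading βi + α0i leaves the cone, while keeping α0i in (−π/2, π/2).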

Figure 2. Representation of a static obstacle

7 Cooperative collision avoidance

The deviated pursuit control law allows the robots to reach the prey, but at the same time it can result in collisions between robots. We suggest an algorithm for collision avoidance based on the collision cone principle discussed in various recent publications [19]. In our approach the robots are modeled as circles of radius d. We assume that all robots have the same size. Two robots on a collision course can avoid the collision using:

1. Velocity command: The robot with higher priority accelerates or slows down, so that the robots reach the collision circle at different times.
2. Orientation command: The robot with higher priority avoids the collision circle by deviating from its nominal path.

In this paper we use both approaches. The collision avoidance mode is activated when a collision is detected within a given distance.

Figure 3. Two robots in a collision course

Figure 4. Velocity command and orientation command

7.1 Velocity command

When two robots are on a collision course, only the robot with the higher priority acts to avoid the collision. This robot is denoted by Ri∗. The collision point is estimated based on the orientation angles and linear velocities of the robots. The control law for the velocity of Ri∗ is given by

v˙i = −k1 (vi − vides)    (12)

where vides is the desired linear velocity and k1 is a real positive constant. If the distance between the robots on the collision course is smaller than or equal to a given threshold, then Ri∗ accelerates or slows down so that the robots do not arrive at the collision circle at the same time.

7.2 Orientation command

In this case, a collision cone is built, and the collision is avoided by deviating Ri∗ from its nominal path. The deviated pursuit control law is deactivated during this phase, and re-activated after the collision danger is averted. The control law for the orientation angle of Ri∗ is given by

θ˙i = −k2 (θi − θides)    (13)

where θides is the desired orientation angle, and k2 is a real positive constant. The collision cone is built around the estimated point of collision, where this point becomes the center of a circle of radius 2d (as shown in figure 3) upon which the cone is constructed. The orientation angle for Ri∗ is calculated so that the velocity vector of Ri∗ is outside the cone. Ri∗ can perform a right or left deviation (towards point A or B). We suggest that Ri∗ turn towards the point closest to the other robot, which is point A in figure 4–b.
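Both commands are first-order laws that drive the state exponentially to its desired value. A minimal discrete-time sketch (Euler step; function names, gains, and step size are our own):

```python
def velocity_command(vi, v_des, k1=1.0, dt=0.01):
    """One Euler step of the velocity law (12): dv/dt = -k1 (vi - v_des)."""
    return vi + (-k1 * (vi - v_des)) * dt

def orientation_command(theta_i, theta_des, k2=1.0, dt=0.01):
    """One Euler step of the orientation law (13): dtheta/dt = -k2 (theta_i - theta_des)."""
    return theta_i + (-k2 * (theta_i - theta_des)) * dt
```

Iterating either step makes the error decay as e^(−kt), so the robot smoothly reaches the desired speed or heading; larger gains k1, k2 give faster convergence at the cost of more aggressive commands.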

Figure 5. Paths traveled by the prey and the robots in the Cartesian frame of reference

Figure 6. Orientation angles for the prey and the robots

8 Circle formation

This is the final navigation mode, where the aim of the robots is to enclose the prey. Circle formation, as well as formation of other geometric shapes, was addressed in [10] based on geometrical rules. The problem here is more difficult since the aim is to form a circle around a moving object. This mode is activated for each robot separately when the robot reaches a point within a given distance from the prey (say l0). The circle enclosing the prey has radius l0. The circle formation is established in the following steps:

1. Given the number of robots and the distance l0, calculate l1, the distance between two successive robots in the final circle formation.
2. When the robots reach a point at distance l0 from the prey, the circle formation mode is activated. The robots perform a complex motion to keep the distance from the prey constant and equal to l0 and at the same time form a uniform circle around the prey.
3. After the final circle formation is reached, the robots perform the same type of motion as the enclosed prey.

9 Simulation

Here, we illustrate the algorithms using simulation. In this example the prey moves with a constant orientation angle, and vp = 2 m/s. The initial position of the prey is (12, 12). The predators move with vi = 3 m/s, i = 1, ..., N. The initial positions and deviation angles for the robots are given in table 1.

Table 1. Initial positions and deviation angles for the robots

Robot  Initial position  Deviation angle
R1     (0, 0)            15°
R2     (0, 24)           −15°
R3     (12, 0)           15°
R4     (12, 24)          −15°

The paths traveled by the prey and the robots are shown in figure 5, where the capture task is accomplished successfully. The orientation angles for the prey and the robots are shown in figure 6, where it is easy to see that the angles θi (i = 1, ..., N) track θp with a small deviation.
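Step 1 of the circle-formation mode (section 8) calls for computing l1 from N and l0. For N robots equally spaced on the circle of radius l0, l1 is the chord length 2 l0 sin(π/N). A minimal sketch (the function name and the slot parameterization are our own; the paper does not specify how slots are assigned to robots):

```python
import math

def formation_slots(prey_xy, l0, N):
    """Spacing l1 and target positions for N robots equally spaced
    on the circle of radius l0 around the prey."""
    l1 = 2.0 * l0 * math.sin(math.pi / N)   # chord between successive robots
    px, py = prey_xy
    slots = [(px + l0 * math.cos(2 * math.pi * k / N),
              py + l0 * math.sin(2 * math.pi * k / N)) for k in range(N)]
    return l1, slots
```

For the four robots of the simulation with l0 = 1, this gives l1 = √2; as the prey moves, the slot centers would be recomputed from the current prey position so the formation travels with the enclosed prey.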

10 Conclusion

In this paper we discussed the problem of modeling and controlling hunting behavior using a group of robots. The motion of the prey is not a priori known to the robots. We model the hunting problem using the relative equations of motion of the prey with respect to the robots. The control laws for the robots are decentralized and depend mainly on the motion of the prey. In the navigation-tracking mode, the robots use the deviated pursuit control law with different deviation angles. The deviated pursuit is a closed-loop control law based on geometrical rules and kinematics equations. The properties of the deviated pursuit are discussed briefly. For collision avoidance with static obstacles or between robots, a collision cone approach is used. For the final mode, a simple algorithm for circle formation is suggested. Simulation is used to illustrate the method.

References [1] P. Wang, “Navigation strategies for multiple autonomous mobile robots moving in formation,” Journal of Robotic Systems, vol. 8, no. 2, pp. 177–195, 1991.

[2] A. Sgorbissa and R. Arkin, “Local navigation strategies for a team of robots,” Robotica, vol. 21, pp. 461–473, 2003.

[3] A. Scalzo, A. Sgorbissa, and R. Zaccaria, “Distributed multi robot reactive navigation,” in Proc. of the International Symposium on Distributed Autonomous Robotic Systems, Fukuoka, Japan, June 2002, pp. 267–276.

[4] A. Fujimori, M. Teranoto, P. Nikiforuk, and M. Gupta, “Cooperative collision avoidance between multiple robots,” Journal of Robotic Systems, vol. 17, no. 7, pp. 347–363, 2000.

[14] M. M. Jr., A. Speranzon, K. Johansson, and X. Hu, “Multi-robot tracking of a moving object using directional sensors,” in Proc. of IEEE International Conference on Robotics and Automation, Apr. 2004, pp. 1103–1108.

[15] R. Vaughan, G. Sukhatme, J. Mesa-Martinez, and J. Montgomery, “Fly spy: lightweight localization and target tracking for cooperating ground and air robots,” in Proc. of the International Symposium on Distributed Autonomous Robotic Systems, Knoxville, Oct. 2000, pp. 315–324.

[5] H. Duman and H. Hu, “United we stand, divided we fall: team formation in multiple robot applications,” International Journal of Robotics and Automation, vol. 2001, no. 4, pp. 153–160, 2001.

[16] L. Parker and B. Emmons, “Cooperative multi-robot observation of multiple moving targets,” in Proc. of IEEE International Conference on Robotics and Automation, Albuquerque, NM, Apr. 1997, pp. 2082– 2089.

[6] T. Balch and R. Arkin, “Behavior-based formation control for multirobot teams,” IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 926–938, 1998.

[17] H. Yamaguchi, “A cooperative hunting behavior by mobile robot troops,” The International Journal of Robotics Research, vol. 18, no. 8, pp. 931–940, 1999.

[7] J. Fredslund and M. Mataric, “A general algorithm for robot formation using local sensing and minimal communication,” IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 837–845, 2002.

[18] M. Hajjawi and A. Shirkhodaie, “Cooperative visual team working and target tracking of mobile robots,” in Proc. of IEEE Southeastern Symposium on Systems Theory, Mar. 2002, pp. 376–380.

[8] S. Spry and J. Hedrick, “Formation control using generalized coordinates,” in Proc. of IEEE Conference on Decision and Control, Atlantis, Bahamas, Dec. 2004, pp. 2441–2446.

[19] K. Takahachi and M. Kakikura, “Research on cooperative capture by multiple mobile robots: a proposition of cooperative capture strategies in the pursuit problem,” in Proc. of the International Symposium on Distributed Autonomous Robotic Systems, Fukuoka, Japan, June 2002, pp. 393–402.

[9] C. Belta and V. Kumar, “Motion generation for formations of robots: A geometric approach,” in Proc. of IEEE International Conference on Robotics and Automation.

[10] X. Yun, G. Alptekin, and O. Albayrak, “Line and circle formation of distributed physical mobile robots,” Journal of Robotic Systems, vol. 14, no. 2, pp. 63–76, 1997.

[11] J. Desai, “A graph theoretical approach for modeling mobile robot team formations,” Journal of Robotic Systems, vol. 19, no. 11, pp. 511–525, 2002.

[12] Z. Jin and R. Murray, “Double-graph control of multi-vehicle formations,” in Proc. of IEEE Conference on Decision and Control, Atlantis, Bahamas, Dec. 2004, pp. 1988–1994.

[13] J. Desai, J. Ostrowski, and V. Kumar, “Modeling and control of formations of nonholonomic mobile robots,” IEEE Transactions on Robotics and Automation, vol. 17, no. 6, pp. 905–908, 2001.

[20] T. McLain, R. Beard, and J. Kelsey, “Experimental demonstration of multiple robot cooperative target intercept,” in Proc. of AIAA Guidance, Navigation and Control Conference, Monterey, Aug. 2002, pp. 1–7.

[21] F. Belkhouche and B. Belkhouche, “A method for robot navigation towards a moving goal with unknown maneuvers,” to appear in Robotica.

[22] ——, “On the tracking and interception of a moving object by a wheeled mobile robot,” in Proc. of IEEE Conference on Robotics, Automation and Mechatronics, Singapore, Dec. 2004, pp. 130–135.

[23] ——, “A control strategy for tracking-interception of moving objects using wheeled mobile robots,” in Proc. of IEEE Decision and Control Conference, Bahamas, Dec. 2004.