49th IEEE Conference on Decision and Control December 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA

Visual Feedback Attitude Synchronization in Leader-follower Type Visibility Structures

Tatsuya Ibuki, Takeshi Hatanaka, Masayuki Fujita and Mark W. Spong

Abstract— In this paper we consider visual feedback attitude synchronization in leader-follower type visibility structures in SE(3). We first define visual robotic networks consisting of the dynamics describing rigid body motion, visibility structures among bodies and visual measurements. We then propose a visual feedback attitude synchronization law combining a vision-based observer with the attitude synchronization law presented in our previous works. We moreover prove that the robotic network with the control law achieves visual feedback attitude synchronization in the absence of communication and measurement of the states. Finally, the validity of the proposed control law is demonstrated through both numerical simulations and experiments on a planar testbed.

I. INTRODUCTION

A mobile sensor network [1] is a network consisting of a number of mobile sensors or mobile robots equipped with sensing devices. Such a network has potential advantages in performance, robustness and versatility for sensor-driven tasks such as environmental monitoring, search, exploration and mapping, especially in dynamic environments. Cooperative control provides methodologies to tackle distributed control problems for mobile sensor networks, where a variety of control objectives have been investigated [2]-[7]. In cooperative control problems, two distinctly different approaches have emerged: either an agent taking on a leader role exists, or all agents are fully autonomous. This paper focuses on a leader-based attitude synchronization problem. Attitude or pose synchronization is a cooperative control problem whose objective is to drive the agents' attitudes or poses to a common value by using distributed control strategies. Synchronization is not only useful for mobile sensor networks [7] but also partially explains cooperative behaviors in nature such as flocking of birds [8], [9].

Vision is a powerful tool for obtaining the information necessary to implement cooperative control laws, since a single image carries rich information. Vision is also considered to play a central role in cooperative behaviors in nature [10]. Although numerous research works have been devoted to combining control techniques with vision [11], vision-based cooperative control has also been tackled in recent years, as listed below. Moshtagh et al. [12] propose a vision-based flocking law for nonholonomic robots using only monocular vision images. Morbidi et al. [13], [14] tackle a vision-based leader-following formation control problem, although they make use of a range estimator. Goi et al. [15] address the problem where the follower vehicle tracks the trajectory of the leader delayed by a constant time. Vidal et al. [16] also tackle the problem based on visual servoing control techniques. In these vision-based cooperative control problems, there is no observer-based control law which guarantees convergence in the absence of communication.


In one of our previous works [17], we presented a visual feedback pose synchronization law, where we employed a passivity-based nonlinear observer [18] to implement the passivity-based pose synchronization law presented in [5], [6]. In [17], we showed that these control and estimation schemes could be combined successfully, and the effectiveness was demonstrated through experiments. However, no theoretical guarantees on synchronization for the integrated system were provided.

In this paper, we investigate an attitude synchronization problem for a network of rigid bodies and present a novel synchronization law using only vision, with theoretical guarantees on synchronization. We first introduce the notion of visual robotic networks, consisting of the dynamics describing rigid body motion, visibility structures among bodies and visual measurements. After defining visual feedback attitude synchronization for these robotic networks, we present a novel synchronization law, which consists of a vision-based observer and a synchronization law. Its structure differs from that of [17], which is what allows us to give theoretical guarantees on synchronization. We first show that accomplishment of synchronization reduces to asymptotic stability of an equilibrium of the system integrating the estimation error and control error systems. The stability is then proved by using a potential function evaluating both estimation and control errors as a Lyapunov function candidate. The effectiveness of the control scheme is demonstrated through both simulations and experiments on a planar testbed.

The main contribution of this paper is to provide theoretical guarantees for attitude synchronization based only on vision. Further contributions are as follows. We propose a novel robotic network model which includes how the relative visual information is obtained. We moreover give a total control system integrating the synchronization control law, relative sensing and estimation, whose structure is novel to the best of our knowledge. We also perform experiments in order to confirm the effectiveness of the proposed control law.

II. VISUAL ROBOTIC NETWORK

In this section, we define visual robotic networks. Although a formal definition of robotic networks is presented in [2], it does not cover the networked system under consideration in this paper. We thus present another formulation of robotic networks, called visual robotic networks, consisting of the dynamics describing rigid body motion, visibility structures among bodies and visual measurements.


Fig. 1: Rigid Body Motion

Fig. 2: Leader-follower Type Visibility Structure

A. Rigid Body Motion

In this paper, we consider a network of n rigid bodies in 3-dimensional space (see Fig. 1). Let Σ_w be an inertial coordinate frame and Σ_i, i ∈ V := {1, ···, n}, a body-fixed coordinate frame whose origin is located at the center of mass of body i. Assume that all the coordinate frames are right-handed and Cartesian. We denote by p_wi ∈ R^3 the position of body i in the frame Σ_w. We use e^{ξ̂θ_wi} ∈ R^{3×3} to represent the rotation matrix of the body-fixed frame Σ_i relative to the frame Σ_w. Here, ξ_wi ∈ R^3 with ξ_wi^T ξ_wi = 1 and θ_wi ∈ R specify the direction and the angle of rotation, respectively. The notation '∧' (wedge) is the skew-symmetric operator such that â b = a × b for the vector cross-product × and any vectors a, b ∈ R^3. The notation '∨' (vee) denotes the inverse operator to '∧'. For simplicity, we use ξ̂θ_wi to denote ξ̂_wi θ_wi. The transformation e^{ξ̂θ_wi} is orthogonal with unit determinant, i.e. an element of the Special Orthogonal group SO(3). A configuration consists of the pair (p_wi, e^{ξ̂θ_wi}) and hence the configuration space of the rigid body motion is the Special Euclidean group SE(3), which is the product space of R^3 with SO(3). We use the 4 × 4 matrix

  g_wi = [ e^{ξ̂θ_wi}  p_wi ; 0  1 ],  i ∈ V

as the homogeneous representation of (p_wi, e^{ξ̂θ_wi}) ∈ SE(3).

Let us now introduce the velocity of each rigid body to represent the rigid body motion of the frame Σ_i relative to Σ_w. We define the body velocity V^b_wi ∈ R^6 through

  V̂^b_wi := g_wi^{-1} ġ_wi = [ ω̂^b_wi  v^b_wi ; 0  0 ] ∈ R^{4×4},

  V^b_wi = [ v^b_wi ; ω^b_wi ] = [ e^{-ξ̂θ_wi} ṗ_wi ; (e^{-ξ̂θ_wi} ė^{ξ̂θ_wi})^∨ ] ∈ R^6,  i ∈ V,

where v^b_wi ∈ R^3 and ω^b_wi ∈ R^3 represent the linear and angular velocities of body i relative to Σ_w, respectively. Then, each rigid body motion is represented by the kinematic model

  ġ_wi = g_wi V̂^b_wi,  i ∈ V.   (1)
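For intuition, the kinematic model (1) can be simulated directly on SE(3) by multiplying the current homogeneous matrix by the matrix exponential of the body velocity over one sampling interval. The following Python sketch is our own illustration (not part of the paper); the function and variable names are chosen for exposition only.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Wedge operator: maps w in R^3 to the skew-symmetric matrix w^."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_hat(V):
    """Maps a body velocity V = [v; omega] in R^6 to its 4x4 matrix form V^."""
    Vh = np.zeros((4, 4))
    Vh[:3, :3] = hat(V[3:])
    Vh[:3, 3] = V[:3]
    return Vh

def step(g, V, dt):
    """One step of the kinematics (1): g_{k+1} = g_k expm(V^ dt),
    which is exact when the body velocity is constant over the interval."""
    return g @ expm(twist_hat(V) * dt)

# Example: a body translating along its own z axis while rotating about y.
g = np.eye(4)
V = np.array([0.0, 0.0, 1.0, 0.0, 0.1, 0.0])   # [v; omega]
for _ in range(100):
    g = step(g, V, dt=0.01)
```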

Throughout this paper, we consider V^b_wi as the control input and assume v^b_wi = v for all i ∈ V. In contrast, some practical mechanical systems such as spacecraft or UAV systems use torque and force control. Even for such real systems, considering the simplified dynamics can be useful, at least to build a high-level planning controller generating desired trajectories under the assumption that these can be tracked by a lower-level mechanical controller. In addition, it is also useful as a preliminary step towards an integrated controller.

We denote the pose of a frame Σ_j relative to Σ_i by g_ij = (p_ij, e^{ξ̂θ_ij}) ∈ SE(3). Then, differentiating g_ij with respect to time yields the body velocity of the relative rigid body motion

  V^b_ij := (g_ij^{-1} ġ_ij)^∨ = -Ad_{(g_ij^{-1})} V^b_wi + V^b_wj,   (2)

where Ad_{(g_ij)} := [ e^{ξ̂θ_ij}  p̂_ij e^{ξ̂θ_ij} ; 0  e^{ξ̂θ_ij} ] ∈ R^{6×6} is the adjoint transformation associated with g_ij [19].
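As a concrete illustration of (2), the adjoint transformation and the relative body velocity can be computed as below. This is a minimal sketch of ours, continuing the running example above (the names are our own); it is not claimed to be the authors' implementation.

```python
import numpy as np

def hat(w):
    """Wedge operator, as in the previous sketch."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def adjoint(g):
    """Adjoint transformation Ad(g) in R^{6x6} associated with g = (p, R):
       Ad(g) = [[R, p^ R], [0, R]]."""
    R, p = g[:3, :3], g[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = hat(p) @ R
    Ad[3:, 3:] = R
    return Ad

def relative_body_velocity(g_ij, V_wi, V_wj):
    """Body velocity of the relative rigid body motion, Eq. (2):
       V_ij = -Ad(g_ij^{-1}) V_wi + V_wj."""
    return -adjoint(np.linalg.inv(g_ij)) @ V_wi + V_wj
```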

B. Visibility Structure

Throughout this paper, we assume each rigid body has vision to capture other visible bodies. In this subsection, we describe visibility structures between bodies. A set E ⊂ V × V is defined so that (j, i) ∈ E means that body j is visible from body i. Next, we define the set of bodies visible from body i by

  N_i := {j ∈ V | (j, i) ∈ E},  i ∈ V.   (3)

Let us now make the following assumptions on the visibility structure.

Assumption 1: (Leader-follower Type Visibility Structure)
• N_1 = ∅;
• |N_i| = 1 and N_i is fixed for all i ∈ V \ {1};
• ∀ i ∈ V, ∃ v_1, ···, v_r ∈ V s.t. v_1 = i, v_r = 1 and (v_k, v_{k+1}) ∈ E, ∀ k ∈ {1, ···, r−1}.

Here, |N_i| denotes the number of elements of N_i. This visibility structure is of leader-follower type, since there is a leader (rigid body 1) which has no visible body, every other body has a fixed visible body, and there is a visibility path from each body to the leader. The visibility structure can be interpreted as a directed graph G := (V, E) by regarding V and E as the node set and the edge set, respectively (Fig. 2). Then, Assumption 1 means the visibility structure is a directed spanning tree whose root is body 1.
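For concreteness, Assumption 1 can be checked on a given edge set by verifying that the leader sees no body, every follower sees exactly one, and every body reaches the leader along visibility edges. The sketch below is ours, and the edge set used in the example is only a hypothetical reading of the graph in Fig. 2.

```python
def satisfies_assumption_1(n, E, leader=1):
    """Check the leader-follower visibility structure of Assumption 1.
    E is a set of pairs (j, i) meaning body j is visible from body i."""
    N = {i: {j for (j, k) in E if k == i} for i in range(1, n + 1)}
    if N[leader]:                      # the leader sees no other body
        return False
    if any(len(N[i]) != 1 for i in range(1, n + 1) if i != leader):
        return False                   # each follower has exactly one visible body
    for i in range(1, n + 1):          # every body has a visibility path to the leader
        node, visited = i, set()
        while node != leader:
            if node in visited:
                return False           # cycle: no path to the leader
            visited.add(node)
            node = next(iter(N[node]))
    return True

# Hypothetical edge set for a six-body tree rooted at body 1 (cf. Fig. 2).
E = {(1, 2), (2, 3), (2, 4), (4, 5), (4, 6)}
print(satisfies_assumption_1(6, E))   # True
```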


C. Measured Output

Suppose that each rigid body j has m (m ≥ 4) feature points, whose positions relative to the frame Σ_j are denoted by p_jjk ∈ R^3, k ∈ {1, ···, m}. A coordinate transformation yields the positions of the feature points relative to the frame Σ_i, denoted by p_ijk, as

  p_ijk = g_ij p_jjk,


where p_ijk and p_jjk should be regarded, with a slight abuse of notation, as [p_ijk^T 1]^T and [p_jjk^T 1]^T via the well-known homogeneous coordinate representation in robotics (see, e.g., [19]).

Let us now consider the visual measurements of each rigid body. We denote the projection of the k-th feature point onto the image plane by f_ijk := [f_x,ijk  f_y,ijk]^T ∈ R^2, k ∈ {1, ···, m}. By perspective projection (Fig. 3), this is given by

  f_ijk = (λ / z_ijk) [ x_ijk ; y_ijk ],


where λ ∈ R is the focal length and p_ijk = [x_ijk  y_ijk  z_ijk]^T [19]. Moreover, let f_ij be the stacked vector of the m feature points in image plane coordinates, i.e.,

  f_ij := [f_ij1^T  ···  f_ijm^T]^T ∈ R^{2m}.

Fig. 3: Perspective Projection Model
Fig. 4: Visual Feedback Attitude Synchronization Law

We assume that each body can extract the feature points of visible bodies from its image data; namely, the measured output of body i is

  f_i := (f_ij)_{j ∈ N_i},  i ∈ V.   (4)
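The measurement model amounts to transforming the feature points into frame Σ_i via g_ij and projecting them with the pinhole model above. The following sketch is our own illustration; the focal length and feature point coordinates are hypothetical values, not the paper's.

```python
import numpy as np

def measured_output(g_ij, p_j_feats, lam=0.01):
    """Stacked image-plane coordinates f_ij of body j's feature points,
    seen from body i (perspective projection with focal length lam)."""
    f = []
    for p_j in p_j_feats:                       # p_j in R^3, expressed in frame j
        p_i = g_ij[:3, :3] @ p_j + g_ij[:3, 3]  # p_ijk = g_ij p_jjk
        x, y, z = p_i
        f.append(lam / z * np.array([x, y]))    # f_ijk = (lam / z_ijk) [x, y]^T
    return np.concatenate(f)                    # f_ij in R^{2m}

# Four hypothetical coplanar feature points on body j, in frame j.
feats = [np.array([0.05, 0.05, 0.0]), np.array([-0.05, 0.05, 0.0]),
         np.array([-0.05, -0.05, 0.0]), np.array([0.05, -0.05, 0.0])]
g = np.eye(4)
g[2, 3] = 1.0                                   # body j one meter ahead along i's z axis
f_ij = measured_output(g, feats)
```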

Hereafter, the aggregate system consisting of the n rigid bodies with the kinematic model (1), the visibility structure (3) satisfying Assumption 1 and the measured output (4) is called the visual robotic network Σ.

III. VISUAL FEEDBACK ATTITUDE SYNCHRONIZATION

In this section, we present a visual feedback attitude synchronization law and prove that a visual robotic network Σ with the law achieves visual feedback attitude synchronization.

A. Definition of Visual Feedback Attitude Synchronization

The goal of this paper is to design a body velocity input V^b_wi so that the visual robotic network Σ achieves visual feedback attitude synchronization, defined as below.

Definition 1: A visual robotic network Σ is said to achieve visual feedback attitude synchronization if

  v^b_wi = v^b_wj,   lim_{t→∞} φ(e^{-ξ̂θ_wi} e^{ξ̂θ_wj}) = 0,   ∀ i, j ∈ V.   (5)

Here, φ(e^{ξ̂θ_wi}) := (1/2) tr(I_3 − e^{ξ̂θ_wi}) ≥ 0 is the energy of the rotation e^{ξ̂θ_wi} and, by definition, φ(e^{ξ̂θ_wi}) = 0 if and only if e^{ξ̂θ_wi} = I_3, where I_n is the n × n identity matrix. Equation (5) implies that the orientations of all the rigid bodies asymptotically converge to a common value. Unlike [5], [6], which presuppose the measurement of e^{-ξ̂θ_wi} e^{ξ̂θ_wj}, the objective here is to present a velocity law that uses only the visual measurements (4).
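The energy of rotation used in (5) is straightforward to evaluate; the small sketch below (ours, for illustration only) shows that φ vanishes at the identity and is positive elsewhere.

```python
import numpy as np

def phi(R):
    """Energy of rotation: phi(R) = 1/2 * tr(I_3 - R), zero iff R = I_3 on SO(3)."""
    return 0.5 * np.trace(np.eye(3) - R)

theta = np.pi / 4
Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
print(phi(np.eye(3)))   # 0.0
print(phi(Ry))          # 1 - cos(theta) ~ 0.293
```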

B. Visual Feedback Attitude Synchronization Law

We propose the following control law:

  Controller:
    v^b_wi = v,   (6a)
    ω^b_wi = K_ij sk(e^{ξ̂̄θ̄_ij})^∨,   (6b)

  Observer:
    V̄^b_ij = (ḡ_ij^{-1} ġ̄_ij)^∨ = -Ad_{(ḡ_ij^{-1})} V^b_wi + u_ij,   (6c)
    u_ij = K_eij ( e_ij − [ -(1/K_eij) e^{ξ̂θ_eeij} v^b_wi ; sk(e^{ξ̂̄θ̄_ij})^∨ ] ),   (6d)

  j ∈ N_i,  i ∈ V,

where K_ij, K_eij ∈ R are positive gains and sk(e^{ξ̂θ_ij}) is the skew-symmetric part of the matrix e^{ξ̂θ_ij}, i.e. sk(e^{ξ̂θ_ij}) := (1/2)(e^{ξ̂θ_ij} − e^{-ξ̂θ_ij}). The block diagram of the control law (6) is shown in Fig. 4. The angular velocity input (6b) is the same as that in [5], [6] except that ḡ_ij = (p̄_ij, e^{ξ̂̄θ̄_ij}) ∈ SE(3) is used instead of g_ij. Here, ḡ_ij is an estimate of the relative pose g_ij given by the observer (6c) and (6d).

In the following, we explain the structure of (6c) and (6d). Equation (6c) simulates the relative rigid body motion (2) by using the estimate ḡ_ij as its state. Here, u_ij := [u_vij^T  u_ωij^T]^T ∈ R^6 is an external input to be determined so that the estimates ḡ_ij and V̄^b_ij are driven to their actual values. Equation (6d) determines u_ij, where e_ij := [p_eeij^T  (sk(e^{ξ̂θ_eeij})^∨)^T]^T ∈ R^6 with

  g_eeij := ḡ_ij^{-1} g_ij = [ e^{-ξ̂̄θ̄_ij} e^{ξ̂θ_ij}   e^{-ξ̂̄θ̄_ij} (p_ij − p̄_ij) ; 0  1 ].   (7)

g_eeij = (p_eeij, e^{ξ̂θ_eeij}) ∈ SE(3) is the estimation error between the actual relative pose g_ij and its estimate ḡ_ij. In the following, e_ij is called the estimation error vector. Note that e_ij = 0 if and only if g_eeij = I_4; namely, if the estimation error vector is equal to zero, then the estimated relative pose ḡ_ij equals the actual one g_ij. Differentiating (7) with respect to time and using (2) and (6c), we get the following estimation error system:

  V^b_eeij := (g_eeij^{-1} ġ_eeij)^∨ = -Ad_{(g_eeij^{-1})} u_ij + V^b_wj.   (8)

In (6d), e_ij can be derived from the image feature points f_ij (see [18]), and if |θ_eeij| < π/2, then e^{ξ̂θ_eeij} can be derived from e_ij as follows:

  ξθ_eeij = ( sin^{-1}‖sk(e^{ξ̂θ_eeij})^∨‖ / ‖sk(e^{ξ̂θ_eeij})^∨‖ ) sk(e^{ξ̂θ_eeij})^∨,
  e^{ξ̂θ_eeij} = I_3 + ξ̂ sin θ_eeij + ξ̂^2 (1 − cos θ_eeij).

Namely, the present control law (6) is composed of only the visual measurements (4).

The form of the angular velocity input (6b) is derived from the passivity property of the rigid body motion (1): if we choose V^b_wi as input and [p_wi^T  (sk(e^{ξ̂θ_wi})^∨)^T]^T as output, then the rigid body motion (1) is passive [5]. If rigid body j is static (V^b_wj = 0) and we consider u_ij as input and −e_ij as output, then the estimation error system (8) is passive [18]. Using this property, a nonlinear observer whose input is the first term of (6d) (u_ij = K_eij e_ij) is proposed in [18]. In this paper, we extend this observer in order to gain theoretical guarantees on visual feedback attitude synchronization.
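To make the structure of (6) concrete, the sketch below (ours, reusing the hat/adjoint/step helpers from the earlier sketches) performs one update of the velocity input (6a)-(6b) and of the observer (6c)-(6d) for a pair (i, j). In the actual scheme e_ij is computed from the image features f_ij as in [18]; here, purely for illustration, the estimation error g_eeij is formed directly from the true relative pose. This is a simulation-oriented sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def vee(S):
    """Inverse of the wedge operator for a skew-symmetric matrix S."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def sk_vee(R):
    """sk(R)^vee = ((1/2)(R - R^T))^vee."""
    return vee(0.5 * (R - R.T))

def control_and_observer_step(g_bar_ij, g_ij, v, K_ij, K_eij, dt):
    """One update of (6a)-(6d) for the pair (i, j), j in N_i."""
    R_bar = g_bar_ij[:3, :3]

    # (6a), (6b): common linear velocity and angular velocity input.
    omega_i = K_ij * sk_vee(R_bar)
    V_wi = np.concatenate([v, omega_i])

    # Estimation error g_eeij and error vector e_ij = [p_ee; sk(R_ee)^vee].
    # (In the experiments e_ij comes from the vision-based observer of [18].)
    g_ee = np.linalg.inv(g_bar_ij) @ g_ij
    p_ee, R_ee = g_ee[:3, 3], g_ee[:3, :3]
    e_ij = np.concatenate([p_ee, sk_vee(R_ee)])

    # (6d): observer input u_ij.
    u_ij = K_eij * (e_ij - np.concatenate([-(1.0 / K_eij) * R_ee @ v,
                                           sk_vee(R_bar)]))

    # (6c): estimated relative velocity and observer state update.
    V_bar_ij = -adjoint(np.linalg.inv(g_bar_ij)) @ V_wi + u_ij   # adjoint() from the earlier sketch
    g_bar_next = step(g_bar_ij, V_bar_ij, dt)                    # step() from the first sketch
    return V_wi, g_bar_next
```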

ˆ

C. Convergence Analysis

In this subsection, we prove that a visual robotic network Σ with the control law (6) achieves visual feedback attitude synchronization (5). We define the orientation error vector with respect to control, e_cij ∈ R^3, and the one with respect to estimation, e_eij ∈ R^3, as

  e_cij := sk(e^{ξ̂̄θ̄_ij})^∨,   e_eij := sk(e^{ξ̂θ_eeij})^∨.

Moreover, we use the following notation (see Fig. 5):

  V_p := {i ∈ V | 1 ∈ N_i},
  V_q := {i ∈ V | i ∉ N_j, ∀ j ∈ V},
  V_r := V \ ({1} ∪ V_p ∪ V_q),
  V_i := {j ∈ V_q | ∃ v_1, ···, v_r ∈ V s.t. v_1 = j, v_r = i and (v_k, v_{k+1}) ∈ E, ∀ k ∈ {1, ···, r−1}}.

Fig. 5: Definition of Rigid Body Sets

We consider the system combining the dynamics with respect to orientations in (6c) with the estimation error system (8):

  [ ω̄^b_ij ; V^b_eeij ] = [ -e^{-ξ̂̄θ̄_ij}   Ĩ ; 0   -Ad_{(g_eeij^{-1})} ] [ ω^b_wi ; u_ij ] + [ 0 ; V^b_wj ].   (9)

Here, Ĩ := [0  I_3] ∈ R^{3×6}. In this paper, we call the collection of (9) for j ∈ N_i, i ∈ V with the control law (6) the collective error system Σ_col, whose state, denoted by x_e, is given by the stacked vector of [e_cij^T  e_ij^T]^T, j ∈ N_i, i ∈ V. Then, we have the following lemma.

Lemma 1: If the leader's angular velocity is zero (ω^b_w1 = 0), then the collective error system Σ_col is autonomous and x_e = 0 is an equilibrium point of the system.

We now show that a visual robotic network Σ with the control law (6) achieves visual feedback attitude synchronization (5).

Theorem 1: Suppose the leader's angular velocity is zero (ω^b_w1 = 0). Then, a visual robotic network Σ with the control law (6) achieves visual feedback attitude synchronization at least locally if

  K_jk < 2 K_ij K_eij / (K_ij + K_eij),    k ∈ N_j, j ∈ N_i, i ∈ V_q,
  K_jk < 2 K_ij K_eij / (K_ij + 2 K_eij),  k ∈ N_j, j ∈ N_i, i ∈ V_r.   (10)

Proof: From the definitions of x_e, e_cij and e_ij, if the equilibrium point x_e = 0 is asymptotically stable, then local visual feedback attitude synchronization is achieved. It is thus sufficient to prove asymptotic stability of x_e = 0 for the collective error system Σ_col. This can be proved by differentiating the following Lyapunov function candidate with respect to time and completing the square:

  E := Σ_{i=2}^{n} Σ_{j ∈ N_i} q_i ( φ(e^{ξ̂̄θ̄_ij}) + (1/2)‖p_eeij‖^2 + φ(e^{ξ̂θ_eeij}) ).

Here, q_i ∈ {1, 2, 3, ···} is |V_i| for i ∈ V \ V_q and q_i = 1 for i ∈ V_q (Fig. 5). Note that E = 0 if and only if e^{ξ̂̄θ̄_ij} = I_3, p_eeij = 0 and e^{ξ̂θ_eeij} = I_3 (i.e. e_cij = 0, e_ij = 0), j ∈ N_i, ∀ i ∈ V, and otherwise E > 0.

The equations e_ij = 0 and e_cij = 0 mean ḡ_ij = g_ij and e^{ξ̂θ_ij} = I_3. Thus, from Assumption 1, we get e^{ξ̂θ_ij} = I_3, ∀ i, j ∈ V, which implies that the visual robotic network Σ with the control law (6) achieves visual feedback attitude synchronization (5).

We now analyze the condition (10). If we set K_eij = K_ij, the condition becomes

  K_jk < K_ij,        k ∈ N_j, j ∈ N_i, i ∈ V_q,
  K_jk < (2/3) K_ij,  k ∈ N_j, j ∈ N_i, i ∈ V_r.

This means that the forward body should move more slowly than the body behind it, regardless of K_ij. Meanwhile, if we choose an arbitrary positive K as K_eij, then we obtain

  K_ij − K_jk > K_ij − 2 K_ij K / (K_ij + K) = K_ij (K_ij − K) / (K_ij + K),
  K_ij − K_jk > K_ij − 2 K_ij K / (K_ij + 2K) = K_ij^2 / (K_ij + 2K).

It is thus found that, for body i ∈ V_q, if we choose K > K_ij, then the forward body can move faster than the backward body. However, for body i ∈ V_r, which has followers, the forward body should move slowly relative to the body, regardless of K. These observations are intuitive, since the motion of forward bodies has a large influence on the group motion, while that of backward bodies has a small impact.

It should be noted that Theorem 1 proves synchronization for the system integrating the observers, instead of employing a certainty equivalence principle.


Fig. 6: Visibility Graph in Simulation
Fig. 7: Positions in Σ_w
Fig. 8: Relative Rotation Angles about x-axis
Fig. 9: Relative Rotation Angles about y-axis
Fig. 10: Relative Rotation Angles about z-axis
Fig. 11: Experimental Environment
Fig. 12: Visibility Graph in Experiment

It is well known in robot control that proving stability of the integrated system in observer-based control strategies is much more difficult than solving the individual control and estimation problems, even for a single passive system [20]. This is equally true, and possibly much harder, for synchronization, since each body is required to estimate not its own but other individuals' information, and only from relative measurements.

Though we assume ω^b_w1 = 0 in Theorem 1, the followers are expected to track the leader, i.e. to achieve a flocking-like behavior, as long as the leader moves slowly. Proving synchronization even in the presence of a nonzero ω^b_w1 is left as future work. The theory of L_2-gain analysis or input-to-state stability might be helpful to tackle this problem, as in [18], where the authors investigate a target tracking problem by using a vision-based observer.

IV. NUMERICAL SIMULATION

In this section, we demonstrate the effectiveness of the proposed control law (6) by a numerical simulation. We consider five rigid bodies with the visibility structure depicted in Fig. 6. The common linear velocity is v = [0 0 1]^T, which means that all the rigid bodies eventually move in the same direction as the z axis of body 1's frame. The control law (6) with K_21 = 1.8, K_42 = 2.3, K_32 = K_54 = 3, K_eij = 5, j ∈ N_i, ∀ i ∈ V, satisfying (10) (a quick check is sketched after the initial conditions below), is applied to each body under the following initial conditions:

  p_w1(0) = [5 −5 5]^T, p_w2(0) = [0 0 0]^T, p_w3(0) = [0 0 −5]^T, p_w4(0) = [−5 −5 −5]^T, p_w5(0) = [−5 0 −10]^T,

  ξθ_w1(0) = [0 π/4 0]^T, ξθ_w2(0) = [0 0 0]^T, ξθ_w3(0) = [0 −π/4 0]^T, ξθ_w4(0) = [0 π/3 0]^T, ξθ_w5(0) = [0 0 0]^T.
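The gain condition (10) for these simulation gains can be verified numerically. In the sketch below (ours), the visibility structure, the set memberships V_q = {3, 5} and V_r = {4}, and the edge-to-gain assignments are our reading of Fig. 6 and the gains stated above.

```python
def condition_10(K, Ke, N, Vq, Vr):
    """Check the gain condition (10).  K[(i, j)] and Ke[(i, j)] are the control
    and observer gains of the pair (i, j); N[i] is the body visible from i."""
    ok = True
    for i in Vq:
        j = N[i]; k = N[j]
        ok &= K[(j, k)] < 2 * K[(i, j)] * Ke[(i, j)] / (K[(i, j)] + Ke[(i, j)])
    for i in Vr:
        j = N[i]; k = N[j]
        ok &= K[(j, k)] < 2 * K[(i, j)] * Ke[(i, j)] / (K[(i, j)] + 2 * Ke[(i, j)])
    return ok

# Simulation gains and our reading of the visibility structure of Fig. 6.
N  = {2: 1, 3: 2, 4: 2, 5: 4}
K  = {(2, 1): 1.8, (3, 2): 3.0, (4, 2): 2.3, (5, 4): 3.0}
Ke = {edge: 5.0 for edge in K}
print(condition_10(K, Ke, N, Vq={3, 5}, Vr={4}))   # True
```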

Fig. 7 shows the trajectory of each rigid body, and Figs. 8-10 illustrate the relative attitude vectors between corresponding bodies. In Fig. 7, we see that every body eventually moves in the same direction as the z axis of body 1's frame. In Figs. 8-10, the relative attitude vectors asymptotically converge to 0. Thus, the orientations of all bodies asymptotically converge to that of body 1; that is, visual feedback attitude synchronization is achieved at around 10 [s].

V. EXPERIMENT

In this section, we present experimental results on a planar testbed. We use three wheeled mobile robots e-nuvo WHEEL (ZMP) as rigid bodies. We attach a plate with four colored circles to each robot in order to improve the accuracy of feature point extraction. Each robot has a wireless on-board radio camera RC-12 (RF SYSTEM) for visual measurements. We also use an MTV-7310 camera (komoto) mounted above the robots to measure the actual poses of the robots. The frame rate of both cameras is 30 [fps]. Transmitted video signals are loaded into a PC via a frame grabber board PICOLO DILLIGENT (Euresys) and processed by the image processing software HALCON (MVTec). The control and observer models are designed in Simulink (The MathWorks) and computed by the DSP board DS1104 (dSPACE) in real time. The control inputs are then sent to the robots via an embedded wireless communication device WiPort (LANTRONIX). The sampling period of the controller is 33 [ms]. The experimental setup is shown in Fig. 11. We use the visibility structure depicted in Fig. 12. We let the gains be K_e21 = K_e32 = 15, K_21 = K_32 = 1, satisfying (10), and the common linear velocity be v = [0 0.04 0]^T [m/s]. Finally, we set the initial conditions as

  p_w1(0) = [0.823 0.682 0]^T, ξθ_w1(0) = [0 0 2.563]^T,
  p_w2(0) = [1.315 0.572 0]^T, ξθ_w2(0) = [0 0 2.978]^T,
  p_w3(0) = [1.663 0.421 0]^T, ξθ_w3(0) = [0 0 2.800]^T.

The experimental results are shown in Figs. 13-15. Fig. 13 illustrates the trajectories of the robots on the 2-dimensional plane, Fig. 14 the time responses of the actual (measured) orientations, and Fig. 15 the actual (measured) and estimated orientations of robot 2 relative to robot 3.


Fig. 13: Positions in Σ_w
Fig. 14: Rotation Angles in Σ_w
Fig. 15: Actual (Measured) and Estimated Rotation Angles between 3 and 2

We see from Figs. 13 and 14 that the orientations converge to a common value (robot 1's value) at around 20 [s]. This means that the proposed control law achieves visual feedback attitude synchronization. Moreover, Fig. 15 shows that the error between the actual (measured) and estimated orientations is small enough to achieve stable attitudes. Thus, the visual feedback attitude synchronization law works successfully. The movie of this experiment can be downloaded from http://www.fl.ctrl.titech.ac.jp/researches/movie/movie2/vfas.wmv

VI. CONCLUSIONS

In this paper, we have investigated attitude synchronization using visual information as the measured output of each rigid body. We first introduced visual robotic networks, consisting of the dynamics describing rigid body motion, visibility structures among bodies and visual measurements. We then proposed a visual feedback attitude synchronization law combining a vision-based observer with the attitude synchronization law presented in [5], [6], [17]. Moreover, we proved that the robotic network with the control law achieves visual feedback attitude synchronization in the absence of communication and measurement of the states. Finally, the numerical simulation and experimental results have demonstrated the validity of our results. Further directions of this research are to extend the results of this paper to pose (position and attitude) synchronization in SE(3) and to consider multiple rigid bodies with actuators.

REFERENCES

[1] P. Ogren, E. Fiorelli and N. E. Leonard, "Cooperative Control of Mobile Sensor Networks: Adaptive Gradient Climbing in A Distributed Environment," IEEE Trans. on Automatic Control, Vol. 49, No. 8, pp. 1292–1302, 2004.
[2] F. Bullo, J. Cortes and S. Martinez, Distributed Control of Robotic Networks, Princeton Series in Applied Mathematics, 2009.
[3] R. M. Murray, "Recent Research in Cooperative Control of Multivehicle Systems," Journal of Dynamic Systems, Measurement, and Control - Trans. of the ASME, Vol. 129, No. 5, pp. 571–583, 2007.
[4] A. Jadbabaie, J. Lin and A. S. Morse, "Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules," IEEE Trans. on Automatic Control, Vol. 48, No. 6, pp. 988–1001, 2003.
[5] Y. Igarashi, T. Hatanaka, M. Fujita and M. W. Spong, "Passivity-based Attitude Synchronization in SE(3)," IEEE Trans. on Control Systems Technology, Vol. 17, No. 5, pp. 1119–1134, 2009.
[6] Y. Igarashi, T. Hatanaka, M. Fujita and M. W. Spong, "Passivity-based Output Synchronization and Flocking Algorithm in SE(3)," Proc. of the 47th IEEE Conference on Decision and Control, pp. 1024–1029, 2008.
[7] N. E. Leonard, D. A. Paley, F. Lekien, R. Sepulchre, D. M. Fratantoni and R. E. Davis, "Collective Motion, Sensor Networks and Ocean Sampling," Proc. of the IEEE, Vol. 95, No. 1, pp. 48–74, 2007.
[8] C. W. Reynolds, "Flocks, Herds and Schools: A Distributed Behavioral Model," Computer Graphics, Vol. 21, No. 4, pp. 25–34, 1987.
[9] T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen and O. Shochet, "Novel Type of Phase Transition in A System of Self-driven Particles," Physical Review Letters, Vol. 75, No. 6, pp. 1226–1229, 1995.
[10] C. W. Reynolds, "An Evolved, Vision-based Behavioral Model of Coordinated Group Motion," in From Animals to Animats 2: Proc. of the 2nd International Conference on Simulation of Adaptive Behavior, J.-A. Meyer, H. L. Roitblat and S. W. Wilson, eds., The MIT Press, pp. 384–392, 1993.
[11] F. Chaumette and S. A. Hutchinson, "Visual Servoing and Visual Tracking," in Springer Handbook of Robotics, B. Siciliano and O. Khatib, eds., Springer-Verlag, pp. 563–583, 2008.
[12] N. Moshtagh, N. Michael, A. Jadbabaie and K. Daniilidis, "Vision-based, Distributed Control Laws for Motion Coordination of Nonholonomic Robots," IEEE Trans. on Robotics, Vol. 25, No. 4, pp. 851–860, 2009.
[13] F. Morbidi, F. Bullo and D. Prattichizzo, "On Leader-follower Visibility Maintenance for Dubins-like Vehicles via Controlled Invariance," Proc. of the 47th IEEE Conference on Decision and Control, pp. 1821–1826, 2008.
[14] F. Morbidi, G. L. Mariottini and D. Prattichizzo, "Observer Design via Immersion and Invariance for Vision-based Leader-follower Formation Control," Automatica, Vol. 46, No. 1, pp. 148–154, 2010.
[15] J. L. Giesbrecht, H. K. Goi, T. D. Barfoot and B. A. Francis, "A Vision-based Robotic Follower Vehicle," Proc. of the SPIE Defence, Security and Sensing, Vol. 7332, pp. 14–17, 2009.
[16] R. Vidal, O. Shakernia and S. Sastry, "Following The Flock," IEEE Robotics & Automation Magazine, Vol. 11, No. 4, pp. 14–20, 2004.
[17] M. Fujita, T. Hatanaka, N. Kobayashi, T. Ibuki and M. Spong, "Visual Motion Observer-based Pose Synchronization: A Passivity Approach," Proc. of the 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, pp. 2402–2407, 2009.
[18] M. Fujita, H. Kawai and M. W. Spong, "Passivity-based Dynamic Visual Feedback Control for Three Dimensional Target Tracking: Stability and L2-gain Performance Analysis," IEEE Trans. on Control Systems Technology, Vol. 15, No. 1, pp. 40–52, 2007.
[19] Y. Ma, S. Soatto, J. Kosecka and S. S. Sastry, An Invitation to 3-D Vision, Springer, Chapter 2, 2003.
[20] H. Berghuis and H. Nijmeijer, "A Passivity Approach to Controller-Observer Design for Robots," IEEE Trans. on Robotics and Automation, Vol. 9, No. 6, pp. 740–754, 1993.

2491