
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 5, MAY 2000

Output Feedback Control of Markov Jump Linear Systems in Continuous-Time

D. P. de Farias, J. C. Geromel, J. B. R. do Val, and O. L. V. Costa

Abstract—This paper addresses the dynamic output feedback control problem of continuous-time Markovian jump linear systems. The fundamental point in the analysis is an LMI characterization comprising all dynamical compensators that stabilize the closed-loop system in the mean square sense. The H_2 and H_∞-norm control problems are studied, and the H_2 and H_∞ filtering problems are solved as a by-product.

Index Terms—Filtering, H_2 control, H_∞ control, jump parameter systems, linear matrix inequalities, output feedback.

I. INTRODUCTION

There have been several examples showing the importance of dynamic systems subject to abrupt variations in their structures. This is in part due to the fact that dynamic systems are often inherently vulnerable to component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, abrupt variations of the operating point of a nonlinear plant, etc. Markovian jump linear systems (MJLS) comprise an important class of stochastic dynamic systems which can, in several situations, model the above problems. The theory of stability, optimal control, and H_∞-control, as well as some important applications of such systems, can be found in several papers in the current literature, for instance, in [1], [4], [6]–[8], [11], [12], [14], and [16].

Continuous-time LQ control problems for MJLS have usually been studied under the assumption of perfect information on the jump variable of the system as well as on the state variable. By standard dynamic programming arguments, it has been shown that this formulation leads to a set of coupled differential Riccati equations (CDRE) for the solution of the problem [1], [14]. Taking the limit as time goes to infinity, we have, under appropriate conditions, that the CDRE tend to coupled algebraic Riccati equations (CARE), and numerical methods have been designed for solving them in [4], [7], [8], and [16]. The state feedback H_∞-control problem has also been analyzed, and generalized coupled algebraic Riccati equations have been derived for this problem in [6]. The LQG control problem with no direct knowledge of the state variable, but assuming that the jump variable is available, was addressed in [11] (named JLQG) and in [12].
In [11] it was shown that the separation principle holds for the optimal control problem, leading to a sample-path-dependent optimal state estimator (the Kalman filter) combined with the CDRE obtained for the state feedback LQ control problem. As pointed out in [11], this optimal solution is not satisfactory, since the filter portion of the controller for the optimal infinite-horizon JLQG is not time-invariant, which makes any asymptotic and stability analysis unfeasible. In this paper the output feedback control of continuous-time MJLS with perfect measurement of the Markov state is revisited under a convex programming approach. The H_2 and the H_∞-norm control problems are solved employing dynamic compensators that


Manuscript received September 1, 1998; revised July 21, 1999. Recommended by Associate Editor, T. Katayama. This work was supported in part by Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP and Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq. D. P. de Farias, J. C. Geromel, and J. B. R. do Val are with LAC-DT, School of Electrical and Computer Engineering, UNICAMP, 13081-970, Campinas, SP, Brazil. O. L. V. Costa is with the Department of Electronic Engineering, USP, 05508-000, São Paulo, SP, Brazil. Publisher Item Identifier S 0018-9286(00)04062-9.

can change structure when a jump of the Markov state occurs. The formulation provides a suitable framework for implementation when compared with the optimal stochastic filter. Convex analysis has been shown to be a powerful tool for deriving numerical algorithms for several important control problems, for instance, H_2 guaranteed cost control for uncertain systems, H_∞-control problems, and mixed H_2/H_∞-control problems, and has been widely studied in the international literature (see, for instance, [2] and [3]). For the state feedback MJLS case, convex analysis had been previously considered in [4] and [16]. In [4] the state-feedback H_2-control problem of continuous-time MJLS was defined and studied via a convex programming approach. It was shown that the existence of a solution to the convex programming problem is completely equivalent to the existence of a mean square stabilizing solution to the CARE, and one solution can be recovered from the other, showing the equivalence between the two formulations.

As in [11], we assume in this paper that the jump variable is perfectly observable to the controller but not the state variable, which is observed only through an output variable. The first important problem we consider is to find necessary and sufficient conditions for the existence of output feedback dynamic controllers, dependent on the jump variable, such that the closed-loop system is stable in the mean square sense. It is proven that these necessary and sufficient conditions can be written in terms of an LMI problem, thus providing a parametrization of all mean square stabilizing compensators in terms of the elements of a convex set. Having obtained this result, we can move on to the control problems, and we write the output feedback H_2-norm and H_∞-norm control problems of continuous-time MJLS in terms of LMI optimization problems.
The optimal solution of the LMI optimization problem leads to a closed-loop controller which stabilizes the system in the mean square sense. As a byproduct of the output feedback control problems, we can also consider the H_2 and H_∞ filtering problems for continuous-time MJLS. The LMI optimization problems, as before, provide mean square stabilizing controllers for the error estimator equation. Last, but not least, it is important to emphasize that, besides providing important theoretical results for the H_2 and H_∞-norm control problems of continuous-time MJLS, the convex approach naturally leads to powerful numerical methods, as illustrated by an example of a multiple-mode VTOL helicopter.

Section II presents the notation to be employed, as well as some basic results concerning stability in the mean square sense. The definitions of the H_2 and the H_∞ norms for continuous-time MJLS are also presented, and the connection between the H_2-norm and the observability and controllability Gramians is made. Section III shows that the existence of a compensator that stabilizes an output feedback continuous-time MJLS in the mean square sense is equivalent to a convex set described in terms of LMIs not being empty, and we show how to obtain a compensator with this property from an element of this convex set. Section IV considers the H_2 and the H_∞-control problems for the output feedback system via LMI optimization problems, and Section V analyzes the corresponding filtering problems. The proofs for the control and filtering problems can be deduced from Theorem 3.1, and they are omitted. The paper is concluded in Section VI with an application and some final remarks.


II. PROBLEM FORMULATION AND BASIC RESULTS

A Markov jump linear system (MJLS) is described as follows. Let S = {1, …, N} be an index set, and consider the collections of real matrices A = (A_1, …, A_N), dim(A_i) = n × n; E = (E_1, …, E_N), dim(E_i) = n × m; C_1 = (C_{11}, …, C_{1N}),
0018–9286/00$10.00 © 2000 IEEE

dim(C_{1i}) = p × n; C_2 = (C_{21}, …, C_{2N}), dim(C_{2i}) = q × n; and D_2 = (D_{21}, …, D_{2N}), dim(D_{2i}) = q × m, for i = 1, …, N. Consider a continuous-time homogeneous Markov chain Θ = {θ_t; t ≥ 0} having S as state space and Λ = [λ_{ij}; i ∈ S, j ∈ S] as the transition rate matrix. The probability distribution of the Markov chain at the initial time is given by μ = (μ_1, …, μ_N), in such a way that P(θ_0 = i) = μ_i. Consider a fundamental probability space (Ω, F, {F_t}, P). We denote by L_2^m the Hilbert space formed by the stochastic processes z = {z_t; t ≥ 0} such that, for each t ≥ 0, z_t is a second-order, real-valued, m-dimensional random variable, F_t-measurable, and

$$\|z\|_2^2 := \int_0^\infty E|z_t|^2\,dt < \infty.$$

The MJLS is the stochastic system

$$G_0:\quad \dot{x}_t = A(\theta_t)x_t + E(\theta_t)w_t,\qquad z_t = C_1(\theta_t)x_t,\qquad y_t = C_2(\theta_t)x_t + D_2(\theta_t)w_t,\quad t \ge 0 \tag{1}$$

with w ∈ L_2^m, E(|x_0|^2) < ∞, and θ_0 ∼ μ. Thus, whenever θ_t = i ∈ S, one has A(θ_t) = A_i, E(θ_t) = E_i, C_1(θ_t) = C_{1i}, C_2(θ_t) = C_{2i}, and D_2(θ_t) = D_{2i}. The processes x = {x_t; t ≥ 0}, z = {z_t; t ≥ 0}, and y = {y_t; t ≥ 0} are, respectively, the state, the output, and the measured output of system G_0. We assume that the process w = {w_t; t ≥ 0} belongs to L_2^m. The definition below generalizes the concept of stability to stochastic systems.

Definition 2.1: We say that model G_0 with w ≡ 0 is mean square stable (MSS) if E|x_t|^2 → 0 as t → ∞ for any initial condition x_0 and any initial distribution of θ_0.

The following proposition collects the main results on stochastic stability for MJLS. They can be derived from [10] and [13].

Proposition 2.1: The assertions below are equivalent.

a) System G_0 is MSS.
b) The linear inequalities

$$A_i' P_i + P_i A_i + \sum_{j=1}^N \lambda_{ij} P_j < 0,\qquad i = 1,\dots,N \tag{2}$$

are feasible for some P = (P_1, …, P_N) with dim(P_i) = n × n and P_i > 0 for all i = 1, …, N.
c) (Coupled Lyapunov equations) For any given U = (U_1, …, U_N) with dim(U_i) = n × n and U_i > 0, there exists a unique P = (P_1, …, P_N), P_i > 0 for all i, satisfying

$$A_i' P_i + P_i A_i + \sum_{j=1}^N \lambda_{ij} P_j + U_i = 0,\qquad i = 1,\dots,N. \tag{3}$$
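As a numerical illustration of Proposition 2.1, the coupled Lyapunov equations (3) can be solved directly by vectorization; the two-mode system below is an invented example, not one from the paper. MSS is then certified by checking that every P_i is positive definite.

```python
import numpy as np

def coupled_lyapunov(A, lam, U):
    """Solve A_i' P_i + P_i A_i + sum_j lam[i, j] P_j + U_i = 0, cf. (3)."""
    N, n = len(A), A[0].shape[0]
    I = np.eye(n)
    # Linear operator acting on the stacked vectors vec(P_1), ..., vec(P_N)
    M = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            blk = lam[i, j] * np.eye(n * n)
            if i == j:
                blk += np.kron(I, A[i].T) + np.kron(A[i].T, I)  # vec(A'P + PA)
            M[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] = blk
    rhs = -np.concatenate([U[i].reshape(-1) for i in range(N)])
    p = np.linalg.solve(M, rhs)
    return [p[i*n*n:(i+1)*n*n].reshape(n, n) for i in range(N)]

# Invented two-mode example; both modes admit the common Lyapunov matrix I,
# so the system is MSS and (3) must return P_i > 0.
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]), np.array([[-3.0, 0.0], [1.0, -1.0]])]
lam = np.array([[-2.0, 2.0], [1.0, -1.0]])  # transition rate matrix
U = [np.eye(2), np.eye(2)]
P = coupled_lyapunov(A, lam, U)
mss = all(np.linalg.eigvalsh(0.5 * (Pi + Pi.T)).min() > 0 for Pi in P)
print("P_i > 0 for all i (system is MSS):", mss)
```

The same vectorized operator reappears in all coupled Lyapunov-type computations below; building it once and solving a single linear system is the standard way to handle the mode coupling Σ_j λ_{ij} P_j.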

Definition 2.2: The H_2-norm of the MSS system G_0 is defined as

$$\|G_0\|_2^2 = \sum_{s=1}^{r}\sum_{i=1}^{N} \mu_i \|z^{s,i}\|_2^2$$

where z^{s,i} denotes the output z of (1) for x_0 = 0, θ_0 = i, and the impulsive input w_t = e_s δ(t), with e_s the s-th column of the identity matrix and r the number of input channels. The H_2-norm defined above can be calculated from coupled observability and controllability Gramians, a result that mirrors its deterministic counterpart. Suppose that G_0 is MSS and consider

$$A_i' W_i + W_i A_i + \sum_{j=1}^N \lambda_{ij} W_j + C_{1i}' C_{1i} = 0,\qquad i = 1,\dots,N \tag{4}$$

$$A_i V_i + V_i A_i' + \sum_{j=1}^N \lambda_{ji} V_j + \mu_i E_i E_i' = 0,\qquad i = 1,\dots,N \tag{5}$$

with W_i ≥ 0 and V_i ≥ 0, i = 1, …, N, the unique solutions of these equations.

Proposition 2.3 [4]: $\|G_0\|_2^2 = \sum_{i=1}^N \mu_i \operatorname{Tr}(E_i' W_i E_i) = \sum_{i=1}^N \operatorname{Tr}(C_{1i} V_i C_{1i}')$.

Remark 2.1: A very interesting property obtained from Proposition 2.1 is the increasing behavior of the solutions P_i > 0 of

$$A_i' P_i + P_i A_i + \sum_{j=1}^N \lambda_{ij} P_j + C_{1i}' C_{1i} < 0,\qquad i = 1,\dots,N. \tag{6}$$

Indeed, for W and P satisfying (4) and (6), respectively, we have P_i > W_i ≥ 0, i = 1, …, N, and the consequence is that ‖G_0‖_2^2 < Σ_{i=1}^N μ_i Tr(E_i' P_i E_i). In other words, feasible solutions of the above linear matrix inequalities provide upper bounds on the H_2-norm. Besides, we can find P_i arbitrarily close to W_i, and the H_2-norm can be found by the following optimization problem:

$$\inf \sum_{i=1}^N \mu_i \operatorname{Tr}(E_i' P_i E_i)\quad \text{s.t.}\quad A_i' P_i + P_i A_i + \sum_{j=1}^N \lambda_{ij} P_j + C_{1i}' C_{1i} < 0,\qquad i = 1,\dots,N.$$
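Proposition 2.3 can be verified numerically: the H_2-norm is computed once from the coupled observability Gramians (4) and once from the coupled controllability Gramians (5), and the two expressions must coincide. The two-mode data below are invented for illustration.

```python
import numpy as np

def solve_coupled(Acoef, lam, Q):
    """Solve A_i' X_i + X_i A_i + sum_j lam[i,j] X_j + Q_i = 0 by vectorization."""
    N, n = len(Acoef), Acoef[0].shape[0]
    I = np.eye(n)
    M = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            blk = lam[i, j] * np.eye(n * n)
            if i == j:
                blk += np.kron(I, Acoef[i].T) + np.kron(Acoef[i].T, I)
            M[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] = blk
    x = np.linalg.solve(M, -np.concatenate([Qi.reshape(-1) for Qi in Q]))
    return [x[i*n*n:(i+1)*n*n].reshape(n, n) for i in range(N)]

A  = [np.array([[-1.0, 0.5], [0.0, -2.0]]), np.array([[-3.0, 0.0], [1.0, -1.0]])]
E  = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
C1 = [np.array([[1.0, 1.0]]), np.array([[1.0, 0.0]])]
lam = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([0.5, 0.5])  # initial distribution of the Markov chain

# Observability Gramians: equation (4) as written.
W = solve_coupled(A, lam, [C1[i].T @ C1[i] for i in range(2)])
# Controllability Gramians: (5) uses A_i V_i + V_i A_i' and the rates
# lam_{ji}, so pass the transposed data to the same solver.
V = solve_coupled([Ai.T for Ai in A], lam.T, [mu[i] * E[i] @ E[i].T for i in range(2)])

h2_obs  = sum(mu[i] * np.trace(E[i].T @ W[i] @ E[i]) for i in range(2))
h2_ctrl = sum(np.trace(C1[i] @ V[i] @ C1[i].T) for i in range(2))
print(h2_obs, h2_ctrl)  # the two values coincide
```

The agreement of the two values is exactly the duality stated in Proposition 2.3.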

If the model G_0 is MSS and w ∈ L_2^m, then from Proposition 2.2 we have that z ∈ L_2^p. The next definition is a generalization of the H_∞-norm for MJLS.

Definition 2.3: The H_∞-norm of an MSS system G_0, denoted ‖G_0‖_∞, is the smallest γ > 0 such that

$$\|z\|_2 < \gamma \|w\|_2$$

holds for all w ∈ L_2^m, w ≠ 0.

Proposition 2.4 [6]: If P = (P_1, …, P_N), P_i > 0, satisfies

$$A_i' P_i + P_i A_i + \sum_{j=1}^N \lambda_{ij} P_j + \gamma^{-2} P_i E_i E_i' P_i + C_{1i}' C_{1i} < 0,\qquad i = 1,\dots,N$$

then G_0 is MSS and ‖G_0‖_∞ < γ.

III. MEAN SQUARE STABILIZABILITY

Let B = (B_1, …, B_N), dim(B_i) = n × r, and consider the system obtained from G_0 by adding the control input u_t, namely ẋ_t = A(θ_t)x_t + B(θ_t)u_t + E(θ_t)w_t. Consider dynamic output feedback compensators G_c of the form

$$G_c:\quad \dot{v}_t = A_c(\theta_t)v_t + B_c(\theta_t)y_t,\qquad u_t = C_c(\theta_t)v_t,\quad t \ge 0$$

which can change structure when a jump of the Markov state occurs. By Proposition 2.1, the closed-loop system (with w ≡ 0) is MSS if and only if there exist P_i > 0, i = 1, …, N, such that

$$\tilde{A}_i' P_i + P_i \tilde{A}_i + \sum_{j=1}^N \lambda_{ij} P_j < 0,\qquad i = 1,\dots,N \tag{7}$$

where

$$\tilde{A}_i = \begin{bmatrix} A_i & B_i C_{ci} \\ B_{ci} C_{2i} & A_{ci} \end{bmatrix},\qquad i = 1,\dots,N. \tag{8}$$

Let diag(·) indicate a block-diagonal matrix whose entries are given by (·). We are ready to present the following theorem.



Theorem 3.1: There exist P_i > 0, A_{ci}, B_{ci}, and C_{ci}, i = 1, …, N, such that (7) holds if and only if the LMIs

$$\begin{bmatrix} A_i Y_i + Y_i A_i' + B_i F_i + F_i' B_i' + \lambda_{ii} Y_i & R_i(Y) \\ R_i'(Y) & S_i(Y) \end{bmatrix} < 0 \tag{9}$$

$$A_i' X_i + X_i A_i + L_i C_{2i} + C_{2i}' L_i' + \sum_{j=1}^N \lambda_{ij} X_j < 0 \tag{10}$$

$$\begin{bmatrix} Y_i & I \\ I & X_i \end{bmatrix} > 0 \tag{11}$$

for i = 1, …, N, have a feasible solution X_i = X_i', Y_i = Y_i', L_i, and F_i, where

$$R_i(Y) = \begin{bmatrix} \sqrt{\lambda_{i1}}\,Y_i & \cdots & \sqrt{\lambda_{i(i-1)}}\,Y_i & \sqrt{\lambda_{i(i+1)}}\,Y_i & \cdots & \sqrt{\lambda_{iN}}\,Y_i \end{bmatrix},\qquad S_i(Y) = -\mathrm{diag}(Y_1, \dots, Y_{i-1}, Y_{i+1}, \dots, Y_N). \tag{12}$$
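For a concrete feel for the conditions of Theorem 3.1, the LMIs (9)-(11) can be checked by eigenvalue tests on a hand-picked candidate. The scalar two-mode numbers below are invented for illustration; in practice the feasible solution would come from an LMI solver.

```python
import numpy as np

def is_neg_def(M):
    return np.linalg.eigvalsh(0.5 * (M + M.T)).max() < 0

def is_pos_def(M):
    return np.linalg.eigvalsh(0.5 * (M + M.T)).min() > 0

# Scalar example: N = 2 modes, n = 1 (all data invented).
A, B, C2 = [-1.0, -1.0], [1.0, 1.0], [1.0, 1.0]
lam = np.array([[-1.0, 1.0], [1.0, -1.0]])
Y, F, X, L = [1.0, 1.0], [0.0, 0.0], [2.0, 2.0], [0.0, 0.0]

feasible = True
for i in range(2):
    j = 1 - i  # the only other mode
    # (9): Schur-complement form built with R_i(Y) and S_i(Y) from (12)
    lmi9 = np.array([[2*A[i]*Y[i] + 2*B[i]*F[i] + lam[i, i]*Y[i],
                      np.sqrt(lam[i, j])*Y[i]],
                     [np.sqrt(lam[i, j])*Y[i], -Y[j]]])
    # (10): here a scalar inequality
    lmi10 = 2*A[i]*X[i] + 2*L[i]*C2[i] + lam[i, 0]*X[0] + lam[i, 1]*X[1]
    # (11): coupling between Y_i and X_i
    lmi11 = np.array([[Y[i], 1.0], [1.0, X[i]]])
    feasible &= is_neg_def(lmi9) and (lmi10 < 0) and is_pos_def(lmi11)
print("LMIs (9)-(11) feasible for this candidate:", feasible)
```

The candidate (Y_i, X_i, F_i, L_i) = (1, 2, 0, 0) satisfies all three conditions, so by Theorem 3.1 a mean square stabilizing compensator exists for this (already stable) example.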

Proof: For the necessity, let us consider the following partition of P_i, with blocks of dimension n × n:

$$P_i = \begin{bmatrix} P_{1i} & P_{2i} \\ P_{2i}' & P_{3i} \end{bmatrix}. \tag{13}$$

We shall assume, with no loss of generality, that P_{2i} is nonsingular. Since P_i > 0, the Schur complement P_{1i} − P_{2i}P_{3i}^{-1}P_{2i}' is positive definite, and we may define

$$Y_i = (P_{1i} - P_{2i} P_{3i}^{-1} P_{2i}')^{-1} > 0,\qquad T_i = \begin{bmatrix} Y_i & I \\ Y_i & 0 \end{bmatrix},\qquad J_i = \begin{bmatrix} I & 0 \\ -P_{3i}^{-1} P_{2i}' & I \end{bmatrix}$$

so that J_i' P_i J_i = diag(Y_i^{-1}, P_{3i}). Multiplying (7) on the left by T_i'J_i' and on the right by J_iT_i and performing the indicated calculations, one verifies that (9)–(11) hold with Y_i as above, X_i = P_{1i}, F_i = -C_{ci}P_{3i}^{-1}P_{2i}'Y_i, and L_i = P_{2i}B_{ci}, for i = 1, …, N.

For the sufficiency part, suppose that X_i = X_i', Y_i = Y_i', F_i, and L_i, i = 1, …, N, are feasible solutions of (9)–(11). Then, setting for each i

$$A_{ci} = (Y_i^{-1} - X_i)^{-1} M_i Y_i^{-1} \tag{22}$$

$$M_i = -A_i' - X_i A_i Y_i - X_i B_i F_i - L_i C_{2i} Y_i - \sum_{j=1}^N \lambda_{ij} Y_j^{-1} Y_i \tag{23}$$

$$B_{ci} = (Y_i^{-1} - X_i)^{-1} L_i \tag{24}$$

$$C_{ci} = F_i Y_i^{-1} \tag{25}$$

$$P_i = \begin{bmatrix} X_i & Y_i^{-1} - X_i \\ Y_i^{-1} - X_i & X_i - Y_i^{-1} \end{bmatrix} > 0 \tag{26}$$

where (11) guarantees X_i − Y_i^{-1} > 0, so that P_i in (26) is well defined and positive definite, a congruence by T_i as above reduces the left-hand side of (7) to a block matrix whose diagonal blocks are precisely the left-hand sides of (9) and (10); hence (7) is verified for Ã_i given by (8), which completes the proof.

One important aspect refers to the parameterization of the compensators. Notice that Theorem 3.1 selects a particular parameterization, but it also establishes the relations which are satisfied by any compensator G_c that stabilizes the closed-loop system in the stochastic sense. One can always pick the trivial solution P_{2i} = -P_{3i} = Y_i^{-1} - X_i, yielding the compensator adopted in (22), (24), and (25). The H_2 and H_∞-norm problems are not affected by the particular parameterization of G_c, and the trivial solution corresponding to the choice made in Theorem 3.1 will be adopted throughout the paper. Consequently, the matrices P_i assume the particular structure in (26).

IV. THE CONTROL PROBLEM

Let B = (B_1, …, B_N), dim(B_i) = n × r, and D_1 = (D_{11}, …, D_{1N}), dim(D_{1i}) = p × r, for each i = 1, …, N, and consider the controlled system

$$G:\quad \dot{x}_t = A(\theta_t)x_t + B(\theta_t)u_t + E(\theta_t)w_t,\qquad z_t = C_1(\theta_t)x_t + D_1(\theta_t)u_t,\qquad y_t = C_2(\theta_t)x_t + D_2(\theta_t)w_t,\quad t \ge 0. \tag{29}$$

With the compensator G_c in the loop, the closed-loop system has matrices

$$\tilde{A}_i = \begin{bmatrix} A_i & B_i C_{ci} \\ B_{ci} C_{2i} & A_{ci} \end{bmatrix},\qquad \tilde{E}_i = \begin{bmatrix} E_i \\ B_{ci} D_{2i} \end{bmatrix},\qquad \tilde{C}_i = \begin{bmatrix} C_{1i} & D_{1i} C_{ci} \end{bmatrix}.$$

From Remark 2.1, any P = (P_1, …, P_N), P_i > 0, satisfying

$$\tilde{A}_i' P_i + P_i \tilde{A}_i + \sum_{j=1}^N \lambda_{ij} P_j + \tilde{C}_i' \tilde{C}_i < 0,\qquad i = 1,\dots,N \tag{30}$$

provides the upper bound

$$\|G\|_2^2 < \sum_{i=1}^N \mu_i \operatorname{Tr}(\tilde{E}_i' P_i \tilde{E}_i). \tag{31}$$

Notice that (30) ensures stochastic stability for the closed-loop system, and (31) opens the way of approaching solutions to the problem of minimizing ‖G‖_2. The next theorem casts the solution in terms of LMIs; the proof is omitted, since it is immediate from Theorem 3.1, (29)–(31), and Remark 2.1.

A. H_2-Control

Theorem 4.1: The H_2 output feedback control problem is solved by the following LMI problem:

$$\min \sum_{i=1}^N \mu_i \operatorname{Tr}(Z_i) \tag{32}$$

subject to, for i = 1, …, N,

$$\begin{bmatrix} A_i Y_i + Y_i A_i' + B_i F_i + F_i' B_i' + \lambda_{ii} Y_i & Y_i C_{1i}' + F_i' D_{1i}' & R_i(Y) \\ C_{1i} Y_i + D_{1i} F_i & -I & 0 \\ R_i'(Y) & 0 & S_i(Y) \end{bmatrix} < 0 \tag{33}$$

$$A_i' X_i + X_i A_i + L_i C_{2i} + C_{2i}' L_i' + C_{1i}' C_{1i} + \sum_{j=1}^N \lambda_{ij} X_j < 0 \tag{34}$$

$$\begin{bmatrix} Z_i & E_i' & E_i' X_i + D_{2i}' L_i' \\ E_i & Y_i & I \\ X_i E_i + L_i D_{2i} & I & X_i \end{bmatrix} > 0. \tag{35}$$

The corresponding parameters of G_c are

$$C_{ci} = F_i Y_i^{-1},\qquad B_{ci} = (Y_i^{-1} - X_i)^{-1} L_i,\qquad A_{ci} = (Y_i^{-1} - X_i)^{-1} M_i Y_i^{-1}$$

where

$$M_i = -A_i' - X_i A_i Y_i - X_i B_i F_i - L_i C_{2i} Y_i - C_{1i}'(C_{1i} Y_i + D_{1i} F_i) - \sum_{j=1}^N \lambda_{ij} Y_j^{-1} Y_i. \tag{36}$$
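The compensator-reconstruction formulas of Theorem 4.1 are easy to exercise numerically. The sketch below uses the same invented scalar two-mode example as before (with D_1 = D_2 = 0 for simplicity) and then confirms that the resulting closed loop is MSS via the generator of Q_i(t) = E[x_t x_t' 1{θ_t = i}], which must be Hurwitz.

```python
import numpy as np

N = 2
A, B, E = [-1.0, -1.0], [1.0, 1.0], [1.0, 1.0]
C1, C2 = [1.0, 1.0], [1.0, 1.0]
lam = np.array([[-1.0, 1.0], [1.0, -1.0]])
# A feasible candidate (checked against (33)-(35) by hand for this example).
X, Y, F, L = [2.0, 2.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]

Ac, Bc, Cc = [], [], []
for i in range(N):
    coupling = sum(lam[i, j] * (1.0 / Y[j]) * Y[i] for j in range(N))
    M = (-A[i] - X[i]*A[i]*Y[i] - X[i]*B[i]*F[i] - L[i]*C2[i]*Y[i]
         - C1[i]*(C1[i]*Y[i]) - coupling)          # (36), with D1 = 0
    inv = 1.0 / (1.0/Y[i] - X[i])
    Ac.append(inv * M / Y[i])                      # A_ci
    Bc.append(inv * L[i])                          # B_ci
    Cc.append(F[i] / Y[i])                         # C_ci

# Closed-loop matrices (8) and the MSS test.
Atil = [np.array([[A[i], B[i]*Cc[i]], [Bc[i]*C2[i], Ac[i]]]) for i in range(N)]
n = 2
big = np.zeros((N*n*n, N*n*n))
for i in range(N):
    for j in range(N):
        blk = lam[j, i] * np.eye(n*n)              # note lam_{ji} coupling
        if i == j:
            blk += np.kron(np.eye(n), Atil[i]) + np.kron(Atil[i], np.eye(n))
        big[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] = blk
print("closed loop MSS:", np.linalg.eigvals(big).real.max() < 0)
```

With F_i = L_i = 0 the resulting compensator is trivial, but the run still exercises the formulas: M_i = 2 and A_{ci} = -2 in both modes, and the closed-loop generator is Hurwitz as Theorem 4.1 guarantees.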

B. H_∞-Control

Let us turn our attention to the H_∞-control problem with output observations. The H_∞ constraints of Proposition 2.4, written here for the closed-loop system as

$$\tilde{A}_i' P_i + P_i \tilde{A}_i + \tilde{C}_i' \tilde{C}_i + \gamma^{-2} P_i \tilde{E}_i \tilde{E}_i' P_i + \sum_{j=1}^N \lambda_{ij} P_j < 0,\qquad i = 1,\dots,N \tag{37}$$

can be rephrased in LMI form.

Theorem 4.2: The H_∞ constraints (37) are equivalent to LMIs (38) and (39), which augment (9) and (10) with the disturbance terms γ^{-2} E_i E_i' and γ^{-2}(X_i E_i + L_i D_{2i})(X_i E_i + L_i D_{2i})', respectively. The corresponding compensator G_c is given by

$$C_{ci} = F_i Y_i^{-1},\qquad B_{ci} = (Y_i^{-1} - X_i)^{-1} L_i,\qquad A_{ci} = (Y_i^{-1} - X_i)^{-1} M_i Y_i^{-1} \tag{40}$$

where

$$M_i = -A_i' - X_i A_i Y_i - X_i B_i F_i - L_i C_{2i} Y_i - C_{1i}'(C_{1i} Y_i + D_{1i} F_i) - \gamma^{-2}(X_i E_i + L_i D_{2i}) E_i' - \sum_{j=1}^N \lambda_{ij} Y_j^{-1} Y_i.$$
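Proposition 2.4 also yields a simple numerical procedure for bounding ‖G_0‖_∞: bisect on γ, certifying each trial value by exhibiting some P_i > 0 that satisfies the coupled Riccati-type inequality. The sketch below uses an invented scalar two-mode example whose modes are both 1/(s+1), so the true norm is 1; with identical modes a common P_i = p makes the coupling term vanish and a grid search over p suffices.

```python
import numpy as np

A_mode, E_mode, C1_mode = -1.0, 1.0, 1.0

def bound_holds(gamma):
    # Search a common P_i = p > 0 with
    #   2*A*p + gamma**-2 * (p*E)**2 + C1**2 + sum_j lam[i,j]*p < 0,
    # where the coupling term is zero for identical modes.
    for p in np.linspace(0.01, 10.0, 2000):
        lhs = 2*A_mode*p + (p*E_mode)**2 / gamma**2 + C1_mode**2
        if lhs < 0:
            return True  # Proposition 2.4 certifies ||G0||_inf < gamma
    return False

lo, hi = 0.5, 3.0
for _ in range(40):  # bisection on the smallest certifiable gamma
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if bound_holds(mid) else (mid, hi)
print("certified upper bound on ||G0||_inf:", hi)  # close to the true value 1
```

For general (non-identical) modes the inner feasibility test would be an LMI over P_1, …, P_N rather than a scalar grid; the bisection shell is unchanged.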

V. THE FILTERING PROBLEM

Consider A_c = (A_{c1}, …, A_{cN}), dim(A_{ci}) = n × n; B_c = (B_{c1}, …, B_{cN}), dim(B_{ci}) = n × q; and C_c = (C_{c1}, …, C_{cN}), dim(C_{ci}) = p × n, and let the output filter be of the form

$$G_f:\quad \dot{v}_t = A_c(\theta_t)v_t + B_c(\theta_t)y_t,\qquad \hat{z}_t = C_c(\theta_t)v_t,\quad t \ge 0.$$

The H_2-filtering problem is to find G_f ≡ (A_c, B_c, C_c) such that the H_2-norm of the system G_e is minimized, where G_e stands for the system mapping the disturbance w to the estimation error z_t - ẑ_t; the H_∞-filtering problem is stated analogously with the H_∞-norm. As pointed out in Section I, the corresponding LMI optimization problems, which provide mean square stabilizing estimators for the error equation, can be deduced from Theorem 3.1 by the same arguments used for the control problems, and the proofs are omitted.
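The filtering setup can be made concrete as follows: for an invented scalar two-mode system (identical modes, so closed forms are easy to check by hand), the error system G_e is assembled for a hand-picked observer-like filter, and ‖G_e‖_2^2 is computed through the coupled observability Gramians (4).

```python
import numpy as np

def solve_coupled(Ms, lam, Q):
    """Solve M_i' X_i + X_i M_i + sum_j lam[i,j] X_j + Q_i = 0."""
    N, n = len(Ms), Ms[0].shape[0]
    big = np.zeros((N*n*n, N*n*n))
    for i in range(N):
        for j in range(N):
            blk = lam[i, j] * np.eye(n*n)
            if i == j:
                blk += np.kron(np.eye(n), Ms[i].T) + np.kron(Ms[i].T, np.eye(n))
            big[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] = blk
    x = np.linalg.solve(big, -np.concatenate([q.reshape(-1) for q in Q]))
    return [x[i*n*n:(i+1)*n*n].reshape(n, n) for i in range(N)]

lam = np.array([[-1.0, 1.0], [1.0, -1.0]])
mu = np.array([0.5, 0.5])
A, E, C1, C2 = -1.0, 1.0, 1.0, 1.0
Bc = 0.5                      # hand-picked filter gain (not optimal)
Ac, Cc = A - Bc * C2, C1      # observer-like filter, identical in both modes

# Error system G_e: state (x_t, v_t), output e_t = z_t - z_hat_t
Ae = [np.array([[A, 0.0], [Bc * C2, Ac]])] * 2
Ee = [np.array([[E], [0.0]])] * 2
Ce = [np.array([[C1, -Cc]])] * 2

W = solve_coupled(Ae, lam, [Ce[i].T @ Ce[i] for i in range(2)])
h2_sq = sum(mu[i] * (Ee[i].T @ W[i] @ Ee[i]).item() for i in range(2))
print("||G_e||_2^2 =", h2_sq)  # 1/3 for this example
```

For this example the error transfer in each mode is 1/(s + 1.5), whose squared H_2-norm is 1/3, smaller than the value 1/2 obtained with no filter (ẑ ≡ 0); solving the LMI filtering problem would instead optimize over all (A_c, B_c, C_c).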