Downloaded By: [University of Alberta] At: 20:06 20 September 2007

International Journal of Control Vol. 80, No. 10, October 2007, 1636–1650

Robust moving horizon state observer

D. CHU, T. CHEN* and H. J. MARQUEZ

Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada, T6G 2V4

(Received 4 May 2006; in final form 28 May 2007)

In this paper, we develop two robust moving horizon state observer (MHSO) algorithms which are capable of handling system non-linear uncertainties and physical state constraints. The algorithms employ open-loop and closed-loop prediction strategies to convert the design into multi-parametric quadratic programming (mp-QP), and also utilize a novel rewinding optimization to eliminate the conflict between open-loop prediction and closed-loop implementation. Based on the optimal solutions to the mp-QP problems, the MHSO is obtained by a series of offline linear/affine observation policies, and the computational complexity is reduced dramatically. The convergence of the observation errors, one of the challenges for robust MHSO, is also addressed by introducing two auxiliary tuning parameters, the arrival weighting and the arrival observer gain. Finally, a simulation example demonstrates that our algorithms are practical and effective.

1. Introduction

State observer theory has been widely used in many branches of science and engineering, and there exists a rich collection of state observer design methods and algorithms. The Luenberger observer (Luenberger 1971) and the Kalman filter (Kalman 1960), the two most successful observer strategies, were developed for deterministic systems and stochastic systems, respectively. The former is limited to systems with accurate models, neglecting both internal and external uncertainties; the latter considers external uncertainties as white noises, but modelling errors in Kalman filtering systems often lead to poor performance (Peterson and McFarlane 1994). To incorporate modelling uncertainties into state observer design, robust observers have received considerable attention in the past decade, and different kinds of robust observers have been published, e.g., the unknown input observer (UIO) (Xiong and Saif 2003), the spectrum assignment observer (Zitek 1998), LMI-based observers (Lien 2004), the high-gain robust observer (Mahmouda and Khalil 2002), and the input-output observer (Marquez and Riaz 2005).

*Corresponding author. Email: [email protected]

Although these algorithms possess the ideal properties of stability and convergence, few of them combine theoretical analysis with practical issues such as physical system constraints, computational complexity and implementation efficiency. To obtain a new observation method with a wide scope of applications, the moving horizon state observer (MHSO) was proposed by reformulating the design as an optimization problem (Muske et al. 1993, Robertson et al. 1996). MHSO is motivated by the full information state observer, which, however, suffers from the curse of increasing dimension (Findeisen 1997). Unlike the full information state observer, MHSO includes only the most recent measurements and defines the problem within a fixed prediction horizon, so that the problem dimension is fixed and determined by the length of the horizon. This idea originates from the success of finite horizon model predictive control (FH-MPC) (Rawlings 1989, Morari and Lee 1999, Mayne et al. 2000), and a similar scenario is borrowed here. More specifically, an iterative loop of MHSO is composed of four steps: determining initial parameters, predicting future states, solving an optimization problem, and then updating the state observation (Michalska and Mayne 1995, Rao et al. 2001, 2003). Because of its potential to handle state constraints, MHSO has witnessed wide applications to

International Journal of Control ISSN 0020-7179 print/ISSN 1366-5820 online © 2007 Taylor & Francis http://www.tandf.co.uk/journals DOI: 10.1080/00207170701473979


different physical systems in the past decade, for example, the observation of biomass concentration in CHO animal cell cultures (Gonzalez et al. 2003, Raissi et al. 2005). The arrival cost, one of the fundamental concepts in MHSO, is proposed to summarize the effect of the past data ahead of the current prediction horizon (Robertson et al. 1996). It can be shown that if we can compute the explicit solution to the arrival cost, the convergence of the estimation error, i.e., the stability of MHSO, can easily be guaranteed by solving a Riccati equation. For instance, the Kalman filter, as a special case of MHSO with no state constraints and a unit prediction horizon, achieves stability in this fashion. In the general case, however, computing an explicit solution to the arrival cost remains an open problem (Haseltine and Rawlings 2005). Rao et al. (2001) gave a sufficient condition for the stability of MHSO employing an approximation of the arrival cost, but one assumption was critical: the system must have a precise model. To remove this limitation, in this paper we propose an extended MHSO, namely robust MHSO (RMHSO), for systems with both internal uncertainties and external disturbances. The importance of RMHSO can also be seen from another point of view: RMHSO is critical for explicit MPC systems whose states are unmeasured or only partially available. Because of the nature of offline MPC, which employs a series of affine control policies corresponding to state-space partitions, it is mandatory to combine state constraints with the observer formulation; otherwise there is no way to implement offline controllers (Bemporad et al. 2002a,b, Chu et al. 2006). To preserve the advantages of offline MPC (for instance, small implementation cost), the associated observer should likewise be featured as offline optimization and online implementation.
Therefore, the purpose of this paper is to develop an offline MHSO algorithm in the presence of internal uncertainties and external disturbances. There are three main contributions in this article: first, we construct a state observer method in the presence of uncertainties and constraints; second, rewinding closed-loop prediction is employed in the offline observer design, which dramatically improves the implementation efficiency; third, the RMHSO is obtained in two approaches, open-loop MHSO and closed-loop MHSO, and by comparison we provide the advantages and disadvantages of these two prediction strategies. As a non-trivial problem, robust observer stability is also covered in this paper. By constructing two tuning parameters, the arrival weighting Q0 and the arrival observer gain L, we are able to use the objective function of RMHSO as a Lyapunov function candidate, and then, by properly choosing Q0 and L, we guarantee the convergence of the candidate. The observer stability is


obtained and the computational complexity of the arrival cost is avoided. As a technical innovation, the rewinding closed-loop prediction is employed in the optimization loops. From Lee and Yu (1997), it can be seen that closed-loop prediction eliminates the conflict between open-loop optimization and closed-loop implementation. In other words, the closed-loop prediction scheme accounts for the fact that only the first element of the manipulated sequence is implemented, and the future observation is determined from new optimization problems after feedback updates. Due to the curse of computational complexity, closed-loop prediction has so far found inadequate acceptance, but we will turn to one-step observation based on closed-loop prediction to overcome this drawback. Thanks to the structure of rewinding optimization, although only a one-step prediction is performed within an optimization loop, we can simulate a moving horizon state observer with an arbitrary prediction horizon.

The rest of this paper includes six sections. In §2, we present the model of the robust moving horizon state observer and pose the issue of observer stability. In §3, we address the details of offline RMHSO using open-loop and rewinding closed-loop prediction. In §4, we list the steps of offline RMHSO and extend MHSO to systems with measurement noises. In §5, we use a simulation example to demonstrate the effectiveness of our algorithms. Finally, in §6, we draw the conclusions.

Notation:
- $\mathcal S^n_+$ ($\mathcal S^n_{++}$) denotes the space of symmetric non-negative (positive) definite $n \times n$ matrices, and $\mathcal D^n_+$ ($\mathcal D^n_{++}$) the space of diagonal non-negative (positive) definite matrices.
- $\|v\|^2_P := v^T P v$ denotes the weighted 2-norm of a vector $v$, where $P \in \mathcal S^n_+$. The symbols $\bar\sigma(X)$ and $\underline\sigma(X)$ are the maximal and minimal singular values of a matrix $X$.
- $\hat x(k-N+i)$ denotes the $i$th predicted observation over the $k$th prediction horizon given the initial value $\hat x(k-N)$, i.e., $\hat x(k-N+i) := \hat x(k-N+i\,|\,k)$ for ease of notation.
- $x_j$ is the $j$th element of a vector $x$, $X_j$ is the $j$th row of a matrix $X$, and $X_{ij}$ is the $ij$th element.
- The superscript 'o' stands for the corresponding optimal or sub-optimal solution, e.g., $x^o$.
- $\preceq$ ($\prec$) and $\succeq$ ($\succ$) denote the generalized element-wise (strict) inequality signs, i.e., $e \preceq e_{\max} \Leftrightarrow e_j \le e_{\max,j}$ for all $j$.
- The sign "$\cdot$" is used to indicate the independent variables of a function, whose definition can be inferred from the context; e.g., $f(x,k)$ is sometimes written as $f(\cdot)$ without special indication.
- $f_{k_1\to k_2}$ denotes the sequence $\{\hat f(k_1),\ldots,\hat f(k_2)\}$, and similarly for $u_{k_1\to k_2}$, $\hat x_{k_1\to k_2}$ and $e_{k_1\to k_2}$.
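As a minimal sketch, the notation above maps onto numpy as a few small helpers (the function names are ours, not the paper's):

```python
import numpy as np

def weighted_norm_sq(v, P):
    """||v||_P^2 := v^T P v, for P in S^n_+ (see Notation)."""
    v = np.asarray(v, dtype=float)
    return float(v @ P @ v)

def sigma_max(X):
    """Maximal singular value, written sigma-bar(X) in the paper."""
    return float(np.linalg.svd(np.atleast_2d(X), compute_uv=False)[0])

def sigma_min(X):
    """Minimal singular value of X."""
    return float(np.linalg.svd(np.atleast_2d(X), compute_uv=False)[-1])

def elementwise_leq(e, e_max):
    """Generalized element-wise inequality: e <= e_max componentwise."""
    return bool(np.all(np.asarray(e) <= np.asarray(e_max)))
```

These helpers are reused implicitly throughout the later sections, e.g., the constraint sets (D1.1)-(D1.2) are exactly `elementwise_leq` checks.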




2. Robust MHSO

We consider a system with a non-linear uncertain term as follows:
$$ x(k+1) = A x(k) + B u(k) + f(x(k), d(k), k), \qquad y(k) = C x(k) + v(k). \qquad (1) $$
Here $x(k) \in \mathbb R^n$ stands for the state, $u(k) \in \mathbb R^m$ the input, $y(k) \in \mathbb R^q$ the output, $v(k) \in \mathbb R^q$ the measurement error and $d(k) \in \mathbb R^l$ a combination of input and state disturbances. To simplify the design, we first assume $v(k) \equiv 0$ and will discuss the case $v(k) \neq 0$ in §4. The matrices $A$ and $B$ are constant and of appropriate dimensions. We assume the output matrix $C$ has full row rank and the pair $(C, A)$ is observable. Suppose that the states and disturbances obey the constraints
$$ x(k) \in \mathcal A_x, \qquad v(k) \in \mathcal A_v \qquad \text{and} \qquad d(k) \in \mathcal A_d, \qquad (2) $$
where $\mathcal A_x$ ($\mathcal A_v$) is the admissible state (noise) set defined by a set of generalized element-wise inequalities, and $\mathcal A_d$ is the admissible disturbance set defined by an ellipsoidal invariant set, i.e.,
$$ (D1.1)\quad \mathcal A_x := \{x \in \mathbb R^n \mid x_{\min} \preceq x \preceq x_{\max},\; x_{\min}, x_{\max} \in \mathbb R^n\}, $$
$$ (D1.2)\quad \mathcal A_v := \{v \in \mathbb R^q \mid v_{\min} \preceq v \preceq v_{\max},\; v_{\min}, v_{\max} \in \mathbb R^q\}, $$
$$ (D1.3)\quad \mathcal A_d := \{d \in \mathbb R^l \mid d^T(k)\, W_d\, d(k) \le 1,\; W_d \in \mathcal S^l_+\}. $$
The non-linear term $f : \mathcal A_x \times \mathcal A_d \times \mathbb R_+ \to \mathbb R^n$ reflects the composition of internal and external uncertainties, and satisfies
$$ \| f(x(k), d(k), k) \|_2 \le \varepsilon \quad (\varepsilon \ge 0). \qquad (3) $$

In fact, many structured internal and external uncertainties can be reformulated into the form of (3).

Case I (external uncertainties): The function $f(k)$ is explicitly expressed by
$$ f(x(k), d(k), k) = B_d\, d(k), $$
where $d(k) \in \mathcal A_d$ and $B_d$ is a constant matrix. Based on definition (D1.3), we can see that
$$ \|B_d\, d(k)\|_2 = \|B_d W_d^{-1/2} W_d^{1/2} d(k)\|_2 \le \bar\sigma(B_d W_d^{-1/2}), $$
which is recast into the form of (3).

Case II (internal uncertainties): The widely used structured uncertainties in the feedback loop (Kothare et al. 1996) can also be converted into (3). Consider the system
$$ x(k+1) = (A + W_L \Delta(k) W_R)\, x(k), \qquad (4) $$
where $\Delta(k) = \mathrm{diag}(\Delta_1(k), \ldots, \Delta_r(k))$ with $\bar\sigma(\Delta_i(k)) \le \rho_i$, and $W_L$, $W_R$ are constant scaling matrices with $W_L$ invertible. Performing the similarity transformation on (4), i.e., setting $x(k) = W_L z(k)$, we have
$$ z(k+1) = W_L^{-1} A W_L\, z(k) + \Delta(k) W_R W_L\, z(k). $$
Because $x(k) \in \mathcal A_x$, there exists a constant $\zeta$ such that $\|z(k)\|_2 \le \zeta$. Denoting $f(x(k), d(k), k) := \Delta(k) W_R W_L z(k)$, we consequently have
$$ \| f(x(k), d(k), k) \|_2 \le \Big(\max_i \rho_i\Big)\, \bar\sigma(W_R W_L)\, \zeta, $$
which is also in the form of (3).

To proceed with the further discussion, we first assume $v(k) \equiv 0$ and focus on the system
$$ x(k+1) = A x(k) + B u(k) + f(x(k), d(k), k), \qquad y(k) = C x(k), \qquad (5) $$
where $x(k) \in \mathcal A_x$ and $\|f(\cdot)\|_2 \le \varepsilon$, for designing a robust moving horizon state observer.

2.1 Formulation of RMHSO

Based on the state-space model in (5), the observer is given by
$$ \hat x(k+1) = A \hat x(k) + B u(k) + \hat f(k), \qquad \hat y(k) = C \hat x(k), \qquad (6) $$
where $\hat x(k) \in \mathbb R^n$ is the estimated state, $\hat y(k) \in \mathbb R^q$ the estimated output, and $\hat f(k) \in \mathbb R^n$ the estimated disturbance. Given the model in (6) and the past estimated state $\hat x(k-N)$, we predict the intermediate observations $\hat x(k-N+i)$:
$$ \hat x(k-N+i) = A^i \hat x(k-N) + \sum_{j=0}^{i-1} A^{i-1-j} B u(k-N+j) + \sum_{j=0}^{i-1} A^{i-1-j} \hat f(k-N+j), \qquad (7) $$
where $i \in [0, N]$ is the index of the estimated signals, and the sequence $\hat x_{k-N\to k}$ collects the observation components over the $k$th prediction horizon. $\hat x(k-N)$ is the initial condition of the $k$th prediction horizon and is optimized by the $(k-N)$th prediction horizon. Obviously, if we can optimize the estimated sequence $\hat f^o_{k-N\to k}$, the current estimated state $\hat x(k)$ can be represented by
$$ \hat x(k) = A^N \hat x(k-N) + \sum_{j=0}^{N-1} A^j B u(k-1-j) + \sum_{j=0}^{N-1} A^j \hat f(k-1-j). \qquad (8) $$
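As a minimal numerical sketch (the function name is ours), the predictions (7) and the reconstruction (8) follow from iterating the observer model (6) one step at a time:

```python
import numpy as np

def predict_horizon(A, B, x0, u_seq, fhat_seq):
    """Iterate the observer model (6) to reproduce the horizon prediction (7):
    xhat(k-N+i) = A^i xhat(k-N) + sum_j A^{i-1-j} B u(k-N+j)
                                + sum_j A^{i-1-j} fhat(k-N+j)."""
    xs = [np.asarray(x0, dtype=float)]
    for u, f in zip(u_seq, fhat_seq):
        # one-step recursion; unrolling it over i steps gives exactly (7)
        xs.append(A @ xs[-1] + B @ np.asarray(u, float) + np.asarray(f, float))
    return xs  # xs[i] = xhat(k-N+i), so xs[-1] is xhat(k) as in (8)
```

Given the optimized sequence of estimated disturbances, the current estimate $\hat x(k)$ is simply the last entry of the returned list.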


Retain the value of $\hat x(k)$, reject the intermediate observations $\hat x_{k-N\to k-1}$ and repeat the above procedure. Finally, we can obtain the state observation at any instant $k$.

Equation (8) shows the essential difference between MHSO and MPC regulation. MPC predicts future states/outputs based on current measurements, and after determining the optimal input sequence $u_{k\to k+N-1}$, it retains the first element $u(k)$ and rejects the intermediate solutions, including the predicted states and the rest of the optimal inputs. MHSO is different: starting from the past observation $\hat x(k-N)$, it predicts the intermediate observations up to the current observation $\hat x(k)$. After determining the optimal sequence $\hat f_{k-N\to k}$, it calculates $\hat x^o(k)$ and then rejects all of the intermediate variables $\hat x_{k-N\to k-1}$. Essentially, MPC performs the optimization loop in a forward manner, whereas MHSO does it in a backward way which, fortunately, coincides with the nature of closed-loop prediction (Lee and Yu 1997). The rule in (8) makes it straightforward to convert the MHSO design into an optimization problem with rewinding (closed-loop) prediction.

Definition 1: The design of robust MHSO (RMHSO) for the system with internal and external uncertainties in (5) is the constrained optimization problem
$$ \min_{\hat f_{k-N\to k-1}} J_{k-N\to k}, \qquad (9) $$
$$ J_{k-N\to k} = \|C\hat x(k) - y(k)\|^2_{Q_0} + \sum_{j=k-N}^{k-1} \Big( \|C\hat x(j) - y(j)\|^2_{Q} + \|\hat f(j)\|^2_{R} \Big), \qquad (10) $$
subject to
$$ \hat x(k-N+i) = A^i \hat x(k-N) + \sum_{j=0}^{i-1} A^{i-1-j} B u(k-N+j) + \sum_{j=0}^{i-1} A^{i-1-j} \hat f(k-N+j) \quad (1 \le i \le N), $$
$$ \hat f(k-1) = L\big(C\hat x(k-1) - y(k-1)\big), \qquad \hat x(k-N+i) \in \mathcal A_x, $$
where $Q \in \mathcal S^q_+$ and $R \in \mathcal D^n_{++}$ are weightings, $Q_0 \in \mathcal S^q_+$ and $L$ are the arrival weighting and the arrival observer gain, respectively, which are constructed for robust observer stability, and the pair $(C, A)$ is observable.

From (5) and (6), we can write down the model of the observation errors,
$$ e(k+1) = A e(k) + \hat f(k) - f(x(k), d(k), k), \qquad (11) $$
where $e(k) := \hat x(k) - x(k)$. Therefore, the robust stability of the state observer is converted into a problem on the convergence of $e(k)$ in the presence of the uncertain term $f(\cdot)$ in (11).

Definition 2: The observer in (6) is stable for the system with internal and external uncertainties in (5) if for any $\bar\varepsilon > 0$ there exist a number $\bar\delta > 0$ and a positive integer $T$ such that if $\|e(0)\| \le \bar\delta$ and $\hat x(0) \in \mathcal A_x$, then $\|e(k)\| \le \bar\varepsilon$ and $\hat x(k) \in \mathcal A_x$ for all $k \ge T$. The admissible state set $\mathcal A_x$ and the observation error dynamics are given in (D1.1) and (11), respectively.

2.2 Robust observation stability

To guarantee the stability of the robust observer in (11), we employ the objective function (10) as a Lyapunov function candidate, so that we have the Lyapunov functions $V(k) := J_{k-N\to k}$ and $V(k+1) := J_{k-N+1\to k+1}$. From (10), the difference of the Lyapunov functions is given by
$$ \tilde V = V(k+1) - V(k) = \|e(k+1)\|^2_{\hat Q_0} + \|e(k)\|^2_{\hat Q} + \|\hat f(k)\|^2_{R} - \|e(k)\|^2_{\hat Q_0} - \|e(k-N)\|^2_{\hat Q} - \|\hat f(k-N)\|^2_{R}, \qquad (12) $$
where $\hat Q_0 = C^T Q_0 C$ and $\hat Q = C^T Q C$. In Definition 1, we proposed the arrival observer gain $L$, satisfying
$$ \hat f(k) = L\big(C\hat x(k) - y(k)\big) = LC\, e(k). \qquad (13) $$
Inserting (11) and (13) into the difference of the Lyapunov functions in (12), we have
$$ \tilde V = \|e(k)\|^2_{Q_{tot}} + \|f(\cdot)\|^2_{\hat Q_0} - 2 e(k)^T (A + LC)^T \hat Q_0\, f(\cdot) - \|e(k-N)\|^2_{\hat Q} - \|\hat f(k-N)\|^2_{R}, $$
where
$$ Q_{tot} := (A + LC)^T \hat Q_0 (A + LC) + \hat Q + (LC)^T R\, (LC) - \hat Q_0. $$
To guarantee stability, we need $\tilde V \le 0$, i.e.,
$$ \|e(k)\|^2_{Q_{tot}} + \|f(\cdot)\|^2_{\hat Q_0} - 2 e(k)^T (A + LC)^T \hat Q_0\, f(\cdot) - \|e(k-N)\|^2_{\hat Q} - \|\hat f(k-N)\|^2_{R} \le 0. \qquad (14) $$
With $\gamma > 0$ and $P \in \mathcal S^n_{++}$ constructed as tuning parameters, which are critical to the robustness of RMHSO, we have a pair of sufficient conditions for (14):
$$ (A + LC)^T \hat Q_0 (A + LC) + \hat Q + (LC)^T R\, (LC) - \hat Q_0 + \gamma P = 0, \qquad (15) $$
$$ \|f(\cdot)\|^2_{\hat Q_0} - 2 e(k)^T (A + LC)^T \hat Q_0\, f(\cdot) - \|\hat f(k-N)\|^2_{R} - \gamma \|e(k)\|^2_{P} \le 0. \qquad (16) $$
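Once $L$, $\gamma$ and $P$ are fixed, (15) is linear in $\hat Q_0$ and can be rearranged as a discrete Lyapunov equation. A minimal numerical sketch, with toy system matrices assumed purely for illustration:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy, assumed data: a 2-state, 1-output system with L chosen so that
# A + L C is Schur stable (all eigenvalues inside the unit circle).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[-0.5],
              [-0.1]])
Q = np.array([[1.0]])      # output weighting Q
R = 0.1 * np.eye(2)        # disturbance weighting R (diagonal, positive definite)
gamma, P = 0.5, np.eye(2)  # tuning parameters gamma and P from (15)

Acl = A + L @ C
Qhat = C.T @ Q @ C
# (15) rearranged: Acl^T Qhat0 Acl - Qhat0 + (Qhat + (LC)^T R (LC) + gamma*P) = 0,
# a discrete Lyapunov equation in the transformed arrival weighting Qhat0.
W = Qhat + (L @ C).T @ R @ (L @ C) + gamma * P
Qhat0 = solve_discrete_lyapunov(Acl.T, W)

# Recover the arrival weighting Q0 via (17), using the full row rank of C.
CCT_inv = np.linalg.inv(C @ C.T)
Q0 = CCT_inv @ C @ Qhat0 @ C.T @ CCT_inv

residual = Acl.T @ Qhat0 @ Acl + W - Qhat0  # should vanish when (15) holds
```

The residual check confirms the returned $\hat Q_0$ satisfies (15); positive definiteness of $\hat Q_0$ follows here because $A + LC$ is Schur stable and $W$ is positive definite.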




$\|e(k-N)\|^2_{\hat Q}$ is the initial observation error term, which is non-negative and omitted here. Apparently, (15) is a Riccati equation with the unknown variables being the arrival observer gain $L$ and the transformed arrival weighting $\hat Q_0$. It is not hard to derive $Q_0$ from the solution of (15):
$$ Q_0 = (CC^T)^{-1} C \hat Q_0 C^T (CC^T)^{-1}, \qquad \hat Q_0 \in \mathcal S^n_+. \qquad (17) $$
Note that we assume $C$ has full row rank, so that the pseudo-inverses $(CC^T)^{-1}C$ and $C^T(CC^T)^{-1}$ exist. It can be seen that no matter what tuning parameters $\gamma$ and $P$ are chosen, we can always derive $L$ and $Q_0$ from (15) and (17). Therefore, the feasibility of (16) plays a critical role in the robust stability analysis.

Lemma 1 (Wang et al. 1992): Let $X$, $Y$ be real constant matrices of compatible dimensions. Then
$$ X^T Y + Y^T X \le \varepsilon X^T X + \frac{1}{\varepsilon} Y^T Y $$
holds for any $\varepsilon > 0$.

Theorem 1: The observer in (6) is robustly stable for the constrained system with internal and external uncertainties in (5) if the arrival weighting $Q_0$ and the arrival observer gain $L$ are determined by the Riccati equation in (15), and the estimated disturbance $\hat f(k-N)$ is solved by minimizing the semi-definite program
$$ \min \varepsilon, \qquad \text{s.t.} \quad \begin{bmatrix} \gamma P & (A+LC)^T \hat Q_0 \\ \hat Q_0 (A+LC) & \varepsilon \hat Q_0 \end{bmatrix} \succ 0, $$
and satisfies
$$ \big|\mathrm{row}(R^{1/2})\, \hat f(k-N)\big| \ge (n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon, $$
where $\mathrm{row}(R^{1/2}) := [R^{1/2}_{11}, \ldots, R^{1/2}_{nn}]$ is a row vector composed of the diagonal elements of $R^{1/2}$, and $\varepsilon$ is the uncertainty bound defined in (3).

Proof: Following the conditions in (15) and (16) and applying Lemma 1, we have
$$ -2 e(k)^T (A+LC)^T \hat Q_0\, f(\cdot) \le \varepsilon f^T(\cdot) \hat Q_0 f(\cdot) + \frac{1}{\varepsilon}\, e(k)^T (A+LC)^T \hat Q_0 (A+LC)\, e(k) \quad (\varepsilon > 0). $$
Therefore, a sufficient condition for (16) is
$$ \frac{1}{\varepsilon}\, e(k)^T (A+LC)^T \hat Q_0 (A+LC)\, e(k) - \gamma \|e(k)\|^2_P + (1+\varepsilon)\, f^T(\cdot) \hat Q_0 f(\cdot) - \|\hat f(k-N)\|^2_R \le 0. \qquad (18) $$
So if the conditions
$$ \gamma P - \frac{1}{\varepsilon}\, (A+LC)^T \hat Q_0 (A+LC) \succ 0, \qquad (19) $$
$$ \|\hat f(k-N)\|^2_R \ge (1+\varepsilon)\, f^T(\cdot) \hat Q_0 f(\cdot), \qquad (20) $$
are satisfied simultaneously, the condition in (18) is obtained. To minimize $\|\hat f(k-N)\|^2_R$, we minimize the positive scalar $\varepsilon$. Consequently, (19) can be recast as a semi-definite optimization problem: performing a Schur complement, we have
$$ \min \varepsilon, \qquad \text{s.t.} \quad \begin{bmatrix} \gamma P & (A+LC)^T \hat Q_0 \\ \hat Q_0 (A+LC) & \varepsilon \hat Q_0 \end{bmatrix} \succ 0. \qquad (21) $$
Condition (20) is equivalent to
$$ \sum_{i=1}^{n} \Big( \sqrt{R_{ii}}\, \hat f_i(k-N) \Big)^2 \ge (1+\varepsilon)\, f^T(\cdot) \hat Q_0 f(\cdot). $$
Since
$$ \sum_{i=1}^{n} \Big( \sqrt{R_{ii}}\, \hat f_i(k-N) \Big)^2 \ge \frac{1}{n} \Big( \mathrm{row}(R^{1/2})\, \hat f(k-N) \Big)^2 $$
and, by (3), $f^T(\cdot) \hat Q_0 f(\cdot) \le \bar\sigma(\hat Q_0)\, \varepsilon^2$, a sufficient condition for (20) is
$$ \big|\mathrm{row}(R^{1/2})\, \hat f(k-N)\big| \ge (n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon. \qquad (22) $$
Note that $R \in \mathcal D^n_{++}$ and $\hat f(k-N) \in \mathbb R^n$. Theorem 1 is then proven. □

Remark 1: The feasibility of the semi-definite optimization problem in (21) is strongly related to the selection of the tuning parameters $\gamma$ and $P$. Roughly speaking, if we choose an appropriate pair of $\gamma$ and $P$ (large enough), robust stability can always be satisfied.

Remark 2: After determining the values of $\hat f_i(k-N)$, $Q_0$ and $L$, we can calculate the upper bound of $\varepsilon$ that satisfies both conditions (19) and (20). This upper bound reflects the robustness of our algorithm, i.e., by adjusting the values of $\gamma$ and $P$, we can obtain some tradeoff


between the performance and the stability of our robust observers.
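As a quick numerical sanity check, under assumed toy matrices, the reconstructed block form of (21) can be compared against condition (19) via the Schur complement:

```python
import numpy as np

# Toy, assumed matrices: verify numerically that the block matrix of (21),
#   [[gamma*P, Acl^T Qhat0], [Qhat0 Acl, eps*Qhat0]] > 0,
# agrees with condition (19): gamma*P - (1/eps) Acl^T Qhat0 Acl > 0,
# which is valid whenever eps*Qhat0 is positive definite.
Acl = 0.5 * np.eye(2)        # A + L C (assumed, Schur stable)
Qhat0 = np.eye(2)            # transformed arrival weighting (positive definite)
gammaP = np.eye(2)           # the product gamma * P
eps = 1.0

M = np.block([[gammaP, Acl.T @ Qhat0],
              [Qhat0 @ Acl, eps * Qhat0]])
lmi_ok = bool(np.all(np.linalg.eigvalsh(M) > 0))

schur = gammaP - (1.0 / eps) * Acl.T @ Qhat0 @ Acl  # condition (19)
cond19_ok = bool(np.all(np.linalg.eigvalsh(schur) > 0))
```

For these numbers both tests pass, illustrating the equivalence claimed in the proof of Theorem 1; in practice (21) would be handed to an SDP solver with $\varepsilon$ as a decision variable.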


3. Optimization

From the above discussion, we know that the RMHSO design can be converted into a quadratic program and, in association with Theorem 1, robust stability is guaranteed. In 2002, Bemporad and co-workers reformulated the nominal MPC problem as an mp-QP regulation and developed offline affine/linear MPC policies (Bemporad et al. 2002a). In this section, we first follow the idea of Bemporad et al. (2002a) to develop an open-loop MHSO, and then extend the design to an MHSO with the rewinding closed-loop prediction.

3.1 Robust MHSO using open-loop prediction

From (7), we can predict the $N$ upcoming observations $\hat x_{k-N+1\to k}$, so that the objective $J_{k-N\to k}$ can be rewritten as an expression of $\hat x(k-N)$, i.e.,
$$ J_{k-N\to k} = \|C\hat x(k-N) - y(k-N)\|^2_Q + \|\mathcal C \mathcal A \hat x(k-N) + \mathcal C \mathcal B U + \mathcal C \mathcal B_F F - Y\|^2_{\mathcal Q} + \|F\|^2_{\mathcal R}, \qquad (23) $$
where the augmented matrices are given by
$$ \mathcal A = \begin{bmatrix} A \\ \vdots \\ A^N \end{bmatrix}, \qquad \mathcal B = \begin{bmatrix} B & \cdots & 0 \\ \vdots & \ddots & \vdots \\ A^{N-1} B & \cdots & B \end{bmatrix}, \qquad \mathcal B_F = \begin{bmatrix} I & \cdots & 0 \\ \vdots & \ddots & \vdots \\ A^{N-1} & \cdots & I \end{bmatrix}, $$
$$ U = \big[u(k-N)^T, \ldots, u(k-1)^T\big]^T, \qquad F = \big[\hat f(k-N)^T, \ldots, \hat f(k-1)^T\big]^T, $$
$$ Y = \big[y(k-N+1)^T, \ldots, y(k-1)^T, y(k)^T\big]^T, \qquad \mathcal C = \mathrm{diag}(C, \ldots, C), $$
$$ \mathcal Q = \mathrm{diag}(Q, \ldots, Q, Q_0), \qquad \mathcal R = \mathrm{diag}(R, \ldots, R). \qquad (24) $$
Proceeding further, (23) becomes a standard mp-QP problem, i.e.,
$$ J_{k-N\to k} = \tfrac12 F^T \Phi F + \hat x^T(k-N)\, \Lambda F + (\mathcal C \mathcal B U - Y)^T \Psi F + W, \qquad (25) $$
where
$$ \Phi = 2\big((\mathcal C \mathcal B_F)^T \mathcal Q\, (\mathcal C \mathcal B_F) + \mathcal R\big), \qquad \Lambda = 2\, (\mathcal C \mathcal A)^T \mathcal Q\, (\mathcal C \mathcal B_F), \qquad \Psi = 2\, \mathcal Q\, (\mathcal C \mathcal B_F), $$
and $W$ is the residual term, independent of $F$ and determined by the variables in (24). Notice that $\mathcal Q$ contains the arrival weighting $Q_0$, which is fundamental to closed-loop stability, and $\Phi \in \mathcal S_{++}$.

Theorem 2: The optimal estimated disturbance vector $F$ in (23) is determined by an mp-QP problem with element-wise inequality and equality constraints.

Proof: Employing the notation in (24), the constraint $\hat x_{k-N+1\to k} \in \mathcal A_x$ can be explicitly expressed by
$$ X_{\min} \preceq \mathcal A \hat x(k-N) + \mathcal B U + \mathcal B_F F \preceq X_{\max}, \qquad (26) $$
where $X_{\min} := [x_{\min}^T, \ldots, x_{\min}^T]^T$ and $X_{\max} := [x_{\max}^T, \ldots, x_{\max}^T]^T$. For closed-loop stability, the condition in (22) has to be satisfied, i.e.,
$$ \Upsilon F \preceq -(n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon, \qquad (27) $$
where $\Upsilon := (-1)^{\kappa} \big[\mathrm{row}(R^{1/2}), 0, \ldots, 0\big]$ ($\kappa = 0$ or $1$). As we know, the arrival observer gain $L$ is determined by a Riccati equation; namely,
$$ \hat f(k-1) = L C \hat x(k-1) - L y(k-1) = L C A^{N-1} \hat x(k-N) + L C A^{N-2} B u(k-N) + \cdots + L C B u(k-2) $$
$$ \qquad +\, L C A^{N-2} \hat f(k-N) + L C A^{N-3} \hat f(k-N+1) + \cdots + L C \hat f(k-2) - L y(k-1), \qquad (28) $$
or, equivalently,
$$ \Gamma_F F = \Gamma_x \hat x(k-N) + \Gamma_U U. \qquad (29) $$
Combining (27) with (26), we have a piecewise inequality constraint,
$$ G_1 F \preceq G_2 \hat x(k-N) + G_3, \qquad (30) $$
where
$$ G_1 = \begin{bmatrix} \mathcal B_F \\ -\mathcal B_F \\ \Upsilon \end{bmatrix}, \qquad G_2 = \begin{bmatrix} -\mathcal A \\ \mathcal A \\ 0 \end{bmatrix}, \qquad G_3 = \begin{bmatrix} X_{\max} - \mathcal B U \\ -X_{\min} + \mathcal B U \\ -(n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon \end{bmatrix}. $$
Therefore, subject to the constraints (29) and (30), the design of RMHSO in (25) is converted into a



constrained mp-QP problem:
$$ J^o_{k-N\to k} = \min_{F} \Big\{ \tfrac12 F^T \Phi F + \hat x^T(k-N)\, \Lambda F + (\mathcal C \mathcal B U - Y)^T \Psi F + W \Big\}, $$
$$ \text{s.t.} \quad \Gamma_F F = \Gamma_x \hat x(k-N) + \Gamma_U U, \qquad G_1 F \preceq G_2 \hat x(k-N) + G_3. \qquad (31) $$
Theorem 2 is then proven. □

Theorem 3: The analytic (explicit) solutions to the mp-QP problem in (31), which is defined for MHSO using open-loop prediction, are piecewise affine functions of $\hat x(k-N)$ over the corresponding state critical regions $\mathcal A^j_x$, where the index $j$ denotes the $j$th partition within the admissible state set $\mathcal A_x$.

Proof: Taking advantage of two Lagrange multipliers $\lambda_1 \succeq 0$, $\lambda_2$ (unrestricted in sign) and a slack variable $s$, (31) can be converted into an unconstrained version. From the first-order Karush-Kuhn-Tucker (KKT) theorem, the optimality conditions for (31) are known as
$$ \Phi F + \Lambda^T \hat x(k-N) + \Psi^T (\mathcal C \mathcal B U - Y) + G_1^T \lambda_1 + \Gamma_F^T \lambda_2 = 0, \qquad (32) $$
$$ \big(G_1 F - G_2 \hat x(k-N) - G_3 + s\big)^T \lambda_1 = 0, \qquad (33) $$
$$ \Gamma_F F - \Gamma_x \hat x(k-N) - \Gamma_U U = 0. \qquad (34) $$
Based on the properties of optimization duality, $\lambda_1$ can be divided into two parts, i.e., $\lambda_N = 0$ (non-active constraints, $s > 0$) and $\lambda_A \succeq 0$ (active constraints, $s = 0$), where $\lambda_1 = [\lambda_N^T, \lambda_A^T]^T$. From (32) we have
$$ F = -\Phi^{-1} \Lambda^T \hat x(k-N) - \Phi^{-1} \Psi^T (\mathcal C \mathcal B U - Y) - \Phi^{-1} \tilde G_1^T \lambda_A - \Phi^{-1} \Gamma_F^T \lambda_2, \qquad (35) $$
$$ \tilde G_1 F - \tilde G_2 \hat x(k-N) - \tilde G_3 = 0, \qquad (36) $$
where $\tilde G_1$, $\tilde G_2$, $\tilde G_3$ are the combinations of the active constraints out of $G_1$, $G_2$, $G_3$, with maximal full row rank. Inserting (35) into (34), we have
$$ \lambda_2 = G_{\lambda\lambda}\, \lambda_A + G^x_{\lambda_2}\, \hat x(k-N) + G_{\lambda_2}, \qquad (37) $$
where
$$ G_{\lambda\lambda} := -\big(\Gamma_F \Phi^{-1} \Gamma_F^T\big)^{-1} \Gamma_F \Phi^{-1} \tilde G_1^T, $$
$$ G^x_{\lambda_2} := -\big(\Gamma_F \Phi^{-1} \Gamma_F^T\big)^{-1} \big(\Gamma_F \Phi^{-1} \Lambda^T + \Gamma_x\big), $$
$$ G_{\lambda_2} := -\big(\Gamma_F \Phi^{-1} \Gamma_F^T\big)^{-1} \big(\Gamma_F \Phi^{-1} \Psi^T (\mathcal C \mathcal B U - Y) + \Gamma_U U\big). \qquad (38) $$
Notice that $\lambda_2$ corresponds to the equality constraints and is unrestricted in sign, and that $\Gamma_F$ has full row rank (refer to (29)). Inserting (37) and (35) into (36), we finally derive the explicit solution for $\lambda_A$ as follows:
$$ \lambda_A = G^x_{\lambda_A}\, \hat x(k-N) + G_{\lambda_A}, \qquad (39) $$
where
$$ G^x_{\lambda_A} := -\big(\tilde G_1 \Phi^{-1} \tilde G_1^T + \tilde G_1 \Phi^{-1} \Gamma_F^T G_{\lambda\lambda}\big)^{-1} \big(\tilde G_1 \Phi^{-1} \Lambda^T + \tilde G_1 \Gamma_x + \tilde G_2 + \tilde G_1 \Phi^{-1} \Gamma_F^T G^x_{\lambda_2}\big), $$
$$ G_{\lambda_A} := -\big(\tilde G_1 \Phi^{-1} \tilde G_1^T + \tilde G_1 \Phi^{-1} \Gamma_F^T G_{\lambda\lambda}\big)^{-1} \big(\tilde G_1 \Phi^{-1} \Psi^T (\mathcal C \mathcal B U - Y) + \tilde G_1 \Gamma_U U + \tilde G_3 + \tilde G_1 \Phi^{-1} \Gamma_F^T G_{\lambda_2}\big). $$
It is obvious that $\big(\tilde G_1 \Phi^{-1} \tilde G_1^T + \tilde G_1 \Phi^{-1} \Gamma_F^T G_{\lambda\lambda}\big) \in \mathcal S_{++}$ (replacing $G_{\lambda\lambda}$ by (38)). From (37) and (39), we can conclude that the optimal solution $F^o$ is an affine function of $\hat x(k-N)$, i.e.,
$$ F = G^x_F\, \hat x(k-N) + G_F, \qquad (40) $$
where
$$ G^x_F := -\Big( \Phi^{-1} \Lambda^T + \big(\Phi^{-1} \tilde G_1^T + \Phi^{-1} \Gamma_F^T G_{\lambda\lambda}\big) G^x_{\lambda_A} + \Phi^{-1} \Gamma_F^T G^x_{\lambda_2} \Big), $$
$$ G_F := -\Big( \Phi^{-1} \Psi^T (\mathcal C \mathcal B U - Y) + \big(\Phi^{-1} \tilde G_1^T + \Phi^{-1} \Gamma_F^T G_{\lambda\lambda}\big) G_{\lambda_A} + \Phi^{-1} \Gamma_F^T G_{\lambda_2} \Big). $$
To guarantee $\lambda_1 \succeq 0$ and to satisfy the constraints imposed on the estimated states, we need
$$ G_1 F - G_2 \hat x(k-N) - G_3 \preceq 0, \qquad (41) $$
$$ G^x_{\lambda_A}\, \hat x(k-N) + G_{\lambda_A} \succeq 0, \qquad (42) $$
where $F$ is given in (40). Therefore, (41) and (42) define a critical region $\mathcal A^j_x$ inside the admissible set $\mathcal A_x$. From the above discussion, we conclude that the optimal solution to (31) is an affine function of $\hat x(k-N)$ corresponding to the region $\mathcal A^j_x$. Theorem 3 is thus proven. □

Theorem 3 succeeds in converting the design of RMHSO into an mp-QP problem and makes it possible to utilize existing solvers, e.g., the Matlab Hybrid Toolbox, to obtain the partitions of the critical regions of $\mathcal A_x$ and the optimal solution $F^o$. Due to the existence of equality constraints, the mp-QP problem turns out to be quite complicated, and the optimal solutions $F$ and $\mathcal A^j_x$ are memory-consuming. This fact impairs the implementation efficiency of RMHSO, one of the essentials of offline observation schemes. Therefore, we


consider: is it possible to use the closed-loop prediction strategy to get simpler solutions (because only a one-step prediction is necessary) and reduce the number of unavoidable parameters?

Remark 3: Theorem 3 solves mp-QP problems with both piecewise inequality and equality constraints. To the best of the authors' knowledge, how to solve this kind of problem had remained an open problem.

3.2 Robust MHSO using rewinding closed-loop prediction

Closed-loop prediction can overcome the conflict between open-loop optimization and closed-loop implementation, and utilize a one-step prediction to simulate MHSO with an arbitrary prediction horizon. In this section, we construct a novel rewinding optimization framework, meanwhile dividing the objective in (10) into $N$ pieces, $\{J_{k-1\to k}, \ldots, J_{k-N\to k}\}$. As a result, the optimal piece objective $J^o_{k-i+1\to k}$ becomes a term of the piece objective $J_{k-i\to k}$, so that $J^o_{k-i\to k}$ will reflect the influence of the future predicted observer gain on the current observation. The optimization problem in (10) is recast into an iterative program:
$$ J^o_{k-N\to k} = \min_{\hat f(k-N)} \bigg\{ \|C\hat x(k-N) - y(k-N)\|^2_{Q} + \|\hat f(k-N)\|^2_R + \min_{\hat f(k-N+1)} \Big\{ \|C\hat x(k-N+1) - y(k-N+1)\|^2_Q $$
$$ \quad + \|\hat f(k-N+1)\|^2_R + \cdots + \min_{\hat f(k-1)} \big\{ \|C\hat x(k-1) - y(k-1)\|^2_Q + \|\hat f(k-1)\|^2_R + \|C\hat x(k) - y(k)\|^2_{Q_0} \big\} \cdots \Big\} \bigg\}, \qquad (43) $$
$$ \text{s.t.} \quad \hat x(k-i+1) = A\hat x(k-i) + Bu(k-i) + \hat f(k-i), \quad i = 1, \ldots, N, $$
$$ \hat f(k-1) = L\big(C\hat x(k-1) - y(k-1)\big), \qquad \hat x_{k-N+1\to k} \in \mathcal A_x, $$
$$ (-1)^{\kappa} \big(\mathrm{row}(R^{1/2})\, \hat f(k-N)\big) \ge (n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon \quad (\kappa = 0 \text{ or } 1). \qquad (44) $$
More specifically, the intermediate piece objective $J_{k-i\to k}$ can be represented by
$$ J_{k-i\to k} = \|C\hat x(k-i) - y(k-i)\|^2_Q + \|\hat f(k-i)\|^2_R + J^o_{k-i+1\to k}. \qquad (45) $$
Equation (45) accounts for the influence of the coming predicted observer gain, equivalent to $\hat f(k-i+1)$, on the current optimization. If the optimal piece objective $J^o_{k-i+1\to k}$ is a function of $\hat x(k-i+1)$, then only a one-step prediction is sufficient within the iteration loops. As in (43) and (45), the same prediction pattern is iterated $N$ times, and the prediction length is determined by the number of iteration loops. This feature enables us to implement MHSO with an arbitrary horizon by one-step prediction.

Remark 4: Equation (43) derives a rewinding optimization problem and takes advantage of closed-loop prediction. Meanwhile, it poses two challenges: how to derive the expression of the optimal piece objective $J^o_{k-i+1\to k}$ in terms of the predicted observation $\hat x(k-i+1)$; and how to guarantee that the expression of $J^o_{k-i+1\to k}$ is a quadratic (or linear) function, so as to keep the uniform structure of all piece objectives $J_{k-i\to k}$.

3.2.1 Piece objective $J_{k-i\to k}$. From the above discussion, we know that two equality constraints are imposed: on the arrival (terminal) observer gain and on the initial observer gain. So when we choose different values of $i$, the piece objective $J_{k-N+i\to k}$ ($1 \le i \le N-2$) is associated with a different constraint structure. Two cases are discussed here.

Case I: When $i = 1$ and the optimal value of $J^o_{k-N+1\to k}$ is given, optimize the total objective $J_{k-N\to k}$. In this case, constraints are imposed on both $\hat x(k-N+1)$ and $\hat f(k-N)$, i.e.,
$$ x_{\min} \preceq \hat x(k-N+1) \preceq x_{\max}, \qquad (46) $$
$$ (-1)^{\kappa}\, \mathrm{row}(R^{1/2})\, \bar f \preceq -(n + n\varepsilon)^{1/2}\, \bar\sigma^{1/2}(\hat Q_0)\, \varepsilon, \qquad (47) $$
where (46) is the physical constraint and (47) is constructed for robust stability. Using $\hat x(k-N)$ to replace $\hat x(k-N+1)$ in (46), we have
$$ \bar f \preceq x_{\max} - (A\bar x + B\bar u) \quad \text{and} \quad -\bar f \preceq (A\bar x + B\bar u) - x_{\min}, \qquad (48) $$
where $\bar x$, $\bar u$ and $\bar f$ represent the current signals for ease of notation, e.g., $\bar x := \hat x(k-N)$ for this case. Stacking (48) and (47), we derive an element-wise inequality constraint for the total objective $J_{k-N\to k}$,
$$ G_{\bar f}\, \bar f \preceq G_{\bar c} + G_{\bar x}\, \bar x, \qquad (49) $$




where

where 2

3

3 xmax & Bu( 6 7 6 7 Bu( & xmin Gf( :¼ 4 5, Gc( :¼ 4 5 # 1=2 1=2 ^ ! ð&1Þ rowðR Þ &ðn þ n"Þ !ðQ0 Þ" 2 3 &A 6 7 ð50Þ and Gx( :¼ 4 A 5: I &I

2

0

Case II: If $2 \le i \le N-2$, the initial estimated disturbance $\hat f(k-N)$ will not appear in the piece objective $J_{k-i\to k}$, so the constraint in (49) is simplified to

$$G'_{\bar f}\,\bar f \le G'_{\bar c} + G'_{\bar x}\,\bar x, \tag{51}$$

where

$$G'_{\bar f} := \begin{bmatrix} I \\ -I \end{bmatrix}, \qquad G'_{\bar c} := \begin{bmatrix} x_{\max} - B\bar u \\ B\bar u - x_{\min} \end{bmatrix} \qquad \text{and} \qquad G'_{\bar x} := \begin{bmatrix} -A \\ A \end{bmatrix}. \tag{52}$$

Comparing (49) and (51) with (31), it can be seen that the constraints for RMHSO using closed-loop prediction are much simpler. For the cases $i \ne 1$, there is no need to consider the constraints imposed on $\hat f(k-N)$; as a result, we avoid the computational burden introduced by the mixture of the augmented inequality and equality constraints. Following (45), the optimization of the piece objective $J_{k-i\to k}$ turns out to be

$$J^{o}_{k-i+1\to k} = \min_{\hat f(k-i)} J_{k-i\to k} \qquad (2 \le i \le N-1) \tag{53}$$
$$\text{s.t.}\quad \hat x(k-i+1) = A\hat x(k-i) + Bu(k-i) + \hat f(k-i), \qquad \hat x(k-i+1) \in \mathcal{A}_{x}.$$

Note that the problem in (53) excludes the case of the total objective $J_{k-N\to k}$. We first assume that $J^{o}_{k-i+1\to k}$ is a quadratic function,

$$J^{o}_{k-i+1\to k} = \|\hat x(k-i+1)\|^{2}_{Q_{i-1}} + \lambda_{i-1}\,\hat x(k-i+1) + \gamma_{i-1}. \tag{54}$$

Inserting (54) into (45), and using $\hat x(k-i)$ to replace $\hat x(k-i+1)$, we finally derive

$$J_{k-i\to k} = \frac{1}{2}\,\bar f^{T} H_{\bar f}\,\bar f + \bar x^{T} H_{\bar f\bar x}\,\bar f + Z_{\bar f}\,\bar f + H_{\bar x}, \tag{55}$$

where

$$H_{\bar f} := 2Q_{i-1} + 2R, \qquad H_{\bar f\bar x} = 2A^{T}Q_{i-1}, \qquad Z_{\bar f} = 2\bar u^{T}B^{T}Q_{i-1} + \lambda_{i-1},$$
$$H_{\bar x} := \|C\bar x - \bar y\|^{2}_{Q} + \|A\bar x + B\bar u\|^{2}_{Q_{i-1}} + \lambda_{i-1}\big(A\bar x + B\bar u\big) + \gamma_{i-1}. \tag{56}$$

Here $\bar f := \hat f(k-i)$ denotes the current signal. Notice that $H_{\bar f} \in \mathbb{S}^{n}_{++}$ and that $H_{\bar x}$ is independent of $\bar f$, i.e., irrelevant to $J^{o}_{k-i\to k}$. From the definition of the piece objective $J_{k-i\to k}$ in (55), we can convert (53) into an mp-QP problem with element-wise inequality constraints. Setting different values of the index $i$, the mp-QP problem for the piece objective $J_{k-i\to k}$ is iterated $N-2$ times.

Remark 5: Given the assumption on the quadratic form of $J^{o}_{k-i+1\to k}$, the mp-QP problem for the piece objective $J_{k-i\to k}$ is given by

$$J^{o}_{k-i+1\to k} = \min_{\hat f(k-i)} J_{k-i\to k} \qquad (2 \le i \le N) \tag{57}$$
$$\text{s.t.}\quad J_{k-i\to k} = \frac{1}{2}\,\bar f^{T} H_{\bar f}\,\bar f + \bar x^{T} H_{\bar f\bar x}\,\bar f + Z_{\bar f}\,\bar f + H_{\bar x}, \qquad G'_{\bar f}\,\bar f \le G'_{\bar c} + G'_{\bar x}\,\bar x,$$

where $\bar f$ is the optimization variable, and $\bar x$ is the multi-parameter vector (the current state observation).

Theorem 4: The analytic (explicit) solutions to the piece objective $J_{k-i\to k}$ defined in (57) are piece-wise affine functions of $\bar x$ over the corresponding critical state regions $\mathcal{A}^{j}_{x}$, where the index $j$ denotes the $j$th critical region within the admissible state set $\mathcal{A}_{x}$.

Proof: Similar to the proof of Theorem 3, we use the KKT theorem and the property of optimization duality. The problem is recast into an unconstrained mp-QP problem. The multiplier $\lambda$ is separated into active and non-active components, i.e., $\lambda = [\lambda_{A}, \lambda_{N}]'$. Because the problem is free of equality constraints, it is easy to obtain the expression of $\lambda_{A}$ as follows:

$$\lambda_{A} = -\big(\tilde G'_{\bar f} H_{\bar f}^{-1} \tilde G'^{T}_{\bar f}\big)^{-1}\big(\tilde G'_{\bar f} H_{\bar f}^{-1} H^{T}_{\bar f\bar x} + \tilde G'_{\bar x}\big)\bar x - \big(\tilde G'_{\bar f} H_{\bar f}^{-1} \tilde G'^{T}_{\bar f}\big)^{-1}\big(\tilde G'_{\bar c} + \tilde G'_{\bar f} H_{\bar f}^{-1} Z^{T}_{\bar f}\big), \tag{58}$$

where $\tilde G'_{\bar f}$, $\tilde G'_{\bar c}$, $\tilde G'_{\bar x}$ are composed of the active constraints among those in (51), and the rows of $\tilde G'_{\bar f}$ are linearly independent. The optimal estimated disturbance is

$$\bar f = -H_{\bar f}^{-1}\Big(H^{T}_{\bar f\bar x} - \tilde G^{T}_{\bar f}\big(\tilde G_{\bar f} H_{\bar f}^{-1}\tilde G^{T}_{\bar f}\big)^{-1}\big(\tilde G_{\bar f} H_{\bar f}^{-1} H^{T}_{\bar f\bar x} + \tilde G_{\bar x}\big)\Big)\bar x - H_{\bar f}^{-1}\Big(Z^{T}_{\bar f} - \tilde G^{T}_{\bar f}\big(\tilde G_{\bar f} H_{\bar f}^{-1}\tilde G^{T}_{\bar f}\big)^{-1}\big(\tilde G_{\bar c} + \tilde G_{\bar f} H_{\bar f}^{-1} Z^{T}_{\bar f}\big)\Big) := L^{j}_{i}\,\bar x + O^{j}_{i}. \tag{59}$$
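The piecewise-affine structure of (59) can be illustrated in the simplest possible setting: a scalar decision variable and a single inequality constraint, for which the KKT conditions reduce to an interior affine law or a clamped (active-constraint) affine law. This is a structural sketch only, not the paper's multivariable formula.

```python
# Scalar sketch of an explicit mp-QP law: minimise 0.5*H*f^2 + (Hfx*x + Z)*f
# subject to f <= c + g*x. On each critical region the optimiser is affine
# in the parameter x.

def explicit_f(H, Hfx, Z, c, g, x):
    f_unc = -(Hfx * x + Z) / H      # region 1: constraint inactive
    bound = c + g * x
    if f_unc <= bound:
        return f_unc                # affine law with slope -Hfx/H
    return bound                    # region 2: active; affine law g*x + c
```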

where $L^{j}_{i}$ can be regarded as the current observer gain corresponding to the $j$th critical region $\mathcal{A}^{j}_{x}$. To guarantee the condition in (51) and $\lambda \ge 0$, we can derive the definition of the critical region $\mathcal{A}^{j}_{x}$,

$$\mathcal{A}^{j}_{x} := \Big\{\bar x \in \mathcal{A}_{x} \;\Big|\; G_{\bar f}\big(L^{j}_{i}\,\bar x + O^{j}_{i}\big) \le G_{\bar c} + G_{\bar x}\,\bar x,\;\; \big(\tilde G_{\bar f} H_{\bar f}^{-1}\tilde G^{T}_{\bar f}\big)^{-1}\Big(\big(\tilde G_{\bar f} H_{\bar f}^{-1} H^{T}_{\bar f\bar x} + \tilde G_{\bar x}\big)\bar x + \tilde G_{\bar c} + \tilde G_{\bar f} H_{\bar f}^{-1} Z^{T}_{\bar f}\Big) \le 0\Big\}. \tag{60}$$

In the case that there are no active constraints among the conditions in (51), i.e., $\tilde G'_{\bar f}$, $\tilde G'_{\bar c}$, $\tilde G'_{\bar x}$ do not exist, (59) and (60) degenerate to

$$\bar f = -H_{\bar f}^{-1} H^{T}_{\bar f\bar x}\,\bar x - H_{\bar f}^{-1} Z^{T}_{\bar f} := L^{j}_{i}\,\bar x + O^{j}_{i}, \qquad G_{\bar f}\,\bar f - G_{\bar c} - G_{\bar x}\,\bar x \le 0,$$

which results in the second case of the explicit solutions to the mp-QP problem in (57), i.e.,

$$\bar f = L^{j}_{i}\,\bar x + O^{j}_{i} \qquad (\forall \bar x \in \mathcal{A}^{j}_{x}), \tag{61}$$

where $\mathcal{A}^{j}_{x} := \{\bar x \in \mathcal{A}_{x} \mid G_{\bar f}\,\bar f - G_{\bar c} - G_{\bar x}\,\bar x \le 0\}$. Obviously, the analytic (explicit) solutions to the mp-QP problem defined in (57) are piece-wise affine functions of $\bar x$. Theorem 4 is then proven. □

Remark 6: Theorem 4 offers the explicit solutions to the piece objective $J_{k-i\to k}$. The solutions are much simpler than those of Theorem 3 (open-loop RMHSO). However, Theorem 4 is built on the assumption that $J^{o}_{k-i+1\to k}$ is quadratic.

Remark 7: Replacing all of the parameters $G'_{\bar f}$, $G'_{\bar c}$, $G'_{\bar x}$ by $G_{\bar f}$, $G_{\bar c}$, $G_{\bar x}$, we can derive the optimal solutions to the last iteration step, i.e., the calculation of the total objective $J_{k-N\to k}$.

3.2.2 Offline robust MHSO. The purpose of this subsection is to remove the assumption on $J^{o}_{k-i+1\to k}$ (Remark 6) and construct the affine solutions to $J_{k-N\to k}$. Note that the arrival observer gain $L$ is determined by solving a Riccati equation; therefore, the number of optimization variables is $N-1$ instead of the prediction-horizon length $N$, and the first piece objective to be optimized is $J_{k-2\to k}$ instead of $J_{k-1\to k}$.

Theorem 5: The optimal solution to the piece objective $J_{k-i+1\to k}$ is a quadratic function of the observation $\hat x(k-i+1)$.

Proof: Let us first consider the optimal solution of the first piece objective $J_{k-1\to k}$, which is solved by a Riccati equation,

$$J^{o}_{k-1\to k} = \|C\hat x(k-1) - y(k-1)\|^{2}_{Q} + \|\hat f(k-1)\|^{2}_{R} + \|C\hat x(k) - y(k)\|^{2}_{Q_{0}}, \tag{62}$$

where

$$\hat x(k) = A\hat x(k-1) + Bu(k-1) + \hat f(k-1),$$
$$\hat f(k-1) = LC\hat x(k-1) - Ly(k-1) := L_{1}\,\hat x(k-1) + O_{1}, \tag{63}$$

and $L$ is the arrival observer gain, obtained by the Riccati equation in (15). Inserting (63) into (62), we have

$$J^{o}_{k-1\to k} = \|\hat x(k-1)\|^{2}_{Q_{1}} + \lambda_{1}\,\hat x(k-1) + \gamma_{1}, \tag{64}$$

where

$$Q_{1} := C^{T}QC + L^{T}_{1}RL_{1} + (CA + CL_{1})^{T}Q_{0}(CA + CL_{1}),$$
$$\lambda_{1} := -2y^{T}(k-1)QC + 2O^{T}_{1}RL_{1} + 2\big(CBu(k-1) + CO_{1} - y(k)\big)^{T}Q_{0}(CA + CL_{1}),$$
$$\gamma_{1} := \|y(k-1)\|^{2}_{Q} + \|O_{1}\|^{2}_{R} + \|CBu(k-1) + CO_{1} - y(k)\|^{2}_{Q_{0}}.$$

Therefore $J^{o}_{k-1\to k}$ is a quadratic function of $\hat x(k-1)$, and we can then say that the optimal solutions to $J_{k-2\to k}$ are piece-wise affine functions, i.e.,

$$\hat f^{o}(k-2) = L_{2}\,\bar x + O_{2} \qquad (\forall \hat x(k-2) \in \mathcal{A}_{x}). \tag{65}$$

Substituting (65) into the equation

$$J_{k-2\to k} = \|C\hat x(k-2) - y(k-2)\|^{2}_{Q} + \|\hat f(k-2)\|^{2}_{R} + J^{o}_{k-1\to k}, \tag{66}$$

the resulting expression of $J^{o}_{k-2\to k}$ is obviously a quadratic function too. Repeating this procedure until $i = N-1$, it can be seen that $J^{o}_{k-i+1\to k}$ is always a quadratic function of the current state observation $\hat x(k-i+1)$; explicitly,

$$J^{o}_{k-i+1\to k} = \|\hat x(k-i+1)\|^{2}_{Q_{i-1}} + \lambda_{i-1}\,\hat x(k-i+1) + \gamma_{i-1}, \tag{67}$$


where

$$Q_{i-1} := C^{T}QC + L^{T}_{i-1}RL_{i-1} + (A + L_{i-1})^{T}Q_{i-2}(A + L_{i-1}),$$
$$\lambda_{i-1} := -2y^{T}(k-i+1)QC + 2O^{T}_{i-1}RL_{i-1} + \big(2(Bu(k-i+1) + O_{i-1})^{T}Q_{i-2} + \lambda_{i-2}\big)(A + L_{i-1}),$$
$$\gamma_{i-1} := \|y(k-i+1)\|^{2}_{Q} + \|O_{i-1}\|^{2}_{R} + \|Bu(k-i+1) + O_{i-1}\|^{2}_{Q_{i-2}} + \lambda_{i-2}\big(Bu(k-i+1) + O_{i-1}\big) + \gamma_{i-2}.$$

Note that $\lambda_{i-1}$ and $O_{i-1}$ are expressions in $y(k-i+1)$ and $u(k-i+1)$; in other words, $\lambda_{i-1}$ and $O_{i-1}$ collect the information of past inputs and outputs. From (65) to (67), the theorem is proven. □

Remark 8: $\lambda_{i-1}$ collects past input and output information; moreover, $\lambda_{i-1}$ is a term of $Z_{\hat f(k-i)}$, which influences the observer gain $L_{i}$ and the admissible state partitions (refer to (56)). So the optimal $L_{i}$, equivalent to $\hat f(k-i)$, is a composition of past inputs and outputs.

Combining Theorems 4 and 5, we can derive the optimal solutions of all piece objectives $J_{k-i\to k}$ $(1 \le i \le N)$ and the corresponding observer gains $L^{o}_{i}$. Consequently, the current state observation can be obtained by

$$\hat x^{o}(k) = \prod_{j=1}^{N}(A + L_{j}C)\,\hat x(k-N) + \prod_{j=2}^{N}(A + L_{j}C)\,Bu(k-N) + \cdots + Bu(k-1). \tag{68}$$
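As a sketch of implementing (68), the stored per-step affine policies can be rolled forward from $\hat x(k-N)$. The gains $L_{j}$ and offsets $O_{j}$ below are placeholders for the stored mp-QP solutions (an affine offset is kept for generality), in a two-state, single-output example written with plain Python lists.

```python
# Roll the observation forward over the horizon:
# x <- (A + L_j C) x + B u + O_j for each stored policy (L_j, O_j).

def roll_forward(A, B, C, x0, policies, inputs):
    """2-state / scalar-output example; policies is a list of (L, O) pairs."""
    x = list(x0)
    for (L, O), u in zip(policies, inputs):
        y = C[0] * x[0] + C[1] * x[1]          # C x (scalar output)
        x = [A[0][0] * x[0] + A[0][1] * x[1] + L[0] * y + B[0] * u + O[0],
             A[1][0] * x[0] + A[1][1] * x[1] + L[1] * y + B[1] * u + O[1]]
    return x
```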

Figure 1. The theory of MHSO design. (The figure plots $x(k)$ and $\hat x(k)$ against time: the real state trajectory, the optimal MHSO output, and a piece of the full information observer over instants $0$ to $N$; the moving horizon window is shifted one step ahead.)

It can be seen that the dimension of (69) increases as more input and output data are collected, but because the horizon length $N$ is not too large, the full information state observer is still practical and effective. Figure 1 illustrates the integration of the full information state observer and MHSO. In the figure, the trajectory of $\hat x(k)$ is composed of two segments: one spans from the initial instant to instant $N$; the other starts at instant $N+1$ and proceeds into the future. The two shaded regions represent the moving horizon windows, which are shifted one step ahead as RMHSO is implemented iteratively. The light solid line shows the optimal trajectory derived by the full information observer, the dark solid line is obtained by RMHSO, and the dot-dashed line simulates the optimization of RMHSO whose prediction horizon windows are shifted one step ahead.

4. Algorithm of robust MHSO

In § 3, RMHSO is converted into a set of mp-QP problems, and a series of offline observation policies are developed to reduce the offline computational burden and facilitate online implementation. To perform state predictions, the initial state observation $\hat x(k-N)$ is necessary; how to set up the initial conditions of RMHSO is covered in this section.

4.1 The initial setup

We use the full information state observer to determine the sequence $\hat x_{1\to N}$, i.e., the initial setup of RMHSO. The problem is given as follows:

$$\hat f_{0\to i} := \arg\min_{\hat f_{0\to i}} \|C\hat x(i) - y(i)\|^{2}_{Q_{0}} + \sum_{j=0}^{i-1}\Big(\|C\hat x(j) - y(j)\|^{2}_{Q} + \|\hat f(j)\|^{2}_{R}\Big),$$
$$\text{s.t.}\quad \hat x(i) = A^{i}\hat x(0) + A^{i-1}Bu(0) + \cdots + Bu(i-1) + A^{i-1}\hat f(0) + \cdots + \hat f(i-1),$$
$$\hat x(i) \in \mathcal{A}_{x} \qquad (0 < i \le N). \tag{69}$$
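For the scalar, one-step case ($i = 1$), the program in (69) has a closed-form least-squares solution, sketched below; this reduction is illustrative only and ignores the state constraint $\hat x(i) \in \mathcal{A}_{x}$.

```python
# One-step full-information initialisation (scalar sketch): choose f_hat(0)
# minimising Q0*(c*x_hat(1) - y1)^2 + R*f_hat(0)^2 with
# x_hat(1) = a*x_hat(0) + b*u + f_hat(0).

def init_fullinfo(a, b, c, Q0, R, x0, u, y1):
    p = a * x0 + b * u                            # prediction before f
    f = Q0 * c * (y1 - c * p) / (Q0 * c * c + R)  # least-squares estimate
    return f, p + f                               # (f_hat(0), x_hat(1))
```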

4.2 Algorithms

Based on Theorems 2–4, we can develop the open-loop and closed-loop RMHSO, respectively. Both feature offline optimization and online implementation, so the implementation efficiency is improved dramatically.

Algorithm I (Open-loop MHSO):

(1) Set up the initial observation $\hat x_{1\to N}$ based on the full information state observer, and store the optimal solutions (refer to the problem in (69)).
(2) Execute the closed-loop robust stability analysis. Choose eligible tuning parameters $\gamma$, $P$, and solve a Riccati equation and a semi-definite program to derive $Q_{0}$, $L_{1}$ and the constraints imposed on the estimated disturbance $\hat f(k-N)$ in Theorem 1.
(3) Define the augmented matrices $\mathcal{A}$, $\mathcal{B}$, $\mathcal{B}_{F}$, $\mathcal{C}$, $\mathcal{Q}$, $\mathcal{R}$ and the remaining weighting matrices in (24). Form the mp-QP objective and the constraint parameters $G_{1}$, $G_{2}$, $G_{3}$, $\lambda_{F}$, $\lambda_{x}$, $\lambda_{U}$ (memory-consuming) for open-loop RMHSO in (30).
(4) Stack the input/output measurements $U$, $Y$ and online partition the admissible state set $\mathcal{A}_{x}$, i.e., derive $\tilde G^{j}_{1}$, $\tilde G^{j}_{2}$, $\tilde G^{j}_{3}$, where $j$ is the index of the state-space partitions.
(5) Derive the optimal sequence $\hat f_{k-N\to k}$ based on the auxiliary matrices defined in (37)–(40) (memory-consuming).
(6) Implement $\hat f_{k-N\to k}$ from (8) and derive the current optimal state observation $\hat x(k)$. Purge the memory for intermediate matrices, partitions and optimal sequences $\hat f_{k-N\to k}$.
(7) If $k > t$, exit; otherwise update $U$, $Y$ and go to Step 4. Here $t$ is the prespecified observation length.

In Algorithm I, all steps before Step 4 are completed offline (offline optimization), and all steps from Step 4 onwards are done online (online implementation). This procedure differs from that of offline MPC, whose state-space partitions are performed offline.

Algorithm II (MHSO with the rewinding prediction):

(1)–(3) The same as Steps 1–3 of Algorithm I.
(4) Derive the optimal expressions of $L_{1}$, $O_{1}$ and store the parameters of $J_{k-1\to k}$, i.e., $Q_{1}$, $\lambda_{1}$ and $\gamma_{1}$ in (64). Set $i = 2$, the index of the rewinding optimization loops.
(5) Give the optimal solutions to the mp-QP problem of the piece objective $J_{k-i\to k}$, i.e., $L^{j}_{i}$, $O^{j}_{i}$, $Q^{j}_{i}$, $\lambda^{j}_{i}$, $\gamma^{j}_{i}$ in (67), and store the corresponding state-space partitions $\{\mathcal{A}^{1}_{i}, \ldots, \mathcal{A}^{N_p}_{i}\}$, where $N_p$ is the number of partitions.
(6) Identify the active partition from the set $\{\mathcal{A}^{1}_{i}, \ldots, \mathcal{A}^{N_p}_{i}\}$, based on the measurements $u(k-i)$, $y(k-i)$ and $y(k-i+1)$. Suppose that the $j$th partition is active. Keep $L^{j}_{i}$, $O^{j}_{i}$, $Q^{j}_{i}$, $\lambda^{j}_{i}$, $\gamma^{j}_{i}$ and purge the memory for the optimal solutions of all partitions but $\mathcal{A}^{j}_{i}$. Set $i = i + 1$.
(7) Check whether $i = N$; if yes, store $L^{j}_{N\to 1}$ and reject all other intermediate solutions. Otherwise go to Step 5.
(8) Implement the optimal observer gain $L^{j}_{N\to 1}$ from (68) and derive the current optimal state observation $\hat x(k)$. Purge the memory for intermediate matrices, partitions and optimal sequences $\hat f_{k-N\to k}$.
(9) If $k > t$, exit; otherwise go to Step 4.
Remark 9: Comparing Algorithms I and II, the former requires more memory for intermediate solutions, and its augmented matrices may lead to feasibility problems. The latter utilizes the rewinding optimization and reduces the computational cost, but its two-level iterative loops may lower the implementation efficiency.

4.3 Robust MHSO for systems with measurement noises

In the above discussion, we assumed the measurement noise $v(k) = 0$, i.e., we used model (5) instead of (1) for


the open-loop and closed-loop MHSO design. However, $v(k)$ is ubiquitous in real plants, and how to incorporate $v(k)$ into the RMHSO design is a non-trivial problem. Motivated by Muske and Badgwell (2002) and Pannocchia (2003), this problem can be solved by introducing a noise model. For a simple case, we can rewrite the system in (1) as

$$z(k+1) = \mathcal{A}z(k) + \mathcal{B}u(k) + \mathcal{B}_{f}\,f(x(k), d(k), k), \qquad y(k) = \mathcal{C}z(k), \tag{70}$$

where $z(k) := [x^{T}(k), v^{T}(k)]^{T}$,

$$\mathcal{A} = \begin{bmatrix} A & 0 \\ 0 & A_{d} \end{bmatrix}, \qquad \mathcal{B} = \begin{bmatrix} B \\ 0 \end{bmatrix}, \qquad \mathcal{B}_{f} = \begin{bmatrix} I \\ B_{fd} \end{bmatrix} \qquad \text{and} \qquad \mathcal{C} = [C, \; I].$$

So we can proceed with the above discussion based on model (70), and use different values of $\mathcal{Q}$ to tune the observer performance. Because of space limitations, we choose not to discuss how to derive the matrices $A_{d}$, $B_{fd}$; the interested reader is referred to Muske and Badgwell (2002) and Pannocchia (2003) for details.
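The augmentation in (70) can be sketched directly; $A_{d}$ and $B_{fd}$ are taken as given (their derivation is deferred to the cited references), plain nested lists stand in for matrices, and the block pattern $\mathcal{B}_{f} = [I;\, B_{fd}]$ follows the display above.

```python
# Sketch of forming the augmented matrices in (70) for z = [x; v]:
# script-A = blkdiag(A, Ad), script-B = [B; 0], script-Bf = [I; Bfd],
# script-C = [C, I].

def augment(A, B, Ad, Bfd, C):
    n, nv = len(A), len(Ad)
    Aa = [[A[i][j] if i < n and j < n else
           Ad[i - n][j - n] if i >= n and j >= n else 0.0
           for j in range(n + nv)] for i in range(n + nv)]
    Ba = [B[i] if i < n else 0.0 for i in range(n + nv)]
    Bf = [[1.0 if i == j else 0.0 for j in range(n)] if i < n else Bfd[i - n]
          for i in range(n + nv)]
    Ca = [C[i] + [1.0 if i == j else 0.0 for j in range(nv)]
          for i in range(len(C))]
    return Aa, Ba, Bf, Ca
```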

5. A simulation example

The system is given by

$$x(k+1) = (A + \Delta_{A}(k))x(k) + B_{d}w(k), \qquad y(k) = Cx(k),$$

where $\Delta_{A}(k)$ and $w(k)$ represent the system's internal and external uncertainties, respectively. The system parameters are

$$A = \begin{bmatrix} 0.99 & 0.2 \\ -0.1 & 0.3 \end{bmatrix}, \qquad B_{d} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad C = [1, \; 3],$$

and both internal and external uncertainties are bounded by 0.5, i.e.,

$$-0.5 \le w(k) \le 0.5 \qquad \text{and} \qquad \bar\sigma(\Delta_{A}(k)) \le 0.5.$$

To reflect the different influences of internal uncertainties and external disturbances, we perform the simulations under two conditions: (1) set $\Delta_{A}(k) = 0$ and call a random function in MATLAB (e.g., "rand") to simulate $w(k)$, in order to demonstrate the influence of external disturbances; (2) set $\Delta_{A}(k) \ne 0$, and call "rand" to create both $w(k)$ and $\Delta_{A}(k)$, i.e., simulate the combined internal uncertainty and external disturbance. Reformulating the uncertainties into the form of (3), we derive the uncertainty bounds for the two cases,

Table 1. Simulation parameters.

                        $\Delta_{A}(k) = 0$                                 $\Delta_{A}(k) \ne 0$
$Q, R, P, \gamma$       $I$, $3I$, $4I$, 0.8                                $I$, $4I$, $4I$, 0.8
$Q_{0}$                 1.3897                                              1.4073
$\varepsilon$           0.1664                                              0.1453
$L_{1}$ ($LC$)          $\begin{bmatrix} 0.6254 & 0.1514 \\ -0.0014 & 0.2526 \end{bmatrix}$     $\begin{bmatrix} 0.5751 & 0.1437 \\ 0.0109 & 0.2395 \end{bmatrix}$
$\hat f(k-N)$           $(-1)^{\beta}\,\mathbf{1}\hat f(k-N) \le 1.6473$    $(-1)^{\beta}\,\mathbf{1}\hat f(k-N) \le 3.5608$

Figure 2. Comparison of observers with external uncertainties. (The figure plots $x_{1}$, $\hat x_{1}$ and $x_{2}$, $\hat x_{2}$ against time (s), comparing the real states with the closed-loop and open-loop MHSE outputs.)

$\varepsilon_{1} = 0.5$ and $\varepsilon_{2} = 1.25$. To guarantee stability, the arrival weighting $Q_{0}$, the arrival observer gain $L$, and the initial estimated disturbance $\hat f(k-N)$ are determined by solving a Riccati equation and a semi-definite optimization problem. The related parameters are given in table 1. Set the prediction horizon $N = 3$. Two algorithms are employed in the sequel, namely, the open-loop RMHSO and the closed-loop RMHSO with rewinding optimization.

Figure 2 shows the simulation results for the observers under Condition 1. We find that under Condition 1, both the RMHSO algorithms and the nominal MHSO work well. The left two columns of table 2 list the means and variances of the observation errors derived by the three different types of MHSO. It can be seen that offline RMHSO (our algorithms) is better than nominal MHSO, but the improvement is not remarkable. So we repeat the simulation with a non-zero internal disturbance $\Delta_{A}(k)$. Under Condition 2, we find that nominal MHSO becomes unstable, so the right columns of table 2 do not give the means and variances for this case; offline RMHSO, however, still works well. Figure 3 illustrates the dynamics of both open-loop and closed-loop RMHSO. All simulations are performed on a laptop with a Pentium 4 processor and 512 MB of RAM.

From figures 2 and 3, it is hard to say whether the closed-loop RMHSO gives better observations than the open-loop RMHSO, but we can compare the simulation time and memory costs. Keeping the simulation length equal to 50, the open-loop RMHSO costs 8.4810 seconds and its data file takes 11 KB, whereas for the closed-loop one the time cost increases to 16.6330 seconds (two-level iterations) and the data file decreases to 1 KB. The simulation results are consistent with the theoretical analysis.
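The setup of the simulation example can be sketched in a few lines for Condition 1 ($\Delta_{A}(k) = 0$); Python's `random.uniform` stands in for MATLAB's `rand`, and the zero initial state is an assumption.

```python
import random

# Condition 1 of the example: x(k+1) = A x(k) + Bd w(k), y(k) = C x(k),
# with w(k) uniform on [-0.5, 0.5] and Delta_A = 0.

def simulate(steps, seed=0):
    random.seed(seed)
    A = [[0.99, 0.2], [-0.1, 0.3]]
    Bd, C = [1.0, 0.0], [1.0, 3.0]
    x, ys = [0.0, 0.0], []
    for _ in range(steps):
        w = random.uniform(-0.5, 0.5)
        x = [A[0][0] * x[0] + A[0][1] * x[1] + Bd[0] * w,
             A[1][0] * x[0] + A[1][1] * x[1] + Bd[1] * w]
        ys.append(C[0] * x[0] + C[1] * x[1])
    return ys
```

Since $A$ is Schur stable and $w(k)$ is bounded, the generated output sequence stays bounded, matching the setting of Condition 1.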

6. Conclusion In this paper, we have developed two offline MHSO algorithms in the presence of system internal

Table 2. Means and variances for observation errors.

                        $\Delta_{A}(k) = 0$                           $\Delta_{A}(k) \ne 0$
                        Means               Variances                 Means               Variances
Nominal MHSO            [1.8129, 0.1200]    [13.3990, 0.3042]         –                   –
Open-loop MHSO          [1.0577, −0.1820]   [6.2067, 0.1341]          [0.2850, 0.1709]    [0.0097, 1.0624]
Closed-loop MHSO        [0.7126, −0.1864]   [5.0215, 0.1880]          [0.1739, −0.0513]   [0.8863, 0.0325]

Figure 3. Comparison of observers with internal and external uncertainties. (The figure plots $x_{1}$, $\hat x_{1}$ and $x_{2}$, $\hat x_{2}$ against time (s), comparing the real states with the open-loop and closed-loop MHSE outputs.)

uncertainties and external disturbances. Both algorithms feature offline optimization and online implementation; consequently, the computational complexity is reduced dramatically. The open-loop RMHSO, Algorithm I, employs the augmented system matrices and past input/output signals to convert the design into mp-QP problems associated with piece-wise inequality and equality constraints. Although the equality constraints lead to more computational complexity, we show that the optimal solutions are still piece-wise affine functions of the initial state observation. The closed-loop RMHSO, Algorithm II, adopts a novel rewinding optimization pattern and simulates MHSO with an arbitrary prediction horizon by just one-step prediction. Comparing these two algorithms, the former suffers from a high offline computational burden and needs more memory for intermediate parameters, but it leads to faster online implementation. The latter does not combine the inequality constraints with equality constraints within every optimization loop, simplifies the offline optimization, and requires a smaller amount of memory for intermediate parameters. However, because of its two-level iterative loops, the closed-loop MHSO has slower online implementation than the open-loop MHSO. In summary, based on the advantages and disadvantages of the open-loop and closed-loop RMHSO, different MHSO schemes can be developed for real processes with different physical characteristics.

References

A. Bemporad, M. Morari, V. Dua and N. Pistikopoulos, ‘‘The explicit linear quadratic regulator for constrained systems’’, Automatica, 38, pp. 3–20, 2002a.
A. Bemporad, F. Borrelli and M. Morari, ‘‘Model predictive control based on linear programming – the explicit solution’’, IEEE Trans. Automat. Contr., 47, pp. 1974–1985, 2002b.
D. Chu, T. Chen and H.J. Marquez, ‘‘Explicit robust model predictive control using recursive closed-loop prediction’’, Int. J. Robust and Nonlinear Contr., 16, pp. 519–546, 2006.




P. Findeisen, ‘‘Moving horizon state estimation of discrete time systems’’, Master’s thesis, University of Wisconsin-Madison (1997).
H.V. Gonzalez, J.M. Flaus and G. Acuna, ‘‘Moving horizon state estimation with global convergence using interval techniques: application to biotechnological processes’’, J. Proc. Contr., 13, pp. 325–336, 2003.
E.L. Haseltine and J.B. Rawlings, ‘‘Critical evaluation of extended Kalman filtering and moving horizon estimation’’, Ind. Eng. Chem. Res., 44, pp. 2451–2460, 2005.
R.E. Kalman, ‘‘A new approach to linear filtering and prediction problems’’, Trans. ASME – J. Basic Eng., 82, pp. 35–45, 1960.
M.V. Kothare, V. Balakrishnan and M. Morari, ‘‘Robust constrained model predictive control using linear matrix inequalities’’, Automatica, 32, pp. 1361–1379, 1996.
J.H. Lee and Z. Yu, ‘‘Worst-case formulations of model predictive control for systems with bounded parameters’’, Automatica, 33, pp. 763–781, 1997.
C. Lien, ‘‘Robust observer-based control of systems with state perturbations via LMI approach’’, IEEE Trans. Automat. Contr., 49, pp. 1365–1370, 2004.
D.G. Luenberger, ‘‘An introduction to observers’’, IEEE Trans. Automat. Contr., 16, pp. 596–602, 1971.
M.S. Mahmouda and H.K. Khalil, ‘‘Robustness of high-gain observer-based non-linear controllers to unmodeled actuators and sensors’’, Automatica, 38, pp. 361–369, 2002.
H.J. Marquez and M. Riaz, ‘‘Robust state observer design with application to an industrial boiler system’’, Contr. Eng. Pract., 13, pp. 713–728, 2005.
D.Q. Mayne, J.B. Rawlings and C.V. Rao, ‘‘Constrained model predictive control: stability and optimality’’, Automatica, 36, pp. 789–814, 2000.
H. Michalska and D.Q. Mayne, ‘‘Moving horizon observers and observer-based control’’, IEEE Trans. Automat. Contr., 40, pp. 995–1006, 1995.
M. Morari and J.H. Lee, ‘‘Model predictive control: past, present and future’’, Comput. Chem. Eng., 23, pp. 667–682, 1999.

K.R. Muske and T.A. Badgwell, ‘‘Disturbance modeling for offset-free linear model predictive control’’, J. Process Contr., 12, pp. 617–632, 2002.
K.R. Muske, J.B. Rawlings and J.H. Lee, ‘‘Receding horizon recursive state estimation’’, in Proceedings of the American Control Conference, San Francisco, California, 1993, pp. 900–904.
G. Pannocchia, ‘‘Robust disturbance modeling for model predictive control with application to multivariable ill-conditioned processes’’, J. Process Contr., 13, pp. 693–701, 2003.
I. Peterson and D.C. McFarlane, ‘‘Optimal guaranteed cost control and filtering for uncertain linear systems’’, IEEE Trans. Automat. Contr., 39, pp. 1971–1977, 1994.
T. Raissi, N. Ramdani and Y. Candau, ‘‘Bounded error moving horizon state estimator for non-linear continuous-time systems: application to a bioprocess system’’, J. Process Contr., 15, pp. 537–545, 2005.
C.V. Rao, J.B. Rawlings and J.H. Lee, ‘‘Constrained linear state estimation – a moving horizon approach’’, Automatica, 37, pp. 1619–1628, 2001.
C.V. Rao, J.B. Rawlings and D.Q. Mayne, ‘‘Constrained state estimation for non-linear discrete-time systems: stability and moving horizon approximations’’, IEEE Trans. Automat. Contr., 48, pp. 246–258, 2003.
J.B. Rawlings, ‘‘Tutorial: model predictive control technology’’, in Proceedings of the American Control Conference, San Diego, California, 1989, pp. 662–676.
D.G. Robertson, J.H. Lee and J.B. Rawlings, ‘‘A moving horizon-based approach for least-squares state estimation’’, AIChE J., 42, pp. 2209–2224, 1996.
Y. Wang, L. Xie and C. Souza, ‘‘Robust control of a class of uncertain systems’’, Syst. Contr. Lett., 19, pp. 139–149, 1992.
Y. Xiong and M. Saif, ‘‘Unknown disturbance inputs estimation based on a state functional observer design’’, Automatica, 39, pp. 1389–1398, 2003.
P. Zitek, ‘‘Anisochronic state observers for hereditary systems’’, Int. J. Contr., 71, pp. 581–599, 1998.