Systems & Control Letters 35 (1998) 195–200

Receding horizon control via Bolza-type optimization 1

É. Gyurkovics ∗, 2

Budapest University of Technology, Mathematical Institute, Műegyetem rkp. 3, Budapest, H-1521, Hungary

Received 4 September 1997; accepted 29 May 1998

Abstract

This paper deals with the stabilization problem of nonlinear control systems. A variant of the receding horizon control method is proposed which is based on the solution of a certain Bolza problem. The stabilizing property of the method is proved in the case of a Lipschitz continuous value function. This version of the receding horizon scheme can be considered as a straightforward generalization of the so-called Fake Riccati Equation Technique for nonlinear constrained systems. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Receding horizon control; Predictive control; Feedback stabilization; Nonlinear control; Stability

∗ Fax: +(361)463-1291; e-mail: [email protected].
1 A preliminary version of the paper was presented at the European Control Conference ’97 (Brussels, July 1997).
2 Work supported in part by the Hungarian National Foundation for Scientific Research, grant no. T014847.

1. Introduction

The receding horizon control method was originally introduced by Kleinman [7] as an “easy way” to stabilize linear time-invariant systems in the early 1970s. The method has since been revisited by many authors (see e.g. [4, 5, 8–14] and the references therein). In the original version of the method, and in a great number of its generalizations, the stabilizing feedback is determined by solving an optimal control problem with a suitable Lagrange-type criterion over a fixed time period ahead, with the constraint that the terminal state be the origin. This terminal constraint, which can be interpreted as an “infinite final state penalty”, has been relaxed in different ways: in [11], the terminal state is required to belong to a certain neighborhood of the origin; in [12], the final state penalty has been omitted but an infinite time horizon has been used. Another possibility is to consider a Bolza-type optimal control problem instead of the constrained Lagrange-type one. This idea has been utilized in [8] and in [14] for linear systems, in [13] for discrete-time systems requiring the time horizon and the final state penalty function to be large enough, and in [4] for control-affine continuous-time systems. The stability analysis is made in [14] by viewing the solution of the differential Riccati equation as the steady-state solution of a suitably defined new algebraic Riccati equation. This approach is named the Fake Riccati Equation Technique. In the present work, a version of the receding horizon control method based on Bolza-type optimization is investigated for general nonlinear, control-constrained continuous-time systems, and the asymptotic stability of the closed-loop system is established in the case of a Lipschitz continuous value function. It will be shown that this version of the receding horizon scheme can be considered as a straightforward generalization of the method of the Fake Algebraic Riccati Equation (FARE) [14] for nonlinear constrained systems.

2. Problem and method

Consider the nonlinear control system

x˙(t) = f(x(t), u(t)),  (1)

where f : R^n × U → R^n, U ⊂ R^m is closed, 0 ∈ U and f(0, 0) = 0. A feedback strategy x ↦ v(x) has to be determined so that, upon substitution of u(t) = v(x(t)) ∈ U into Eq. (1), the resulting closed-loop system

x˙(t) = f(x(t), v(x(t)))  (2)

is locally (globally) asymptotically stable about the origin. In order to obtain such a controller, let Eq. (1) be subject to the cost function

J(t0, t1, u, x0) = g(x_u(t1; x0, t0)) + ∫_{t0}^{t1} L(x_u(s; x0, t0), u(s)) ds,  (3)

where x_u(·; x0, t0) denotes the solution of Eq. (1) due to control u and with initial condition x_u(t0; x0, t0) = x0. The receding horizon control is defined as follows. Fix a time length T > 0 and consider the optimization problem

P(t0, t1, x0):  inf { J(t0, t1, u, x0) : u ∈ L∞([t0, t1]; U) }

with t0 = 0, t1 = T, x0 = x(t), where L∞([a, b]; U) denotes the class of all Lebesgue measurable and essentially bounded functions u : [a, b] → U. Let the minimizing solution to problem P(0, T, x) be denoted by û(·; x, 0) and the corresponding value of the cost function by V̂(x), i.e.

V̂(x) = J(0, T, û(·; x, 0), x).

The receding horizon feedback v : X ⊂ R^n → U is defined by

v(x) = û(0; x, 0),  (4)

assuming that the problem is solvable for any x ∈ X and definition (4) is correct. As a result we obtain the closed-loop system (2).

3. Monotonicity of the value function

The following assumptions will be imposed.

A1.1. f, L ∈ C(R^n × U) are differentiable with respect to the first (vector) variable and there exist positive constants C1, C2 such that

‖f(x, u)‖ ≤ C1(1 + ‖u‖ + ‖x‖),  ∀(x, u) ∈ R^n × U,

‖f(x, u) − f(x̄, u)‖ ≤ C2(1 + ‖u‖)‖x − x̄‖,  ∀x, x̄ ∈ R^n, ∀u ∈ U.

A1.2. L(0, 0) = 0 and there exists a function φ ∈ C(U) such that L(x, u) ≥ φ(u) for all (x, u) ∈ R^n × U; φ(u) > 0 if u ≠ 0 and φ(u)/‖u‖ → ∞ as ‖u‖ → ∞, u ∈ U.

A1.3. g ∈ C^1(R^n), g(0) = 0, g(x) > 0 if x ≠ 0, and g(x) → ∞ as ‖x‖ → ∞.

A1.4. The set F̃(x) = { z̃ ∈ R^{n+1} : z = f(x, u), z_{n+1} ≥ L(x, u), u ∈ U } is convex for all x ∈ R^n, where z̃ = (z′, z_{n+1})′.

(We note that if U is compact then the growth condition in A1.2 does not require anything.) It is well known (see e.g. [3]) that, under assumptions A1, problem P(t0, t1, x) has at least one solution for any t0 < t1. Let V : [0, T] × R^n → R be the value function of problem P(0, T, x) defined by

V(s, x) = inf { J(s, T, u, x) : u ∈ L∞([s, T]; U) }.

Clearly V(0, x) = V̂(x). Let B_ε^r(x) = { y ∈ R^r : ‖x − y‖ ≤ ε }. We want to restrict ourselves to problems having a locally Lipschitz continuous value function. To ensure this, the following additional assumption is required.

A2.1. For any δ > 0 there is an N(δ) such that for all t0 ∈ [0, T] and for all x0 ∈ B_δ^n(0) at least one of the control functions optimal with respect to problem P(t0, T, x0) satisfies the inequality

‖û(·; x0, t0)‖∞ ≤ N(δ).

A2.2. For any positive numbers r1, r2 there exists a C3 > 0 such that

|L(x, u) − L(x̄, u)| ≤ C3 ‖x − x̄‖,  ∀x, x̄ ∈ B_{r1}^n(0), ∀u ∈ B_{r2}^m(0) ∩ U.
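The receding horizon scheme of Section 2 (solve the Bolza problem P(0, T, x), apply only the initial value û(0; x, 0) of the optimal control, and repeat from the resulting state) can be illustrated with a small numerical sketch. The following Python fragment is not from the paper: it uses a discrete-time, linear-quadratic analogue in which the Bolza problem (running cost L plus terminal penalty g) is solved exactly by a backward Riccati recursion; the dynamics and weights are illustrative choices of mine.

```python
# Discrete-time sketch of the receding horizon method: at each state x,
# solve an N-step Bolza problem (running cost L plus terminal penalty g)
# and apply only the first control u_hat(0; x, 0).
# Illustrative scalar LQ data: x_{k+1} = a*x_k + b*u_k,
# L(x, u) = q*x^2 + r*u^2, g(x) = p_T*x^2.

a, b = 1.2, 1.0          # unstable open-loop dynamics (|a| > 1)
q, r = 1.0, 1.0          # running-cost weights
p_T = 10.0               # terminal penalty weight (the "final state penalty")
N = 10                   # horizon length (plays the role of T)

def first_control(x):
    """Solve the N-step Bolza problem exactly by a backward Riccati
    recursion and return u_hat(0; x, 0), the receding horizon feedback."""
    p = p_T                                    # terminal weight g at the horizon end
    for _ in range(N):                         # dynamic programming, backward in time
        k_gain = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - k_gain * (b * p * a)
    return -k_gain * x                         # v(x) = u_hat(0; x, 0)

# Closed loop x_{k+1} = f(x_k, v(x_k)): the origin should attract.
x = 5.0
for _ in range(40):
    x = a * x + b * first_control(x)
print(abs(x) < 1e-6)   # prints: True
```

Although the open-loop system is unstable, the closed loop contracts toward the origin, in line with the stabilizing property established below; a larger terminal weight p_T mimics a stronger final state penalty g.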


The third assumption plays a key role in the monotonicity of the value function.

A3. There exists an α > 0 such that for any x ∈ G_α = { x ∈ R^n : g(x) < α }

H(x, −∇g(x)) ≥ 0,

where H : R^n × R^n → R, H(x, p) = sup_u { ⟨p, f(x, u)⟩ − L(x, u) }, is the Hamilton function belonging to Eqs. (1) and (3).

Remark 1. (a) Assumptions A1 and A2 are not too restrictive: similar conditions always have to be stated for the existence of the solution of problem P(t0, t1, x0) and for the Lipschitz continuity of the value function.

(b) Assumption A3 can be satisfied e.g. in the following case. Let A = (∂f/∂x)(0, 0), B = (∂f/∂u)(0, 0) be a stabilizable pair, and let L be a quadratic function L(x, u) = x′Qx + u′Ru with Q = Q′ > 0, R = R′ > 0. Then it can be shown that for the function g(x) = c x′Px assumption A3 holds true, when P is the positive definite solution of the usual algebraic Riccati equation and c ≥ 1 (see also [11]). However, Assumption A3 can be satisfied not only in this case: let L(x, u) = 2a²(x1² + x2²)² + (u1² + u2²)/2, g(x) = b(x1² + x2²) and

f(x, u) = (−x2, x1)′ + [x1 −x2; x2 x1] (u1, u2)′.

If 0 < a ≤ b then A3 holds true, in spite of the fact that the corresponding pair A, B is not stabilizable. The same is true if L(x, u) = (x1⁴ + x2⁴) + u²/2, g(x) = c(x1² + x2²), c ≥ 1.5 and

f(x, u) = (x1³ − x2³, x1³ + x2³)′ + (x1, x2)′ u.

In proving the main result of this section the following adaptation of ([1], Theorem 4.2) to the case of Bolza problems is needed.

Proposition 2. Assume that A1 and A2 hold true. A trajectory-control pair (x̂, û) of the control system (1) with x̂(t) = x_û(t; x0, 0) is optimal for the problem P(0, T, x0) if and only if the solution p : [0, T] → R^n of the adjoint equation

−p˙(t) = (∂f/∂x(x̂(t), û(t)))′ p(t) − (∂L/∂x(x̂(t), û(t)))′,  (5)

p(T) = −∇g(x̂(T))  (6)

satisfies the maximum principle

⟨p(t), f(x̂(t), û(t))⟩ − L(x̂(t), û(t)) = max_u { ⟨p(t), f(x̂(t), u)⟩ − L(x̂(t), u) }  a.e. in [0, T]

and the generalized transversality conditions

(H(x̂(t), p(t)), −p(t)) ∈ D⁺V(t, x̂(t))  a.e. in [0, T],  (7)

−p(t) ∈ D⁺_x V(t, x̂(t)),  ∀t ∈ [0, T],  (8)

where D⁺V(t, x) and D⁺_x V(t, x) denote the superdifferentials of the functions V and V(t, ·), respectively. Moreover, the function H0(t) := H(x̂(t), p(t)) is constant in [0, T].

It is well known (see e.g. [16]) that at every point (s, x) ∈ [0, T] × R^n at which V is differentiable, it satisfies the Hamilton–Jacobi–Bellman equation

−∂V/∂t(s, x) + H(x, −∂V/∂x(s, x)) = 0,  (9)

with the final condition V(T, x) = g(x). When V is not differentiable at (s, x), Eq. (9) has to be understood in the viscosity sense. Let N_{t0} ⊂ R^n be defined by

N_{t0} = { x0 : g(x_û(T; x0, t0)) < α, û is optimal for P(t0, T, x0) }  (10)

(i.e. the set of all initial points for which at least one of the optimal trajectories terminates in G_α).

Lemma 3. Assume that A1 and A2 hold true. Then V̂ is Lipschitz continuous, V̂(0) = 0, and V̂(x) > 0 if x ≠ 0. Moreover, for any t0 ∈ [0, T], N_{t0} is open, and there is an α0 > 0 such that

{ x : V̂(x) < α0 } ⊂ N0.  (11)

Proof. In the same way as in [3] it can be shown that, if assumptions A1 and A2 are valid, then the value function V is locally Lipschitz continuous. Because of its definition, V̂ is then clearly Lipschitz continuous, V̂(0) = 0 and V̂(x) ≥ 0. V(0, x) = 0 only if x_û(T; x, 0) = 0 and û(t) = 0 a.e. in [0, T]. But then x_û(t; x, 0) ≡ 0, therefore V̂(x) > 0 if x ≠ 0.

Consider the function G : [0, T] × R^n → R, G(t0, x0) = g(x_{û0}(T; x0, t0)), where û0 is any optimal control for


problem P(t0, T, x0). Similarly to the proof of the Lipschitz continuity of the value function, one can show that for any ε > 0 there is a δ > 0 such that from |t − t0| + ‖x − x0‖ < δ it follows that

|g(x_{û0}(T; x0, t0)) − g(x_û(T; x, t))| < ε,

where û is an arbitrary optimal control for problem P(t, T, x). This shows that G is well-defined and continuous, and that N_{t0} is open. Since V(0, x0) ≥ g(x_û(T; x0, 0)), (11) holds for some α0 > 0.

Theorem 4. Suppose that A1 and A2 are valid. Then A3 holds true if and only if for any t0 ∈ [0, T) and for any x0 ∈ N_{t0} there exists a δ > 0 such that for all t1, t2 with t0 ≤ t1 < t2 and |t0 − ti| < δ (i = 1, 2), the function V satisfies the inequality

V(t2, x0) − V(t1, x0) ≥ 0.  (12)

Moreover, if G_α = R^n, then Eq. (12) holds true for all t0 ≤ t1 < t2 ≤ T.

Proof. Sufficiency. Since V is Lipschitz continuous, it is almost everywhere differentiable. Let t0 ∈ (0, T) and let x0 ∈ N_{t0} be arbitrary. We have already seen that (t0, x0) has a neighbourhood B(t0, x0) such that for any (t, x) ∈ B(t0, x0), any optimal trajectory x̂ = x_û(·; t, x) for problem P(t, T, x) terminates in G_α. Since B(t0, x0) has positive measure, it contains points at which V is differentiable. Assume that (t, x) ∈ B(t0, x0) is a point of differentiability of V. We show first that

∂V/∂t(t, x) ≥ 0.

Let (x̂, û) be an optimal trajectory-control pair for problem P(t, T, x). Applying Proposition 2 for (t, x) instead of (0, x0), one concludes that the function H0 : [t, T] → R, H0(s) = H(x̂(s), p(s)), is constant, where p : [t, T] → R^n is the solution of the adjoint equations (5) and (6). Thus, H0(t) = H(x̂(T), p(T)) = H(x̂(T), −∇g(x̂(T))). Since x̂(T) ∈ G_α, assumption A3 gives that H0(t) ≥ 0. On the other hand, applying again Proposition 2 for (t, x) instead of (0, x0), by Eq. (8) we have

−p(s) ∈ D⁺_x V(s, x̂(s)),  ∀s ∈ [t, T].

But V is differentiable at (t, x) = (t, x̂(t)), thus D⁺_x V(t, x) = {∂V/∂x(t, x)}, therefore −p(t) = (∂V/∂x)(t, x). Hence Eq. (9) gives that 0 = −(∂V/∂t)(t, x) + H(x, p(t)), or

∂V/∂t(t, x) = H0(t) ≥ 0.

Let us now consider an arbitrary point (t̃, x̃) ∈ B(t0, x0). Next, we shall show that for any (p_t, p_x) ∈ ∂V(t̃, x̃), p_t ≥ 0 holds true. Here ∂V(t̃, x̃) denotes the generalized gradient of V at (t̃, x̃). Let D*V(t, x) denote the set of all cluster points of gradients ∇V(tn, xn), when (tn, xn) converge to (t, x) and V is differentiable at (tn, xn). It is known (see e.g. [2]) that ∂V(t, x) = co D*V(t, x). Let (t̃n, x̃n) converge to (t̃, x̃) while V is differentiable at (t̃n, x̃n). Then (t̃n, x̃n) ∈ B(t0, x0) at least if n is large enough, therefore (∂V/∂t)(t̃n, x̃n) ≥ 0. Hence for any (p_t, p_x) ∈ D*V(t̃, x̃) and, consequently, for any (p_t, p_x) ∈ ∂V(t̃, x̃), we have p_t ≥ 0.

Let t1 < t2 be such that (t1, x0), (t2, x0) ∈ B(t0, x0) and t0 ≤ t1. Then from the mean value theorem ([2], Theorem 2.3.7) we know that for some s ∈ (t1, t2)

V(t2, x0) − V(t1, x0) ∈ ⟨∂V(s, x0), (t2 − t1, 0)⟩ = { p_t (t2 − t1) : (p_t, p_x) ∈ ∂V(s, x0) }.

Since p_t ≥ 0 in this relation, we obtain that V(t2, x0) − V(t1, x0) ≥ 0.

Necessity. Assume that the conditions of the theorem are fulfilled but A3 does not hold: because of continuity, this means that there are an x̄ ∈ G_α and a δ1 > 0 so that B^n_{δ1}(x̄) ⊂ G_α and for all ξ ∈ B^n_{δ1}(x̄) we have H(ξ, −∇g(ξ)) < 0. Using assumption A2.1 one can prove that for small enough T − t̄0 and for x0 ∈ B^n_{δ1/2}(x̄),

x̂(t) := x_û(t; x0, t̄0) ∈ B^n_{δ1}(x̄),  t ∈ [t̄0, T],

therefore x0 ∈ N_{t̄0}. Let (x̂, û) be an optimal trajectory-control pair for problem P(t̄0, T, x0). By Proposition 2 (applied with the interval [0, T] replaced by [t̄0, T]),

(H(x̂(t1), p(t1)), −p(t1)) ∈ D⁺V(t1, x̂(t1))  a.e. t1 ∈ [t̄0, T],

where p is the corresponding solution of the adjoint equations (5) and (6). Consider a t1 where this inclusion is fulfilled. Because of Eq. (12),

∂₊V(t1, x̂(t1))(1, 0) = lim sup_{h→0+} (1/h)[V(t1 + h, x̂(t1)) − V(t1, x̂(t1))] ≥ 0,


where ∂₊V(t, x)(ν) denotes the upper Dini derivative of V at (t, x) in the direction ν ∈ R^{n+1}. Since

D⁺V(t, x) = { (p_t, p_x) ∈ R^{n+1} : ∂₊V(t, x)(ν) ≤ ⟨(p_t, p_x), ν⟩ for all ν ∈ R^{n+1} },

for all (p_t, p_x) ∈ D⁺V(t1, x̂(t1)) we have p_t ≥ 0. Therefore H(x̂(t1), p(t1)) = H0(t1) ≥ 0. Since H0(t1) = H0(T) = H(x̂(T), p(T)) = H(x̂(T), −∇g(x̂(T))), this contradicts the indirect assumption.

Applying this theorem to linear systems with a quadratic criterion, we get ([14], Theorem 3) (with time reversal) as a corollary.

Corollary 5. If f(x, u) = Ax + Bu, L(x, u) = x′Qx + u′Ru and g(x) = x′P0x with Q = Q′ > 0, R = R′ > 0, P0 = P0′ > 0, then the solution P(·) of the corresponding Riccati differential equation is monotonically nondecreasing if and only if P0BR⁻¹B′P0 − Q − A′P0 − P0A ≥ 0.

4. Asymptotic stability

To show asymptotic stability we need the following assumptions.

A4.1. The function L(·, 0) is of the form L(x, 0) = Q(h(x)), where Q(0) = 0, Q(z) > 0 if z ≠ 0, h : R^n → R^p is continuous and such that (a) h(0) = 0, h(x) ≠ 0 for all nonzero x satisfying f(x, 0) = 0; (b) the system x˙(t) = f(x(t), 0), y(t) = h(x(t)) is weakly observable (see e.g. [6]).

A4.2. The optimal control û(·; x, 0) for problem P(0, T, x) is continuous from the right at 0, and û(0; ·, 0) is continuous.

Let ∂₊V̂(x̂(t)) denote the upper right-hand Dini derivative of the function V̂ along an absolutely continuous solution x̂(·) of the differential equation (2). In [17] it has been shown that, V̂ being Lipschitz continuous,

∂₊V̂(x̂(t)) = lim sup_{τ→0+} (1/τ)[V̂(x̂(t) + τF(x̂(t))) − V̂(x̂(t))],  (13)

where the notation F(x) = f(x, v(x)) is used.

Lemma 6. Suppose that Assumptions A1–A4 are valid. Then, for any x̂(t) ∈ N0,

∂₊V̂(x̂(t)) ≤ −L(x̂(t), v(x̂(t))).  (14)

Proof. According to Eq. (13) we have

∂₊V̂(x̂(t)) = lim sup_{τ→0+} (1/τ)[V̂(x̂(t) + τF(x̂(t))) − V̂(x̂(t))]
≤ lim sup_{τ→0+} (1/τ)[V(τ, x_û(τ; x̂(t), 0)) − V(0, x̂(t))]
+ lim sup_{τ→0+} (1/τ)[V(0, x_û(τ; x̂(t), 0)) − V(τ, x_û(τ; x̂(t), 0))]
+ lim sup_{τ→0+} (1/τ)[V̂(x̂(t) + τF(x̂(t))) − V(0, x_û(τ; x̂(t), 0))],  (15)

where x_û(·; x̂(t), 0) is an optimal trajectory for problem P(0, T, x̂(t)).

Let us estimate the first term on the right-hand side of Eq. (15). Making use of the Principle of Optimality and the continuity of û(·; x̂(t), 0) at 0, it can be proved that

lim sup_{τ→0+} (1/τ)[V(τ, x_û(τ; x̂(t), 0)) − V(0, x̂(t))] = −L(x̂(t), v(x̂(t))).  (16)

Secondly, if τ is small enough, then x_û(τ; x̂(t), 0) ∈ N0, therefore from Theorem 4 it follows that

lim sup_{τ→0+} (1/τ)[V(0, x_û(τ; x̂(t), 0)) − V(τ, x_û(τ; x̂(t), 0))] ≤ 0.  (17)

Finally, because of the Lipschitz continuity of V,

lim sup_{τ→0+} (1/τ)[V(0, x̂(t) + τF(x̂(t))) − V(0, x_û(τ; x̂(t), 0))]
≤ lim sup_{τ→0+} L_V ‖(1/τ) ∫_0^τ [f(x_û(s; x̂(t), 0), û(s; x̂(t), 0)) − f(x̂(t), û(0; x̂(t), 0))] ds‖ = 0,  (18)

since 0 is a Lebesgue point of f(x_û(·; x̂(t), 0), û(·; x̂(t), 0)). Eq. (14) follows immediately from Eqs. (15)–(18).

Theorem 7. Suppose that Assumptions A1–A4 hold true. Then the solution x̂(t) ≡ 0 of Eq. (2) is locally


asymptotically stable with the region of attraction A ⊃ { x : V̂(x) < α0 }.

Proof. Let E = { x ∈ R^n : L(x, v(x)) = 0 }. From Assumption A1.2 it follows that v(x) = 0 if x ∈ E. By Assumption A4.1 one can prove that the only positive half-trajectory contained entirely in E is x(t) ≡ 0. Using Lemmas 3 and 6, the Barbashin–Krasowsky theorem (see e.g. [15], Theorem 2.1.2) modified for Lipschitz continuous Liapunov functions gives the result.

Remark 8. Observe that the proposed version of the receding horizon control reduces to the Fake Algebraic Riccati Technique of [14] if it is applied to linear time-invariant systems with a quadratic criterion. In this case the conditions that ensure the asymptotic stability of the closed-loop system are equivalent to those of [14]. Thus this method can be considered as a straightforward generalization of the FARE of [14] to nonlinear systems.

To show global asymptotic stability we need the following additional assumptions.

A5.1. The set N0 given by Eq. (10) with t0 = 0 is the whole space, i.e. N0 = R^n.

A5.2. There exist constants K > 0 and R > 0 such that ‖f(x, u)‖ ≤ K L(x, u) for every (x, u) with ‖x‖² + ‖u‖² ≥ R², x ∈ R^n, u ∈ U.

Lemma 9. Suppose that Assumptions A1–A5 are satisfied. Then V̂ is radially unbounded, i.e. V̂(x) → ∞ if ‖x‖ → ∞.

The proof of Lemma 9 is completely analogous to that of Lemma 3 in [4] (one has to observe that it is enough to obtain the estimate for any of the optimal trajectory-control pairs for problem P(0, T, x0)), therefore it is not given here.

Theorem 10. Suppose that Assumptions A1–A5 hold true. Then system (2) is globally asymptotically stable about the origin.

Proof. The proof is an immediate consequence of Lemmas 3, 6 and 9 and the Barbashin–Krasowsky theorem modified for Lipschitz continuous Liapunov functions ([15], Theorem 2.1.2, Corollary 2.1.4; see also [17], Theorem 14.4).
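Remark 1(b) describes how assumption A3 can be satisfied in the linear-quadratic case. The following sketch (scalar, with illustrative numbers and variable names of my own) checks this numerically: with L(x, u) = q x² + r u² and g(x) = c p x², where p is the positive root of the scalar algebraic Riccati equation and c ≥ 1, the Hamiltonian H(x, −g′(x)) is nonnegative, as A3 requires.

```python
import math

# Numerical check (scalar, illustrative data) of Remark 1(b): with
# L(x, u) = q*x^2 + r*u^2 and g(x) = c*p*x^2, where p solves the scalar
# algebraic Riccati equation 2*a*p + q - (b*p)**2/r = 0 and c >= 1,
# assumption A3 holds: H(x, -g'(x)) >= 0 for all x.

a, b, q, r, c = 1.0, 1.0, 2.0, 1.0, 1.5   # illustrative numbers

# positive root of the scalar ARE written as (b^2/r)*p^2 - 2*a*p - q = 0
p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)

def hamiltonian(x, lam):
    """H(x, lam) = sup_u { lam*(a*x + b*u) - q*x^2 - r*u^2 },
    attained at u = lam*b/(2*r)."""
    return lam * a * x + (lam * b) ** 2 / (4 * r) - q * x * x

# A3 requires H(x, -grad g(x)) >= 0; here grad g(x) = 2*c*p*x.
ok = all(hamiltonian(x, -2 * c * p * x) >= -1e-9
         for x in [-3.0, -0.7, 0.1, 2.5])
print(ok)   # prints: True
```

Expanding the supremum shows H(x, −2cpx) = (c − 1)(c s + q) x² with s = (bp)²/r, so the sign condition holds exactly when c ≥ 1, matching the remark.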

References

[1] P. Cannarsa, H. Frankowska, Some characterizations of optimal trajectories in control theory, SIAM J. Control Optim. 29 (1991) 1322–1347.
[2] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, New York, 1983.
[3] W.H. Fleming, R.W. Rishel, Deterministic and Stochastic Optimal Control, Springer, New York, 1975.
[4] E. Gyurkovics, Receding horizon control for the stabilization of nonlinear uncertain systems described by differential inclusions, J. Math. Systems Estimation Control 6 (1996).
[5] E. Gyurkovics, Receding horizon control for nonlinear systems via Bolza-type optimization, Proc. 4th European Control Conf. ’97, No. 566, Brussels, July 1997.
[6] R. Hermann, A.J. Krener, Nonlinear controllability and observability, IEEE Trans. Automat. Control 22 (1977) 728–740.
[7] D.L. Kleinman, An easy way to stabilize a linear constant system, IEEE Trans. Automat. Control 15 (1970) 692.
[8] W.H. Kwon, A.M. Bruckstein, T. Kailath, Stabilizing state-feedback design via the moving horizon method, Int. J. Control 37 (1983) 631–643.
[9] D.Q. Mayne, H. Michalska, Receding horizon control of nonlinear systems, IEEE Trans. Automat. Control 35 (1990) 814–824.
[10] H. Michalska, D.Q. Mayne, Receding horizon control of nonlinear systems without differentiability of the optimal value function, Systems Control Lett. 16 (1991) 123–130.
[11] H. Michalska, D.Q. Mayne, Robust receding horizon control of constrained nonlinear systems, IEEE Trans. Automat. Control 38 (1993) 1623–1633.
[12] E.S. Meadows, J.B. Rawlings, Receding horizon control with an infinite horizon, Proc. American Control Conf., San Francisco, CA, June 1993, pp. 2926–2930.
[13] T. Parisini, R. Zoppoli, A receding-horizon regulator for nonlinear systems and a neural approximation, Automatica 31 (1995) 1443–1451.
[14] M.-A. Poubelle, R.R. Bitmead, M.R. Gevers, Fake algebraic Riccati techniques and stability, IEEE Trans. Automat. Control 33 (1988) 379–381.
[15] N. Rouche, P. Habets, M. Laloy, Stability Theory by Liapunov's Direct Method, Springer, New York, 1977.
[16] E.D. Sontag, Mathematical Control Theory, Springer, New York, 1990.
[17] T. Yoshizawa, Stability Theory by Liapunov's Second Method, The Mathematical Society of Japan, 1966.