Adaptive Neural Network Tracking Control for a Class of MIMO Nonlinear Systems with Measurement Error

Artemis K. Kostarigka, George A. Rovithakis
Dept. of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece

Abstract— An adaptive neural network tracking controller is designed, capable of stabilizing multi-input multi-output nonlinear, affine-in-the-control dynamical systems with measurement error. Uniform ultimate boundedness of the tracking error of the actual system state is guaranteed, as well as boundedness of all other signals in the closed loop. Possible division by zero is avoided through a novel resetting procedure, capable of guaranteeing that certain signals remain bounded away from zero.

I. INTRODUCTION

Neural network theory has emerged rapidly during the last decade, challenging many scientists in the area of control, since it provides the means to manipulate highly nonlinear systems operating in volatile environments. After the first insight of Narendra and Parthasarathy [1], many scientific works have focused on the use of neural networks for controlling highly uncertain and possibly unknown nonlinear dynamical systems. The key idea is to exploit their universal approximation capabilities, substituting every unknown nonlinearity by a neural network model of known structure that contains a number of unknown parameters (synaptic weights) plus a modeling error term. The unknown synaptic weights may appear either linearly or nonlinearly, thus establishing the neural network framework as one of the most powerful control design tools. Representative works include [2]–[17].

In the case of full state measurement, the problem of controlling highly uncertain systems has found a satisfactory solution. Unfortunately, this is not the case when the state measurement is erroneous. The majority of existing techniques rely on complete knowledge of the system nonlinearities and on the absence of any modeling error terms. Only recently has the problem of adaptively stabilizing unknown nonlinear systems with measurement error been considered. Existing results are constrained to certain system classes such as linear [18], partially linear [19], feedback linearizable [20], or nonlinear systems represented by piecewise interpolation of several linear models [21]. Furthermore, in [13], the neural adaptive regulation problem under measurement noise for affine-in-the-control nonlinear dynamical systems was treated under a restrictive regressor decomposition assumption.

In this paper we generalize [13] by considering the tracking problem and by relaxing the regressor decomposition assumption. In this respect, we design an adaptive

neural network controller, capable of stabilizing multi-input multi-output nonlinear, affine-in-the-control dynamical systems with measurement error. More specifically, by making use of Lyapunov stability theory, uniform ultimate boundedness of the tracking error of the actual system state is guaranteed, as well as boundedness of all other signals in the closed loop. An important aspect of the proposed control scheme is that knowledge of an upper bound on the measurement error is not required. Thus, the proposed controller can be applied to a sufficiently general class of nonlinear systems (those that are affine in the control) admitting a wide class of output error signals. To avoid possible division by zero, the developed controller is equipped with a novel resetting procedure, capable of guaranteeing that certain signals remain bounded away from zero. Finally, algebraic relations involving the design constants and the measurement error bounds are provided, to guarantee the validity of the neural network approximation capabilities at all times.

The paper is organized as follows: In Section II we review basic definitions and preliminary results. Section III is devoted to designing an adaptive neural network tracking controller for nonlinear control systems with measurement error. Simulation results are provided in Section IV. Finally, we conclude in Section V.

II. DEFINITIONS AND PROBLEM STATEMENT

Consider the system

$$ \dot{x} = f(x,u), \quad x \in \mathbb{R}^n,\; u \in \mathbb{R}^m, \qquad y = h(x,u), \quad y \in \mathbb{R}^l \tag{1} $$

Definition 1: [22] The solutions of (1) are uniformly ultimately bounded (u.u.b.) (with bound $B$) if there exists a $B > 0$ and if, corresponding to any $a > 0$ and $t_0 \in \mathbb{R}^+$, there exists a $T = T(a) > 0$ (independent of $t_0$) such that $|x_0| < a$ implies $|x(t; t_0, x_0)| < B$ for all $t \ge t_0 + T$.
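As a simple illustration of Definition 1 (added here for clarity; the example is not part of the original text), consider the scalar system $\dot{x} = -x + d(t)$ with $|d(t)| \le \bar{d}$. By variation of constants,

$$ |x(t)| \le |x_0|\, e^{-(t - t_0)} + \bar{d}\left( 1 - e^{-(t - t_0)} \right), $$

so for any $a > 0$, every solution starting with $|x_0| < a$ enters the ball of radius $B = \bar{d} + \epsilon$ (any $\epsilon > 0$) after a finite time $T(a)$; hence the solutions are u.u.b. with bound $B$.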

A. Linear in the weights neural networks In the present work we shall be using linear in the weights neural networks of the form

$$ z^{\top} = w^{\top} S(v) \tag{2} $$

where $v \in \mathbb{R}^{n_2}$ and $z \in \mathbb{R}^{n_1}$ are the neural net input and output respectively, $w$ is an $L$-dimensional vector of


synaptic weights and $S(v)$ is an $L \times n_1$ matrix of regressor terms. The regressor terms may contain high-order connections of sigmoid functions [14], [15], radial basis functions (RBFs) with fixed centers and widths [23]–[25], or shifted sigmoids [26], [27], thus forming high-order neural networks (HONNs), RBF networks and shifted sigmoidal neural networks, respectively. Another class of linear-in-the-weights neural nets is the CMAC network, which mainly uses B-splines in $S(v)$ [28]. An important and well-known property shared among the aforementioned neural approximating structures is the following (see also the references above):

Density Property: For each continuous function $f : \mathbb{R}^{n_2} \to \mathbb{R}^{n_1}$ and for every $\varepsilon_0 > 0$, there exist an integer $L$ and an optimal synaptic weight vector $w^{*} \in \mathbb{R}^{L}$ such that $\sup_{v \in \Omega} \left| f(v) - w^{*\top} S(v) \right| \le \varepsilon_0$, where $\Omega \subset \mathbb{R}^{n_2}$ is a given compact set.

This property implies that if the number of regressor terms $L$ is sufficiently large, then there exist weight values $w^{*}$ such that $w^{*\top} S(v)$ can approximate $f(v)$ to any degree of accuracy on a compact set.
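To make the linear-in-the-weights structure (2) concrete, the following minimal Python sketch builds a HONN-style regressor from first- and second-order connections of sigmoids and evaluates $z^{\top} = w^{\top} S(v)$. The particular regressor terms, the dimensions and the function names (`honn_regressor`, `nn_output`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    # Standard logistic sigmoid used as the basic activation.
    return 1.0 / (1.0 + np.exp(-x))

def honn_regressor(v):
    """Example HONN regressor S(v) for a 2-dimensional input v.

    Rows are first- and second-order connections of sigmoids;
    this is an illustrative choice with L = 5 and n1 = 1.
    """
    s1, s2 = sigmoid(v[0]), sigmoid(v[1])
    return np.array([[s1],
                     [s2],
                     [s1 * s2],       # second-order cross term
                     [s1 ** 2],
                     [s2 ** 2]])      # shape (L, n1) = (5, 1)

def nn_output(w, v):
    # Linear-in-the-weights output z^T = w^T S(v), cf. (2).
    return w.T @ honn_regressor(v)

w = np.random.randn(5)                # L = 5 synaptic weights
z = nn_output(w, np.array([0.3, -1.2]))
print(z.shape)                        # (1,) -> scalar network output
```

The key point, reflected in the Density Property, is that the nonlinearity enters only through the fixed regressor $S(v)$; the tunable parameters $w$ appear linearly.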

B. Problem Statement

Let us consider the system:

$$ \dot{x}(t) = f(x) + g(x)u(t) \tag{3} $$
$$ y(t) = x(t) + \xi(t) \tag{4} $$

where $x(t) \in \mathbb{R}^n$ is the state, $u(t) \in \mathbb{R}^m$ is the control input, while $f(x)$, $g_i(x)$, $i = 1, 2, \ldots, m$, with $g(x) = \left[\, g_1(x) \;\; g_2(x) \;\; \ldots \;\; g_m(x) \,\right]$, are continuous, locally Lipschitz and unknown vector fields. The state measurement is corrupted by an unknown, bounded, time-varying and continuous measurement error signal $\xi(t) \in L_\infty$, thus making $x(t)$ unavailable for control.

Define the tracking error as $e = x - x_r$, where $x_r(t)$ is a bounded reference trajectory with $\dot{x}_r(t)$ also bounded. Differentiating $e(t)$ yields:

$$ \dot{e} = f(x) + g(x)u(t) - \dot{x}_r(t) \tag{5} $$

Assumption 1: For the error system (5), the solution can be forced to be uniformly ultimately bounded with respect to an arbitrarily small neighborhood of $e = 0$.

Owing to Assumption 1, there exists a control Lyapunov function $V(e) : \mathbb{R}^n \to \mathbb{R}^+$ satisfying

$$ a_1(|e|) \le V(e) \le a_2(|e|) \tag{6} $$

with $a_1, a_2 \in K_\infty$, and a control input $u \in \mathbb{R}^m$ such that

$$ \dot{V} = \frac{\partial V^{\top}}{\partial e}\left[ f(x) + g(x)u - \dot{x}_r(t) \right] \le 0, \quad \forall e \in A \tag{7} $$

where $A = \{ e \in \mathbb{R}^n : |e| > \varepsilon \}$ with $\varepsilon > 0$. From (3), (4) we obtain:

$$ \dot{V} = \frac{\partial V^{\top}}{\partial e}\left[ f(y - \xi) + g(y - \xi)u - \dot{x}_r(t) \right] = \underbrace{\frac{\partial V^{\top}}{\partial e}\left[ f(y - \xi) - \dot{x}_r(t) \right]}_{F(y,\xi,e,\dot{x}_r)} + \underbrace{\frac{\partial V^{\top}}{\partial e}\, g(y - \xi)}_{G(y,\xi,e)}\; u \tag{8} $$

Functions $F(y,\xi,e,\dot{x}_r)$ and $G(y,\xi,e)$ can be trivially decomposed as follows:

$$ F(y,\xi,e,\dot{x}_r) = F_1(y,\dot{x}_r) + F_2(y,\xi,e,\dot{x}_r) \tag{9} $$
$$ G(y,\xi,e) = G_1(y) + G_2(y,\xi,e) \tag{10} $$

where $F_1(y,\dot{x}_r)$ and $G_1(y)$ contain only measurable quantities. Employing the inequality

$$ \zeta_3(|e|) = \zeta_3(|y - \xi - x_r|) \le \zeta_{31}(|\xi|) + \zeta_{32}(|y - x_r|) \tag{11} $$

with $\zeta_{31}, \zeta_{32}$ $K_\infty$ functions, as well as the assumption (which will be proved in the sequel) that $u(t) \in L_\infty$, the components that also contain the unmeasured quantities $\xi, e$ can be manipulated as follows:

$$ \begin{aligned} F_2(y,\xi,e,\dot{x}_r) + G_2(y,\xi,e)u &\le |F_2(y,\xi,e,\dot{x}_r)| + |G_2(y,\xi,e)||u| \\ &\le \underbrace{|F_2(y,\xi,e,\dot{x}_r)| + |G_2(y,\xi,e)|\, u_b}_{FG(y,\xi,e,\dot{x}_r)} \\ &\le \zeta_1(|y|) + \zeta_2(|\xi|) + \zeta_3(|e|) + \zeta_4(|\dot{x}_r|) \\ &\le \zeta_1(|y|) + \zeta_4(|\dot{x}_r|) + \zeta_2(|\xi|) + \zeta_3(|y - \xi - x_r|) \\ &\le \zeta_1(|y|) + \zeta_4(|\dot{x}_r|) + \underbrace{\zeta_2(|\xi|) + \zeta_{31}(|\xi|)}_{\zeta_\xi(|\xi|)} + \zeta_{32}(|y - x_r|) \end{aligned} \tag{12} $$

with $u_b$ denoting the constant but unknown upper bound on $|u(t)|$ and $\zeta_i(\cdot)$, $i = 1, \ldots, 4$, $K_\infty$ functions. Using (9), (10) and (12), equation (8) becomes

$$ \dot{V} \le F_1(y,\dot{x}_r) + G_1(y)u + \zeta_1(|y|) + \zeta_4(|\dot{x}_r|) + \zeta_\xi(|\xi|) + \zeta_{32}(|y - x_r|) \tag{13} $$

Since $F_1(y,\dot{x}_r)$, $G_1(y)$, $\zeta_1(|y|)$, $\zeta_4(|\dot{x}_r|)$ and $\zeta_{32}(|y - x_r|)$ are considered unknown, the idea is to approximate

them by suitable neural approximators. More specifically, it turns out that there exist continuous functions $\omega_i(\cdot)$, $i = 1, \ldots, 5$ (denoting the approximation errors), and constant but unknown weight vectors $w_i$, $i = 1, \ldots, 5$, such that:

$$ F_1(y,\dot{x}_r) = w_1^{\top} S_1(y,\dot{x}_r) + \omega_1(y,\dot{x}_r) \tag{14} $$
$$ G_1(y) = w_2^{\top} S_2(y) + \omega_2(y) \tag{15} $$
$$ \tilde{\zeta}_1(|y|) = w_3^{\top} S_3(|y|) + \omega_3(|y|) \tag{16} $$
$$ \tilde{\zeta}_4(|\dot{x}_r|) = w_4^{\top} S_4(|\dot{x}_r|) + \omega_4(|\dot{x}_r|) \tag{17} $$
$$ \tilde{\zeta}_{32}(|y - x_r|) = w_5^{\top} S_5(|y - x_r|) + \omega_5(|y - x_r|) \tag{18} $$

where $\tilde{\zeta}_i(\cdot) = k\,\zeta_i(\cdot)$, $i = 1, 4, 32$, and $k > 0$ is a design constant introduced in Section III. From the Density Property it also follows that on a generic compact set $\Omega \subset \mathbb{R}^n$, the approximation errors can be suitably bounded as $|\omega_i(\cdot)| \le \bar{\omega}_i$, $i = 1, \ldots, 5$, where $\bar{\omega}_i$, $i = 1, \ldots, 5$, are unknown but small bounds.

III. MAIN RESULTS

Let us consider the Lyapunov function candidate

$$ L(e, \tilde{w}_1, \tilde{w}_2, \tilde{w}_3, \tilde{w}_4, \tilde{w}_5) = kV(e) + \sum_{i=1}^{5} \frac{1}{2}\, |\tilde{w}_i|^2 \tag{19} $$

where $V(e)$ is the robust Lyapunov function defined in Section II, $k > 0$ is a design constant and $\tilde{w}_i$, $i = 1, \ldots, 5$, are parameter errors defined as $\tilde{w}_i = \hat{w}_i - w_i$, $i = 1, \ldots, 5$, where $\hat{w}_i$, $i = 1, \ldots, 5$, are estimates of the unknown parameters $w_i$, $i = 1, \ldots, 5$. The following theorem summarizes the main results of this work.

Theorem 1: Consider the error system (5) and the state feedback control law:

$$ u(t) = -h(y,\hat{w}_2)^{\top}\, \frac{\hat{w}_1^{\top} S_1(y,\dot{x}_r) + \frac{1}{k}\hat{w}_3^{\top} S_3(|y|)}{|h(y,\hat{w}_2)|^2} - h(y,\hat{w}_2)^{\top}\, \frac{1}{k}\, \frac{\hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|)}{|h(y,\hat{w}_2)|^2} - h(y,\hat{w}_2)^{\top}\, \frac{1}{k}\, \frac{\gamma(\rho(|y - x_r|))}{|h(y,\hat{w}_2)|^2} \tag{20} $$

where $h(y,\hat{w}_2) = \hat{w}_2^{\top} S_2(y)$ and $\gamma(\cdot), \rho(\cdot) \in K, K_\infty$ respectively, and the update laws:

$$ \dot{\hat{w}}_1 = -k_1 \hat{w}_1 + k S_1(y,\dot{x}_r) \tag{21} $$
$$ \dot{\hat{w}}_2 = -k_2 \hat{w}_2 + k S_2(y) u(t) \tag{22} $$
$$ \dot{\hat{w}}_3 = -k_3 \hat{w}_3 + S_3(|y|) \tag{23} $$
$$ \dot{\hat{w}}_4 = -k_4 \hat{w}_4 + S_4(|\dot{x}_r|) \tag{24} $$
$$ \dot{\hat{w}}_5 = -k_5 \hat{w}_5 + S_5(|y - x_r|) \tag{25} $$

with design constants $k_i > 0$, $i = 1, \ldots, 5$, guarantee that:
1) $u(t)$ is a priori bounded ($u(t) \in L_\infty$), provided that $h(y,\hat{w}_2) \ne 0$, $\forall t \ge 0$;
2) the tracking error $e(t)$ is u.u.b.
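The following Python fragment sketches one evaluation of the control law (20) and of the right-hand sides of the update laws (21)–(25), for the single-input case ($m = 1$) used later in Section IV. The regressor callables, the list layout and the function name are placeholder assumptions made for illustration; `gamma` and `rho` are meant to be the class-$K$/$K_\infty$ pair of the theorem.

```python
import numpy as np

def control_and_update_rhs(y, xr, xr_dot, w_hat, S, k, ki, gamma, rho):
    """Evaluate the control law (20) and the update-law RHS (21)-(25).

    w_hat -- list [w1,...,w5] of current weight estimates (numpy arrays)
    S     -- list of regressor callables [S1(y, xr_dot), S2(y), S3(r),
             S4(r), S5(r)]; their internal structure is an assumption here
    """
    S1 = S[0](y, xr_dot)
    S2 = S[1](y)
    S3 = S[2](abs(y))
    S4 = S[3](abs(xr_dot))
    S5 = S[4](abs(y - xr))
    h = np.dot(w_hat[1], S2)            # h(y, w2_hat); must stay away from zero
    num = (np.dot(w_hat[0], S1)
           + (np.dot(w_hat[2], S3) + np.dot(w_hat[3], S4)
              + np.dot(w_hat[4], S5) + gamma(rho(abs(y - xr)))) / k)
    u = -h * num / h**2                 # control law (20), scalar form
    w_dot = [-ki[0]*w_hat[0] + k*S1,    # (21)
             -ki[1]*w_hat[1] + k*S2*u,  # (22)
             -ki[2]*w_hat[2] + S3,      # (23)
             -ki[3]*w_hat[3] + S4,      # (24)
             -ki[4]*w_hat[4] + S5]      # (25)
    return u, w_dot
```

In the MIMO case $h$ is a row vector and $h^{\top}/|h|^2$ replaces the scalar division by $h$; the structure of the computation is otherwise unchanged.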

Proof: 1) From (20) we obtain

$$ |u(t)| \le \frac{|\hat{w}_1||S_1(y,\dot{x}_r)|}{|h(y,\hat{w}_2)|} + \frac{1}{k}\frac{|\hat{w}_3||S_3(|y|)|}{|h(y,\hat{w}_2)|} + \frac{1}{k}\frac{|\hat{w}_4||S_4(|\dot{x}_r|)|}{|h(y,\hat{w}_2)|} + \frac{1}{k}\frac{|\hat{w}_5||S_5(|y - x_r|)|}{|h(y,\hat{w}_2)|} + \frac{1}{k}\frac{\gamma(\rho(|y - x_r|))}{|h(y,\hat{w}_2)|} $$

Notice that $|S_1(y,\dot{x}_r)|$, $|S_3(|y|)|$, $|S_4(|\dot{x}_r|)|$, $|S_5(|y - x_r|)|$ and $\gamma(\rho(|y - x_r|))$ are bounded by construction. The estimates $\hat{w}_i$, $i = 1, 3, 4, 5$, are generated by (26), (28)–(30) respectively, which are in a bounded-input bounded-output form; thus $\hat{w}_i$, $i = 1, 3, 4, 5$, are also bounded. Furthermore, $|h(y,\hat{w}_2)|$ is claimed bounded away from zero. Hence $u(t) \in L_\infty$. At a later stage we shall present a resetting mechanism imposed on $\hat{w}_2$ to guarantee its uniform boundedness as well as $|h(y,\hat{w}_2)| \ne 0$, $\forall t \ge 0$.

2) Differentiating (19) along the solutions of (3) and using (13) we have

$$ \dot{L} = k\dot{V} + \sum_{i=1}^{5} \tilde{w}_i^{\top} \dot{\tilde{w}}_i \le kF_1(y,\dot{x}_r) + kG_1(y)u + \tilde{\zeta}_1(|y|) + \tilde{\zeta}_4(|\dot{x}_r|) + \tilde{\zeta}_\xi(|\xi|) + \tilde{\zeta}_{32}(|y - x_r|) + \sum_{i=1}^{5} \tilde{w}_i^{\top} \dot{\tilde{w}}_i $$

where $\tilde{\zeta}_i(\cdot) = k\zeta_i(\cdot)$, $i = 1, 4, 32, \xi$. Employing the neural network approximators (14)–(18), $\dot{L}$ becomes:

$$ \dot{L} \le kw_1^{\top} S_1(y,\dot{x}_r) + kw_2^{\top} S_2(y)u + w_3^{\top} S_3(|y|) + w_4^{\top} S_4(|\dot{x}_r|) + w_5^{\top} S_5(|y - x_r|) + \tilde{\zeta}_\xi(|\xi|) + k\,\omega(y, x_r, \dot{x}_r, u) + \sum_{i=1}^{5} \tilde{w}_i^{\top} \dot{\tilde{w}}_i $$

where $k\,\omega(y, x_r, \dot{x}_r, u) = k\omega_1 + k\omega_2 u + \omega_3 + \omega_4 + \omega_5$. Owing to the Density Property and the boundedness of $u(t)$, we may straightforwardly conclude that $|\omega(y, x_r, \dot{x}_r, u)| \le \bar{\varepsilon}$, with $\bar{\varepsilon} > 0$ an unknown but small constant. After adding and subtracting the term

$$ k\hat{w}_1^{\top} S_1(y,\dot{x}_r) + k\hat{w}_2^{\top} S_2(y)u + \hat{w}_3^{\top} S_3(|y|) + \hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|) $$

we obtain:

$$ \begin{aligned} \dot{L} &\le k\hat{w}_1^{\top} S_1(y,\dot{x}_r) + k\hat{w}_2^{\top} S_2(y)u + \hat{w}_3^{\top} S_3(|y|) + \hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|) \\ &\quad - k\tilde{w}_1^{\top} S_1(y,\dot{x}_r) - k\tilde{w}_2^{\top} S_2(y)u - \tilde{w}_3^{\top} S_3(|y|) - \tilde{w}_4^{\top} S_4(|\dot{x}_r|) - \tilde{w}_5^{\top} S_5(|y - x_r|) \\ &\quad + \tilde{\zeta}_\xi(|\xi|) + k\bar{\varepsilon} + \sum_{i=1}^{5} \tilde{w}_i^{\top} \dot{\tilde{w}}_i \\ &\le k\hat{w}_1^{\top} S_1(y,\dot{x}_r) + k\hat{w}_2^{\top} S_2(y)u + \hat{w}_3^{\top} S_3(|y|) + \hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|) \\ &\quad + \tilde{w}_1^{\top}\left( \dot{\tilde{w}}_1 - kS_1(y,\dot{x}_r) \right) + \tilde{w}_2^{\top}\left( \dot{\tilde{w}}_2 - kS_2(y)u \right) + \tilde{w}_3^{\top}\left( \dot{\tilde{w}}_3 - S_3(|y|) \right) \\ &\quad + \tilde{w}_4^{\top}\left( \dot{\tilde{w}}_4 - S_4(|\dot{x}_r|) \right) + \tilde{w}_5^{\top}\left( \dot{\tilde{w}}_5 - S_5(|y - x_r|) \right) + \tilde{\zeta}_\xi(|\xi|) + k\bar{\varepsilon} \end{aligned} $$

Choosing the update laws

$$ \dot{\hat{w}}_1 = -k_1 \hat{w}_1 + kS_1(y,\dot{x}_r) \tag{26} $$
$$ \dot{\hat{w}}_2 = -k_2 \hat{w}_2 + kS_2(y)u(t) \tag{27} $$
$$ \dot{\hat{w}}_3 = -k_3 \hat{w}_3 + S_3(|y|) \tag{28} $$
$$ \dot{\hat{w}}_4 = -k_4 \hat{w}_4 + S_4(|\dot{x}_r|) \tag{29} $$
$$ \dot{\hat{w}}_5 = -k_5 \hat{w}_5 + S_5(|y - x_r|) \tag{30} $$

where $k_i > 0$, $i = 1, \ldots, 5$, are positive constants, we obtain:

$$ \dot{L} \le k\hat{w}_1^{\top} S_1(y,\dot{x}_r) + k\hat{w}_2^{\top} S_2(y)u + \hat{w}_3^{\top} S_3(|y|) + \hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|) - \sum_{i=1}^{5} k_i \tilde{w}_i^{\top} \hat{w}_i + \tilde{\zeta}_\xi(|\xi|) + k\bar{\varepsilon} \tag{31} $$

After using the identity $\tilde{w}^{\top}\hat{w} = \frac{1}{2}|\tilde{w}|^2 + \frac{1}{2}|\hat{w}|^2 - \frac{1}{2}|w|^2$ and dropping the negative terms $-\frac{k_i}{2}|\tilde{w}_i|^2$, $-\frac{k_i}{2}|\hat{w}_i|^2$, $i = 1, \ldots, 5$, $\dot{L}$ finally becomes

$$ \dot{L} \le k\hat{w}_1^{\top} S_1(y,\dot{x}_r) + k\hat{w}_2^{\top} S_2(y)u + \hat{w}_3^{\top} S_3(|y|) + \hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|) + \tilde{\zeta}_\xi(|\xi|) + \Lambda \tag{32} $$


with $\Lambda = k\bar{\varepsilon} + \sum_{i=1}^{5} \frac{k_i}{2}|w_i|^2$. Defining $h(y,\hat{w}_2) = \hat{w}_2^{\top} S_2(y)$ and considering the control law

$$ u(t) = -h(y,\hat{w}_2)^{\top}\, \frac{\hat{w}_1^{\top} S_1(y,\dot{x}_r) + \frac{1}{k}\hat{w}_3^{\top} S_3(|y|)}{|h(y,\hat{w}_2)|^2} - h(y,\hat{w}_2)^{\top}\, \frac{1}{k}\, \frac{\hat{w}_4^{\top} S_4(|\dot{x}_r|) + \hat{w}_5^{\top} S_5(|y - x_r|)}{|h(y,\hat{w}_2)|^2} - h(y,\hat{w}_2)^{\top}\, \frac{1}{k}\, \frac{\gamma(\rho(|y - x_r|))}{|h(y,\hat{w}_2)|^2} \tag{33} $$

where $\gamma(\cdot), \rho(\cdot)$ are $K, K_\infty$ functions respectively, (32) yields (note that $k\hat{w}_2^{\top} S_2(y)u = k\,h(y,\hat{w}_2)u = -k\hat{w}_1^{\top} S_1 - \hat{w}_3^{\top} S_3 - \hat{w}_4^{\top} S_4 - \hat{w}_5^{\top} S_5 - \gamma(\rho(|y - x_r|))$, since $h\,h^{\top}/|h|^2 = 1$):

$$ \dot{L} \le -\gamma(\rho(|y(t) - x_r(t)|)) + \tilde{\zeta}_\xi(|\xi|) + \Lambda \tag{34} $$

Taking advantage of the weak triangular inequality [29]

$$ \gamma(\alpha + \beta) \le \gamma(\rho(\alpha)) + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(\beta) \right), $$

which is valid for any functions $\gamma \in K$, $\rho \in K_\infty$ and any nonnegative real numbers $\alpha, \beta$, we get

$$ \gamma(|e|) = \gamma(|y - \xi - x_r|) \le \gamma(\rho(|y - x_r|)) + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(|\xi|) \right). $$

Hence, $-\gamma(\rho(|y - x_r|)) \le -\gamma(|e|) + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(|\xi|) \right)$ and (34) becomes

$$ \dot{L} \le -\gamma(|e|) + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(|\xi|) \right) + \underbrace{\tilde{\zeta}_\xi(|\xi|) + \Lambda}_{M} \tag{35} $$

If $|e(t)| \ge \rho \circ (\rho - Id)^{-1}(|\xi|) + \gamma^{-1}(M)$ then $\dot{L} \le 0$. Thus, the tracking error $e$ is uniformly ultimately bounded with respect to the set

$$ E = \left\{ e \in \mathbb{R}^n : |e| \le \rho \circ (\rho - Id)^{-1}(|\xi|) + \gamma^{-1}(M) \right\}. \tag{36} $$
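As a numerical illustration of the radius of $E$ in (36) (an assumption-laden sketch, not part of the paper), the following Python snippet evaluates $\rho \circ (\rho - Id)^{-1}(\bar{\xi}) + \gamma^{-1}(M)$ for the class-$K$/$K_\infty$ pair used later in Section IV, $\gamma(s) = 10(1 - e^{-10s})$ and $\rho(s) = \sinh(10s)$, a measurement-error bound $\bar{\xi}$ and a hypothetical value of $M$; $(\rho - Id)^{-1}$ is computed numerically by bisection.

```python
import numpy as np

gamma = lambda s: 10.0 * (1.0 - np.exp(-10.0 * s))   # class K (bounded by 10)
rho   = lambda s: np.sinh(10.0 * s)                  # class K-infinity

def gamma_inv(m):
    # Closed-form inverse of gamma, valid for 0 <= m < 10.
    return -np.log(1.0 - m / 10.0) / 10.0

def rho_minus_id_inv(b, lo=0.0, hi=1.0, iters=80):
    # Bisection inverse of s -> rho(s) - s (strictly increasing for s >= 0).
    while rho(hi) - hi < b:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rho(mid) - mid < b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

xi_bar = 0.01   # measurement-error bound used in Section IV
M = 0.5         # hypothetical value of M = tilde_zeta_xi(|xi|) + Lambda

radius = rho(rho_minus_id_inv(xi_bar)) + gamma_inv(M)
print(f"radius of the u.u.b. set E: {radius:.4f}")
```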

A. Resetting Procedure

In this subsection we modify the resetting procedure initially proposed in [16] to guarantee that $|h(y,\hat{w}_2)| = |\hat{w}_2^{\top} S_2(y)| \ne 0$, $\forall t \ge 0$. Initially, the value of $\hat{w}_2$ is chosen so that $|h(y,\hat{w}_2)| > \delta_b$, where $\delta_b$ is a user-defined constant. Let us assume that at time $t = t_r$, $|h(y,\hat{w}_2)| = \delta_b$. The following modification of the weights $\hat{w}_{2i}$ that correspond to a nonzero $S_2(y)$ coordinate is proposed, to ensure that $|h(y,\hat{w}_2)| > \delta_b$ at $t = t_r^+$:

$$ \hat{w}_{2i}(t_r^+) = \hat{w}_{2i}(t_r) + \mathrm{sgn}\{h_j(t_r)\}\, \mathrm{sgn}\{S_{2i}(y)\}\, \varphi(\tau) \tag{37} $$

where $S_{2i}(y)$ is the $S_2(y)$ coordinate that multiplies $\hat{w}_{2i}$, $h_j(t_r)$, $j \in \{1, 2, \ldots, m\}$, is the $j$-th element of the $h$ vector that includes $\hat{w}_{2i}$, and $\mathrm{sgn}\{\cdot\}$ is the sign function. Furthermore, $\varphi(\tau)$ is a positive, bounded function sharing the property $\lim_{\tau \to \infty} \varphi(\tau) = 0$, with $\tau$ a positive integer denoting the number of times the resetting procedure has been activated. To guarantee that $|h(t_r^+)| > |h(t_r)|$ it suffices to show that $|h_j(t_r^+)| > |h_j(t_r)|$. After performing the aforementioned resetting procedure, at $t = t_r^+$ we have:

$$ |h_j(t_r^+)| > |h_j(t_r)| + |S_{2i}(y)|\, \varphi(\tau) > |h_j(t_r)| \tag{38} $$

Clearly, (38) implies $|h(t_r^+)| > \delta_b$, thus guaranteeing $|h(y,\hat{w}_2)| \ne 0$, $\forall t \ge 0$. Then the control law (33) and the update laws (26)–(30) apply, and thus the stability properties stated in Theorem 1 are maintained $\forall t \in [t_1, t_2]$, where $[t_1, t_2]$ is any interval in $[0, \infty)$ for which $t_r \notin [t_1, t_2]$. To further guarantee that the resetting procedure will not drive $\hat{w}_{2i}$ to infinity owing to possibly infinite resettings, we argue that since $\hat{w}_{2i}(0) \in L_\infty$ and $\hat{w}_{2i}(t) \in L_\infty$, $\forall t \in [t_1, t_2] \subset [0, \infty)$, it suffices that $\sum_{\tau=1}^{\infty} \mathrm{sgn}\{S_{2i}(y)\, h(y,\hat{w}_2)\}\, \varphi(\tau)\, \varepsilon \in L_\infty$. However, this is true owing to $\varphi(\tau)$ being bounded $\forall \tau \ge 0$ and $\lim_{\tau \to +\infty} \varphi(\tau) = 0$.

Remark 1: The value of $\delta_b$ should be carefully selected, since it is strongly connected to the control signal magnitude. A small $\delta_b$ reduces the minimum achievable value of $|h(y,\hat{w}_2)|$, which in turn enlarges $u$.
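A minimal sketch of the resetting rule (37) for the scalar-input case ($m = 1$) is given below; the function name is illustrative, while the $\delta_b$ value and $\varphi(\tau) = 1/\tau^2$ mirror the choices of the simulation section.

```python
import numpy as np

def reset_w2(w2_hat, S2, h, tau, ):
    """Apply the resetting rule (37) when |h| has decayed to delta_b.

    w2_hat -- current weight-estimate vector (a modified copy is returned)
    S2     -- regressor vector S2(y) at the current measurement
    h      -- scalar h(y, w2_hat) = w2_hat^T S2(y) at the resetting instant
    tau    -- number of resettings performed so far (tau >= 1)
    """
    phi = 1.0 / tau**2                 # phi(tau) = 1/tau^2, as in Section IV
    w2_new = w2_hat.copy()
    for i, s in enumerate(S2):
        if s != 0.0:                   # only coordinates with S2i(y) != 0
            w2_new[i] += np.sign(h) * np.sign(s) * phi
    return w2_new

# Usage inside the integration loop, after computing h = w2_hat @ S2(y):
#   if abs(h) <= delta_b:
#       tau += 1
#       w2_hat = reset_w2(w2_hat, S2_val, h, tau)
```

Each reset shifts every active coordinate in the direction that increases $|h_j|$ by $|S_{2i}(y)|\varphi(\tau)$, which is exactly the mechanism behind (38); the decaying $\varphi$ keeps the cumulative weight modification bounded.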

B. Guaranteeing Approximation Capabilities

The results presented are valid as long as $e \in \Omega \subset \mathbb{R}^n$, where $\Omega$ is the compact set on which the approximation capabilities of the linear-in-the-weights neural networks hold. To verify that $e \in \Omega$, $\forall t \ge 0$, we observe that there exist functions $\gamma(\cdot), \rho(\cdot)$ as well as design constants $k, k_i$, $i = 1, \ldots, 5$, that guarantee $E \subset \Omega$ for a given $\Omega$. Hence we argue that if we start with an initial condition $e(0) \in E$, then, owing to the uniform ultimate boundedness of $e$, $e(t) \in E \subset \Omega$, $\forall t \ge 0$. If however $e(0) \notin E$, the following analysis holds. Considering again equation (35), but keeping the negative terms we dropped in (32), we get

$$ \dot{L} \le -\gamma(|e|) + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(|\xi|) \right) + \tilde{\zeta}_\xi(|\xi|) + \Lambda - \sum_{i=1}^{5} \frac{k_i}{2}\, |\tilde{w}_i|^2 \tag{39} $$

From the definitions of $\gamma(|e|)$ and $V(e)$ we argue that there exists a positive but unknown constant $c$ such that

$$ \gamma(|e|) \ge c\, k\, V(e), \quad \forall e \in \Omega \subset \mathbb{R}^n \tag{40} $$

Defining $\lambda = \min\{c, k_1, k_2, k_3, k_4, k_5\}$, (39) becomes

$$ \dot{L} \le -\lambda L + \gamma\!\left( \rho \circ (\rho - Id)^{-1}(|\xi|) \right) + \tilde{\zeta}_\xi(|\xi|) + \Lambda \le -\lambda L + \underbrace{\gamma\!\left( \rho \circ (\rho - Id)^{-1}(\bar{\xi}_b) \right) + \tilde{\zeta}_\xi(\bar{\xi}_b) + \Lambda}_{\tilde{M}} \tag{41} $$

where $\bar{\xi}_b$ represents an upper bound on the measurement error signal $\xi(t) \in L_\infty$, from which it is easy to conclude that

$$ L(t) \le \frac{\tilde{M}}{\lambda} + \left[ L_0 - \frac{\tilde{M}}{\lambda} \right] e^{-\lambda t} \tag{42} $$

where $\lambda$ is a positive constant. From (42), (6) and the definition of $L$ we obtain

$$ a_1(|e|) \le V(t) \le \frac{1}{k} L(t) \le \frac{1}{k}\frac{\tilde{M}}{\lambda} + \frac{1}{k}\left[ L_0 - \frac{\tilde{M}}{\lambda} \right] e^{-\lambda t} $$

Hence $|e(t)| \le a_1^{-1}\!\left( \frac{1}{k}\frac{\tilde{M}}{\lambda} + \frac{1}{k}\left[ L_0 - \frac{\tilde{M}}{\lambda} \right] e^{-\lambda t} \right)$, from which we straightforwardly conclude:

$$ |e(t)| \le \begin{cases} a_1^{-1}\!\left( \dfrac{L_0}{k} \right) & \text{if } L_0 \ge \dfrac{\tilde{M}}{\lambda} \\[2ex] a_1^{-1}\!\left( \dfrac{\tilde{M}}{\lambda k} \right) & \text{if } L_0 < \dfrac{\tilde{M}}{\lambda} \end{cases} \qquad \forall t \ge 0 \tag{43} $$

[Figure 1. State response along with the reference signal (solid line: state trajectory $x(t)$; dashed line: reference trajectory $x_r(t)$) for $k = 0.1$, $\rho(\star) = \sinh(10(\star))$ and $\gamma(\star) = 10(1 - e^{-10(\star)})$. Axes: $x(t)$, $x_r(t)$ versus time (sec).]

[Figure 2. Tracking error $e(t)$ for $k = 0.1$, $\rho(\star) = \sinh(10(\star))$ and $\gamma(\star) = 10(1 - e^{-10(\star)})$. Axes: $e(t)$ versus time (sec).]

[Figure 3. Error norm $\|e(t)\|_2$ for $k = 0.1$ and $\gamma(\star) = 6(1 - e^{-6(\star)})$ (solid line: $\rho(\star) = \sinh(0.8(\star))$; also shown: $\rho(\star) = \sinh(\star)$ and $\rho(\star) = \sinh(10(\star))$). Axes: $\|e(t)\|_2$ versus time (sec).]

where $a_1^{-1}(\cdot)$ is a $K_\infty$ function, since $a_1(\cdot)$ is a $K_\infty$ function, and $L_0 = kV(0) + \sum_{i=1}^{5} \frac{1}{2}|\tilde{w}_i(0)|^2$. Equation (43) clearly shows that there exists a $k$ for which $E_1 = \left\{ e \in \mathbb{R}^n : |e(t)| \le a_1^{-1}\!\left( \frac{L_0}{k} \right),\ L_0 \ge \frac{\tilde{M}}{\lambda} \right\} \subset \Omega$. Moreover, for the set $E_2 = \left\{ e \in \mathbb{R}^n : |e(t)| \le a_1^{-1}\!\left( \frac{\tilde{M}}{\lambda k} \right),\ L_0 < \frac{\tilde{M}}{\lambda} \right\}$, considering that $\tilde{M} = \gamma\!\left( \rho \circ (\rho - Id)^{-1}(\bar{\xi}_b) \right) + \tilde{\zeta}_\xi(\bar{\xi}_b) + \Lambda$, there exists a pair of functions $(\gamma, \rho)$ as well as constants $k, k_i$, $i = 1, \ldots, 5$, guaranteeing $E_2 \subset \Omega$ for given $\Omega$ and for every bounded measurement error signal $|\xi(t)| \le \bar{\xi}_b$. Thus, in any case, $e(t) \in \Omega$, $\forall t \ge 0$.

IV. SIMULATION RESULTS

Let us consider the scalar system $\dot{x} = 2x - x^3 + x^2 u$, with the task of tracking the bounded reference trajectory $x_r(t) = 1 + 0.5\sin(4t)$ while subjected to the time-varying, continuous and bounded measurement error $\xi(t) = 0.01\sin(100t)$.

To implement the control and update laws derived in Section III we have used the following matrices of regressor terms:

$$ S_1(y,\dot{x}_r) = \left[\, s_a(y) \;\; s_b(\dot{x}_r) \;\; s_a(y)^3 \;\; s_b(\dot{x}_r)^3 \,\right]^{\top}, \quad S_2(y) = \left[\, s_a(y) \;\; s_b(y) \,\right]^{\top}, $$
$$ S_3(|y|) = s_3(|y|), \quad S_4(|\dot{x}_r|) = s_4(|\dot{x}_r|), \quad S_5(|y - x_r|) = s_5(|y - x_r|), $$

and sigmoid functions of the form:

$$ s_a(\star) = \frac{2}{1 + \exp\left( -2((\star) + 0.01) \right)} - 1, \qquad s_b(\star) = \frac{3.8}{1 + \exp\left( -0.36((\star) + 0.001) \right)} - 1.9, $$
$$ s_3(\star) = \frac{4}{1 + \exp(-0.1(\star))} - 2, \qquad s_4(\star) = \frac{14}{1 + \exp(-0.1(\star))} - 7.1, \qquad s_5(\star) = \frac{8}{1 + \exp(-0.1(\star))} - 3.7. $$

The parameters $k, k_i$, $i = 1, \ldots, 5$, have been selected as $k = 0.1$, $k_i = 0.001$, $i = 1, \ldots, 5$, while $\gamma(\star) = 10(1 - e^{-10(\star)})$ and $\rho(\star) = \sinh(10(\star))$. The initial state condition was set to $x_0 = 0.2$, while the neural network weights were initialized as follows: $w_{10} = \left[\, -1 \;\; -1 \;\; -1 \;\; 1 \,\right]^{\top}$, $w_{20} = \left[\, -10 \;\; -10 \,\right]^{\top}$, $w_{30} = -1$, $w_{40} = -1$, $w_{50} = -1$. Finally, concerning the resetting procedure, the following constants have been used: $\delta_b = 0.001$ and $\varepsilon = 1$, with $\varphi(\tau) = 1/\tau^2$.

[Figure 4. Error norm $\|e(t)\|_2$ for $k = 0.1$ and $\rho(\star) = \sinh(10(\star))$ (solid line: $\gamma(\star) = 6(1 - e^{-6(\star)})$; also shown: $\gamma(\star) = 8(1 - e^{-8(\star)})$ and $\gamma(\star) = 10(1 - e^{-10(\star)})$). Axes: $\|e(t)\|_2$ versus time (sec).]

[Figure 5. Error norm $\|e(t)\|_2$ for $\gamma(\star) = 10(1 - e^{-10(\star)})$ and $\rho(\star) = \sinh(10(\star))$ (solid line: $k = 0.1$; also shown: $k = 0.05$, $k = 0.01$ and $k = 0.001$). Axes: $\|e(t)\|_2$ versus time (sec).]
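The complete closed loop of this section can be reproduced with a short script. The sketch below (forward-Euler integration, the reconstructed regressors and gains above, and the plant, reference and noise of this section) is a best-effort illustration under those stated assumptions, not the authors' exact simulation code.

```python
import numpy as np

# Plant, reference trajectory and measurement error from Section IV.
f = lambda x: 2*x - x**3
g = lambda x: x**2
xr = lambda t: 1 + 0.5*np.sin(4*t)
xr_dot = lambda t: 2*np.cos(4*t)
xi = lambda t: 0.01*np.sin(100*t)

gamma = lambda s: 10*(1 - np.exp(-10*s))
rho   = lambda s: np.sinh(10*s)
sig = lambda s, a, b, c, d: a/(1 + np.exp(-b*(s + c))) - d  # generic sigmoid

k, ki, delta_b = 0.1, 0.001, 0.001
x, tau = 0.2, 0
w1 = np.array([-1., -1., -1., 1.])
w2 = np.array([-10., -10.])
w3 = w4 = w5 = -1.0
dt, T = 1e-4, 3.0

for n in range(int(T/dt)):
    t = n*dt
    y = x + xi(t)
    sa_y = sig(y, 2, 2, 0.01, 1)
    sb_y = sig(y, 3.8, 0.36, 0.001, 1.9)
    sb_xd = sig(xr_dot(t), 3.8, 0.36, 0.001, 1.9)
    S1 = np.array([sa_y, sb_xd, sa_y**3, sb_xd**3])
    S2 = np.array([sa_y, sb_y])
    S3 = sig(abs(y), 4, 0.1, 0, 2)
    S4 = sig(abs(xr_dot(t)), 14, 0.1, 0, 7.1)
    S5 = sig(abs(y - xr(t)), 8, 0.1, 0, 3.7)
    h = w2 @ S2
    if abs(h) <= delta_b:                  # resetting rule (37)
        tau += 1
        w2 = w2 + np.sign(h)*np.sign(S2)/tau**2
        h = w2 @ S2
    u = -h*(w1 @ S1 + (w3*S3 + w4*S4 + w5*S5
            + gamma(rho(abs(y - xr(t)))))/k)/h**2   # control law (33)
    # Euler step of the plant and of the update laws (26)-(30).
    x  += dt*(f(x) + g(x)*u)
    w1 += dt*(-ki*w1 + k*S1)
    w2 += dt*(-ki*w2 + k*S2*u)
    w3 += dt*(-ki*w3 + S3)
    w4 += dt*(-ki*w4 + S4)
    w5 += dt*(-ki*w5 + S5)

print(f"final tracking error: {x - xr(T):+.4f}")
```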

The aforementioned HONN structure and design parameters were selected through a trial-and-error procedure; Theorem 1 provides guidelines for a qualitative selection. The performance of the proposed control scheme is demonstrated in Figure 1, where the state trajectory is plotted (solid line) along with the reference signal $x_r(t)$ (dashed line). Figure 2 shows the corresponding tracking error, which clearly decays to a neighborhood of zero exponentially fast. In Figures 3, 4 and 5 the $L_2$ error norm is plotted, clearly demonstrating that improved performance may be obtained by an appropriate choice of $\gamma(\star)$ and $\rho(\star)$, as well as by decreasing $k$.

V. CONCLUSIONS

We have presented an adaptive neural network controller for a sufficiently wide class of multi-input multi-output nonlinear systems, which achieves tracking performance in the presence of continuous and bounded measurement errors. The proposed adaptive scheme is based on the estimation of certain signals that constitute the derivative of an unknown Lyapunov function, through the use of linear-in-the-weights neural networks acting as universal approximators. Validity of the control input is ensured by a resetting scheme that prohibits possible division by zero. Uniform ultimate boundedness of the tracking error of the actual state is guaranteed, as well as boundedness of all other signals in the closed loop.

REFERENCES

[1] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, pp. 4–27, 1990.
[2] R. M. Sanner and J. J. Slotine, "Gaussian networks for direct adaptive control," IEEE Transactions on Neural Networks, vol. 3, pp. 837–863, 1992.
[3] F. C. Chen and C. C. Liu, "Adaptively controlling nonlinear continuous-time systems using multilayer neural networks," IEEE Transactions on Automatic Control, vol. 39, pp. 1306–1310, 1994.

[4] F. C. Chen and H. K. Khalil, "Adaptive control of a class of nonlinear discrete-time systems using neural networks," IEEE Transactions on Automatic Control, vol. 40, pp. 791–801, 1995.
[5] N. Sadegh, "A perceptron network for functional identification and control of nonlinear systems," IEEE Transactions on Neural Networks, vol. 4, pp. 982–988, 1993.
[6] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, pp. 703–715, 1995.
[7] S. Jagannathan and F. L. Lewis, "Multilayer discrete-time neural net controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 7, pp. 107–130, 1996.
[8] S. S. Ge and C. C. Hang, "Direct adaptive neural network control of robots," International Journal of Systems Science, vol. 27, pp. 533–542, 1996.
[9] C. C. Liu and F. C. Chen, "Adaptive control of nonlinear continuous-time systems using neural networks: General relative degree and MIMO cases," International Journal of Control, vol. 58, pp. 311–335, 1993.
[10] A. Yesildirek and F. L. Lewis, "Feedback linearization using neural networks," Automatica, vol. 31, pp. 1659–1664, 1995.
[11] M. M. Polycarpou and P. A. Ioannou, "Modelling, identification and stable adaptive control of continuous-time nonlinear dynamical systems using neural networks," Proceedings of ACC'92, pp. 36–40, 1992.
[12] M. M. Polycarpou, "Stable adaptive neural control scheme for nonlinear systems," IEEE Transactions on Automatic Control, vol. 41, pp. 447–451, 1996.
[13] G. A. Rovithakis, "Robust neural adaptive stabilization of unknown systems with measurement noise," IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 29, pp. 453–458, 1999.
[14] G. A. Rovithakis and M. A. Christodoulou, Adaptive Control with Recurrent High-Order Neural Networks. New York: Springer-Verlag, 2000.
[15] G. A. Rovithakis, "Performance of a neural adaptive tracking controller for multi-input nonlinear dynamical systems in the presence of additive and multiplicative external disturbances," IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 30, pp. 720–730, 2000.
[16] G. A. Rovithakis, "Stable adaptive neuro control design via Lyapunov function derivative estimation," Automatica, vol. 37, pp. 1213–1221, 2001.
[17] G. A. Rovithakis, "Robust redesign of a neural network controller in the presence of unmodeled dynamics," IEEE Transactions on Neural Networks, vol. 15, pp. 1482–1490, 2004.
[18] D. F. Chichka and J. L. Speyer, "An adaptive controller based on disturbance attenuation," IEEE Transactions on Automatic Control, vol. 40, pp. 1220–1233, 1995.
[19] J. Suykens, J. Vandewalle, and B. D. Moor, "Lur'e systems with multilayer perceptron and recurrent neural networks: Absolute stability and dissipativity," IEEE Transactions on Automatic Control, vol. 44, pp. 770–774, 1999.
[20] C.-J. Chien and C.-Y. Yao, "Iterative learning of model reference adaptive controller for uncertain nonlinear systems with only output measurement," Automatica, vol. 40, pp. 855–864, 2004.
[21] C.-S. Tseng, B.-S. Chen, and H.-J. Uang, "Fuzzy tracking control design for nonlinear dynamic systems via T-S fuzzy model," IEEE Transactions on Fuzzy Systems, vol. 9(3), pp. 381–392, 2001.
[22] P. A. Ioannou and J. Sun, Robust Adaptive Control. New Jersey, USA: Prentice Hall, 1996.
[23] M. M. Gupta and D. H. Rao, Neuro-Control Systems: Theory and Applications. New York: IEEE, 1994.
[24] D. A. White and D. A. Sofge, Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches. New York: IEEE, 1994.
[25] T. Poggio and F. Girosi, "Regularization algorithms for learning that are equivalent to multilayer networks," Science, vol. 247, pp. 978–982, 1990.
[26] N. E. Cotter, "The Stone-Weierstrass theorem and its applications to neural networks," IEEE Transactions on Neural Networks, vol. 1, pp. 290–295, 1990.
[27] G. Cybenko, "Approximation by superpositions of a sigmoidal function," Mathematics of Control, Signals and Systems, vol. 2, pp. 303–314, 1989.
[28] S. H. Lane, D. A. Handelman, and J. J. Gelfand, "Theory and development of higher-order CMAC neural networks," IEEE Control Systems Magazine, vol. 12(2), pp. 23–30, 1992.
[29] Z.-P. Jiang, A. Teel, and L. Praly, "Small-gain theorem for ISS systems and applications," Mathematics of Control, Signals and Systems, vol. 7, pp. 95–120, 1994.