A New Adaptive Control Algorithm for Systems with Multilinear Parametrization

Mariana Netto, Anuradha Annaswamy, Saïd Mammar and Sébastien Glaser

Abstract— Adaptive control of nonlinearly parametrized (NLP) systems is a largely open field, where few results have been proposed up to now. In this paper, we propose a new adaptive control algorithm for systems with multilinear parametrization, which belongs to the class of nonlinear parametrizations. The proposed controller is a non-certainty-equivalence one in which only the original parameters are adapted, without the need for overparametrization. An important feature of the proposed approach is that its convergence properties are not based on projections inside the hypercube in which the parameters are known to lie. Simulations show the efficacy of the approach and highlight this fact: convergence holds even when the adapted parameters do not belong to the hypercube in which the true parameters are known to be.

I. INTRODUCTION

An assumption that has been systematically adopted in adaptive control design is linearity in the parameters [1], [2]. That is, it is assumed that the unknown parameters enter linearly in the dynamic equations describing the plant. However, it is well known that many practical examples have models that are nonlinearly parametrized. Among the systems with nonlinear parametrizations, we can cite a large number of processes from the chemical industry and biotechnology, for instance distillation columns, chemical reactors, separation processes and bioreactors [3]. Nonlinearly parametrized systems also appear when a visual servoing system is used to control the motion of a robot [4], [5], [6], [7], [8], [9] and in friction compensation problems [10], [11], [12], [13]. Nonlinear parametrizations appear even in very simple RLC circuits and are useful in many settings; for instance, they can help avoid unstable pole-zero cancellation [14]. For linear systems, a standard procedure to deal with a nonlinear parametrization is to overparametrize the system in order to obtain a linear parametrization, but this can degrade robustness because of the search in a larger parameter space. The inability to efficiently incorporate prior knowledge in a restricted parameter estimation is another drawback.

M. Netto and S. Glaser are with LCPC/INRETS - LIVIC, Laboratoire sur les Interactions Véhicule-Infrastructure-Conducteur, 14, Route de la Minière, Bât 824, 78000 Versailles, France ((netto,glaser)@lcpc.fr). A. Annaswamy is with the Adaptive Control Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. ([email protected]). S. Mammar is with Université d'Evry Val d'Essonne - LSC/CNRS-FRE 2494, 40 rue du Pelvoux CE1455, 91025 Evry Cedex, France ([email protected]).

Furthermore, when the system is nonlinear,

linearization via overparametrization is possible only in some very special cases. In spite of the fact that the great majority of results in adaptive systems have been obtained for linear parametrizations (LP), in recent years some extensions to nonlinear cases have appeared; in particular, there are interesting developments for dynamical systems with convex parametrizations [15], [16], [17] and [18]. In [19], motivated by our ability to handle the convexly parametrized case, we asked when a nonlinearly (non-convexly) parametrized system can be reparametrized, without overparametrization, so as to obtain a convex parametrization. We provided an answer to this question for nonlinear scalar plants whose drift vector field belongs to a class similar to the one considered in [3], which contains some examples of practical interest. This convexification result was then extended to a more general class of nonlinear parametrizations and applied to the control of a fed-batch fermentation process [20]. While [16] deals with concave and convex parametrizations, [15] deals with general nonlinear parametrizations that occur in an additive manner. [21] includes applications of [16] to magnetic bearings and chemical reactors. [22] treats parameter convergence in NLP systems. Important results concerning the adaptive control of systems with NLP parametrizations can also be found in [23]. In [24], by introducing a novel kind of Lyapunov function, a direct adaptive controller is developed to achieve asymptotic tracking. Also, in [25], an interesting work involving a nonlinearly parametrized controller (with a linearly parametrized plant), not based on certainty equivalence and with a non-separable Lyapunov function, is presented. It is also important to cite the work based on interval analysis, which can be applied to nonlinearly parametrized systems [26].
In this work we present a new algorithm for the adaptive control of systems with multilinear parametrizations, which belong to the class of nonlinear parametrizations (see [20]). This class is given by

$$ \dot X_p(t) = A_p X_p(t) + b\Big[\sum_{i=1}^{p} \theta_1^{n_{1i}}\,\theta_2^{n_{2i}}\,\phi_i(X_p(t)) + u(t)\Big] \qquad (1) $$

where θ1 , θ2 are scalar unknown parameters, and the goal is to design an adaptive controller where the number of parameter estimates does not depend on p. Such multilinear parameterizations occur in many applications. For example, in [27] it has been shown that the physical component values of RLC electrical circuits or their mechanical analogs

appear in the numerator and denominator polynomials of the system transfer function in a multilinear way. Multilinear parametrizations also appear in aircraft control, in robotics problems, and in the control of switched reluctance (SR) motors. In [28], an important work on the identification of parameters appearing in a multilinear way in transfer functions was presented (see also [29], [30] and [31]); however, more estimates than just those of the unknown parameters are needed. We note that for the class of systems in (1), an adaptive controller can be designed using principles of LP systems. However, this controller will necessarily contain p parameter estimates and hence is overparametrized if p > 2. Our goal in this paper is to treat this system as multilinear in θ1 and θ2 and derive a controller that contains only two parameter estimates. The approach proposed in this paper begins with a reparametrization (similar to that in [32]) of the multilinear parametrization to obtain a convex parametrization. After this first step, we construct a concave function that lies above the convex one on the interval of interest. The new idea is that this allows us to combine the concavity and convexity inequalities, and consequently makes it possible to deal with the region of the state space where the gradient search does not go in the right direction. When we are in the "nice" region, we use a certainty-equivalence-based control with the gradient law for the adaptation. When we leave this region, we abandon certainty equivalence and inject instead a control based on the concave function, with the gradient of this function in the adaptation law. The idea of constructing a concave function that lies above another function was used in [15] for the construction of solutions to a min-max optimization problem, also in the context of the adaptive control of nonlinearly parametrized systems. The paper is organized as follows.
In Section II we state the problem and present our main result, that is, the new algorithm for the adaptive control of systems with multilinear parametrization. In Section III some simulations that confirm the efficacy of the approach are presented. We wrap up the paper in Section IV with the conclusions.

II. A NEW ADAPTIVE CONTROLLER FOR SYSTEMS WITH MULTILINEAR PARAMETRIZATION

A. Problem statement

We consider the class of plants with the multilinear parametrization (1), where Xp ∈ IR^n is the plant state, u(t) ∈ IR is the control input, θ1, θ2 are scalar unknown parameters, Ap ∈ IR^{n×n} is unknown, and b ∈ IR^n is known with (Ap, b) controllable; the φi are nonlinear functions of Xp. Also, it is assumed that kj < θj < kuj, j = 1, 2, with sign(kj) = sign(kuj), |kj| > 0, |kuj| > 0, and n1i ≥ 0, n2i ≥ 0. Our control goal is to find an input u such that the closed-loop system has globally bounded solutions and such that Xp

tracks the state Xm of an asymptotically stable reference model specified as Ẋm = Am Xm + b r, with det(sI − Am) = (s + k)R(s), k > 0, r bounded, and with Ap + bβ^T = Am for some unknown vector β ∈ IR^n. As highlighted above, we want to achieve this goal by adapting only the original parameters, thereby avoiding an overparametrization of the system.

B. Proposed Adaptive Controller

We now present our adaptive control algorithm for the class of nonlinearly parametrized plants in (1). We will first convexify this parametrization; then, in order to construct the control and the adaptive laws, we make use of concave functions that lie above these convex functions, making it possible to mix the concavity and convexity inequalities. We begin by presenting a definition and two Lemmas that are instrumental for the proof.

Definition 2.1: A function s(θ) : IR → IR is said to be logarithmically convex, or log-convex, if log s is convex.

Lemma 2.1: Consider S(θ, x) = s1(θ, x) s2(θ, x) · · · sn(θ, x), si : IR × IR^n → IR, where the si(θ, x) are log-convex with respect to θ, ∀x. Then S(θ, x) is convex with respect to θ, ∀x.

Proof: Log-convexity is closed under multiplication [33]; hence S(θ, x) is also log-convex. Since a log-convex function is also convex [33], S(θ, x) is convex.

Lemma 2.2: Define γ = (γ1, γ2)^T ∈ IR². Then

L1) f(γ) = e^{γ1 n1} e^{γ2 n2}, with n1, n2 ∈ IR, is convex with respect to γ.

L2) g(γ) = (a γ1^{1/2} + b)(c γ2^{1/2} + d), where a, b, c, d ∈ IR≥0,¹ is concave with respect to γ, for γi ≥ 0, i = 1, 2.

Proof: L1) The e^{γi ni} are log-convex functions and, from Lemma 2.1, the multiplication of log-convex functions is convex. Then f(γ) = e^{γ1 n1} e^{γ2 n2} is convex.

L2) g(γ) = ac γ1^{1/2} γ2^{1/2} + ad γ1^{1/2} + bc γ2^{1/2} + bd. By evaluating the Hessian, one can show that γ1^{p1} γ2^{p2}, with γi ≥ 0, p1 + p2 ≤ 1 and pi ≥ 0, is concave; also, γi^{1/2}, with γi ≥ 0, is concave. Since a sum of concave functions is also concave, the proof is complete.
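The two lemmas can be spot-checked numerically. The sketch below is an illustration only (it is not part of the paper's development): it evaluates the analytic Hessians of f and g from Lemma 2.2 at random points of the positive orthant and verifies that f is convex (Hessian positive semidefinite) and g is concave (Hessian negative semidefinite).

```python
import numpy as np

def hess_f(g1, g2, n1, n2):
    # Analytic Hessian of f(gamma) = exp(n1*g1) * exp(n2*g2)  (Lemma 2.2, L1)
    f = np.exp(n1 * g1 + n2 * g2)
    return f * np.array([[n1 * n1, n1 * n2], [n1 * n2, n2 * n2]])

def hess_g(g1, g2, a, b, c, d):
    # Analytic Hessian of g(gamma) = (a*g1^(1/2)+b)*(c*g2^(1/2)+d)  (Lemma 2.2, L2)
    h11 = -0.25 * a * (c * np.sqrt(g2) + d) * g1 ** -1.5
    h22 = -0.25 * c * (a * np.sqrt(g1) + b) * g2 ** -1.5
    h12 = 0.25 * a * c / np.sqrt(g1 * g2)
    return np.array([[h11, h12], [h12, h22]])

rng = np.random.default_rng(0)
for _ in range(1000):
    g1, g2 = rng.uniform(0.01, 5.0, size=2)     # gamma in the positive orthant
    n1, n2 = rng.uniform(-3.0, 3.0, size=2)     # n1, n2 may be any reals
    a, b, c, d = rng.uniform(0.0, 3.0, size=4)  # a, b, c, d nonnegative
    Hf = hess_f(g1, g2, n1, n2)
    Hg = hess_g(g1, g2, a, b, c, d)
    tol_f = 1e-9 * (1.0 + np.abs(Hf).max())
    tol_g = 1e-9 * (1.0 + np.abs(Hg).max())
    assert np.linalg.eigvalsh(Hf).min() >= -tol_f  # f convex: Hessian PSD
    assert np.linalg.eigvalsh(Hg).max() <= tol_g   # g concave: Hessian NSD
print("Lemma 2.2 spot-check passed")
```

The rank-one Hessian of f makes its positive semidefiniteness immediate; for g, the diagonal entries are nonpositive and the determinant is nonnegative whenever a, b, c, d ≥ 0, which is exactly the condition stated in L2.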
We now present our multilinear adaptive controller for the system in (1). The control input is chosen as

$$ u = -\sum_{i=1}^{p} S_i(\hat\gamma,\bar\phi_i,e_c)\,\bar\phi_i(X_p) + \hat\beta^T X_p + r \qquad (2) $$

where

$$ S_i(\hat\gamma,\bar\phi_i,e_c) = \begin{cases} f_i(\hat\gamma), & \text{if } \bar\phi_i(X_p)\,e_c \le 0 \\ g_i(\hat\gamma), & \text{if } \bar\phi_i(X_p)\,e_c > 0 \end{cases} \qquad (3) $$

The adaptive law for the parameters in the multilinear parametrization is given by

$$ \dot{\hat\gamma} = \Gamma_\gamma \Big[\sum_{i=1}^{p} L_i(\hat\gamma,\bar\phi_i,e_c)\,\bar\phi_i(X_p)\Big]\, e_c \qquad (4) $$

where Γγ ∈ IR^{2×2} is the adaptation gain matrix, ec is a composite scalar error defined as ec = h^T E, with h^T (sI − Am)^{-1} b = 1/(s + k), k as defined in Section II-A, E = Xp − Xm, and

$$ L_i(\hat\gamma,\bar\phi_i,e_c) = \begin{cases} \dfrac{\partial f_i(\gamma)}{\partial \gamma}\Big|_{\hat\gamma}, & \text{if } \bar\phi_i(X_p)\,e_c \le 0 \\[2mm] \dfrac{\partial g_i(\gamma)}{\partial \gamma}\Big|_{\hat\gamma}, & \text{if } \bar\phi_i(X_p)\,e_c > 0 \end{cases} \qquad (5) $$

where fi(γ) and gi(γ) are given by

$$ f_i(\gamma) = e^{c_1 n_{1i}\gamma_1}\, e^{c_2 n_{2i}\gamma_2}, \qquad g_i(\gamma) = \big[(e^{c_1 n_{1i}} - 1)\,\gamma_1^{1/2} + 1\big]\big[(e^{c_2 n_{2i}} - 1)\,\gamma_2^{1/2} + 1\big] \qquad (6) $$

with

$$ c_j = \ln\Big(\frac{\bar k_{uj}}{\bar k_j}\Big), \quad j = 1, 2, \qquad \bar\phi_i(X_p) = \bar k_1^{\,n_{1i}}\,\bar k_2^{\,n_{2i}}\,[\mathrm{sign}(\theta_1)]^{n_{1i}}\,[\mathrm{sign}(\theta_2)]^{n_{2i}}\,\phi_i(X_p) $$

and

$$ \text{if } k_j > 0:\ \ \bar k_j = k_j,\ \bar k_{uj} = k_{uj}; \qquad \text{if } k_j < 0:\ \ \bar k_j = |k_{uj}|,\ \bar k_{uj} = |k_j| $$

$$ \theta = \begin{bmatrix}\theta_1\\ \theta_2\end{bmatrix}; \qquad \gamma = \begin{bmatrix}\gamma_1\\ \gamma_2\end{bmatrix}; \qquad \hat\gamma = \begin{bmatrix}\hat\gamma_1\\ \hat\gamma_2\end{bmatrix} $$

With k̄1, k̄2, k̄u1 and k̄u2 as defined above, we have k̄1 < |θ1| < k̄u1 and k̄2 < |θ2| < k̄u2. We then use the following reparametrization, similar to that in [32]:

$$ \gamma_j = \frac{1}{c_j}\,\ln\Big(\frac{|\theta_j|}{\bar k_j}\Big), \qquad \text{with} \qquad c_j = \ln\Big(\frac{\bar k_{uj}}{\bar k_j}\Big), \quad j = 1, 2 $$

which implies

$$ |\theta_j| = \bar k_j\, e^{c_j \gamma_j} $$

Then, (1) becomes

$$ \dot X_p = A_p X_p + b\Big[\sum_{i=1}^{p} e^{c_1 n_{1i}\gamma_1}\, e^{c_2 n_{2i}\gamma_2}\,\bar\phi_i + u\Big] \qquad (7) $$

Note that the reparametrization also guarantees that γ1 and γ2 satisfy 0 < γj < 1, j = 1, 2, and γ̂j is the estimate of γj = (1/cj) ln(|θj|/k̄j).³ Since Ap is unknown, an additional parameter β̂ is adjusted as²

$$ \dot{\hat\beta} = -\Gamma_\beta\, e_c X_p \qquad (8) $$

¹ IR≥0 denotes the nonnegative real numbers {x ∈ IR : x ≥ 0}.
² Although we have introduced this estimate, it concerns only the uncertainty in the matrix Ap and not the two parameters of the multilinear parametrization. In the simulated example of Section III, since Ap (scalar in that example) is known, no estimate β̂ is needed.
³ Note that Xp, ec and γ̂ = (γ̂1, γ̂2)^T are functions of the time t; we have suppressed the argument t for the sake of simplicity.
⁴ In the following, for the sake of simplicity, we omit the argument Xp from the functions φ̄i(Xp) and φi(Xp).
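As a quick illustration of the reparametrization γj = (1/cj) ln(|θj|/k̄j) (a sketch with made-up interval bounds, not taken from the paper), the mapping sends the interval (k̄j, k̄uj) onto (0, 1), and |θj| = k̄j e^{cj γj} inverts it:

```python
import math

def reparam(theta, kl, ku):
    """gamma = (1/c) * ln(|theta| / kbar), with c = ln(kbar_u / kbar).

    For kl > 0 we take kbar = kl, kbar_u = ku; for kl < 0 the roles of the
    absolute values of the bounds are swapped, as in the paper."""
    kbar, kbar_u = (kl, ku) if kl > 0 else (abs(ku), abs(kl))
    c = math.log(kbar_u / kbar)
    return math.log(abs(theta) / kbar) / c, c, kbar

def inv_reparam(gamma, c, kbar, sgn=1.0):
    # |theta| = kbar * exp(c * gamma); the known sign of theta is reattached
    return sgn * kbar * math.exp(c * gamma)

# Hypothetical bounds 2 < theta < 8 and true value theta = 4
gamma, c, kbar = reparam(4.0, 2.0, 8.0)
print(gamma)                        # ~0.5: theta = 4 is the geometric
                                    # midpoint of the interval (2, 8)
print(inv_reparam(gamma, c, kbar))  # recovers theta = 4
```

Any θ inside the known interval lands strictly inside (0, 1) in γ-coordinates, which is the property 0 < γj < 1 used throughout the proof.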

Theorem 2.1: Consider the plant (1) with the controller (2)-(6) and the adaptive laws (4) and (8). If γ̂j(t) > 0 for all t ≥ t0, j = 1, 2, and the φi are bounded functions of their arguments, then the closed-loop signals are globally bounded, γ̂ ∈ L∞, β̂ ∈ L∞ and ec → 0.

Proof: We first carry out a reparametrization of system (1) so as to obtain a convex parametrization. For this, we first rewrite (1) as⁴

$$ \dot X_p = A_p X_p + b\Big[\sum_{i=1}^{p} |\theta_1|^{n_{1i}}\,|\theta_2|^{n_{2i}}\,(\mathrm{sign}(\theta_1))^{n_{1i}}\,(\mathrm{sign}(\theta_2))^{n_{2i}}\,\phi_i + u\Big] $$

so that, with φ̄i = k̄1^{n1i} k̄2^{n2i} (sign(θ1))^{n1i} (sign(θ2))^{n2i} φi and |θj| = k̄j e^{cj γj}, (1) takes the convex form (7). With the controller (2) and the composite error ec defined above, the error dynamics become

$$ \dot e_c = -k e_c + \sum_{i=1}^{p}\big[f_i(\gamma) - S_i(\hat\gamma,\bar\phi_i,e_c)\big]\,\bar\phi_i + \tilde\beta^T X_p \qquad (9) $$

$$ \dot{\tilde\gamma} = \Gamma_\gamma \Big[\sum_{i=1}^{p} L_i(\hat\gamma,\bar\phi_i,e_c)\,\bar\phi_i\Big]\, e_c \qquad (10) $$

$$ \dot{\tilde\beta} = -\Gamma_\beta\, e_c X_p \qquad (11) $$

where γ̃ = γ̂ − γ, β̃ = β̂ − β and Γγ > 0, Γβ > 0 are the adaptation gain matrices. Consider the Lyapunov function candidate

$$ V = \frac{1}{2}\big(e_c^2 + \tilde\gamma^T \Gamma_\gamma^{-1}\tilde\gamma + \tilde\beta^T \Gamma_\beta^{-1}\tilde\beta\big) \qquad (12) $$

Its time derivative is

$$ \dot V = -k e_c^2 + e_c \sum_{i=1}^{p}\big[f_i(\gamma) - S_i(\hat\gamma,\bar\phi_i,e_c) - L_i(\hat\gamma,\bar\phi_i,e_c)^T(\gamma - \hat\gamma)\big]\,\bar\phi_i \qquad (13) $$

Before continuing the proof we establish two properties of the functions fi and gi. These functions satisfy

$$ \text{Property 1)}\quad f_i(\gamma) - f_i(\hat\gamma) - \frac{\partial f_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma) \ge 0, \quad \text{for } \gamma_j,\hat\gamma_j \ge 0 \qquad (14) $$

$$ \text{Property 2)}\quad f_i(\gamma) - g_i(\hat\gamma) - \frac{\partial g_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma) \le 0, \quad \text{for } 0 \le \gamma_j \le 1,\ \hat\gamma_j \ge 0 \qquad (15) $$

Property 1) comes from the fact that the fi(γ) are convex for γ ≥ 0 (see Lemma 2.2 and [33]). Concerning Property 2), since from Lemma 2.2 the gi(γ) are concave, we have

$$ g_i(\gamma) \le g_i(\hat\gamma) + \frac{\partial g_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma), \quad \text{for } \gamma_j,\hat\gamma_j \ge 0 \qquad (16) $$

Now, by construction, we have

$$ f_i(\gamma) \le g_i(\gamma), \quad \text{for } 0 \le \gamma_j \le 1 \qquad (17) $$

From (16) and (17), it holds that

$$ f_i(\gamma) \le g_i(\hat\gamma) + \frac{\partial g_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma), \quad \text{for } 0 \le \gamma_j \le 1,\ \hat\gamma_j \ge 0 \qquad (18) $$

which directly implies Property 2).

We will now use Properties 1) and 2) to prove that V̇ ≤ −k ec². We recall that after the reparametrization the reparametrized parameters γj satisfy 0 < γj < 1 and that the functions fi(γ) and gi(γ), by construction, satisfy fi(γ) ≤ gi(γ) for 0 < γj < 1 (this fact is detailed in Remark 2.2 below); this allows us to use Property 2). We introduce the notation

$$ \Delta_i = \begin{cases} f_i(\gamma) - f_i(\hat\gamma) - \dfrac{\partial f_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma), & \text{if } \bar\phi_i e_c \le 0 \\[2mm] f_i(\gamma) - g_i(\hat\gamma) - \dfrac{\partial g_i(\gamma)}{\partial\gamma}\Big|_{\hat\gamma}^{T}(\gamma - \hat\gamma), & \text{if } \bar\phi_i e_c > 0 \end{cases} $$

Then, V̇ from (13) can be rewritten as

$$ \dot V = -k e_c^2 + \sum_{i=1}^{p} \Delta_i\,\bar\phi_i\, e_c $$

From Properties 1) and 2) and the fact that 0 < γj < 1, we note that ∆i ≥ 0 if φ̄i ec ≤ 0 and ∆i ≤ 0 if φ̄i ec > 0. This implies that ∆i φ̄i ec ≤ 0. Then, we have

$$ \dot V \le -k e_c^2 \qquad (19) $$

Consequently, from (12) and (19), we have γ̃ ∈ L∞, β̃ ∈ L∞ and ec ∈ L∞ ∩ L2.

To invoke Barbalat's Lemma and prove that ec → 0 (and consequently that E → 0; see Lemma 3 in [16]), we need to prove that ėc ∈ L∞. We also have, from this Lemma together with (12) and (19), that Xp ∈ L∞. Then the φ̄i are bounded (recall that the φi, and consequently the φ̄i, are bounded functions of their arguments). This, together with the fact that γ̂ ∈ L∞, β̂ ∈ L∞ (from γ̃ ∈ L∞, β̃ ∈ L∞) and ec ∈ L∞, implies, from (9), that ėc ∈ L∞. We then have, from Barbalat's Lemma, that ec → 0.

Remark 2.1: An important remark concerning the convergence properties of the presented approach has to be made. In spite of the need for knowledge of the intervals to which the parameters belong, the convergence properties of the approach are not based on projections inside these intervals. We only need to project the adapted parameters into the positive orthant, rather than into a hypercube, so that the convexity and concavity properties of the nonlinearities hold. As a matter of fact, it will be shown in the simulations that convergence holds even if the initial conditions for the adapted parameters are very far from the hypercube in which the parameters are known to lie.

Remark 2.2: The function

$$ g(\gamma) = \underbrace{\big[(e^{c_1 n_1} - 1)\,\gamma_1^{1/2} + 1\big]}_{g_{\gamma_1}(\gamma_1)}\ \underbrace{\big[(e^{c_2 n_2} - 1)\,\gamma_2^{1/2} + 1\big]}_{g_{\gamma_2}(\gamma_2)} $$

is a concave function such that g(γ) ≥ f(γ) = e^{c1 n1 γ1} e^{c2 n2 γ2} for all γ satisfying 0 ≤ γj ≤ 1 (see Figure 1). Writing fγ1(γ1) = e^{c1 n1 γ1} and fγ2(γ2) = e^{c2 n2 γ2}, it can be easily verified that fγ1(γ1) ≤ gγ1(γ1) and fγ2(γ2) ≤ gγ2(γ2) for 0 ≤ γj ≤ 1. Since fγ1(γ1) > 0, fγ2(γ2) > 0, gγ1(γ1) > 0 and gγ2(γ2) > 0 for 0 ≤ γ1 ≤ 1 and 0 ≤ γ2 ≤ 1, we have

$$ f(\gamma) = f_{\gamma_1}(\gamma_1)\, f_{\gamma_2}(\gamma_2) \le g_{\gamma_1}(\gamma_1)\, g_{\gamma_2}(\gamma_2) = g(\gamma), \quad \text{for } 0 \le \gamma_j \le 1,\ j = 1, 2. $$

Fig. 1. The functions fγ1(γ1), fγ2(γ2), gγ1(γ1) and gγ2(γ2).
TABLE I
FUNCTIONS fi AND gi FOR THE EXAMPLE.

  i | fi(γ)                 | gi(γ)
  1 | e^{c1 γ1}             | (e^{c1} − 1) γ1^{1/2} + 1
  2 | e^{c1 γ1} e^{2c2 γ2}  | [(e^{c1} − 1) γ1^{1/2} + 1][(e^{2c2} − 1) γ2^{1/2} + 1]
  3 | e^{2c1 γ1} e^{c2 γ2}  | [(e^{2c1} − 1) γ1^{1/2} + 1][(e^{c2} − 1) γ2^{1/2} + 1]
  4 | e^{3c2 γ2}            | (e^{3c2} − 1) γ2^{1/2} + 1

Fig. 2. Simulation result: the plant state x and the reference state xstar versus Time (sec).

Example: Consider the scalar plant

$$ \dot x = 5x + \theta_1\phi_1 + \theta_1\theta_2^2\phi_2 + \theta_1^2\theta_2\phi_3 + \theta_2^3\phi_4 + u \qquad (20) $$

where
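A closed-loop run of a plant of the form (20) can be sketched as follows. All numerical choices here are hypothetical (parameter values, interval bounds, the bounded φi, the gains and the reference input are made up for illustration; this is not the paper's simulation setup). Since Ap = 5 is known in this scalar case, β is fixed rather than adapted, as noted in Section II.

```python
import numpy as np

# Hypothetical instance: theta1 = theta2 = 2, both known to lie in (1, 4),
# so kbar_j = 1, kbar_uj = 4, c_j = ln 4 and gamma_j = 0.5 (and phibar_i = phi_i).
c1 = c2 = np.log(4.0)
n = np.array([[1, 0], [1, 2], [2, 1], [0, 3]], dtype=float)  # (n_1i, n_2i) of (20)
gamma_true = np.array([0.5, 0.5])
k, Ap, Gamma = 5.0, 5.0, 0.1            # reference-model pole, known Ap, adapt. gain

def phis(x):
    return np.array([np.sin((i + 1) * x) for i in range(4)])  # bounded phi_i

def f(gh):    # f_i(gamma) = exp(c1*n_1i*g1) * exp(c2*n_2i*g2), eq. (6)
    return np.exp(c1 * n[:, 0] * gh[0] + c2 * n[:, 1] * gh[1])

def g(gh):    # concave majorant g_i(gamma), eq. (6)
    return (np.expm1(c1 * n[:, 0]) * np.sqrt(gh[0]) + 1) * \
           (np.expm1(c2 * n[:, 1]) * np.sqrt(gh[1]) + 1)

def grad_f(gh):
    fi = f(gh)
    return np.stack([fi * c1 * n[:, 0], fi * c2 * n[:, 1]], axis=1)

def grad_g(gh):
    a1, a2 = np.expm1(c1 * n[:, 0]), np.expm1(c2 * n[:, 1])
    return np.stack([a1 / (2 * np.sqrt(gh[0])) * (a2 * np.sqrt(gh[1]) + 1),
                     a2 / (2 * np.sqrt(gh[1])) * (a1 * np.sqrt(gh[0]) + 1)], axis=1)

dt, T = 1e-4, 8.0
x, xm, ghat = 1.0, 0.0, np.array([0.2, 0.9])   # ghat(0) off the true value
err = []
for step in range(int(T / dt)):
    r = 2.0 * np.sin(3.0 * step * dt)          # bounded reference input
    ph = phis(x)
    e = x - xm                                 # scalar plant: e_c = e
    up = ph * e > 0                            # switching test phibar_i * e_c > 0
    S = np.where(up, g(ghat), f(ghat))                     # eq. (3)
    L = np.where(up[:, None], grad_g(ghat), grad_f(ghat))  # eq. (5)
    u = -S @ ph + (-k - Ap) * x + r            # eq. (2), beta = -k - Ap known
    x += dt * (Ap * x + f(gamma_true) @ ph + u)            # true plant (20)
    xm += dt * (-k * xm + r)                               # reference model
    ghat = np.maximum(ghat + dt * Gamma * (L.T @ ph) * e, 1e-4)  # eq. (4),
    err.append(e)                              # projected onto the positive orthant
err = np.array(err)
print(np.mean(np.abs(err[:5000])), np.mean(np.abs(err[-5000:])))
```

With these (made-up) settings the late-window tracking error is markedly smaller than the early-window error, consistent with ec → 0 from Theorem 2.1; note that only two parameters are adapted even though the plant has four regressor terms.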
