Control Theory & Applications

Optimal Dynamic Inversion Control Design for a Class of Nonlinear Distributed Parameter Systems with Continuous and Discrete Actuators

Journal: IEE Proc. Control Theory & Applications
Manuscript ID: draft
Manuscript Type: Research Paper
Date Submitted by the Author: n/a
Complete List of Authors: Padhi, Radhakant (Indian Institute of Science, Aerospace Engineering); Balakrishnan, S.N. (University of Missouri-Rolla, Aerospace Engineering)
Keywords: Optimal dynamic inversion, Distributed parameter systems, Temperature control

IEE Proceedings Review Copy Only


Optimal Dynamic Inversion Control Design for a Class of Nonlinear Distributed Parameter Systems with Continuous and Discrete Actuators

Radhakant Padhi [1] and S. N. Balakrishnan [2]

[1] Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India
[2] Department of Mechanical and Aerospace Engineering, University of Missouri-Rolla, USA

Abstract: Combining the principles of dynamic inversion and optimization theory, two stabilizing state feedback control design approaches are presented for a class of nonlinear distributed parameter systems. The first combines dynamic inversion with variational optimization theory and applies when a continuous actuator is available in the spatial domain. This approach has greater theoretical significance, in the sense that convergence of the controller can be proved and the control computation is free of singularities. The second approach, which applies when a number of discrete actuators are located at distinct places in the spatial domain, combines dynamic inversion with static optimization theory. It has greater practical relevance, since such a scenario arises naturally in many applications owing to implementation concerns. These new techniques can be classified as "design-then-approximate" techniques, which are in general more elegant than "approximate-then-design" techniques. Unlike existing design-then-approximate techniques, however, the techniques presented here do not demand involved mathematics (such as infinite-dimensional operator theory). To demonstrate their potential, a real-life temperature control problem for a heat transfer application is solved, first assuming a continuous actuator and then assuming a set of discrete actuators.

Keywords: Dynamic inversion, Optimal dynamic inversion, Distributed parameter systems, Temperature control, Design-then-approximate

[1] Asst. Professor, Email: [email protected], Tel: +91-80-2293-2756, Fax: +91-80-2360-0134
[2] Professor, Email: [email protected], Tel: +1-573-341-4675, Fax: +1-573-341-4607


1. Introduction

There is a wide class of problems (e.g. heat transfer, fluid flow, flexible structures) for which lumped parameter modeling is inadequate and a distributed parameter system (DPS) approach is necessary. Control design for distributed parameter systems is often more challenging than for lumped parameter systems, and it has been studied from both mathematical and engineering points of view. A brief historical perspective on the control of such systems can be found in [Lasiecka]. Broadly, existing control design techniques for distributed parameter systems fall into either the "approximate-then-design (ATD)" or the "design-then-approximate (DTA)" category. An interested reader can refer to [aBurns] for a discussion of the relative merits and limitations of the two approaches. In the ATD approach, the idea is to first come up with a low-dimensional reduced (truncated) model that retains the dominant modes of the system. This truncated model (often a finite-dimensional lumped parameter model) is then used to design the controller. One such approach, which has become fairly popular, first constructs problem-oriented basis functions using proper orthogonal decomposition (POD) (through "snapshot solutions") and then uses them in a Galerkin procedure to obtain a low-dimensional reduced lumped parameter approximate model (which usually turns out to be a fairly good approximation). Out of the numerous papers published on this topic and its use in control system design, we cite [Annaswamy, Arien, Banks, bBurns, Christofides, Holmes, Padhi, Ravindran, Singh] for reference. For linear systems, designing the POD-based basis functions in this manner leads to an optimal representation of the PDE system, in the sense that it captures the maximum energy of the system with the least number of basis functions compared with any other set of orthogonal basis functions [Holmes].
For nonlinear systems, however, no such result exists. Even though the POD-based model reduction idea has been successfully used for numerous linear and nonlinear DPS problems, the POD approach has a few important shortcomings: (i) the technique is problem dependent and not generic; (ii) there is no guarantee that the snapshot solutions will capture all dominant modes of the system; and, most important, (iii) it is usually difficult to have a set of 'good' snapshot


solutions for the closed-loop system prior to the control design. This is a serious limiting factor for applying the technique to closed-loop control design. For this reason, some recent attempts have been made to adaptively redesign the basis functions (and hence the controller) in an iterative manner; an interested reader can see [Annaswamy, Arien, Ravindran] for a few ideas in this regard. In the DTA approach, on the other hand, the usual procedure is to use infinite-dimensional operator theory to carry out the control design in the infinite-dimensional space first [Curtain]. For implementation purposes, this controller is then approximated in a finite-dimensional space by truncating an infinite series, reducing the size of the feedback gain matrix, etc. An important advantage of this approach is that it takes the full system dynamics into account in designing the controller, and hence usually performs better [aBurns]. However, to the best of the authors' knowledge, these operator-theory-based DTA approaches are mainly limited to linear distributed parameter systems [Curtain] and some limited classes of problems such as spatially invariant systems [Bameih]. Moreover, the mathematics of infinite-dimensional operator theory is usually involved, which is probably another reason why it has not become popular among practicing engineers. One of the main contributions of this paper is that it presents two generic control design approaches for a class of nonlinear distributed parameter systems that are based on the DTA philosophy, yet are fairly straightforward, quite intuitive and reasonably simple, making them easily accessible to practicing engineers. The only approximation needed here is the spatial grid size selection for the control computation/implementation (which can be quite small, since the computational requirements are very minimal).
In the control design literature for lumped parameter systems, a relatively simple, straightforward and reasonably popular method of nonlinear control design is the technique of dynamic inversion (e.g. [Enns], [Lane], [Ngo]), which is essentially based on the philosophy of feedback linearization [Slotine]. In this approach, an appropriate coordinate transformation is first carried out so that the system dynamics takes a linear form (in the transformed coordinates). Linear control design tools are then used to synthesize the controller. Even though the idea sounds elegant, it turns out that this method is quite sensitive to modeling and parameter inaccuracies, which has been a limiting factor for its usage in practical applications for


quite some time. However, much research has recently been carried out to address this critical issue. One way of addressing the problem is to augment the dynamic inversion technique with H-infinity robust control theory [Ngo]. Another is to augment this control with neural networks (trained online) so that the inversion error is cancelled out for the actual system ([Kim], [McFarland]). With the availability of these augmenting techniques, dynamic inversion has evolved into a potential nonlinear control design technique. Using the fundamental idea of dynamic inversion and combining it with variational and static optimization theories [Bryson], two formulations are presented in this paper for designing the control system for one-dimensional control-affine nonlinear distributed parameter systems. We call this merger "optimal dynamic inversion" for obvious reasons. Of the two techniques presented here, one assumes a continuous actuator in the spatial domain (we call this the 'continuous controller'). The other assumes a number of actuators located at discrete locations in the spatial domain (which we call the 'discrete controller'). The continuous controller formulation has greater theoretical significance, in the sense that convergence of the controller to its steady-state profile can be proved with the evolution of time. In the process, unlike the discrete controller formulation, it does not lead to any singularity in the required computations either. On the other hand, the discrete controller formulation has more practical relevance, in the sense that such a scenario arises naturally in many (probably all) practical problems (a continuous controller is probably never realizable). To demonstrate the potential of the proposed techniques, a real-life temperature control problem for a heat transfer application is solved, applying both the continuous and the discrete control design ideas.
A few salient points about the new techniques presented here are as follows. First, even though an optimization idea is used, the new approach is fundamentally different from optimal control theory. The main driving idea here is dynamic inversion, which guarantees stability of the closed loop (the rate of decay of the error depends on the selected gain matrix and not on the cost function weights). In addition, this objective is achieved with minimum control effort (in a weighted L2 or l2 norm sense), where the cost function plays an important role: it not only leads to minimum control effort, but also distributes the task among the various available controllers (which are located at different locations in the spatial domain). Second, the technique leads to a state feedback control solution in closed


form (hence, unlike optimal control theory, it does not demand any computationally intensive procedure in the control computation). Finally, even though they can be classified into the DTA category, the techniques presented do not demand knowledge of complex mathematical tools such as infinite-dimensional operator theory. Hence, we hope that the techniques will be quite useful to practicing engineers.

2. Problem Description

2.1 System Dynamics with Continuous Controller

In the continuous controller formulation, we consider the following system dynamics

\dot{x} = f(x, x', x'', \ldots) + g(x, x', x'', \ldots)\, u    (1)

where the state x(t, y) and controller u(t, y) are continuous functions of time t >= 0 and the spatial variable y in [0, L]. \dot{x} represents \partial x / \partial t, and x', x'' represent \partial x / \partial y, \partial^2 x / \partial y^2 respectively. We assume that appropriate boundary conditions (e.g. Dirichlet, Neumann) are available to make the system dynamics description in Eq.(1) complete. Both x(t, y) and u(t, y) are considered to be scalar functions. The control variable appears linearly, and hence the system dynamics is in control-affine form. Furthermore, we assume that the function g(x, x', x'', \ldots) is bounded away from zero, i.e. g(x, x', x'', \ldots) != 0 for all t, y. In this paper, we do not consider situations where the control action enters the system dynamics through boundary actions (i.e. boundary control problems are not considered).

2.2 System Dynamics with Discrete Controllers

In the discrete controller formulation, we assume that a set of discrete controllers u_m are located at y_m (m = 1, ..., M), with the following assumptions:

- The width of the action of the controller located at y_m is w_m.


- In the interval [y_m - w_m/2, y_m + w_m/2], a subset of [0, L], the controller u_m(t, y) is assumed to have a constant magnitude. Outside this interval, u_m = 0. However, the interval w_m may or may not be small.
- There is no overlap between the controller located at y_m and its neighboring controllers.
- No controller is placed exactly at the boundary, i.e. the control action does not affect the system through boundary actions.

For this case the system dynamics can be written as follows

\dot{x} = f(x, x', x'', \ldots) + \sum_{m=1}^{M} g(x, x', x'', \ldots)\, u_m    (2)

2.3 Goal for the Controller

The goal for the controller in both the continuous and discrete actuator cases is the same: the controller should ensure that the state variable x(t, y) -> x*(t, y) as t -> infinity for all y in [0, L], where x*(t, y) is a known (possibly time-varying) profile in the domain [0, L] that is continuous in y and satisfies the spatial boundary conditions.

3. Synthesis of the Controllers

3.1 Synthesis of Continuous Controller

First, we define an output (an integral error) term as follows

z(t) = \frac{1}{2} \int_0^L \left[ x(t,y) - x^*(t,y) \right]^2 dy    (3)

Note that when z ( t ) → 0 , x ( t , y ) → x* ( t , y ) everywhere in y ∈ [ 0, L ] . Next, following the principle of dynamic inversion [Enns, Lane, Ngo, Slotine], we attempt to design a controller such that the following stable first-order equation (in time) is satisfied


\dot{z} + k\, z = 0    (4)

where k > 0 serves as a gain; an appropriate value of k has to be chosen by the control designer. For a better physical interpretation, one may choose k = 1/\tau, where \tau > 0 serves as a "time constant" for the error z(t) to decay. Using the definition of z from Eq.(3), Eq.(4) leads to

\int_0^L (x - x^*)(\dot{x} - \dot{x}^*)\, dy = -\frac{k}{2} \int_0^L (x - x^*)^2\, dy    (5)

Substituting for \dot{x} from Eq.(1) in Eq.(5) and simplifying, we arrive at

\int_0^L (x - x^*)\, g(x, x', x'', \ldots)\, u\, dy = \gamma    (6a)

where

\gamma \triangleq -\int_0^L (x - x^*) \left[ f(x, x', x'', \ldots) - \dot{x}^* \right] dy - \frac{k}{2} \int_0^L (x - x^*)^2\, dy    (6b)

Note that a value of u(t, y) satisfying Eq.(6) will eventually guarantee that z(t) -> 0 as t -> infinity. However, since Eq.(6) is in the form of an integral, no unique solution for u(t, y) can be obtained from it. To obtain a unique solution, we have the freedom of imposing an additional goal. We take advantage of this fact and aim to obtain a solution for u(t, y) that will not only satisfy Eq.(6) but, at the same time, will also minimize the cost function

J = \frac{1}{2} \int_0^L r(y)\, \left[ u(t,y) \right]^2 dy    (7)

In other words, we wish to minimize the cost function in Eq.(7), subject to the constraint in Eq.(6). An implication of choosing this cost function is that the aim is to obtain the control solution u(t, y) that will lead to x(t, y) -> x*(t, y) with minimum control effort. In Eq.(7), r(y) > 0, y in [0, L] is a weighting function, which needs to be chosen by the control designer. This weighting function gives the designer the flexibility of assigning relative importance to the control magnitude at different spatial locations. Note that the choice of


r(y) = c in R^+ for all y in [0, L] means the control magnitude is given equal importance at all spatial locations. Following the technique for constrained optimization [Bryson], we first formulate the following augmented cost function

\bar{J} = \frac{1}{2} \int_0^L r\, u^2\, dy + \lambda \left[ \int_0^L (x - x^*)\, g\, u\, dy - \gamma \right]    (8)

where \lambda is a Lagrange multiplier, a free variable needed to convert the constrained optimization problem into a free optimization problem. In Eq.(8), we have two free variables, namely u and \lambda. We have to minimize \bar{J} by appropriate selection of these variables. The necessary condition of optimality is given by [Bryson]

\delta \bar{J} = 0    (9)

where \delta \bar{J} represents the first variation of \bar{J}. However, we know that

\delta \bar{J} = \int_0^L [r u]\, \delta u\, dy + \lambda \left[ \int_0^L (x - x^*)\, g\, \delta u\, dy \right] + \delta\lambda \left[ \int_0^L (x - x^*)\, g\, u\, dy - \gamma \right]

            = \int_0^L \left[ r u + \lambda (x - x^*)\, g \right] \delta u\, dy + \delta\lambda \left[ \int_0^L (x - x^*)\, g\, u\, dy - \gamma \right]    (10)

From Eqs.(9) and (10), we obtain



\int_0^L \left[ r u + \lambda (x - x^*)\, g \right] \delta u\, dy + \delta\lambda \left[ \int_0^L (x - x^*)\, g\, u\, dy - \gamma \right] = 0    (11)

Since Eq.(11) must be satisfied for all variations \delta u and \delta\lambda, the following equations should be satisfied simultaneously

r u + \lambda (x - x^*)\, g = 0    (12)

\int_0^L (x - x^*)\, g\, u\, dy = \gamma    (13)


Note that Eq.(13) is nothing but Eq.(6a). Solving for u from Eq.(12) we get

u = -(\lambda / r)(x - x^*)\, g    (14)

Substituting the above expression for u in Eq.(13) and solving for \lambda, we get

\lambda = \frac{-\gamma}{\int_0^L \frac{(x - x^*)^2 g^2}{r}\, dy}    (15)

Substituting this expression for \lambda back into Eq.(14), we finally obtain

u = \frac{\gamma\, (x - x^*)\, g}{r(y) \int_0^L \frac{(x - x^*)^2 g^2}{r(y)}\, dy}    (16)

As a special case, if r(y) = c in R^+ (i.e. equal weightage is given to the controller at all spatial locations) and g(x, x', x'', \ldots) = \beta in R, then Eq.(16) simplifies to

u = \frac{\gamma\, (x - x^*)}{\beta \int_0^L (x - x^*)^2\, dy}    (17)

It may be noticed that when x(t, y) = x*(t, y) (i.e. perfect tracking occurs), there appears to be a computational difficulty, in the sense that a zero appears in the denominators of Eqs.(16)-(17), which would lead to a singularity in the control solution u, i.e. u -> infinity. However, this does not actually happen. To see this, we will show that when x(t, y) -> x*(t, y), u(t, y) -> u*(t, y), where u*(t, y) is defined as the control required to keep x(t, y) at x*(t, y) (see Eq.(19)). Before showing this, however, we need a non-trivial expression for u*(t, y). For that, when x(t, y) -> x*(t, y) for all y in [0, L], from Eq.(1) we can write


\dot{x}^* = f^* + g^* u^*    (18)

where f^* \triangleq f(x^*, x^{*\prime}, x^{*\prime\prime}, \ldots) and g^* \triangleq g(x^*, x^{*\prime}, x^{*\prime\prime}, \ldots).

From Eq.(18), we can write the control solution as

u^*(t, y) = -\frac{1}{g^*} \left[ f^* - \dot{x}^* \right]    (19)

Note that the solution u*(t, y) in Eq.(19) will always be of finite magnitude, since for the class of DPS considered here, g^* = g(x^*, x^{*\prime}, x^{*\prime\prime}, \ldots) is always bounded away from zero. Also note that in actual implementation of the controller we may rarely encounter the condition x(t, y) = x*(t, y) for all y in [0, L], since it is very difficult to meet. However, this expression is useful in the convergence analysis of the controller. Next, we state and prove the following convergence result.

Theorem: u(t, y) in Eq.(16) converges to u*(t, y) in Eq.(19) when x(t, y) -> x*(t, y) for all y in [0, L].

Proof: First we notice that at any point y_0 in (0, L), the control solution in Eq.(16) can be written as

u(y_0) = \frac{ -\left[ x(y_0) - x^*(y_0) \right] g(y_0) \left\{ \int_0^L \left[ x(y) - x^*(y) \right] \left[ f(y) - \dot{x}^*(y) \right] dy + \frac{k}{2} \int_0^L \left[ x(y) - x^*(y) \right]^2 dy \right\} }{ r(y_0) \int_0^L \frac{ \left[ x(y) - x^*(y) \right]^2 \left[ g(y) \right]^2 }{ r(y) }\, dy }    (20)

We want to analyze this solution for the case when x(t, y) = x*(t, y) for all y in [0, L]. Without loss of generality, we analyze the case in the limit when x(t, y) -> x*(t, y) for y in [y_0 - \varepsilon/2, y_0 + \varepsilon/2], a subset of [0, L], with \varepsilon -> 0, and x(t, y) = x*(t, y) everywhere else. In this limiting case, let us denote u(t, y_0) as \bar{u}(t, y_0), which is given by


\bar{u}(t, y_0) = \frac{ -\left[ x(t,y_0) - x^*(t,y_0) \right] g(t,y_0) \left\{ \int_{y_0 - \varepsilon/2}^{y_0 + \varepsilon/2} \left[ x - x^* \right] \left[ f - \dot{x}^* \right] dy + \frac{k}{2} \int_{y_0 - \varepsilon/2}^{y_0 + \varepsilon/2} \left[ x - x^* \right]^2 dy \right\} }{ r(y_0) \int_{y_0 - \varepsilon/2}^{y_0 + \varepsilon/2} \frac{ \left[ x - x^* \right]^2 g^2 }{ r(y) }\, dy }

             = \frac{ -\left[ x(t,y_0) - x^*(t,y_0) \right] g(t,y_0) \left\{ \left[ x(t,y_0) - x^*(t,y_0) \right] \left[ f(t,y_0) - \dot{x}^*(t,y_0) \right] \varepsilon + \frac{k}{2} \left[ x(t,y_0) - x^*(t,y_0) \right]^2 \varepsilon \right\} }{ r(y_0)\, \frac{ \left[ x(t,y_0) - x^*(t,y_0) \right]^2 \left[ g(t,y_0) \right]^2 }{ r(y_0) }\, \varepsilon }

             = -\frac{1}{g(t,y_0)} \left[ f(t,y_0) - \dot{x}^*(t,y_0) \right]

             = u^*(t, y_0)    (21)

Moreover, this happens for all y_0 in (0, L). Hence \bar{u}(t, y) -> u*(t, y) as x(t, y) -> x*(t, y), for all y in [0, L]. This completes the proof.

Final Control Solution for Implementation

Combining the results in Eqs.(16) and (19), we finally write the control solution as

u = \begin{cases} -\dfrac{1}{g^*} \left[ f^* - \dot{x}^* \right], & \text{if } x(t,y) = x^*(t,y)\ \forall y \in [0, L] \\[2ex] \dfrac{\gamma\, (x - x^*)\, g}{r(y) \int_0^L \frac{(x - x^*)^2 g^2}{r(y)}\, dy}, & \text{otherwise} \end{cases}    (22)

Even though u(t, y) -> u*(t, y) when x(t, y) -> x*(t, y) for all y in [0, L], in the numerical implementation of the controller it is advisable to exercise the caution outlined in Eq.(22) to avoid numerical problems in computer programming. One can notice in the development of Eq.(22) that there was no need to approximate the system dynamics to come up with the closed-form control solution. However, to compute/implement the control, a suitable grid in the spatial domain has to be chosen. Hence, the proposed technique can be classified into the 'design-then-approximate' category. Note that a finer grid can be selected to compute the control, since the only computation that depends on the grid size in Eq.(22) is a numerical integration, which does not demand intensive computations.
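As an illustrative sketch (ours, not from the paper; all function and variable names, the uniform grid and the trapezoidal quadrature are assumptions), the grid-based evaluation of the control law of Eq.(22) might look as follows:

```python
import numpy as np

def trapz(v, y):
    """Trapezoidal quadrature of samples v over grid y."""
    return float(np.sum((v[1:] + v[:-1]) * np.diff(y)) / 2.0)

def continuous_odi_control(x, x_star, xdot_star, f, g, r, y, k, eps=1e-10):
    """Evaluate the optimal dynamic inversion law of Eq.(22) on a grid.

    x, x_star, xdot_star, f, g, r are 1-D arrays sampled on y: the state,
    the desired profile x*, its time derivative, the plant terms f and g
    of Eq.(1), and the weighting function r(y) > 0.  k is the gain of Eq.(4).
    """
    e = x - x_star
    if np.max(np.abs(e)) < eps:
        # perfect tracking: switch to u* of Eq.(19) to avoid 0/0
        return -(f - xdot_star) / g
    # gamma of Eq.(6b)
    gamma = -trapz(e * (f - xdot_star), y) - 0.5 * k * trapz(e**2, y)
    # Eq.(16): u = gamma * (x - x*) * g / (r(y) * int (x - x*)^2 g^2 / r dy)
    return gamma * e * g / (r * trapz(e**2 * g**2 / r, y))
```

A quick consistency check one can perform: substituting the returned u back into the constraint of Eq.(6a) reproduces gamma up to quadrature error, which is exactly what enforces the error dynamics of Eq.(4).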


3.2 Synthesis of Discrete Controllers

In this section we concentrate on the case when we have only a set of discrete controllers (as described in Section 2.2). In such a case, following the development in continuous formulation (Section 3.1), we arrive at the following equation



\int_0^L (x - x^*)\, g(x, x', x'', \ldots) \sum_{m=1}^{M} u_m(y_m, w_m)\, dy = \gamma    (23)

where \gamma is as defined in Eq.(6b). Expanding Eq.(23), we can write

\left[ \int_{y_1 - w_1/2}^{y_1 + w_1/2} (x - x^*)\, g\, dy \right] u_1 + \cdots + \left[ \int_{y_M - w_M/2}^{y_M + w_M/2} (x - x^*)\, g\, dy \right] u_M = \gamma    (24)

For convenience, we define

I_m \triangleq \int_{y_m - w_m/2}^{y_m + w_m/2} (x - x^*)\, g\, dy, \quad m = 1, \ldots, M    (25)

Then from Eqs.(24) and (25), we can write

I_1 u_1 + \cdots + I_M u_M = \gamma    (26)

Eq.(26) will eventually guarantee that z(t) -> 0 as t -> infinity. However, note that Eq.(26) is a single equation in the M variables u_m, m = 1, ..., M, and hence has infinitely many solutions. To obtain a unique solution, we aim for a solution that will not only satisfy Eq.(26) but, at the same time, will also minimize the following cost function

J = \frac{1}{2} \left( r_1 w_1 u_1^2 + \cdots + r_M w_M u_M^2 \right)    (27)

In other words, we wish to minimize the cost function in Eq.(27), subject to the constraint in Eq.(26). An implication of choosing this cost function is that we wish to obtain the solution that leads to minimum control effort. In Eq.(27), choosing appropriate values for r_1, ..., r_M > 0


gives a control designer the flexibility of assigning relative importance to the control magnitude at the different spatial locations y_m, m = 1, ..., M. Following the principle of constrained optimization [Bryson], we first formulate the following augmented cost function

\bar{J} = \frac{1}{2} \left( r_1 w_1 u_1^2 + \cdots + r_M w_M u_M^2 \right) + \lambda \left[ \left( I_1 u_1 + \cdots + I_M u_M \right) - \gamma \right]    (28)

where \lambda is a Lagrange multiplier, a free variable needed to convert the constrained optimization problem into a free optimization problem. In Eq.(28) we have \lambda and u_m, m = 1, ..., M as free variables, with respect to which the minimization has to be carried out. The necessary condition of optimality [Bryson] leads to the following equations

\partial \bar{J} / \partial u_m = 0, \quad m = 1, \ldots, M    (29)

\partial \bar{J} / \partial \lambda = 0    (30)

Expanding Eqs.(29) and (30) leads to

r_m w_m u_m + I_m \lambda = 0, \quad m = 1, \ldots, M    (31)

I_1 u_1 + \cdots + I_M u_M = \gamma    (32)

Solving for u_1, \ldots, u_M from Eq.(31), substituting those in Eq.(32) and solving for \lambda, we get

\lambda = \frac{-\gamma}{\sum_{m=1}^{M} I_m^2 / (r_m w_m)}    (33)

Eqs.(31) and (33) lead to the following expression


u_m = \frac{I_m\, \gamma}{r_m w_m \sum_{m=1}^{M} I_m^2 / (r_m w_m)}, \quad m = 1, \ldots, M    (34)

As a special case, when r_1 = \cdots = r_M (i.e. equal importance is given to minimization of all controllers) and w_1 = \cdots = w_M (i.e. the widths of all controllers are the same), we have

u_m = \frac{I_m\, \gamma}{\| I \|_2^2}    (35)

where I \triangleq [I_1 \; \cdots \; I_M]^T. Note that in case we have a number of controllers applied over different control application widths (i.e. the w_m, m = 1, ..., M are different), we can still use the simplified formula in Eq.(35), if it leads to satisfactory system response, by choosing r_1, ..., r_M such that r_1 w_1 = \cdots = r_M w_M.
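The special case in Eq.(35) is the familiar minimum-l2-norm solution of the single linear constraint Eq.(26). A quick numerical check (with hypothetical values of I and gamma, not taken from the paper):

```python
import numpy as np

# Hypothetical values for M = 3 actuators with equal weights and widths
I = np.array([0.4, -0.2, 0.1])   # the integrals I_m of Eq.(25)
gamma = 0.5
u = I * gamma / np.dot(I, I)     # Eq.(35)

# The constraint of Eq.(26) is met exactly
assert np.isclose(np.dot(I, u), gamma)

# Any other solution u + v (v orthogonal to I also satisfies Eq.(26))
# has a strictly larger l2 norm, since u is parallel to I
v = np.array([1.0, 2.0, -1.0])
v = v - I * np.dot(I, v) / np.dot(I, I)   # project out the I-direction
assert np.isclose(np.dot(I, u + v), gamma)
assert np.linalg.norm(u + v) > np.linalg.norm(u)
```

With the weights r_m w_m of Eq.(27), the general formula in Eq.(34) plays the same minimum-effort role in the corresponding weighted norm.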

Singularity in Control Solution and Revised Goal: From Eqs.(34) and (35), it is clear that when \| I \|_2^2 -> 0 (which happens when all of I_1, ..., I_M -> 0) and \gamma -> 0, there is a problem of singularity in the control computation, in the sense that u_m -> infinity (this happens since the denominators of Eqs.(34)-(35) go to zero faster than the corresponding numerators). Note that if the number of controllers M is large, the occurrence of such a singularity is probably a rare possibility, since all of I_1, ..., I_M -> 0 simultaneously is rather a strong condition. Nevertheless, such a case may arise during transition. More importantly, this issue of control singularity will always arise when x(t, y) -> x*(t, y) for all y in [0, L] (which is the primary goal of the control design). This happens possibly because we have only limited control authority (controllers are available only in a subset of the spatial domain), whereas we have aimed to achieve the much bigger goal of tracking the state profile for all y in [0, L] - something that is beyond the capability of the controllers. Hence, whenever such a case arises (i.e. when all of I_1, ..., I_M -> 0 or, equivalently, \| I \|_2 -> 0), to avoid the issue of control singularity, we propose to redefine the goal as follows.


First, we define X \triangleq [x_1, \ldots, x_M]^T, X^* \triangleq [x_1^*, \ldots, x_M^*]^T and the error vector E \triangleq (X - X^*). Next, we aim to design a controller such that E -> 0 as t -> infinity. In other words, we aim to guarantee that the values of the state variable at the node points (y_m, m = 1, ..., M) track their corresponding desired values. To do this, we select a positive definite gain matrix K such that

\dot{E} + K E = 0    (36)

One way of selecting such a gain matrix K is to choose it as a diagonal matrix with the mth diagonal element being k_m = 1/\tau_m, where \tau_m > 0 is the desired time constant of the error dynamics. In such a case, the mth channel of Eq.(36) can be written as

\dot{e}_m + k_m e_m = 0    (37)

Expanding the expressions for e_m and \dot{e}_m and solving for u_m (m = 1, \ldots, M), we obtain

u_m = \frac{1}{g_m} \left[ \dot{x}_m^* - f_m - k_m \left( x_m - x_m^* \right) \right]    (38a)

where x_m \triangleq x(t, y_m), \; x_m^* \triangleq x^*(t, y_m), \; f_m \triangleq f(t, y_m), \; g_m \triangleq g(t, y_m)    (38b)

Final Control Solution for Implementation

Combining the results in Eqs.(34) and (38), we finally write the control solution as

u_m = \begin{cases} \dfrac{1}{g_m} \left[ \dot{x}_m^* - f_m - k_m \left( x_m - x_m^* \right) \right], & \text{if } \| I \|_2 < tol \\[2ex] \dfrac{I_m\, \gamma}{r_m w_m \sum_{m=1}^{M} I_m^2 / (r_m w_m)}, & \text{otherwise} \end{cases}    (39)

where tol represents a tolerance value. An appropriate value for this tuning variable can be fixed by the control designer. Note that some discontinuity/jump in the control magnitude is expected when the switching takes place. However, this jump can be minimized by judiciously selecting a proper tolerance value.


One can notice that there was no need to approximate the system dynamics (such as reducing it to a low-order lumped parameter model) to come up with the closed-form control solution in Eq.(39). However, as in the continuous controller formulation, to compute/implement the control a suitable grid in the spatial domain has to be chosen. Hence, this technique can also be classified into the 'design-then-approximate' category. In this case too, a finer grid can be selected to compute u_m, m = 1, ..., M, since the only computations that depend on the grid size in Eq.(39) are a series of numerical integrations, which do not demand intensive computations.
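A minimal numerical sketch of the switching law of Eq.(39) (ours, not from the paper; all names and the grid-based quadrature are assumptions):

```python
import numpy as np

def trapz(v, y):
    """Trapezoidal quadrature of samples v over grid y."""
    return float(np.sum((v[1:] + v[:-1]) * np.diff(y)) / 2.0)

def discrete_odi_control(x, x_star, xdot_star, f, g, y, y_c, w, r, k, k_m,
                         tol=1e-6):
    """Evaluate the discrete-actuator control of Eq.(39).

    x, x_star, xdot_star, f, g: 1-D arrays sampled on the grid y.
    y_c, w, r: actuator centers y_m, widths w_m and weights r_m (length M).
    k: gain of Eq.(4); k_m: node gains of Eq.(37); tol: switching tolerance.
    """
    e = x - x_star
    M = len(y_c)
    I = np.empty(M)
    for m in range(M):
        s = np.abs(y - y_c[m]) <= w[m] / 2.0      # actuator support
        I[m] = trapz((e * g)[s], y[s])            # Eq.(25)
    if np.linalg.norm(I) < tol:
        # revised goal: node-wise dynamic inversion of Eq.(38)
        idx = np.array([int(np.argmin(np.abs(y - yc))) for yc in y_c])
        return (xdot_star[idx] - f[idx] - k_m * e[idx]) / g[idx]
    # gamma of Eq.(6b) and the minimum-effort solution of Eq.(34)
    gamma = -trapz(e * (f - xdot_star), y) - 0.5 * k * trapz(e**2, y)
    return I * gamma / (r * w * np.sum(I**2 / (r * w)))
```

Substituting the returned u_m back into Eq.(26) reproduces gamma, which is the constraint that the minimum-effort branch enforces; the first branch instead imposes the node-wise error dynamics of Eq.(37).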

4. A Motivating Nonlinear Problem

4.1 Mathematical Model

The problem used to demonstrate the theories presented in Section 3 is a real-life problem. It involves the heat transfer in a fin of a heat exchanger, as depicted in Figure 1.

Figure 1: Pictorial representation of the physics of the problem

First we develop a mathematical model from the first principles of heat transfer [Miller]. Using the law of conservation of energy in an infinitesimal volume at a distance y having length ∆y , we write


Q_y + Q_{gen} = Q_{y+\Delta y} + Q_{conv} + Q_{rad} + Q_{chg}    (40)

where Qy is the rate of heat conducted in, Qgen is the rate of heat generated, Qy + ∆y is the rate of heat conducted out, Qconv is the rate of heat convected out, Qrad is the rate of heat radiated out and Qchg is the rate of heat change. Next, from the laws of physics for heat transfer [Miller], we can

write the following expressions

Q_y = -k A \left( \partial T / \partial y \right)    (41a)

Q_{gen} = S\, A\, \Delta y    (41b)

Q_{conv} = h\, P\, \Delta y \left( T - T_{\infty 1} \right)    (41c)

Q_{rad} = \varepsilon\, \sigma\, P\, \Delta y \left( T^4 - T_{\infty 2}^4 \right)    (41d)

Q_{chg} = \rho\, C\, A\, \Delta y \left( \partial T / \partial t \right)    (41e)

In Eqs.(41a-e), T(t, y) represents the temperature (this is the state x(t, y) in the context of the discussion in Section 3), which is a function of both time t and spatial location y. S(t, y) is the rate of heat generation per unit volume (this is the control u in the context of the discussion in Section 3) for this problem. The meanings of the various parameters and the numerical values used are given in Table 1.

Table 1: Definitions and numerical values of the parameters

Parameter | Meaning                                                                | Numerical value
k         | Thermal conductivity                                                   | 180 W/(m °C)
A         | Cross-sectional area                                                   | 2 cm^2
P         | Perimeter                                                              | 9 cm
h         | Convective heat transfer coefficient                                   | 5 W/(m^2 °C)
T_inf1    | Temperature of the medium in the immediate surrounding of the surface  | 30 °C
T_inf2    | Temperature at a far-away place in the direction normal to the surface | -40 °C
epsilon   | Emissivity of the material                                             | 0.2
sigma     | Stefan-Boltzmann constant                                              | 5.669 x 10^-8 W/(m^2 K^4)
rho       | Density of the material                                                | 2700 kg/m^3
C         | Specific heat of the material                                          | 860 J/(kg °C)

The values of the material properties were chosen assuming Aluminum. The area $A$ and perimeter $P$ have been computed assuming a fin of dimension $40\,cm \times 4\,cm \times 0.5\,cm$. Note that we have made a one-dimensional approximation for the dynamics, assuming that a uniform temperature in the other two dimensions is arrived at instantaneously. Using a Taylor series expansion and considering a small $\Delta y \rightarrow 0$, we can write

$$Q_{y+\Delta y} \approx Q_y + \left(\frac{\partial Q_y}{\partial y}\right)\Delta y \qquad (42)$$

Using Eqs.(41a-e) and (42) in Eq.(40) and simplifying, we can write

$$\frac{\partial T}{\partial t} = \frac{k}{\rho C}\left(\frac{\partial^2 T}{\partial y^2}\right) - \frac{P}{A\rho C}\left[h\left(T - T_{\infty 1}\right) + \varepsilon\sigma\left(T^4 - T_{\infty 2}^4\right)\right] + \frac{1}{\rho C}\,S \qquad (43)$$

For convenience, defining $\alpha_1 \triangleq k/(\rho C)$, $\alpha_2 \triangleq -(Ph)/(A\rho C)$, $\alpha_3 \triangleq -(P\varepsilon\sigma)/(A\rho C)$ and $\beta \triangleq 1/(\rho C)$, we can rewrite Eq.(43) as

$$\frac{\partial T}{\partial t} = \alpha_1\left(\frac{\partial^2 T}{\partial y^2}\right) + \alpha_2\left(T - T_{\infty 1}\right) + \alpha_3\left(T^4 - T_{\infty 2}^4\right) + \beta S \qquad (44)$$

Along with Eq.(44), we consider the following boundary conditions

$$T\big|_{y=0} = T_w, \qquad \left.\frac{\partial T}{\partial y}\right|_{y=L} = 0 \qquad (45)$$
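For readers who wish to experiment with the model of Eqs.(44)-(45), a minimal semi-discretized sketch is given below. This is an illustrative finite-difference construction, not the discretization used in the paper; the grid size and the ghost-node treatment of the insulated tip are our own choices.

```python
import numpy as np

# Parameter values from Table 1, converted to SI units
k_cond, A, P = 180.0, 2e-4, 0.09          # W/(m C), m^2, m
h, eps, sigma = 5.0, 0.2, 5.669e-8        # W/(m^2 C), -, W/(m^2 K^4)
rho, C = 2700.0, 860.0                    # kg/m^3, J/(kg C)
T_inf1, T_inf2 = 30.0, -40.0              # C
L = 0.40                                  # fin length, m

# Coefficients of Eq.(44)
alpha1 = k_cond / (rho * C)               # ~7.75e-5 m^2/s
alpha2 = -(P * h) / (A * rho * C)
alpha3 = -(P * eps * sigma) / (A * rho * C)
beta = 1.0 / (rho * C)

def rhs(T, S, dy):
    """Right-hand side of Eq.(44) on a uniform grid (central differences).

    The boundary conditions of Eq.(45) are imposed by holding T[0] = Tw
    (so its rate is zero) and mirroring the second-to-last node through a
    ghost node so that dT/dy = 0 at y = L.
    """
    Text = np.concatenate([T, [T[-2]]])            # ghost node: insulated tip
    d2T = (Text[2:] - 2 * Text[1:-1] + Text[:-2]) / dy**2
    dTdt = np.zeros_like(T)
    dTdt[1:] = (alpha1 * d2T
                + alpha2 * (T[1:] - T_inf1)
                + alpha3 * (T[1:]**4 - T_inf2**4)
                + beta * S[1:])
    dTdt[0] = 0.0                                  # wall node held at Tw
    return dTdt
```

Any time-marching scheme can then be applied to `rhs`; the controller designs of Section 3 enter through the input array `S`.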


where $T_w$ is the wall temperature. We have assumed an insulated boundary condition at the tip, with the assumption that either there is some physical insulation at the tip or the heat loss at the tip due to convection and radiation is negligible (mainly because of its low surface area). The goal for the controller is to ensure that the actual temperature profile $T(t,y) \rightarrow T^*(y)$, where we chose $T^*(y)$ to be a constant (with respect to time) temperature profile. $T^*(y)$ was generated by using the following expression

$$T^*(y) = T_{tip} + \left(T_w - T_{tip}\right)e^{-\zeta y} \qquad (46)$$

In Eq.(46) we chose the wall temperature $T_w = 150\,^{\circ}C$, the fin tip temperature $T_{tip} = 130\,^{\circ}C$ and the decay parameter $\zeta = 20$. The selection of such a $T^*(y)$ from Eq.(46) was motivated by the fact that it leads to a smooth, continuous temperature profile across the spatial dimension $y$. This selection of $T^*(y)$ satisfies the boundary condition at $y = 0$ exactly and at $y = L$ approximately, with a very small (negligible) approximation error. Note that the system dynamics is in control-affine form and $g(x, x', x'', \ldots) = \beta \neq 0$. Moreover, there is no boundary

control action. This is compatible with the class of DPS for which we have developed the control synthesis theories in Section 3.

In the discrete controller case, the system dynamics in Eq.(44) gets modified to

$$\frac{\partial T}{\partial t} = \alpha_1\left(\frac{\partial^2 T}{\partial y^2}\right) + \alpha_2\left(T - T_{\infty 1}\right) + \alpha_3\left(T^4 - T_{\infty 2}^4\right) + \beta\sum_{m=1}^{M} S_m \qquad (47)$$

However, the boundary conditions remain the same as in Eq.(45).

4.2 Synthesis of Continuous Controller

In our simulation studies with the continuous controller formulation, we selected the control gain as $k = 1/\tau$, where $\tau = 30$ sec. We assumed $r(y)$ to be a constant $c \in \mathbb{R}^+$, and hence were able to use the simplified formula for the control in Eq.(17). Consequently, a numerical value for $r(y)$ was not necessary for the simulation studies.


First we chose an initial condition (profile) for the temperature obtained from the expression $T(0,y) = T_m + x(0,y)$, where $T_m = 150\,^{\circ}C$ (a constant value) serves as the mean temperature and $x(0,y)$ represents the deviation from $T_m$. Taking $A = 50$, we computed $x(0,y)$ as $x(0,y) = (A/2) + (A/2)\cos(-\pi + 2\pi y / L)$. Applying the controller as synthesized in Eq.(22), we simulated the system in Eqs.(44)-(45) from time $t = t_0 = 0$ to $t = t_f = 5$ min. The results obtained are shown in Figures 2(a,b). We can see from Figure 2(a) that the goal of tracking $T^*(y)$ is met without any problem. The associated control (rate of energy input) profile $S(t,y)$ is shown in Figure 2(b). It is important to note that even as $T(t,y) \rightarrow T^*(y)$, there is no control singularity. In fact, the control profile develops towards (converges to) the steady-state control profile (see Eq.(19)).
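The continuous-controller run just described can be reproduced in outline with the sketch below. The paper's optimal controller of Eq.(22) is not restated in this section, so the sketch substitutes a plain pointwise dynamic inversion law that enforces the same first-order error decay $\dot{e} = -ke$ with $k = 1/\tau$; the grid, time step and explicit Euler integrator are our own illustrative choices, not the paper's.

```python
import numpy as np

# Coefficients of Eq.(44) from the Table 1 values (SI units)
rho_C = 2700.0 * 860.0
alpha1 = 180.0 / rho_C
alpha2 = -(0.09 * 5.0) / (2e-4 * rho_C)
alpha3 = -(0.09 * 0.2 * 5.669e-8) / (2e-4 * rho_C)
beta = 1.0 / rho_C
L, Tw, Ttip, zeta = 0.40, 150.0, 130.0, 20.0
k_gain = 1.0 / 30.0                    # k = 1/tau with tau = 30 sec

N = 41
y = np.linspace(0.0, L, N)
dy = y[1] - y[0]
T_star = Ttip + (Tw - Ttip) * np.exp(-zeta * y)   # desired profile, Eq.(46)

# Sinusoidal initial profile: T(0,y) = Tm + x(0,y) with A = 50
A_amp, Tm = 50.0, 150.0
T = Tm + (A_amp / 2) + (A_amp / 2) * np.cos(-np.pi + 2 * np.pi * y / L)

def f_term(T):
    """Uncontrolled right-hand side of Eq.(44) (insulated tip via ghost node)."""
    Text = np.concatenate([T, [T[-2]]])
    d2T = (Text[2:] - 2 * Text[1:-1] + Text[:-2]) / dy**2
    out = np.zeros_like(T)
    out[1:] = (alpha1 * d2T + alpha2 * (T[1:] - 30.0)
               + alpha3 * (T[1:]**4 - (-40.0)**4))
    return out

dt, t_end = 0.05, 300.0                # 5 min of simulated time
for _ in range(int(t_end / dt)):
    e = T - T_star
    f = f_term(T)
    S = (-f - k_gain * e) / beta       # pointwise inversion: e_dot = -k e
    T = T + dt * (f + beta * S)
    T[0] = Tw                          # wall boundary condition
```

Because the inversion cancels the plant terms exactly, the closed-loop error obeys $e(t) = e(0)e^{-kt}$ at every grid point, mirroring the smooth, singularity-free convergence seen in Figures 2(a,b).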

Figure 2(a): Evolution of the temperature (state) profile from a sinusoidal initial condition

Figure 2(b): Rate of energy input (control) for the evolution of temperature profile in Figure 2(a)

Next, to demonstrate that similar results will be obtained for any arbitrary initial condition of the temperature profile $T(0,y)$, we considered a number of random profiles for $T(0,y)$ and carried out the simulation studies. The random profiles were generated using the relationship $T(0,y) = T_m + x(0,y)$, where $x(0,y)$ was constructed using the concept of Fourier series, such that it satisfies $\|x(0,y)\|^2 \leq k_1 \|x\|_{max}^2$, $\|x'(0,y)\|^2 \leq k_2 \|x'\|_{max}^2$ and $\|x''(0,y)\|^2 \leq k_3 \|x''\|_{max}^2$. The values for $\|x\|_{max}$, $\|x'\|_{max}$ and $\|x''\|_{max}$ were computed using an envelope profile $x_{env}(y) = A\sin(\pi y / L)$. The norm used is the $L_2$ norm, defined by $\|x\| \triangleq \left(\int_0^L x^2(y)\,dy\right)^{1/2}$. We selected the value of the parameter $A$ as 50 and selected $k_1 = 2$, $k_2 = k_3 = 10$. For more details about the philosophy of generating these random profiles, the reader is referred to [Padhi]. The results obtained from such a random initial condition are shown in Figures 3(a,b). Once again, we clearly notice that the objective of $T(t,y) \rightarrow T^*(y)$ is met. We also notice that the control (rate of energy input) magnitude is not high and, more importantly, the control profile develops towards and converges to the steady-state control profile as computed from Eq.(19).
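The exact construction of the random profiles is given in [Padhi]; one plausible sketch is shown below. It draws random Fourier sine coefficients and rescales the series to lie under the envelope $x_{env}(y) = A\sin(\pi y/L)$ pointwise, which enforces the zeroth-order norm bound; the number of modes is our assumption, and the derivative bounds involving $k_2$, $k_3$ are not enforced in this simplified version.

```python
import numpy as np

rng = np.random.default_rng(0)
L, A_amp, n_modes = 0.40, 50.0, 8
y = np.linspace(0.0, L, 81)
dy = y[1] - y[0]
x_env = A_amp * np.sin(np.pi * y / L)          # envelope profile x_env(y)

def l2_norm(x):
    """L2 norm over [0, L] by simple Riemann-sum quadrature."""
    return np.sqrt(np.sum(x**2) * dy)

def random_profile(rng):
    """A truncated Fourier sine series with random coefficients, rescaled to
    lie under the envelope pointwise (so its L2 norm is also bounded)."""
    coeffs = rng.uniform(-1.0, 1.0, n_modes)
    x = sum(c * np.sin((m + 1) * np.pi * y / L) for m, c in enumerate(coeffs))
    ratio = np.max(np.abs(x[1:-1]) / x_env[1:-1])   # interior points only
    return x / ratio if ratio > 0 else x

x0 = random_profile(rng)
T0 = 150.0 + x0                                 # T(0, y) = Tm + x(0, y)
```

Each call to `random_profile` yields a different admissible initial deviation; adding the mean temperature $T_m$ gives the random initial temperature profile.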

Figure 3(a): Evolution of the temperature (state) profile from a random initial condition

Figure 3(b): Rate of energy input (control) for the evolution of temperature profile in Figure 3(a)

4.3 Synthesis of Discrete Controllers

In our simulation studies with the discrete controller formulation, we selected the control gain as $k = 1/\tau$, where $\tau = 30$ sec. While checking the condition to switch the controller, the tolerance value was selected as $tol = 0.001$. After switching, we used the control gain $K = \mathrm{diag}(k_1, \ldots, k_M)$ and selected $k_m = 1/\tau_m$, $\tau_m = \tau$ for $m = 1, \ldots, M$. We took $w_1 = \cdots = w_M = 2$ cm and assumed $r_1 = \cdots = r_M$. Because of this, there was no need to select numerical values for $r_1, \ldots, r_M$. To begin with, we selected $M = 5$ (five controllers), located at equal spacing.
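The discrete-actuator layout just described ($M = 5$ actuators of width 2 cm at equal spacing) can be sketched as below. The placement convention (actuator centers at the midpoints of $M$ equal segments of the fin) is our assumption, since the exact locations are not stated here.

```python
import numpy as np

L, M, w = 0.40, 5, 0.02            # fin length, number of actuators, width (2 cm)
N = 201
y = np.linspace(0.0, L, N)

# Assumed convention: actuator centers at midpoints of M equal segments;
# each actuator acts only on its own 2 cm-wide patch.
centers = (np.arange(M) + 0.5) * (L / M)
influence = np.array([(np.abs(y - c) <= w / 2).astype(float) for c in centers])

def distributed_input(S_values):
    """Map M discrete control values to the spatial input sum_m S_m of Eq.(47)."""
    return influence.T @ S_values

S_field = distributed_input(np.ones(M))
coverage = np.mean(S_field > 0)    # roughly M*w/L = 25% of the spatial domain
```

This makes explicit why the control effectiveness is smaller than in the continuous case: with these numbers, the input acts on only about a quarter of the spatial domain, so larger control magnitudes are needed for the same error decay rate.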


First we chose the same sinusoidal initial condition (profile) for the temperature as used for the continuous controller formulation. Applying the controller as synthesized in Eq.(39), we simulated the system model in Eqs.(47) and (45) from time $t = t_0 = 0$ to $t = t_f = 5$ min. The results obtained are shown in Figures 4(a,b). We can see from Figure 4(a) that the goal of tracking $T^*(y)$ is roughly met.

The associated control (rate of energy input) profile $S(t,y)$ is shown in Figure 4(b). The figure shows that the required control magnitude is not very high over the entire spatial domain $[0, L]$ and for all time $t \in [t_0, t_f]$. Note that, as compared to the continuous case, the control effectiveness is smaller in the discrete case (control is applied only in a small subspace of the entire spatial domain). However, since we aimed for the same decay rate for the state error, in tune with intuition one can observe that the magnitudes of the discrete controllers are higher as compared to the continuous formulation (see Figures 2(b) and 4(b)).

Figure 4(a): Evolution of the temperature (state) profile from a sinusoidal initial condition

Figure 4(b): Rate of energy inputs (controllers) for the evolution of temperature profile in Figure 4(a)

We notice a few small problems in the results in Figures 4(a,b). First, there are small jumps in the control histories when the control switching takes place (at about 2.5 min). Moreover, we see a weaving nature in the state profile as $T(t,y) \rightarrow T^*(y)$, and hence the goal of the control design is not met to a fully satisfactory level. Both of these effects probably occurred because we assumed a small number of discrete controllers. One way of minimizing this effect is


to increase the number of controllers. Next, we selected ten controllers (instead of five) and carried out the simulation again. The results are shown in Figures 5(a,b). It is quite clear from these figures that the weaving nature is substantially smaller and the goal $T(t,y) \rightarrow T^*(y)$, $\forall y \in [0, L]$, is met with more accuracy. Also note that, as compared to the case with five controllers, here the control effectiveness is higher and consequently the magnitudes of the controllers are smaller (compare Figures 4(b) and 5(b)).

Figure 5(a): Evolution of the temperature (state) profile from a sinusoidal initial condition

Figure 5(b): Rate of energy inputs (controllers) for the evolution of temperature profile in Figure 5(a)

To demonstrate that similar results will be obtained for any arbitrary initial condition of the temperature profile T ( 0, y ) , next we considered a number of random profiles for T ( 0, y ) (generated the same way as in Section 4.2) and carried out the simulation studies. The results obtained from such a random initial condition are quite satisfactory in the sense that the tracking objective was met. To contain the length of the paper, however, we do not include those results.

5. Conclusions

Based on the newly proposed optimal dynamic inversion theory, two stabilizing state feedback control design approaches are presented for a class of nonlinear distributed parameter systems. One approach combines dynamic inversion with variational optimization, whereas


the other one (which is more relevant in practice) can be applied when there are a number of discrete actuators located at distinct places in the spatial domain. These new techniques can be classified as "design-then-approximate" methods, which are in general more elegant than the "approximate-then-design" methods. The formulation leads to a closed form control solution and hence is not computationally intensive. To demonstrate the potential of the proposed techniques, a real-life temperature control problem for a heat transfer application is solved, first assuming a continuous actuator and then assuming a set of discrete actuators, and promising numerical results are obtained.

References

1. Annaswamy A., Choi J. J. and Sahoo D., Active Closed Loop Control of Supersonic Impinging Jet Flows Using POD Models, Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, 2002.
2. Arian E., Fahl M. and Sachs E. W., Trust-region Proper Orthogonal Decomposition for Flow Control, NASA/CR-2000-210124, ICASE Report No. 2000-25.
3. Bamieh B., The Structure of Optimal Controllers of Spatially-invariant Distributed Parameter Systems, Proceedings of the Conference on Decision and Control, 1997, pp. 1056-1061.
4. Banks H. T., Rosario R. C. H. and Smith R. C., Reduced-Order Model Feedback Control Design: Numerical Implementation in a Thin Shell Model, IEEE Transactions on Automatic Control, Vol. 45, 2000, pp. 1312-1324.
5. Bryson A. E. and Ho Y. C., Applied Optimal Control, Taylor and Francis, London, 1975.
6. (a) Burns J. A. and King B. B., Optimal Sensor Location for Robust Control of Distributed Parameter Systems, Proceedings of the Conference on Decision and Control, 1994, pp. 3967-3972.
7. (b) Burns J. A. and King B. B., A Reduced Basis Approach to the Design of Low-order Feedback Controllers for Nonlinear Continuous Systems, Journal of Vibration and Control, Vol. 4, 1998, pp. 297-323.
8. Christofides P. D., Nonlinear and Robust Control of PDE Systems: Methods and Applications to Transport-Reaction Processes, Birkhauser, Boston, 2000.
9. Curtain R. F. and Zwart H. J., An Introduction to Infinite Dimensional Linear Systems Theory, Springer-Verlag, New York, 1995.
10. Enns D., Bugajski D., Hendrick R. and Stein G., Dynamic Inversion: An Evolving Methodology for Flight Control Design, International Journal of Control, Vol. 59, No. 1, 1994, pp. 71-91.
11. Holmes P., Lumley J. L. and Berkooz G., Turbulence, Coherent Structures, Dynamical Systems and Symmetry, Cambridge University Press, 1996, pp. 87-154.
12. Kim B. S. and Calise A. J., Nonlinear Flight Control Using Neural Networks, AIAA Journal of Guidance, Control, and Dynamics, Vol. 20, No. 1, 1997, pp. 26-33.
13. Lasiecka I., Control of Systems Governed by Partial Differential Equations: A Historical Perspective, Proceedings of the 34th Conference on Decision and Control, 1995, pp. 2792-2796.
14. Lane S. H. and Stengel R. F., Flight Control Using Non-Linear Inverse Dynamics, Automatica, Vol. 24, No. 4, 1988, pp. 471-483.
15. McFarland M. B., Rysdyk R. T. and Calise A. J., Robust Adaptive Control Using Single-Hidden-Layer Feed-forward Neural Networks, Proceedings of the American Control Conference, 1999, pp. 4178-4182.
16. Miller A. F., Basic Heat and Mass Transfer, Richard D. Irwin Inc., MA, 1995.
17. Ngo A. D., Reigelsperger W. C. and Banda S. S., Multivariable Control Law Design for a Tailless Airplane, Proceedings of the AIAA Conference on Guidance, Navigation and Control, 1996, AIAA-96-3866.
18. Padhi R. and Balakrishnan S. N., Proper Orthogonal Decomposition Based Optimal Neurocontrol Synthesis of a Chemical Reactor Process Using Approximate Dynamic Programming, Neural Networks, Vol. 16, 2003, pp. 719-728.
19. Ravindran S. S., Adaptive Reduced-Order Controllers for a Thermal Flow System Using Proper Orthogonal Decomposition, SIAM Journal on Scientific Computing, Vol. 23, No. 6, 2002, pp. 1924-1942.
20. Slotine J-J. E. and Li W., Applied Nonlinear Control, Prentice Hall, 1991.