ISSN 0146-4116, Automatic Control and Computer Sciences, 2008, Vol. 42, No. 2, pp. 94–101. © Allerton Press, Inc., 2008.

Optimality Condition of a Nonsmooth Switching Control System¹

Sh. F. Maharramov

Department of Mathematics, Faculty of Science and Arts, Yasar University, Izmir 35500, Turkey
e-mail: [email protected]

Received May 15, 2007; in final form July 31, 2007

¹ The text was submitted by the author in English.

Abstract—In this paper, a survey and refinement of recent results in discrete optimal control theory are presented. The step control problem depending on a parameter is investigated. Nonsmoothness of the cost functions ϕ_i is assumed, and new versions of the discrete maximum principle for the step control problem are derived.

Key words: optimal control problems, subdifferential, superdifferential, maximum principle, step control system

DOI: 10.3103/S0146411608020077


1. INTRODUCTION

Some applied problems in fields such as economics, military defense, and chemistry are inherently multistage problems in nonsmooth optimization. In such problems, there are several stages, each characterized by its own equations, controls, phase coordinates, constraints, etc. Usually, these stages are connected to each other by additional conditions. Here, we consider problems in which these relations are given by switching points that are controlled by a given parameter. These multistage processes will be called step control systems, or discrete systems with varying structure.

Example 1 (see [8]): A car moves according to the law $\dot{x} = y$, $\dot{y} = u\,g_1(y)$, $u \in U$, on the time interval $\Delta_1 = [t_0, t_1]$ and according to $\dot{x} = y$, $\dot{y} = u\,g_2(y)$, $u \in \sigma(y(t_1))$, on the time interval $\Delta_2 = [t_1, T]$. The initial and final time moments $t_0$ and $T$ are fixed, while the instant $t_1$ is not fixed. The set $U = [0, 1]$, and the functions $g_1$, $g_2$, $\sigma$ are positive and differentiable on $\mathbb{R}^1$. The car starts from the origin, $(x_0, y_0) = (0, 0)$. The state variables $x$ and $y$ are assumed to be continuous on the whole interval $\Delta = [0, T]$. It is required to maximize $x(T)$. To find the necessary optimality conditions, we have to build a Hamilton–Pontryagin function for each step and derive optimality conditions at the switching moment $t_1$ and on the steps $\Delta_1$ and $\Delta_2$. The switching moment is the interesting feature of this example, because it is precisely at the switching point that a new optimality condition has to be derived. By using the increment formula and conjugate systems, we can obtain the necessary conditions for this step system; a numerical sketch of this example is given below.
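For illustration, the following minimal Python sketch simulates the two-stage dynamics of Example 1 with an explicit Euler scheme and evaluates the cost x(T) for several candidate switching instants t1. The concrete functions g1 and g2, the constant controls, and the grid size are hypothetical placeholders, not data from the paper (the state-dependent control set σ is ignored and the control is simply taken in U = [0, 1]).

```python
import numpy as np

def simulate_car(t1, u1, u2, g1, g2, t0=0.0, T=1.0, n=1000):
    """Integrate the two-stage system x' = y, y' = u*g_k(y) with a
    switch from g1 to g2 at the (free) instant t1; returns x(T)."""
    ts = np.linspace(t0, T, n + 1)
    x, y = 0.0, 0.0                      # the car starts from the origin
    for k in range(n):
        dt = ts[k + 1] - ts[k]
        # stage-1 data before the switching point, stage-2 data after it
        u, g = (u1, g1) if ts[k] < t1 else (u2, g2)
        y += u * g(y) * dt               # explicit Euler step for y
        x += y * dt                      # then for x
    return x

# Hypothetical data: g1, g2 positive and smooth, controls in U = [0, 1].
g1 = lambda y: 1.0 + y
g2 = lambda y: 2.0 / (1.0 + y)
for t1 in (0.25, 0.5, 0.75):             # scan candidate switching instants
    print(f"t1 = {t1:.2f}  ->  x(T) = {simulate_car(t1, 1.0, 1.0, g1, g2):.4f}")
```

Scanning t1 in this way makes the role of the switching point visible: the cost x(T) depends not only on the controls inside each stage but also on where the stages are joined, which is exactly what the optimality conditions at the switching moment must capture.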

Example 2: Consider a rocket with two types of engines that work consecutively. The work of the second engine depends on the first one. Moreover, the rocket moves from one control region to a second one, which changes the entire structure (controls, functions, conditions, etc.).

For the smooth case, some articles were published previously [1, 4, 8, 11, 18, 19]. In [1, 6, 9, 10], the authors found first-order necessary optimality conditions and investigated singular controls, systems with time delay, and Krotov-type sufficient optimality conditions for a discrete switching optimal control problem. In [4], the author makes no assumptions about the number of switches or about the mode sequence; they are determined by the solution of the problem. Sufficient and necessary conditions for optimality are formulated for the second optimization problem. If they exist, bang-bang-type solutions of the embedded optimal control problem are solutions of the original problem; otherwise, suboptimal solutions are obtained via the Chattering lemma. In [18], the author develops a computational method for solving an optimal control problem governed by a switched dynamical system with time delay; the required gradient of the cost function is obtained by solving a number of delay differential equations forward in time, and on this basis the author solved the problem as a mathematical programming one.
All these results are dedicated to the smooth case of the optimal switching control problem (in all these papers, the cost functional is smooth). In the present paper, the author's main aim is to formulate necessary optimality conditions for the nonsmooth case, with switching points that depend on certain parameters, by using the Fréchet superdifferential (see, e.g., [2, 13, 14]). To start the discussion, we first describe certain notions of nonsmooth analysis.

2. TOOLS OF NONSMOOTH ANALYSIS

Given a nonempty set $\Omega \subset \mathbb{R}^n$, consider the associated distance function
$$\operatorname{dist}(x; \Omega) := \inf_{w \in \Omega} \|x - w\|$$

and define the Euclidean projector of $x$ onto $\Omega$ by
$$\Pi(x; \Omega) := \{w \in \Omega \mid \|x - w\| = \operatorname{dist}(x; \Omega)\}.$$
If the set $\Omega$ is closed, then the set $\Pi(x; \Omega)$ is nonempty for every $x \in \mathbb{R}^n$. The nonconvex normal cone to closed sets and the corresponding subdifferential of lower semicontinuous extended-real-valued functions satisfying these requirements were introduced by Mordukhovich in 1975. The initial motivation came from the intention to derive necessary optimality conditions for optimal control problems with endpoint geometric constraints by passing to the limit from free-endpoint control problems, which are much easier to handle. This was published in [15] (first in Russian and then translated into English), where the original normal cone definition was given in finite-dimensional spaces by
$$N(\bar{x}; \Omega) := \mathop{\mathrm{Lim\,sup}}_{x \to \bar{x}} \left[\operatorname{cone}(x - \Pi(x; \Omega))\right]$$

via the Euclidean projector, while the basic subdifferential $\partial\varphi(\bar{x})$ was defined geometrically via the normal cone to the epigraph of $\varphi$. Here, it is assumed that $\varphi$ is a real-valued finite function, and the basic subdifferential is defined by
$$\partial\varphi(\bar{x}) := \{x^* \in \mathbb{R}^n \mid (x^*, -1) \in N((\bar{x}, \varphi(\bar{x})); \operatorname{epi}\varphi)\}.$$

Here, $\operatorname{epi}\varphi := \{(x, \mu) \in \mathbb{R}^{n+1} \mid \mu \ge \varphi(x)\}$ is called the epigraph of a given extended-real-valued function. Note that this cone is nonconvex (see [2, 13, 14]), and for locally Lipschitzian functions the Clarke generalized gradient is the convex hull of the basic subdifferential, $\partial_C\varphi(x^0) = \operatorname{co}\partial\varphi(x^0)$. If $\varphi$ is lower semicontinuous around $\bar{x}$, then its basic subdifferential can be represented as
$$\partial\varphi(\bar{x}) = \mathop{\mathrm{Lim\,sup}}_{x \overset{\varphi}{\to} \bar{x}} \hat{\partial}\varphi(x).$$

Here,
$$\hat{\partial}\varphi(\bar{x}) := \left\{x^* \in \mathbb{R}^n \;\middle|\; \liminf_{u \to \bar{x}} \frac{\varphi(u) - \varphi(\bar{x}) - \langle x^*, u - \bar{x}\rangle}{\|u - \bar{x}\|} \ge 0\right\}$$
is the Fréchet subdifferential. By using the plus-minus symmetric constructions, we can write
$$\partial^+\varphi(\bar{x}) := -\partial(-\varphi)(\bar{x}), \qquad \hat{\partial}^+\varphi(\bar{x}) := -\hat{\partial}(-\varphi)(\bar{x}),$$
which are called the basic superdifferential and the Fréchet superdifferential, respectively. Here,
$$\hat{\partial}^+\varphi(\bar{x}) := \left\{x^* \in \mathbb{R}^n \;\middle|\; \limsup_{x \to \bar{x}} \frac{\varphi(x) - \varphi(\bar{x}) - \langle x^*, x - \bar{x}\rangle}{\|x - \bar{x}\|} \le 0\right\}.$$

For a locally Lipschitzian function, the subdifferential and the superdifferential may be different. For example, if we take $\varphi(x) = |x|$ on $\mathbb{R}$, then $\partial\varphi(0) = [-1, 1]$, while the basic superdifferential is $\partial^+\varphi(0) = \{-1, 1\}$. If $\varphi$ is Lipschitz continuous around a point $x^0$, then strict differentiability of the function $\varphi$ at $x^0$ (see [2, 13]) is equivalent to $\partial\varphi(x^0) = \partial^+\varphi(x^0) = \{\nabla\varphi(x^0)\}$. If $\partial\varphi(x^0) = \hat{\partial}\varphi(x^0)$, then the function is lower regular at $x^0$. Symmetrically, we can define upper regularity of the function at a point by using the definitions of the superdifferential and the Fréchet superdifferential.
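The |x| example can be checked directly from the definitions. The short Python sketch below tests, over a grid of candidate x* values, the Fréchet subdifferential inequality liminf (ϕ(u) − ϕ(0) − x*u)/|u| ≥ 0 and the corresponding superdifferential inequality for ϕ(x) = |x| at the origin; it is a numerical illustration only, with the grids and tolerances chosen arbitrarily.

```python
import numpy as np

phi = np.abs                       # the nonsmooth model function ϕ(x) = |x|
us = np.linspace(-1e-3, 1e-3, 2001)
us = us[us != 0.0]                 # test points u near (but not at) 0

def frechet_sub(xstar):
    """Fréchet subdifferential test at 0: quotient >= 0 for all small u."""
    q = (phi(us) - phi(0.0) - xstar * us) / np.abs(us)
    return q.min() >= -1e-9

def frechet_super(xstar):
    """Fréchet superdifferential test at 0: quotient <= 0 for all small u."""
    q = (phi(us) - phi(0.0) - xstar * us) / np.abs(us)
    return q.max() <= 1e-9

subs = [x for x in np.linspace(-2, 2, 41) if frechet_sub(x)]
sups = [x for x in np.linspace(-2, 2, 41) if frechet_super(x)]
print("Fréchet subdifferential candidates:", subs)    # fills out [-1, 1]
print("Fréchet superdifferential candidates:", sups)  # empty set
```

The basic superdifferential ∂⁺ϕ(0) = {−1, 1}, by contrast, is obtained as −∂(−ϕ)(0), collecting the limits of the gradients ∓1 of −|x| from either side of the origin.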


Also, if an extended-real-valued function is Lipschitz continuous around the given point and upper regular at this point, then the Fréchet superdifferential is nonempty; furthermore, it is equal to the Clarke generalized subdifferential at this point (for a proof, see [16]). Using all these tools of nonsmooth analysis, we will derive the superdifferential form of the necessary optimality condition for the step discrete system.

3. NECESSARY OPTIMALITY CONDITION

Consider a controlled process described by the following discrete system with varying structure. Minimize
$$S(u, v) = \sum_{i=1}^{3} \varphi_i(x_i(t_i)) \qquad (3.1)$$

subject to
$$x_i(t + 1) = f_i(t, x_i(t), u_i(t)), \quad t \in T_i = \{t_{i-1}, t_{i-1} + 1, \ldots, t_i - 1\}, \quad i = 1, 2, 3, \qquad (3.2)$$
$$x_1(t_0) = g_1(v_1), \qquad x_i(t_{i-1}) = g_i(x_{i-1}(t_{i-1}), v_i), \quad i = 2, 3, \qquad (3.3)$$
$$u_i(t) \in U_i \subset \mathbb{R}^r, \quad t \in T_i, \quad i = 1, 2, 3, \qquad (3.4)$$
$$v_i \in V_i, \quad i = 1, 2, 3. \qquad (3.5)$$

Here, $v_i$, $i = 1, 2, 3$, are $q$-dimensional controlling parameters with $V_i \subseteq \mathbb{R}^q$, i.e., $v_i \in V_i$, $i = 1, 2, 3$. It is clear from these equations that the system is described in three stages (for example, a rocket entering from space into the atmosphere and then into water). At each stage, the system is described by its own equation, controls, switching points, and controlling parameters for the switching points. If there were no switching points, we could apply the Pontryagin maximum principle to each part of the system separately; here, however, this is difficult, and new conditions have to be derived at the switching points. In this problem, $g_1: \mathbb{R}^q \to \mathbb{R}^n$ is assumed to be an at least twice continuously differentiable vector-valued function; $g_i: \mathbb{R}^n \times \mathbb{R}^q \to \mathbb{R}^n$, $i = 2, 3$, are given at least twice continuously differentiable vector-valued functions; $f_i: \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^r \to \mathbb{R}^n$ are given continuous vector-valued functions that are at least twice continuously partially differentiable with respect to $x$; and $\varphi_i: \mathbb{R}^n \to \mathbb{R}$ are given functions. We do not assume any smoothness of the cost functions $\varphi_i$, $i = 1, 2, 3$. Further, $u_i(t): \mathbb{R} \to U_i \subset \mathbb{R}^r$ are controls, and $v_i \in V_i \subset \mathbb{R}^q$ are controlling parameters. The sets $U_i$, $V_i$ are assumed to be nonempty and bounded. A pair $(u_i(t), v_i)$ taking values in these sets, i.e., with properties (3.4) and (3.5), is called an admissible control. The triple $(u_i(t), v_i, x_i(t))$ is called an admissible process. For a fixed admissible control $(u_i^0(t), v_i^0)$, we introduce the following notation (the prime denotes transposition):

$$H_i(t, x_i, u_i, \psi_i) = \psi_i'(t)\, f_i(t, x_i, u_i),$$
$$\Delta_{u_i} H_i[t] \equiv H_i(t, x_i^0(t), u_i(t), \psi_i^0(t)) - H_i(t, x_i^0(t), u_i^0(t), \psi_i^0(t)),$$
$$\frac{\partial H_i[t]}{\partial x_i} = \frac{\partial H_i(t, x_i^0(t), u_i^0(t), \psi_i^0(t))}{\partial x_i},$$
$$\Delta_{v_1} g_1(v_1^0) \equiv g_1(v_1) - g_1(v_1^0),$$
$$\Delta_{v_i} g_i(x_{i-1}^0(t_{i-1}), v_i^0) \equiv g_i(x_{i-1}^0(t_{i-1}), v_i) - g_i(x_{i-1}^0(t_{i-1}), v_i^0), \quad i = 2, 3,$$
$$L_1(v_1, \psi_1^0(t_0 - 1)) = \psi_1^{0\prime}(t_0 - 1)\, g_1(v_1),$$
$$L_2(x_1(t_1), v_2, \psi_2^0(t_1 - 1)) = \psi_2^{0\prime}(t_1 - 1)\, g_2(x_1(t_1), v_2),$$
$$L_3(x_2(t_2), v_3, \psi_3^0(t_2 - 1)) = \psi_3^{0\prime}(t_2 - 1)\, g_3(x_2(t_2), v_3).$$
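To make the objects just introduced concrete, here is a minimal Python sketch that rolls out the three-stage system (3.2)–(3.3) for fixed admissible controls, evaluates the cost (3.1), and shows how the Hamiltonian H_i = ψ_i' f_i would be computed along a trajectory. All of the stage data below (scalar linear f_i, the jump maps g_i, the nonsmooth stage costs ϕ_i = |·|, the switching times) are hypothetical stand-ins chosen only to make the sketch self-contained; they are not taken from the paper.

```python
import numpy as np

# Hypothetical three-stage data: scalar state, T_i = {t_{i-1}, ..., t_i - 1}.
switch = [0, 5, 10, 15]                       # t_0, t_1, t_2, t_3
f = [lambda t, x, u: 0.9 * x + u,             # stage dynamics f_i(t, x, u)
     lambda t, x, u: 1.1 * x - u,
     lambda t, x, u: x + 0.5 * u]
g = [lambda xprev, v: v,                      # g_1(v_1): initial condition
     lambda xprev, v: xprev + v,              # g_2, g_3: switching conditions
     lambda xprev, v: 0.5 * xprev * v]
phi = [abs, abs, abs]                         # nonsmooth stage costs ϕ_i

def rollout(u, v):
    """Simulate (3.2)-(3.3); u[i] is the control sequence on T_i."""
    xs, xprev = [], None
    for i in range(3):
        t0, t1 = switch[i], switch[i + 1]
        x = {t0: g[i](xprev, v[i])}           # (3.3): stage-entry condition
        for t in range(t0, t1):
            x[t + 1] = f[i](t, x[t], u[i][t - t0])   # (3.2): dynamics
        xprev = x[t1]
        xs.append(x)
    return xs

def cost(xs):
    """S(u, v) from (3.1): sum of ϕ_i at the stage endpoints x_i(t_i)."""
    return sum(phi[i](xs[i][switch[i + 1]]) for i in range(3))

u = [np.zeros(5), np.ones(5), 0.5 * np.ones(5)]   # admissible controls
v = [1.0, 0.2, -0.3]                              # switching parameters
print("S(u, v) =", cost(rollout(u, v)))
# Hamiltonian H_i(t, x, u, psi) = psi * f_i(t, x, u) for scalar states:
H = lambda i, t, x, ui, psi: psi * f[i](t, x, ui)
```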


Theorem 1: Assume that $\varphi_i: \mathbb{R}^n \to \mathbb{R}$ is finite at $x_i^0(t_i)$ and $\hat{\partial}^+\varphi_i(x_i^0(t_i)) \ne \emptyset$. If the sets
$$f_i(t, x_i^0(t), U_i) = \{\alpha_i : \alpha_i = f_i(t, x_i^0(t), u_i),\ u_i \in U_i\}, \quad i = 1, 2, 3,$$
$$g_1(V_1) = \{\alpha_4 : \alpha_4 = g_1(v_1),\ v_1 \in V_1\},$$
$$g_i(x_{i-1}^0(t_{i-1}), V_i) = \{\alpha_{i+3} : \alpha_{i+3} = g_i(x_{i-1}^0(t_{i-1}), v_i),\ v_i \in V_i\}, \quad i = 2, 3,$$

are convex, then, for the optimality of an admissible control $(u^0(t), v^0)$ in the problem described by (3.1)–(3.5), it is necessary that for any $x_i^* \in \hat{\partial}^+\varphi_i(x_i^0(t_i))$ the following conditions hold:

the discrete maximum principle for the control $u_i^0(t)$, $i = 1, 2, 3$,
$$\sum_{t = t_{i-1}}^{t_i - 1} \Delta_{u_i(t)} H_i[t] \le 0 \quad \text{for all } u_i(t) \in U_i, \quad i = 1, 2, 3, \quad t \in T_i; \qquad (3.6)$$

the discrete maximum principle for the controlling parameter $v_i^0$, $i = 1, 2, 3$,
$$\max_{v_1 \in V_1} L_1(v_1, \psi_1^0(t_0 - 1)) = L_1(v_1^0, \psi_1^0(t_0 - 1)), \qquad (3.7)$$
$$\max_{v_i \in V_i} L_i(x_{i-1}^0(t_{i-1}), v_i, \psi_i^0(t_{i-1} - 1)) = L_i(x_{i-1}^0(t_{i-1}), v_i^0, \psi_i^0(t_{i-1} - 1)), \quad i = 2, 3, \qquad (3.8)$$

where $\psi(\cdot)$ is the adjoint trajectory satisfying system (3.11). If the sets $f_i(t, x^0(t), U_i)$ are convex, then the necessary optimality condition is global over all $u_i \in U_i$.

Proof: One of the standard methods for obtaining necessary optimality conditions in control problems is the increment formula. We have to calculate the increment of the cost, find the conjugate system for the corresponding problem, and use an analog of the needle variations of the continuous case; the remainder of the increment formula is then estimated by the step method. For the optimal pair $(u^0(t), v^0)$, the increment of the functional can be written in the following form:
$$\Delta S(u^0, v^0) = \sum_{i=1}^{3} [\varphi_i(x_i(t_i)) - \varphi_i(x_i^0(t_i))] \ge 0.$$
Using the tools of nonsmooth analysis, for any $x_i^* \in \hat{\partial}^+\varphi_i(x_i^0(t_i))$ we can write
$$\varphi_i(x_i(t_i)) - \varphi_i(x_i^0(t_i)) \le \langle x_i^*, \Delta x_i(t_i)\rangle + o(\|\Delta x_i(t_i)\|).$$
Then the increment of the functional satisfies
$$\Delta S(u^0, v^0) \le \sum_{i=1}^{3} \langle x_i^*, \Delta x_i(t_i)\rangle + o(\|\Delta x_i(t_i)\|).$$
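The key estimate just used, ϕ(x) − ϕ(x⁰) ≤ ⟨x*, Δx⟩ + o(‖Δx‖) for x* in the Fréchet superdifferential, is easy to probe numerically. Below it is verified for the concave kink ϕ(x) = −|x| at x⁰ = 0, where ∂̂⁺ϕ(0) = [−1, 1]; the model function and grid are illustrative assumptions, not objects from the paper.

```python
import numpy as np

phi = lambda x: -abs(x)                  # upper regular at 0; superdifferential [-1, 1]
dx = np.linspace(-0.5, 0.5, 1001)
dx = dx[dx != 0.0]
for xstar in (-1.0, -0.3, 0.0, 0.7, 1.0):     # elements of the Fréchet superdifferential
    gap = xstar * dx - (phi(dx) - phi(0.0))   # upper-estimate slack; should be >= 0
    print(f"x* = {xstar:+.1f}: min gap = {gap.min():.3e}")
```

For |x*| ≤ 1 the slack is nonnegative for every Δx, which is exactly why the theorem only needs some element of the superdifferential to exist.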

Let us multiply both sides of the state equation (3.2) by $\psi_i(t)$ and sum over $i = 1$ to $3$. Using this, the definitions from nonsmooth analysis, and Taylor's increment formula, after some calculation we can write the increment of the functional at an arbitrary admissible pair $(u_i(t), v_i)$ as
$$\begin{aligned}
\Delta S(u^0, v^0) = {}& \sum_{i=1}^{3} \langle x_i^*, \Delta x_i(t_i)\rangle + \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \psi_i^{0\prime}(t - 1)\,\Delta x_i(t) \\
&- \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \bigl[H_i(t, x_i(t), u_i(t), \psi_i^0(t)) - H_i(t, x_i^0(t), u_i^0(t), \psi_i^0(t))\bigr] \\
&+ \sum_{i=1}^{3} \psi_i^{0\prime}(t_i - 1)\,\Delta x_i(t_i) - \psi_1^{0\prime}(t_0 - 1)\,\Delta_{v_1} g_1(v_1^0) \\
&- \psi_2^{0\prime}(t_1 - 1)\bigl[g_2(x_1(t_1), v_2) - g_2(x_1^0(t_1), v_2^0)\bigr] - \psi_3^{0\prime}(t_2 - 1)\bigl[g_3(x_2(t_2), v_3) - g_3(x_2^0(t_2), v_3^0)\bigr] \\
= {}& \sum_{i=1}^{3} \bigl[\varphi_i(x_i(t_i)) - \varphi_i(x_i^0(t_i))\bigr] + \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \psi_i^{0\prime}(t - 1)\,\Delta x_i(t) - \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \Delta_{u_i} H_i[t] \\
&- \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \bigl[H_i(t, x_i(t), u_i(t), \psi_i^0(t)) - H_i(t, x_i^0(t), u_i(t), \psi_i^0(t))\bigr] + \sum_{i=1}^{3} \psi_i^{0\prime}(t_i - 1)\,\Delta x_i(t_i) \\
&- \Delta_{v_1} L_1(v_1^0, \psi_1^0(t_0 - 1)) - \Delta_{v_2} L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1)) - \Delta_{v_3} L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1)) \\
&- \bigl[L_2(x_1(t_1), v_2^0, \psi_2^0(t_1 - 1)) - L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1))\bigr] \\
&- \bigl[L_3(x_2(t_2), v_3^0, \psi_3^0(t_2 - 1)) - L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1))\bigr],
\end{aligned} \qquad (3.9)$$

where, by definition,
$$\begin{aligned}
\eta_1(u^0, v^0; \Delta u, \Delta v) = {}& \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \frac{\partial \Delta_{u_i} H_i'[t]}{\partial x_i}\,\Delta x_i(t) - o_3(\|\Delta x_1(t_1)\|) - o_4(\|\Delta x_2(t_2)\|) \\
&- \frac{\partial \Delta_{v_2} L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1))}{\partial x_1}\,\Delta x_1(t_1) - \frac{\partial \Delta_{v_3} L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1))}{\partial x_2}\,\Delta x_2(t_2) \\
&+ \sum_{i=1}^{3} o_1^{(i)}(\|\Delta x_i(t_i)\|) - \sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} o_2^{(i)}(\|\Delta x_i(t)\|).
\end{aligned} \qquad (3.10)$$

Here, the $o_i(\cdot)$, $i = 1, \ldots, 8$, are defined by the expansions
$$\varphi_i(x_i(t_i)) - \varphi_i(x_i^0(t_i)) = \frac{\partial \varphi_i'(x_i^0(t_i))}{\partial x_i}\,\Delta x_i(t_i) + o_1^{(i)}(\|\Delta x_i(t_i)\|), \quad i = \overline{1, 3},$$
$$H_i(t, x_i(t), u_i^0(t), \psi_i^0(t)) - H_i(t, x_i^0(t), u_i^0(t), \psi_i^0(t)) = \frac{\partial H_i'(t, x_i^0(t), u_i^0(t), \psi_i^0(t))}{\partial x_i}\,\Delta x_i(t) + o_2^{(i)}(\|\Delta x_i(t)\|), \quad i = \overline{1, 3},$$
$$L_2(x_1(t_1), v_2^0, \psi_2^0(t_1 - 1)) - L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1)) = \frac{\partial L_2'(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1))}{\partial x_1}\,\Delta x_1(t_1) + o_3(\|\Delta x_1(t_1)\|),$$
$$L_3(x_2(t_2), v_3^0, \psi_3^0(t_2 - 1)) - L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1)) = \frac{\partial L_3'(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1))}{\partial x_2}\,\Delta x_2(t_2) + o_4(\|\Delta x_2(t_2)\|).$$


Now take $\psi_i^0(t)$, $i = 1, 2, 3$, as solutions of the following linear difference equations:
$$\left.\begin{aligned}
\psi_i^0(t - 1) &= \frac{\partial H_i[t]}{\partial x_i}, \quad i = 1, 2, 3, \quad t \in T_i, \\
\psi_1^0(t_1 - 1) &= -x_1^* + \frac{\partial L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1))}{\partial x_1}, \\
\psi_2^0(t_2 - 1) &= -x_2^* + \frac{\partial L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1))}{\partial x_2}, \\
\psi_3^0(t_3 - 1) &= -x_3^*.
\end{aligned}\right\} \qquad (3.11)$$

Then the increment formula (3.9) reduces to the simpler form
$$\Delta S(u^0, v^0) = -\sum_{i=1}^{3} \sum_{t = t_{i-1}}^{t_i - 1} \Delta_{u_i} H_i[t] - \Delta_{v_1} L_1(v_1^0, \psi_1^0(t_0 - 1)) - \Delta_{v_2} L_2(x_1^0(t_1), v_2^0, \psi_2^0(t_1 - 1)) - \Delta_{v_3} L_3(x_2^0(t_2), v_3^0, \psi_3^0(t_2 - 1)) + \eta_1(u^0, v^0; \Delta u, \Delta v). \qquad (3.12)$$
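A sketch of how the conjugate system (3.11) is actually solved: the adjoint runs backward in time, stage by stage, with the jump terms −x_i* + ∂L_{i+1}/∂x_i entering at the stage boundaries. The scalar linear stage data below (coefficients a_i, couplings c_i of the jump maps, and the choice x_i* = sign(x_i^0(t_i)), an element of the superdifferential of ϕ_i = |·| away from the kink) are hypothetical, chosen only to make the recursion concrete.

```python
# Hypothetical scalar stages: f_i(t,x,u) = a[i]*x + u, g_{i+1}(x,v) = c[i]*x + v,
# so dH_i[t]/dx = a[i]*psi_i(t) and dL_{i+1}/dx_i = c[i]*psi_{i+1}(t_i - 1).
a = [0.9, 1.1, 1.0]
c = [1.0, 0.5]                      # couplings of g_2, g_3 to the previous state
switch = [0, 5, 10, 15]             # t_0, t_1, t_2, t_3
xstar = [1.0, -1.0, 1.0]            # x_i* = sign(x_i^0(t_i)) (assumed)

def adjoint():
    """Backward solve of (3.11); psi[i][t] holds psi_i^0(t)."""
    psi = [dict(), dict(), dict()]
    entry = None                                 # psi_{i+1}^0(t_i - 1) from the later stage
    for i in (2, 1, 0):
        t0, t1 = switch[i], switch[i + 1]
        jump = 0.0 if i == 2 else c[i] * entry   # dL_{i+1}/dx_i boundary term
        psi[i][t1 - 1] = -xstar[i] + jump        # boundary condition of (3.11)
        for t in range(t1 - 1, t0 - 1, -1):      # psi(t-1) = dH_i[t]/dx = a_i*psi(t)
            psi[i][t - 1] = a[i] * psi[i][t]
        entry = psi[i][t0 - 1]                   # value needed by the previous stage
    return psi

psi = adjoint()
print({i: psi[i][switch[i + 1] - 1] for i in range(3)})   # boundary values
```

With ψ in hand, every term of (3.12) is computable, so candidate controls and switching parameters can be screened directly against the maximum principle.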

Let $(u_i^0(t), v_i^0)$ be an optimal pair, and assume that the sets of admissible velocities are convex along the process $(u_i^0(t), v_i^0, x_i^0(t))$, i.e., the sets
$$f_i(t, x_i^0(t), U_i) = \{\alpha_i : \alpha_i = f_i(t, x_i^0(t), u_i),\ u_i \in U_i\}, \quad i = 1, 2, 3,$$
$$g_1(V_1) = \{\alpha_4 : \alpha_4 = g_1(v_1),\ v_1 \in V_1\},$$
$$g_i(x_{i-1}^0(t_{i-1}), V_i) = \{\alpha_{i+3} : \alpha_{i+3} = g_i(x_{i-1}^0(t_{i-1}), v_i),\ v_i \in V_i\}, \quad i = 2, 3,$$
are convex. Let $\varepsilon \in [0, 1]$ be an arbitrary number. Denote the increment of the optimal pair by
$$\Delta u_i(t; \varepsilon) = u_i(t; \varepsilon) - u_i^0(t), \quad t \in T_i, \quad i = 1, 2, 3; \qquad \Delta v_i(\varepsilon) = v_i(\varepsilon) - v_i^0, \quad i = 1, 2, 3. \qquad (3.13)$$

Then, by convexity, for each $u_i(t) \in U_i$, $v_i \in V_i$, $t \in T_i$, $i = 1, 2, 3$, there are $u_i(t, \varepsilon) \in U_i$ and $v_i(\varepsilon) \in V_i$, $i = 1, 2, 3$, such that
$$\Delta_{u_i(t, \varepsilon)} f_i[t] = \varepsilon\,\Delta_{u_i(t)} f_i[t], \quad i = 1, 2, 3,$$
$$\Delta_{v_1(\varepsilon)} g_1(v_1^0) = \varepsilon\,\Delta_{v_1} g_1(v_1^0),$$
$$\Delta_{v_i(\varepsilon)} g_i(x_{i-1}^0(t_{i-1}), v_i^0) = \varepsilon\,\Delta_{v_i} g_i(x_{i-1}^0(t_{i-1}), v_i^0), \quad i = 2, 3.$$
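The convexity argument behind these equalities can be made tangible: when f_i is affine in the control and U_i is convex, the intermediate control u_ε = (1 − ε)u⁰ + εu lies in U_i and realizes exactly the scaled velocity increment. A short numeric check, with an arbitrary affine stage map as an assumption:

```python
f = lambda x, u: 0.9 * x + 2.0 * u          # affine in u; U = [0, 1] is convex
x0, u0, u, eps = 0.3, 0.2, 0.9, 0.25
u_eps = (1.0 - eps) * u0 + eps * u          # stays in U by convexity
lhs = f(x0, u_eps) - f(x0, u0)              # Delta_{u_eps} f
rhs = eps * (f(x0, u) - f(x0, u0))          # eps * Delta_u f
print(lhs, rhs, abs(lhs - rhs) < 1e-12)     # equal up to rounding
```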

The variation (3.13) induces an increment of the solution $x_i(t)$, denoted by $\{\Delta x_i(t; \varepsilon),\ i = 1, 2, 3\}$. Using the step method, one can prove that $\|\Delta x_i(t; \varepsilon)\| \le Z_{11}\varepsilon$, $t \in T_i \cup \{t_i\}$, $i = 1, 2, 3$. Substituting these estimates into (3.12), recalling that optimality means $\Delta S(u^0, v^0) \ge 0$, dividing by $\varepsilon$, and letting $\varepsilon \to 0$, we arrive at conditions (3.6)–(3.8). This completes the proof.

Theorem 2: Assume that $\varphi_i$ is Lipschitz continuous around $x_i^0(t_i)$ and upper regular at this point, and that the sets

$$f_i(t, x_i^0(t), U_i) = \{\alpha_i : \alpha_i = f_i(t, x_i^0(t), u_i),\ u_i \in U_i\}, \quad i = 1, 2, 3,$$
$$g_1(V_1) = \{\alpha_4 : \alpha_4 = g_1(v_1),\ v_1 \in V_1\},$$
$$g_i(x_{i-1}^0(t_{i-1}), V_i) = \{\alpha_{i+3} : \alpha_{i+3} = g_i(x_{i-1}^0(t_{i-1}), v_i),\ v_i \in V_i\}, \quad i = 2, 3,$$

are convex. Then, for the optimality of an admissible control $(u^0(t), v^0)$ in the problem given through (3.1)–(3.5), it is necessary that for any $x_i^* \in \partial^+\varphi_i(x_i^0(t_i))$ the following conditions hold:

the discrete maximum principle for the control $u_i^0(t)$, $i = 1, 2, 3$,
$$\sum_{t = t_{i-1}}^{t_i - 1} \Delta_{u_i(t)} H_i[t] \le 0 \quad \text{for all } u_i(t) \in U_i, \quad i = 1, 2, 3, \quad t \in T_i; \qquad (3.14)$$

the discrete maximum principle for the controlling parameter $v_i^0$,
$$\max_{v_i \in V_i} L_i(x_{i-1}^0(t_{i-1}), v_i, \psi_i^0(t_{i-1} - 1)) = L_i(x_{i-1}^0(t_{i-1}), v_i^0, \psi_i^0(t_{i-1} - 1)), \quad i = 2, 3, \qquad (3.15)$$

ti – 1



for all ui(t) ∈ Ui, t ∈ Ti, i = 1, 2, 3,

t = ti – 1

∂H 'i [ t ] 0 -------------- ( ui ( t ) – ui ( t ) ) ≤ 0 ∂u i

(4.1)

∂ L '1 ( v 1, ψ 1 ( t 0 – 1 ) ) 0 ----------------------------------------------(v 1 – v 1) ≤ 0 ∂v 1 0

0

(4.2)

for all v1 ∈ V1, ∂ L i' ( x i – 1 ( t i – 1 ), v i , Ψ i ( t i – 1 – 1 ) ) 0 ----------------------------------------------------------------------------- ( v i – v i ) ≤ 0, for all v i ∈ V i , i = 2, 3. (4.3) ∂v i In the case of openness of the sets Ui, Vi, i = 1, 2, 3 also using Euler’s equation, one can derive the necessary optimality conditions: Corollary 2 (An analogue of Euler’s equation): If the sets Ui, Vi are open, then, for the optimality of the 0

0

0

pair (u0(t), v0), it is necessary that for any x*k ∈ ∂ f ( x ( t i ) ) following equations hold ∂H 'i [ t ] -------------- = 0, t ∈ T i , i = 1, 2, 3, ∂u i 0

(4.4)

∂ L '1 ( v 1, ψ 1 ( t 0 – 1 ) ) ---------------------------------------------- = 0, ∂v 1 0

0

∂ L 'i ( x i – 1 ( t i – 1 ), v i , ψ i ( t i – 1 – 1 ) ) ----------------------------------------------------------------------------= 0, ∂v i 0

0

(4.5)

0

i = 1, 2, 3.
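For open U_i, condition (4.4) is an ordinary stationarity condition, so it can be checked numerically. The sketch below verifies ∂H_i/∂u_i = 0 by central finite differences along candidate controls for one hypothetical smooth stage; the dynamics f(t, x, u) = 0.9x + u − u³/3 and the sample state and adjoint values are invented so that an interior stationary control exists.

```python
# Finite-difference check of the Euler-type condition (4.4) for one stage.
def f(t, x, u):                 # hypothetical smooth stage dynamics
    return 0.9 * x + u - u**3 / 3.0

def H(t, x, u, psi):            # Hamiltonian H = psi * f(t, x, u)
    return psi * f(t, x, u)

def dH_du(t, x, u, psi, h=1e-6):
    return (H(t, x, u + h, psi) - H(t, x, u - h, psi)) / (2.0 * h)

x, psi = 0.4, -1.2              # sample state and adjoint values
for u in (0.5, 1.0, 1.5):       # u = 1.0 is the interior stationary control
    print(f"u = {u:.1f}:  dH/du = {dH_du(0, x, u, psi):+.6f}")
```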


REFERENCES

1. Magerramov, Sh.F. and Mansimov, K.B., Optimization of a Class of Discrete Step Control Systems, Comput. Math. Math. Phys., 2001, vol. 41, no. 3, pp. 334–339; translation from Zh. Vychisl. Mat. Mat. Fiz., 2001, vol. 41, no. 3, pp. 360–366.
2. Mordukhovich, B.S., Approximation Methods in Problems of Optimization and Control, Moscow: Nauka, 1988 (in Russian).
3. Propoy, A., Elements of the Theory of Optimal Discrete Processes, Moscow: Nauka, 1969 (in Russian).
4. Bengea, S.C. and DeCarlo, R.A., Optimal Control of Switching Systems, Automatica, 2005, vol. 41, no. 1, pp. 11–27.
5. Gabasov, R. and Tarasenko, N., Necessary High-Order Conditions of Optimality for Discrete Systems, Autom. Remote Control, 1971, no. 1, pp. 50–57; translation from Avtom. Telemekh., 1971, no. 1, pp. 58–65.
6. Maharramov, Sh. and Dempe, S., Optimization of a Class of Discrete Systems with Varying Structure, Preprint 2005-US34, TU Bergakademie Freiberg, Faculty of Mathematics and Informatics, Germany. http://www.mathe.tureiberg.de/~dempe/Artikel/inaharramov.pdf
7. Mansimov, K.B., Sufficient Conditions of Krotov Type in Discrete 2-Parameter Systems, Autom. Remote Control, 1985, vol. 46, no. 8, part 1, pp. 932–937.
8. D'Apice, C., Garavello, M., Manzo, R., and Piccoli, B., Hybrid Optimal Control: Case Study of a Car with Gears, Int. J. Control, 2003, vol. 76, pp. 1272–1284.
9. Maharramov, Sh.F., Analysis of the Quasisingular Control for a Step Control Problem, Bilgi (Physics, Mathematics, Earth Sciences), Azerbaijan Republic "Tahsil" Society, 2003, no. 1, pp. 44–50.
10. Maharramov, Sh.F., Investigation of Singular Controls in One Discrete System with Variable Structure and Delay, Proc. of the Institute of Mathematics and Mechanics, 2001, vol. 14, pp. 169–175.
11. Egerstedt, M., Wardi, Y., and Delmotte, F., Optimal Control of Switching Times in Switched Dynamical Systems, Proc. IEEE Conf. Decision and Control, 2003, pp. 2138–2143.
12. Maharramov, Sh.F., Optimality Conditions for a Class of Discrete Optimal Control Problems, PhD Thesis, Baku: Institute of Cybernetics, Azerbaijan Academy of Sciences, 2003 (in Russian).
13. Mordukhovich, B.S., Variational Analysis and Generalized Differentiation, I: Basic Theory, II: Applications, Grundlehren Series (Fundamental Principles of Mathematical Sciences), vols. 330–331, New York: Springer-Verlag, 2005.
14. Mordukhovich, B.S. and Shvartsman, I., Discrete Maximum Principle for Nonsmooth Optimal Control Problems with Delays, Cybern. Syst. Anal., 2002, vol. 38, pp. 255–264.
15. Mordukhovich, B.S., Maximum Principle in Problems of Time Optimal Control with Nonsmooth Constraints, J. Appl. Math. Mech., 1976, vol. 40, pp. 960–969.
16. Bellman, R., Dynamic Programming, Princeton: Princeton University Press, 1957.
17. Aris, R., Discrete Dynamic Programming, New York: Blaisdell, 1964.
18. Wu, C., Teo, K.L., Li, R., and Zhao, Y., Optimal Control of Switched Systems with Time Delay, Appl. Math. Lett., 2006, vol. 19, pp. 1062–1067.
19. Xu, X. and Antsaklis, P.J., A Dynamic Programming Approach for Optimal Control of Switched Systems, Proc. IEEE Conf. Decision and Control, 2000, pp. 1822–1827.
