To appear in the Journal of Optimization Theory & Applications

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR CONVEX COMPOSITE MULTI-OBJECTIVE OPTIMIZATION

X.Q. YANG^1 and V. JEYAKUMAR^2

Abstract: Multi-objective optimization is a useful mathematical model for investigating real-world problems with conflicting objectives, arising in economics, engineering, and human decision making. In this paper, a convex composite multi-objective optimization problem subject to a closed convex set constraint is studied. New first-order optimality conditions for a weakly efficient solution of the convex composite multi-objective optimization problem are established via scalarization. These conditions are then extended to derive second-order optimality conditions.

Key Words: Multi-objective optimization, nonsmooth analysis, convex analysis, sufficient optimality condition.

1. Introduction

This paper considers the following convex composite multi-objective optimization problem:

(P)  V-Minimize $(f_1(F_1(x)), \ldots, f_p(F_p(x)))$, subject to $x \in C$,

^1 Research Fellow, Department of Mathematics, The University of Western Australia, Australia.
^2 Senior Lecturer, Department of Applied Mathematics, The University of New South Wales, Australia.


where $C$ is a nonempty convex subset of a Banach space $X$, $f_i : \mathbb{R}^m \to \mathbb{R}$ is convex for $i = 1, \ldots, p$, and $F_i : X \to \mathbb{R}^m$ is locally Lipschitz for $i = 1, \ldots, p$. It has been shown (see Refs. 1-4) that this model is versatile. Various mathematical optimization models arising, for instance, from vector approximation and the multi-objective penalty function method can be recast in the form of the model problem (P). Thus problem (P) provides a unified approach to various results on multi-objective optimization problems. A general convex composite multi-objective nonsmooth programming problem with convex composite inequality constraints was studied, and first-order optimality conditions were derived under a null space condition, in Ref. 2. A special case of problem (P), namely vector approximation, was considered in Ref. 1 under a representation condition. Relations between these assumptions, among others, were discussed in Ref. 5.

The purpose of this paper is to study optimality conditions for problem (P). In Section 2, various scalarization results for solutions of the composite multi-objective optimization problem (P) are reviewed under verifiable conditions. In Section 3, first-order sufficient conditions for a weakly efficient solution of (P) are derived in terms of certain nonempty interior conditions. In Section 4, these results are extended to the second-order case using a recent result for scalar convex composite optimization. We now introduce the following definitions of solutions of (P); see Ref. 6.

Definition 1.1. Consider the problem (P).
(i) A feasible point $x \in C$ is said to be a weakly efficient solution of (P) if there exists no feasible solution $y \in C$ of (P) such that $f_i(F_i(y)) < f_i(F_i(x))$, $i = 1, \ldots, p$;
(ii) A feasible point $x \in C$ is said to be an efficient solution of (P) if there exists no feasible solution $y \in C$ of (P) such that $f_i(F_i(y)) \le f_i(F_i(x))$, $i = 1, \ldots, p$, and $f_i(F_i(y)) \ne f_i(F_i(x))$ for at least one $i$;
(iii) A feasible point $x \in C$ is said to be a properly efficient solution of (P) if it is efficient and if there exists a scalar $M > 0$ such that, for each $i$,
$$\frac{f_i(F_i(x)) - f_i(F_i(y))}{f_j(F_j(y)) - f_j(F_j(x))} \le M$$
for some $j$ such that $f_j(F_j(y)) > f_j(F_j(x))$, whenever $y$ is feasible for (P) and $f_i(F_i(y)) < f_i(F_i(x))$.

Let $X^*$ be the continuous dual space of the Banach space $X$. The canonical pairing between $X^*$ and $X$ is denoted by $\langle x^*, x \rangle$, where $x^* \in X^*$, $x \in X$. Let $\mathrm{int}\,K$ and $\mathrm{cl}\,K$ denote the interior and the closure of a subset $K$ of $X$, respectively. The following are some definitions from convex analysis (see Ref. 7). The normal cone to a convex subset $C$ of $\mathbb{R}^n$ at $x$ is defined by

$$N(x \mid C) = \{x^* \in \mathbb{R}^n : \langle x^*, z - x \rangle \le 0, \ \forall z \in C\}.$$
Let $h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a lower semi-continuous convex function. The convex subdifferential of the convex function $h$ at $x \in \mathrm{dom}(h) = \{x : h(x) < +\infty\}$ is defined by

$$\partial h(x) = \{x^* \in \mathbb{R}^n : (x^*, -1) \in N((x, h(x)) \mid \mathrm{epi}(h))\},$$
where $\mathrm{epi}(h)$ is the epigraph of $h$, i.e.,

$$\mathrm{epi}(h) = \{(y, \alpha) \in \mathbb{R}^n \times \mathbb{R} : h(y) \le \alpha\}.$$
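Before turning to scalarization, it may help to see Definition 1.1 in computational form. The following sketch (Python; the four objective-value vectors are illustrative and not taken from the paper) filters a finite sample of feasible points into weakly efficient and efficient ones.

```python
import numpy as np

def weakly_efficient(vals):
    """Indices i such that no other point is strictly smaller in every
    objective (Definition 1.1(i), restricted to a finite sample)."""
    return [i for i, v in enumerate(vals)
            if not any(np.all(w < v) for j, w in enumerate(vals) if j != i)]

def efficient(vals):
    """Indices i such that no other point is <= in all objectives and
    strictly < in at least one (Definition 1.1(ii))."""
    return [i for i, v in enumerate(vals)
            if not any(np.all(w <= v) and np.any(w < v)
                       for j, w in enumerate(vals) if j != i)]

# Hypothetical values (f1(F1(x)), f2(F2(x))) at four feasible points.
vals = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.0, 1.0]])
print(weakly_efficient(vals))  # [0, 2, 3]: only [2, 2] is strictly dominated
print(efficient(vals))         # [3]: only [1, 1] is efficient
```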

2. Scalarizations in Composite Multi-Objective Optimization

Let $\lambda \in \mathbb{R}^p$. The associated scalar problem for (P) is given by

(P($\lambda$))  Minimize $\sum_{i=1}^{p} \lambda_i f_i(F_i(x))$, subject to $x \in C$.

The following theorem establishes a scalarization result for a weakly efficient solution of problem (P).

Theorem 2.1. Let $C \subseteq X$, and let $f_i : \mathbb{R}^m \to \mathbb{R}$ and $F_i : X \to \mathbb{R}^m$, for $i = 1, \ldots, p$.
(i) Assume that the set
$$\Omega = \{z \in \mathbb{R}^p : \exists x \in C, \ f_i(F_i(x)) < z_i, \ i = 1, \ldots, p\}$$
is convex. If the point $a \in C$ is a weakly efficient solution of (P), then $a$ is an optimal solution of the scalar problem P($\lambda$) for some $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$;
(ii) If there exists $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ such that $a$ is a solution of P($\lambda$), then $a$ is a weakly efficient solution of (P).

Proof. We shall prove (i). Consider the system
$$x \in C, \quad f_i(F_i(x)) < f_i(F_i(a)), \quad i = 1, \ldots, p.$$
Since $a$ is a weakly efficient solution of (P), this system is inconsistent. So
$$0 \notin \Omega_a = \{z \in \mathbb{R}^p : \exists x \in C, \ f_i(F_i(x)) < f_i(F_i(a)) + z_i, \ i = 1, \ldots, p\},$$
and, by the assumption, $\Omega_a$ is an open convex subset of $\mathbb{R}^p$. Now, by a separation theorem (see Ref. 8), there exists $0 \ne \lambda \in \mathbb{R}^p$ such that $\lambda^T z \ge 0$ for all $z \in \Omega_a$. Let $x \in C$, let $r \in \mathbb{R}^p$ with $r_i > 0$, and let $\varepsilon > 0$. Taking $z_i = f_i(F_i(x)) - f_i(F_i(a)) + \varepsilon r_i$ and using the standard arguments, we get
$$\varepsilon \sum_{i=1}^{p} \lambda_i r_i + \sum_{i=1}^{p} \lambda_i f_i(F_i(x)) \ge \sum_{i=1}^{p} \lambda_i f_i(F_i(a)).$$
Since $x \in C$, $r \in \mathbb{R}^p$ with $r_i > 0$, and $\varepsilon > 0$ are arbitrary, it follows that $\lambda \in \mathbb{R}^p_+$ and, for any $x \in C$,
$$\sum_{i=1}^{p} \lambda_i f_i(F_i(x)) \ge \sum_{i=1}^{p} \lambda_i f_i(F_i(a)).$$

(ii) readily follows from the definition of a weakly efficient solution. □
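Theorem 2.1(ii) licenses a simple recipe: minimizing the weighted sum for any $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ yields a weakly efficient point. Below is a minimal numerical sketch on a hypothetical smooth two-objective instance (quadratic $f_i \circ F_i$ and a box standing in for $C$; none of these data come from the paper).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: f_i(y) = ||y||^2 (convex), F_i(x) = x - c_i (smooth).
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def scalarized(lam):
    """Solve P(lambda): minimize sum_i lam_i f_i(F_i(x)) over the box C = [-2,2]^2."""
    obj = lambda x: sum(l * np.sum((x - ci)**2) for l, ci in zip(lam, c))
    return minimize(obj, x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)]).x

# Sweeping lambda traces out weakly efficient points. For these data the
# closed form is x(lam) = lam_1 c_1 + lam_2 c_2 (with lam_1 + lam_2 = 1),
# so the prints should move along the segment between c_1 and c_2.
for t in np.linspace(0.1, 0.9, 5):
    print(scalarized([t, 1 - t]))
```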


It is noted that the result of Theorem 2.1 is an extension of a result in Ref. 9. In fact, it is shown in Ref. 9 that if
$$\Omega_1 = \{z \in \mathbb{R}^p : \exists x \in C, \ (f_i \circ F_i)(x) \le z_i, \ i = 1, \ldots, p, \ (f_i \circ F_i)(x) < z_i \ \text{for some}\ i\} \qquad (2.1)$$
is convex and $a$ is an efficient solution of (P), then $a$ is an optimal solution of the scalar problem P($\lambda$) for some $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$. Some conditions which guarantee that $\Omega_1$ or $\Omega$ is convex are as follows:
(i) $f_i \circ F_i$ is linear and $C$ is a polyhedral set, see Ref. 10;
(ii) $f_i \circ F_i$ is convex, see Ref. 11.
However, with the composite structure, we are able to provide more general classes of functions such that $\Omega$ is convex (see Ref. 2):
(a) $f_i$ is convex and $H(C)$ is convex;
(b) $f = (f_1, \ldots, f_p)$ is convex-like on $H(C)$;
where $H : X \to \Pi_p \mathbb{R}^m$ is defined by $H(x) = (F_1(x), \ldots, F_p(x))$, and $\Pi_p \mathbb{R}^m$ is the product of $p$ copies of $\mathbb{R}^m$. The following scalarization result is given in Ref. 2 for a properly efficient solution of (P).

Theorem 2.2. Let $C \subseteq X$, and let $f_i : \mathbb{R}^m \to \mathbb{R}$ and $F_i : X \to \mathbb{R}^m$, for $i = 1, \ldots, p$.
(i) Assume that, for each $\alpha > 0$ and $i = 1, \ldots, p$, the set
$$\Gamma_i^\alpha = \{z^i \in \mathbb{R}^p : \exists x \in C, \ f_i(F_i(x)) < z_i^i, \ f_i(F_i(x)) + \alpha f_j(F_j(x)) < z_j^i, \ j \ne i\}$$
is convex. If the point $a \in C$ is a properly efficient solution of (P), then $a$ is an optimal solution of the scalar problem P($\lambda$) for some $\lambda \in \mathrm{int}\,\mathbb{R}^p_+$;
(ii) If $a$ is optimal for P($\lambda$) for some $\lambda \in \mathrm{int}\,\mathbb{R}^p_+$, then $a$ is a properly efficient solution of (P).

3. First-Order Optimality Conditions

Let $g : X \to \mathbb{R}$ be a locally Lipschitz function. The first-order Michel-Penot subdifferential $\partial^\diamond g(x)$ is defined by
$$\partial^\diamond g(x) = \{x^* \in X^* : g^\diamond(x; u) \ge \langle x^*, u \rangle, \ \forall u \in X\},$$
where $g^\diamond(x; u)$ is the first-order Michel-Penot generalized directional derivative defined by
$$g^\diamond(x; u) = \sup_{z \in X} \limsup_{s \downarrow 0} \frac{g(x + sz + su) - g(x + sz)}{s}.$$
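For intuition, $g^\diamond(x; u)$ can be estimated by brute force: replace the supremum over $z$ by a maximum over a finite sample and the limsup by a single small $s$. The sketch below does this for $g(x) = |x|$ at the nonsmooth point $x = 0$, where $g^\diamond(0; u) = |u|$; it is an illustration only, not a reliable numerical method.

```python
import numpy as np

def mp_derivative(g, x, u, zs=np.linspace(-10.0, 10.0, 201), s=1e-6):
    """Crude estimate of the Michel-Penot directional derivative: the sup
    over z in X is replaced by a max over the finite sample zs, and the
    limsup over s -> 0 by one small fixed s. Purely illustrative."""
    return max((g(x + s*z + s*u) - g(x + s*z)) / s for z in zs)

g = abs
print(mp_derivative(g, 0.0, 1.0))   # ~ 1.0 = g^{diamond}(0; 1) = |1|
print(mp_derivative(g, 0.0, -2.0))  # ~ 2.0 = g^{diamond}(0; -2) = |-2|
```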

Throughout this section, we make the following assumption.

Assumption 1: $F_i$, $i = 1, \ldots, p$, is locally Lipschitz and Gâteaux differentiable on $X$, where the Gâteaux derivative of $F_i$ at $x$ is denoted by $F_i'(x)$.

The following is a first-order necessary condition for a weakly efficient solution of (P).

Theorem 3.1. For the problem (P), assume that Assumption 1 holds and $H(C)$ is convex. If $a$ is a weakly efficient solution of (P), then
$$(\exists \lambda \in \mathbb{R}^p_+ \setminus \{0\})(\forall i = 1, \ldots, p)(\exists v_i \in \partial f_i(F_i(a)))(\forall x \in C) \quad \sum_{i=1}^{p} \lambda_i v_i^T F_i'(a)(x - a) \ge 0. \qquad (3.1)$$

Proof. From our convexity assumptions, the conditions of Theorem 2.1(i) are fulfilled. Hence, there exists $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ such that
$$\sum_{i=1}^{p} \lambda_i (f_i \circ F_i)(a) = \min_{x \in C} \sum_{i=1}^{p} \lambda_i (f_i \circ F_i)(x),$$
and so, by the finite-dimensional variant of the optimality conditions, there exists $u_i \in \partial^\diamond (f_i \circ F_i)(a)$ satisfying
$$\sum_{i=1}^{p} \lambda_i u_i(x - a) \ge 0, \quad \forall x \in C.$$
Now, from the chain rule formula in Ref. 12, for each $u_i$, $i = 1, \ldots, p$, we can find $v_i \in \partial f_i(F_i(a))$ such that $u_i = v_i^T F_i'(a)$; thus,
$$\sum_{i=1}^{p} \lambda_i v_i^T F_i'(a)(x - a) \ge 0, \quad \forall x \in C.$$


The proof is complete. □
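Given candidate multipliers $\lambda$ and subgradients $v_i$, condition (3.1) can be checked numerically over a sample of $C$. A hedged sketch on the hypothetical smooth instance used earlier (smooth $f_i$, so $\partial f_i$ is the singleton gradient):

```python
import numpy as np

# Hypothetical instance from before: f_i(y) = ||y||^2, F_i(x) = x - c_i,
# so F_i'(a) = I and the subdifferential of f_i at y is {2y}.
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
lam = np.array([0.5, 0.5])
a = 0.5 * (c[0] + c[1])           # minimizer of P(lam) for these data

v = [2.0 * (a - ci) for ci in c]  # v_i in the (singleton) subdifferential

# Check (3.1): sum_i lam_i v_i^T F_i'(a)(x - a) >= 0 on a grid over C = [0,2]^2.
grid = np.linspace(0.0, 2.0, 21)
ok = all(sum(l * vi @ (np.array([x1, x2]) - a) for l, vi in zip(lam, v)) >= -1e-12
         for x1 in grid for x2 in grid)
print(ok)  # True: the Pshenichnyi-type condition holds at a
```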

The condition (3.1) is referred to as a Pshenichnyi-type condition. First-order optimality conditions for a vector optimization problem, with a nondifferentiable objective function from a real locally convex Hausdorff topological vector space to an ordered real locally convex Hausdorff topological vector space and a convex set constraint, were given in Ref. 13. Let $a \in C$ and
$$A(a) = (F_1'(a), \ldots, F_p'(a)).$$
If $A(a)(C - a) = \mathbb{R}^m \times \cdots \times \mathbb{R}^m$ (this is referred to as an onto condition) and (3.1) holds, then $a$ is a weakly efficient solution of (P) (see Ref. 2). More generally, this condition is obtained for a convex composite multi-objective nonsmooth programming problem with convex composite inequality constraints in Ref. 2. The following result shows that the onto condition can be relaxed to $\mathrm{int}\,A(a)(C - a) \ne \emptyset$.

Theorem 3.2. Suppose that (3.1) holds. If
$$\mathrm{int}\,A(a)(C - a) \ne \emptyset, \qquad (3.2)$$
then $a$ is a weakly efficient solution of (P).

Proof. Let $x \in C$. Then
$$\sum_{i=1}^{p} \lambda_i f_i(F_i(x)) - \sum_{i=1}^{p} \lambda_i f_i(F_i(a)) \ge \sum_{i=1}^{p} \lambda_i v_i^T (F_i(x) - F_i(a)).$$
Since $\mathrm{int}\,A(a)(C - a) \ne \emptyset$, there exist $w = (w_1, \ldots, w_p) \in \mathrm{int}\,A(a)(C - a)$ and $\alpha > 0$ such that
$$w + \alpha(F_1(x) - F_1(a), \ldots, F_p(x) - F_p(a)) \in A(a)(C - a) \quad \text{and} \quad w^T(\lambda_1 v_1, \ldots, \lambda_p v_p) = 0.$$
Thus there exists $y \in C$ such that
$$w + \alpha(F_1(x) - F_1(a), \ldots, F_p(x) - F_p(a)) = A(a)(y - a) \quad \text{and} \quad \sum_{i=1}^{p} \lambda_i w_i^T v_i = 0,$$
that is,
$$w_i + \alpha(F_i(x) - F_i(a)) = F_i'(a)(y - a), \quad i = 1, \ldots, p.$$
Hence, by (3.1) applied at $y \in C$,
$$0 \le \sum_{i=1}^{p} \lambda_i v_i^T F_i'(a)(y - a) = \sum_{i=1}^{p} \lambda_i v_i^T (w_i + \alpha(F_i(x) - F_i(a))) = \alpha \sum_{i=1}^{p} \lambda_i v_i^T (F_i(x) - F_i(a));$$
thus
$$\sum_{i=1}^{p} \lambda_i f_i(F_i(x)) \ge \sum_{i=1}^{p} \lambda_i f_i(F_i(a)), \quad \forall x \in C.$$
So $a$ is optimal for P($\lambda$), and therefore $a$ is a weakly efficient solution of (P). □

The following result assumes a different type of nonempty interior condition.

Theorem 3.3. Suppose that (3.1) holds. If
(i) $\mathrm{int} \bigcap_{i=1}^{p} F_i'(a)(C - a) \ne \emptyset$; (3.3)
(ii) $v_i^T (F_j(x) - F_j(a)) \le 0$, $\forall x \in C$, $v_i \in \partial f_i(F_i(a))$, $i \ne j$; (3.4)
then the point $a$ is a weakly efficient solution of (P).

Proof. Let $x \in C$. Then
$$\sum_{i=1}^{p} \lambda_i f_i(F_i(x)) - \sum_{i=1}^{p} \lambda_i f_i(F_i(a)) \ge \sum_{i=1}^{p} \lambda_i v_i^T (F_i(x) - F_i(a)).$$
Since $\mathrm{int} \bigcap_{i=1}^{p} F_i'(a)(C - a) \ne \emptyset$, we can choose $\delta \in \bigcap_{i=1}^{p} F_i'(a)(C - a)$ and $\alpha > 0$ such that
$$\delta + \alpha \sum_{i=1}^{p} (F_i(x) - F_i(a)) \in \bigcap_{i=1}^{p} F_i'(a)(C - a) \quad \text{and} \quad \delta^T \left(\sum_{i=1}^{p} \lambda_i v_i\right) = 0.$$
There exists $y \in C$ such that
$$\delta + \alpha \sum_{i=1}^{p} (F_i(x) - F_i(a)) = F_i'(a)(y - a), \quad i = 1, \ldots, p.$$
Then
$$\sum_{i=1}^{p} \lambda_i v_i^T \left(\delta + \alpha \sum_{j=1}^{p} (F_j(x) - F_j(a))\right) = \sum_{i=1}^{p} \lambda_i v_i^T F_i'(a)(y - a).$$
Hence
$$0 \le \sum_{i=1}^{p} \lambda_i v_i^T F_i'(a)(y - a) = \sum_{i=1}^{p} \lambda_i v_i^T \left(\delta + \alpha \sum_{j=1}^{p} (F_j(x) - F_j(a))\right) \le \sum_{i=1}^{p} \lambda_i v_i^T (\delta + \alpha(F_i(x) - F_i(a))) = \alpha \sum_{i=1}^{p} \lambda_i v_i^T (F_i(x) - F_i(a)),$$
where the first inequality is by (3.1), the second by (3.4), and the final equality by $\delta^T \sum_{i=1}^{p} \lambda_i v_i = 0$; thus
$$\sum_{i=1}^{p} \lambda_i f_i(F_i(x)) \ge \sum_{i=1}^{p} \lambda_i f_i(F_i(a)), \quad \forall x \in C.$$
So $a$ is optimal for P($\lambda$), and therefore $a$ is a weakly efficient solution of (P). □

When $F_i = F$, both Theorem 3.2 and Theorem 3.3 reduce to the following sufficient condition.

Corollary 3.1. Suppose that $F_i = F$ for each $i$ and $\mathrm{int}\,F'(a)(C - a) \ne \emptyset$. If
$$(\exists \lambda \in \mathbb{R}^p_+ \setminus \{0\})(\forall i = 1, \ldots, p)(\exists v_i \in \partial f_i(F(a)))(\forall x \in C) \quad \sum_{i=1}^{p} \lambda_i v_i^T F'(a)(x - a) \ge 0,$$
then $a$ is a weakly efficient solution of (P).

Proof. Since $F_i = F$, the assumption $\mathrm{int}\,F'(a)(C - a) \ne \emptyset$ implies both (3.2) and (3.3). In this case, (3.4) automatically holds. Hence $a$ is a weakly efficient solution of (P). □

The following example shows that (3.2) and (3.3) are different.

Example 3.1. Let $C = \{(x_1, x_2) : (x_1 - 1)^2 + x_2^2 \le 1\}$ and $a = (0, 0)$. Let
$$F_1(x_1, x_2) = (x_1, x_2), \quad F_2(x_1, x_2) = (-x_1, -x_2).$$
Then
$$F_1'(0, 0) = I, \quad F_2'(0, 0) = -I,$$
where $I$ is the $2 \times 2$ identity matrix. It is easy to see that
$$\mathrm{int}\{F_1'(0,0)(C - (0,0)) \cap F_2'(0,0)(C - (0,0))\} = \mathrm{int}\{(0,0)\} = \emptyset,$$
but
$$\mathrm{int}\{(F_1'(0,0), F_2'(0,0))(C - (0,0))\} = \{(x_1, x_2) : (x_1 - 1)^2 + x_2^2 < 1\} \times \{(-x_1, -x_2) : (x_1 - 1)^2 + x_2^2 < 1\} \ne \emptyset.$$
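Both interiority computations in Example 3.1 are easy to confirm numerically: a grid search finds that $F_1'(0,0)(C) \cap F_2'(0,0)(C)$ contains only the origin, while the product image plainly contains an open box. A small sketch (grid resolution and tolerance are arbitrary choices):

```python
import numpy as np

# Example 3.1: C = {(x1,x2) : (x1-1)^2 + x2^2 <= 1}, a = (0,0),
# F1'(a) = I and F2'(a) = -I, so the images of C - a are C and -C.
def in_C(p, tol=1e-9):
    return (p[0] - 1)**2 + p[1]**2 <= 1 + tol

xs = np.linspace(-2.0, 2.0, 401)
both = [(x1, x2) for x1 in xs for x2 in xs
        if in_C((x1, x2)) and in_C((-x1, -x2))]
print(both)  # only the origin: C and -C meet in a single point, so (3.3) fails

# By contrast, (F1'(a), F2'(a))(C - a) = C x (-C) contains the open set
# int C x int(-C), so the interiority condition (3.2) holds.
```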

4. Second-Order Optimality Conditions

In this section we derive second-order optimality conditions for (P). The scalarized problem of (P) is recast as a convex composite optimization problem with a non-finite valued convex function. A second-order necessary condition is obtained using the result presented in Ref. 14. The first-order sufficient condition of Section 3 is then extended to obtain a second-order sufficient condition for (P). Since the second-order necessary condition for convex composite optimization with a non-finite valued function is only available in a finite-dimensional space, we restrict our study in this section to a finite-dimensional space, i.e., we assume $X = \mathbb{R}^n$. We also assume throughout this section that
(i) $F_i = F$, $i = 1, \ldots, p$;

(ii) Let
$$F(x) = (F^1(x), \ldots, F^m(x))^T,$$
where each $F^j : \mathbb{R}^n \to \mathbb{R}$, $j = 1, \ldots, m$, is a real-valued twice strictly differentiable function, i.e., for each $x$, there exists an $n \times n$ matrix $D^2 F^j(x)$ such that
$$\lim_{\substack{y \to x \\ s, t \downarrow 0}} \frac{F^j(y + su + tv) - F^j(y + su) - F^j(y + tv) + F^j(y)}{st} = u^T D^2 F^j(x) v;$$
see Ref. 15. $D^2 F^j(x)$ is called the second strict derivative of $F^j$ at $x$.
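For smooth $F^j$, the double difference quotient above should reproduce $u^T \nabla^2 F^j(x) v$, which gives a quick sanity check. A sketch with a hypothetical test function whose Hessian is known in closed form:

```python
import numpy as np

def second_diff_quotient(Fj, y, u, v, s=1e-4, t=1e-4):
    """Double difference quotient
    [F(y+su+tv) - F(y+su) - F(y+tv) + F(y)] / (s t),
    which tends to u^T D^2 F(x) v as y -> x and s, t -> 0."""
    return (Fj(y + s*u + t*v) - Fj(y + s*u) - Fj(y + t*v) + Fj(y)) / (s * t)

# Hypothetical test function F^j(x) = x1^2 + sin(x1 x2) with known Hessian.
Fj = lambda x: x[0]**2 + np.sin(x[0] * x[1])
def hessian(x):
    c, s_ = np.cos(x[0]*x[1]), np.sin(x[0]*x[1])
    return np.array([[2 - x[1]**2 * s_, c - x[0]*x[1]*s_],
                     [c - x[0]*x[1]*s_, -x[0]**2 * s_]])

x = np.array([0.3, -0.7]); u = np.array([1.0, 2.0]); v = np.array([-1.0, 0.5])
print(second_diff_quotient(Fj, x, u, v))  # finite-difference estimate
print(u @ hessian(x) @ v)                 # exact u^T D^2 F^j(x) v
```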

For a fixed $\lambda \in \mathbb{R}^p_+$, consider the scalar optimization problem P($\lambda$) of (P), and let
$$g(y_0, y_1, \ldots, y_p) = \delta_C(y_0) + \sum_{i=1}^{p} \lambda_i f_i(y_i) : \mathbb{R}^n \times \Pi_p \mathbb{R}^m \to \mathbb{R} \cup \{+\infty\},$$
$$G(x) = (x, F(x), \ldots, F(x)) : \mathbb{R}^n \to \mathbb{R}^n \times \Pi_p \mathbb{R}^m,$$
where $\delta_C(y_0)$ is the indicator function of $C$. Clearly $g$ is a lower semi-continuous convex function, and $G$ is twice strictly differentiable, i.e., each component of $G$ is twice strictly differentiable. Then P($\lambda$) is formulated as a convex composite minimization problem of the type studied in Refs. 14 and 16:

(CP)  Minimize $g(G(x))$, subject to $x \in \mathbb{R}^n$, $G(x) \in \mathrm{dom}(g)$.

Let the Lagrangian, the critical cone, and the multiplier set of (CP) be defined, respectively, by
$$L(x, v) = \langle v, G(x) \rangle - g^*(v), \quad v \in \mathrm{dom}(g^*),$$
$$K(x) = \{u \in \mathbb{R}^n : g(G(x) + t \nabla G(x) u) \le g(G(x)), \ \text{for some}\ t > 0\},$$
$$L_0(x) = \{v \in \mathbb{R}^n \times \Pi_p \mathbb{R}^m : v \in \partial g(G(x)), \ \nabla G(x)^T v = 0\},$$
where $g^*$ denotes the convex conjugate of $g$. In fact, the function $L(x, v)$ and the sets $K(x)$ and $L_0(x)$ depend on the parameter $\lambda$.
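Membership in the critical cone $K(x)$ can be tested directly from its definition. The sketch below does this in the spirit of the earlier numerical sketches (all data are illustrative assumptions: $C = [-2,2]^2$, $F(x) = x - (1,0)$, quadratic $f_i$; the indicator $\delta_C$ is enforced by checking $x + tu \in C$):

```python
import numpy as np

# Illustrative data: C = [-2,2]^2, F(x) = x - (1,0), f1(y) = ||y||^2,
# f2(y) = ||y - (1,1)||^2, lambda = (0.5, 0.5).
lam = np.array([0.5, 0.5])
F = lambda x: x - np.array([1.0, 0.0])
dF = np.eye(2)   # Jacobian of F (constant here)
phi = lambda y: lam[0]*np.sum(y**2) + lam[1]*np.sum((y - np.array([1.0, 1.0]))**2)

def in_K(x, u, ts=np.logspace(-6, 0, 25)):
    """Test u in K(x) via the defining inequality
    g(G(x) + t grad G(x) u) <= g(G(x)) for some t > 0 (sampled t's only)."""
    in_C = lambda z: np.all(np.abs(z) <= 2.0)
    return any(in_C(x + t*u) and phi(F(x) + t*(dF @ u)) <= phi(F(x)) for t in ts)

x = np.array([2.0, 0.0])               # a boundary point of C
print(in_K(x, np.array([-1.0, 0.5])))  # True: feasible direction decreasing phi
print(in_K(x, np.array([1.0, 0.0])))   # False: every step leaves the box C
```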

Lemma 4.1. Let $\lambda \in \mathbb{R}^p_+$ and $x \in C$. Then $K(x) = \mathrm{cone}(C - x) \cap K_1(x)$, where $K_1(x)$ is the critical cone of the following convex composite problem:

Minimize $\left(\sum_{i=1}^{p} \lambda_i f_i\right)(F(x))$, subject to $x \in X$.

Proof. Let $u \in K(x)$. Then $g(G(x) + t \nabla G(x) u) \le g(G(x))$ for some $t > 0$, i.e.,
$$\delta_C(x + tu) + \sum_{i=1}^{p} \lambda_i f_i(F(x) + t \nabla F(x) u) \le \delta_C(x) + \sum_{i=1}^{p} \lambda_i f_i(F(x)).$$
Thus $u \in \mathrm{cone}(C - x)$ and
$$\sum_{i=1}^{p} \lambda_i f_i(F(x) + t \nabla F(x) u) \le \sum_{i=1}^{p} \lambda_i f_i(F(x)), \quad \text{for some}\ t > 0,$$
i.e., $u \in K_1(x)$. Since each step above is reversible, the converse inclusion also holds. The proof is complete. □

Lemma 4.2. We have
$$L_0(x) = \left\{(v_0, \lambda_1 v_1, \ldots, \lambda_p v_p) : v_0 \in N(x \mid C), \ v_i \in \partial f_i(F(x)), \ v_0 + \sum_{i=1}^{p} \lambda_i \nabla F(x)^T v_i = 0\right\}.$$

Proof. It is easy to see that $\partial g(G(x)) = N(x \mid C) \times \Pi_{i=1}^{p} \lambda_i \partial f_i(F(x))$. Thus $v \in L_0(x)$ if and only if $v = (v_0, v_1', \ldots, v_p')$, where $v_0 \in N(x \mid C)$ and $v_i' \in \lambda_i \partial f_i(F(x))$ are such that
$$v_0 + \sum_{i=1}^{p} \nabla F(x)^T v_i' = 0.$$
Then $v_i' = \lambda_i v_i$ with $v_i \in \partial f_i(F(x))$ such that
$$v_0 + \sum_{i=1}^{p} \lambda_i \nabla F(x)^T v_i = 0. \qquad \square$$

Let
$$u^T D^2 F(a) u = \begin{pmatrix} u^T D^2 F^1(a) u \\ \vdots \\ u^T D^2 F^m(a) u \end{pmatrix},$$
where $D^2 F^j(a)$ is the second strict derivative of $F^j$ at $a$, an $n \times n$ matrix.

Theorem 4.1. Assume that (3.1) holds. If $a$ is a weakly efficient solution of (P), then there exists $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ such that $L_0(a) \ne \emptyset$ and, for each $u \in \mathrm{cl}(\mathrm{cone}(C - a)) \cap \mathrm{cl}(K_1(a))$,
$$\max_{v} \left\{\sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u\right\} \ge 0,$$
where $v = (v_0, \lambda_1 v_1, \ldots, \lambda_p v_p)$ satisfies
$$v_0 \in N(a \mid C), \quad v_i \in \partial f_i(F(a)), \quad v_0 + \sum_{i=1}^{p} \lambda_i \nabla F(a)^T v_i = 0.$$

Proof. From Theorem 2.1, there exists $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ such that $a$ is a solution of P($\lambda$). Then $a$ is a solution of the convex composite optimization problem (CP). For $v \in L_0(a)$ and $u \in \mathbb{R}^n$, we have
$$L^{\circ\circ}(a, v; u, u) = \sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u,$$
where $L^{\circ\circ}(a, v; u, u)$ is the Clarke generalized second-order directional derivative of $L(\cdot, v)$ at $a$ (see Ref. 15). Using Theorem 4.1 of Ref. 14, we have that $L_0(a) \ne \emptyset$ and
$$\max\left\{\sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u : v \in L_0(a)\right\} \ge 0, \quad \forall u \in \mathrm{cl}(\mathrm{cone}(C - a)) \cap \mathrm{cl}(K_1(a)).$$
The result then follows from Lemmas 4.1 and 4.2. □

Let
$$K(a)^T D^2 F(a) K(a) := \{q \in \mathbb{R}^m : q = u^T D^2 F(a) u, \ \text{for some}\ u \in K(a)\}.$$
We now obtain a second-order sufficient condition for the problem (P) by assuming that the set $K(a)^T D^2 F(a) K(a)$ has nonempty interior.

Theorem 4.2. Assume that the interior of the set $K(a)^T D^2 F(a) K(a)$ is nonempty, i.e.,
$$\mathrm{int}\, K(a)^T D^2 F(a) K(a) \ne \emptyset,$$
and that there exists $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$ such that $L_0(a) \ne \emptyset$ and, for each $u \in \mathrm{cl}(\mathrm{cone}(C - a)) \cap \mathrm{cl}(K_1(a))$,
$$\max_{v} \left\{\sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u\right\} \ge 0,$$
where $v = (v_0, \lambda_1 v_1, \ldots, \lambda_p v_p)$ satisfies
$$v_0 \in N(a \mid C), \quad v_i \in \partial f_i(F(a)), \quad v_0 + \sum_{i=1}^{p} \lambda_i \nabla F(a)^T v_i = 0.$$
Then $a$ is a weakly efficient solution of (P).

Proof. We show that $a$ is a solution of P($\lambda$). Let $x \in C$. Then
$$\sum_{i=1}^{p} \lambda_i f_i(F(x)) - \sum_{i=1}^{p} \lambda_i f_i(F(a)) \ge \sum_{i=1}^{p} \lambda_i v_i^T (F(x) - F(a)).$$
Since $\mathrm{int}\, K(a)^T D^2 F(a) K(a) \ne \emptyset$, we can choose
$$w \in \mathrm{int}\, K(a)^T D^2 F(a) K(a), \quad \alpha > 0,$$
such that
$$w + \alpha[F(x) - F(a) - \nabla F(a)(x - a)] \in K(a)^T D^2 F(a) K(a) \quad \text{and} \quad w^T \left(\sum_{i=1}^{p} \lambda_i v_i\right) = 0.$$
There exists $u \in K(a)$ such that
$$w + \alpha[F(x) - F(a) - \nabla F(a)(x - a)] = u^T D^2 F(a) u.$$
Then
$$\sum_{i=1}^{p} \lambda_i v_i^T (w + \alpha[F(x) - F(a) - \nabla F(a)(x - a)]) = \sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u.$$
Hence
$$0 \le \sum_{i=1}^{p} \lambda_i v_i^T u^T D^2 F(a) u = \sum_{i=1}^{p} \lambda_i v_i^T (w + \alpha[F(x) - F(a) - \nabla F(a)(x - a)])$$
$$= \alpha v_0^T (x - a) + \alpha \sum_{i=1}^{p} \lambda_i v_i^T (F(x) - F(a)) \qquad \left(\text{by}\ v_0 + \sum_{i=1}^{p} \lambda_i \nabla F(a)^T v_i = 0\ \text{and}\ w^T \sum_{i=1}^{p} \lambda_i v_i = 0\right)$$
$$\le \alpha \sum_{i=1}^{p} \lambda_i v_i^T (F(x) - F(a)) \qquad (\text{by}\ v_0 \in N(a \mid C));$$
thus
$$\sum_{i=1}^{p} \lambda_i f_i(F(x)) \ge \sum_{i=1}^{p} \lambda_i f_i(F(a)), \quad \forall x \in C.$$
So $a$ is optimal for P($\lambda$). Since $\lambda \in \mathbb{R}^p_+ \setminus \{0\}$, $a$ is a weakly efficient solution of (P). □

5. Conclusion

We presented first-order and second-order optimality conditions for a convex composite multi-objective optimization problem with a closed convex set constraint. In particular, a second-order sufficient condition was derived by transforming the scalarized problem into a convex composite optimization problem with a non-finite valued convex function. These conditions were obtained by assuming nonempty interiors of certain sets, which include some nonconvex cases. We point out that sufficient conditions for an efficient solution or a properly efficient solution of (P) can be derived in terms of the corresponding scalarization results.

References

1. Jahn, J., and Sachs, E., Generalized quasiconvex mappings and vector optimization, SIAM Journal on Control and Optimization, Vol. 24, pp. 306-322, 1986.
2. Jeyakumar, V., and Yang, X.Q., Convex composite multi-objective nonsmooth programming, Mathematical Programming, Vol. 59, pp. 325-343, 1993.
3. Ben-Tal, A., and Zowe, J., Necessary and sufficient optimality conditions for a class of nonsmooth minimization problems, Mathematical Programming, Vol. 24, pp. 70-91, 1982.
4. Clarke, F., Optimization and Nonsmooth Analysis, John Wiley, New York, 1983.
5. Yang, X.Q., Generalized convex functions and vector variational inequalities, Journal of Optimization Theory and Applications, Vol. 79, pp. 563-580, 1993.
6. Sawaragi, Y., Nakayama, H., and Tanino, T., Theory of Multiobjective Optimization, Academic Press, New York, 1985.
7. Rockafellar, R.T., Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.
8. Craven, B.D., Mathematical Programming and Control Theory, Chapman and Hall, London, 1978.
9. Sengupta, S.S., Podrebarac, M.L., and Fernando, T.D.H., Probabilities of optima in multi-objective linear programs, Multiple Criteria Decision Making, edited by Cochrane, J.L., and Zeleny, M., University of South Carolina Press, Columbia, South Carolina, 1973.
10. Gale, D., Kuhn, H.W., and Tucker, A.W., Linear programming and the theory of games, Chapter 19 of Activity Analysis of Production and Allocation, edited by Koopmans, T.C., John Wiley, New York, 1951.
11. Geoffrion, A.M., Proper efficiency and the theory of vector maximization, Journal of Mathematical Analysis and Applications, Vol. 22, pp. 618-630, 1968.
12. Jeyakumar, V., Composite nonsmooth programming with Gâteaux differentiability, SIAM Journal on Optimization, Vol. 1, pp. 30-41, 1991.
13. Swartz, C., Pshenichnyi's theorem for vector minimization, Journal of Optimization Theory and Applications, Vol. 53, pp. 309-317, 1987.
14. Jeyakumar, V., and Yang, X.Q., Convex composite minimization with C^{1,1} functions, Journal of Optimization Theory and Applications, Vol. 86, 1995 (to appear).
15. Cominetti, R., and Correa, R., A generalized second-order derivative in nonsmooth optimization, SIAM Journal on Control and Optimization, Vol. 28, pp. 789-809, 1990.
16. Burke, J.V., and Poliquin, R.A., Optimality conditions for non-finite valued convex composite functions, Mathematical Programming, Vol. 57, pp. 103-120, 1992.
