J Glob Optim (2013) 57:399–414
DOI 10.1007/s10898-012-9957-5

Optimality conditions for a class of composite multiobjective nonsmooth optimization problems

Li Ping Tang · Ke Quan Zhao

Received: 30 June 2011 / Accepted: 30 June 2012 / Published online: 10 July 2012 © Springer Science+Business Media, LLC. 2012

Abstract  In this paper, a class of composite multiobjective nonsmooth optimization problems with cone constraints is considered. Necessary optimality conditions for a weak minimum are established in terms of a semi-infinite Gordan type theorem. The η-generalized null space condition, which is a proper generalization of the generalized null space condition, is proposed. Sufficient optimality conditions are obtained for a weak minimum, a Pareto minimum and a Benson's proper minimum under K-generalized invexity and the η-generalized null space condition. Some examples are given to illustrate our main results.

Keywords  Weak minimum · Pareto minimum · K-generalized invexity · η-generalized null space condition · Composite multiobjective nonsmooth optimization problems

Mathematics Subject Classification (2000)  90C26 · 90C29 · 90C30 · 90C46

L. P. Tang (B)
Department of Mathematics, College of Sciences, Shanghai University, 200444 Shanghai, China
e-mail: [email protected]

K. Q. Zhao
College of Mathematics Science, Chongqing Normal University, 400047 Chongqing, China
e-mail: [email protected]

1 Introduction

It is well known that composite optimization problems form an important class of optimization problems. This is not only because they cover various practical problems, such as minimax problems and the penalty methods for constrained optimization, but also because they provide a unified framework for studying the convergence behaviour of many algorithms in optimization (see [1–6]). With the development of economics, engineering, decision-making and other disciplines, multiobjective optimization has received a great deal of attention and has become a useful mathematical model. As a generalization of single objective optimization, multiobjective optimization enlarges the range of applications of optimization problems, arising in mechanical engineering (see [7]), the design of aircraft control systems (see [8]), resource planning and management, mathematical biology (see [9]) and so on.

In 1991, Jeyakumar [13] considered a class of single objective convex composite problems, where the objective function and the constraints are compositions of locally Lipschitz and Gâteaux differentiable functions, and established optimality conditions in terms of the null space condition and upper convex approximation. In 1993, Jeyakumar and Yang [15] generalized the single objective composite optimization problem of [13] to the multiobjective case and introduced the generalized null space condition to obtain sufficient optimality conditions and Mond–Weir type duality results. Moreover, a characterization of properly efficient solutions for convex composite multiobjective problems was presented under appropriate conditions. In [15], Jeyakumar and Yang also pointed out that this provides a unified framework for studying nonconvex composite multiobjective problems. Later, Mishra and Mukherjee [16] and Mishra [17] considered the composite nonsmooth multiobjective problems of [15] under V-invexity, V-pseudoinvexity and V-quasiinvexity, and established sufficient optimality conditions and saddle point theorems. By means of V-ρ-invexity, Reddy and Mukherjee [18] presented sufficient optimality conditions, and Wang and Yan [19] developed a generalized saddle point theorem for the composite nonsmooth multiobjective problem of [15].

On the other hand, various generalizations of convexity have been proposed for practical optimization problems. In 1981, Hanson [21] introduced a class of differentiable functions, named invex functions by Craven [22]; Craven [23] gave the definition of cone-invex functions in the differentiable case, and Khurana [26] defined (strongly) cone pseudoinvexity as a generalization of the cone-invexity of [23]. However, many authors have been more interested in the nonsmooth case. In 1993, Yen and Sach [27] gave the concepts of cone generalized invexity and cone nonsmooth invexity for locally Lipschitz functions. Introducing the concepts of cone nonsmooth quasi-invex and (strictly or strongly) cone pseudo-invex functions, Suneja et al. [28] established optimality conditions and proved Mond–Weir type duality results for a class of nonsmooth multiobjective optimization problems in terms of the generalized gradient.

Motivated by the works of [13–22,25,28], we consider the following nonsmooth composite multiobjective optimization problem:

(MOP)   K-min  (f ∘ F)(x)
        s.t.   −(g ∘ G)(x) ∈ Q,

where f : Rn → Rp, g : Rn → Rm, F : Rs → Rn and G : Rs → Rn, and K ⊆ Rp and Q ⊆ Rm are closed convex cones with nonempty interiors. Let

X0 = {x ∈ Rs : −(g ∘ G)(x) ∈ Q}

denote the set of all feasible solutions of (MOP). Using the semi-infinite Gordan type theorem and establishing appropriate upper convex approximations, we derive Fritz John and Karush–Kuhn–Tucker necessary optimality conditions for (MOP) under the assumption of cone generalized invexity. We also present a proper generalization of the generalized null space condition, named the η-generalized null space condition (η-GNC), to establish sufficient optimality conditions for a weak minimum, a Pareto minimum and a Benson's proper minimum of (MOP). Moreover, some examples are given to illustrate the sufficient optimality conditions for a Pareto minimum of (MOP).


2 Preliminaries

Throughout this paper, we suppose that f and g are locally Lipschitz functions, F and G are locally Lipschitz and Gâteaux differentiable functions with Gâteaux derivatives F′(·) and G′(·) respectively, and η : Rn × Rn → Rn. Let X be a subset of Rn. The closure, cone hull and convex hull of X are denoted by cl(X), cone(X) and co(X), respectively.

The directional derivative of ϕ : X → R at x ∈ X in the direction v ∈ Rn is defined by

ϕ′(x; v) = lim_{λ↓0} [ϕ(x + λv) − ϕ(x)] / λ.

ϕ : X → R is said to be Gâteaux differentiable at x ∈ X if ϕ′(x; v) exists for each v ∈ Rn and there exists a continuous linear functional from Rn to R, denoted by ϕ′(x), such that ϕ′(x; v) = ⟨ϕ′(x), v⟩, ∀v ∈ Rn.

The upper Dini directional derivative of ϕ at x ∈ X in the direction v ∈ Rn is defined by

ϕ⁺(x; v) = limsup_{λ↓0} [ϕ(x + λv) − ϕ(x)] / λ.

A function ϕx : X → R, for any given x ∈ X, is said to be an upper convex approximation of ϕ : X → R at x if ϕx is a continuous sublinear function and ϕ⁺(x; v) ≤ ϕx(v) for each v ∈ X.

ϕ : X → R is said to be locally Lipschitz at x ∈ X if there exists ℓ > 0 such that |ϕ(y) − ϕ(z)| ≤ ℓ‖y − z‖ for all y, z in a neighbourhood of x; ϕ is said to be locally Lipschitz on X if it is locally Lipschitz at each point of X. For a locally Lipschitz function ϕ : X → R, the Clarke directional derivative of ϕ at x ∈ X in the direction v ∈ Rn is defined by

ϕ°(x; v) = limsup_{y→x, λ↓0} [ϕ(y + λv) − ϕ(y)] / λ,

and the Clarke generalized gradient of ϕ at x is given by

∂ϕ(x) = {ξ ∈ Rn : ϕ°(x; v) ≥ ⟨ξ, v⟩, ∀v ∈ Rn}.

Furthermore, ∂ϕ(x) is a nonempty convex compact subset of Rn and ϕ°(x; v) = max{⟨ξ, v⟩ : ξ ∈ ∂ϕ(x)}.

Let f = (f1, . . . , fp) : Rn → Rp. Then f is said to be locally Lipschitz on Rn if each fi is locally Lipschitz on Rn, i = 1, . . . , p. The generalized directional derivative of a locally Lipschitz function f : Rn → Rp at x ∈ Rn in the direction v ∈ Rn is given by f°(x; v) = (f1°(x; v), . . . , fp°(x; v)), and the generalized gradient of f at x ∈ Rn is the set ∂f(x) = ∂f1(x) × ··· × ∂fp(x), where fi°(x; v) is the Clarke directional derivative of fi at x in the direction v and ∂fi(x) is the Clarke generalized gradient of fi at x, i = 1, . . . , p; moreover, f°(x; v) = max{⟨ξ, v⟩ : ξ ∈ ∂f(x)}, v ∈ Rn.
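For intuition, the Clarke constructions above can be checked numerically on simple functions. The following sketch is our own illustration (not part of the paper): it estimates ϕ°(x; v) for ϕ(x) = |x| at x = 0 by sampling base points y near x and small step sizes, where the exact values are ϕ°(0; v) = |v| and ∂ϕ(0) = [−1, 1].

```python
# Our own numerical sketch (not from the paper): estimate the Clarke
# directional derivative phi°(x; v) = limsup_{y -> x, t↓0} [phi(y + t v) - phi(y)] / t
# by brute-force sampling. For phi(x) = |x| at x = 0 the exact value is |v|.
def clarke_dd(phi, x, v, radius=1e-4, steps=(1e-5, 1e-6, 1e-7), samples=201):
    """Crude estimate of phi°(x; v): maximize the difference quotient
    over sampled base points y near x and small step sizes t."""
    best = float("-inf")
    for i in range(samples):
        y = x - radius + 2 * radius * i / (samples - 1)  # y ranges over [x - r, x + r]
        for t in steps:
            best = max(best, (phi(y + t * v) - phi(y)) / t)
    return best

for v in (1.0, -1.0, 2.0):
    print(v, clarke_dd(abs, 0.0, v))  # prints approximately |v|: 1.0, 1.0, 2.0
```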


Let K ⊆ Rm be a closed convex cone with nonempty interior. The positive dual cone K+ and the strictly positive dual cone Ks+ of K are respectively defined as

K+ = {y* ∈ Rm : ⟨y, y*⟩ ≥ 0, ∀y ∈ K},
Ks+ = {y* ∈ Rm : ⟨y, y*⟩ > 0, ∀y ∈ K\{0}}.

Remark 2.1 If f, g, F and G of (MOP) are locally Lipschitz, then f ∘ F, g ∘ G, τᵀf = τ ∘ f and λᵀg = λ ∘ g are also locally Lipschitz for all τ ∈ K+, λ ∈ Q+.

Definition 2.1 Let x̄ ∈ X0.

(i) x̄ is called a weak minimum of (MOP) if f(F(x)) − f(F(x̄)) ∉ −intK, ∀x ∈ X0;

(ii) x̄ is called a Pareto minimum of (MOP) if f(F(x)) − f(F(x̄)) ∉ −K\{0}, ∀x ∈ X0;

(iii) x̄ is called a Benson's proper minimum of (MOP) if (−K) ∩ cl cone(f(F(X0)) + K − f(F(x̄))) = {0}.
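Since K+ and Ks+ recur throughout, a small sketch may help. Assuming K is polyhedral and given by finitely many generators (our own assumption; the cones in Sect. 5 have this form), membership in K+ reduces to finitely many inequalities:

```python
# Our own sketch under the assumption that K = cone{g1, ..., gk} is polyhedral:
# then K+ = {y* : <gi, y*> >= 0 for every generator gi}. For the cone
# K = {(x1, x2) : x1 <= x2 <= 0} = cone{(-1, 0), (-1, -1)} used later in Sect. 5,
# this gives K+ = {(y1, y2) : y1 <= 0, y1 + y2 <= 0}.
def in_dual(ystar, generators, tol=1e-12):
    return all(g[0] * ystar[0] + g[1] * ystar[1] >= -tol for g in generators)

K_gen = [(-1.0, 0.0), (-1.0, -1.0)]
print(in_dual((-1.0, 0.5), K_gen))  # True:  (-1, 1/2) lies in K+
print(in_dual((1.0, 0.0), K_gen))   # False: (1, 0) does not
```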

Definition 2.2 [24] The problem (MOP) is said to satisfy the generalized Slater constraint qualification if there exists x̄ ∈ X0 such that −g(G(x̄)) ∈ intQ.

Let φ : Rn → Rp be locally Lipschitz on Rn.

Definition 2.3 [27] φ is said to be K-generalized invex at the point x ∈ Rn if there exists η such that for every y ∈ Rn and ξ ∈ ∂φ(x), φ(y) − φ(x) − ξη(y, x) ∈ K.

Definition 2.4 [27] φ is said to be K-nonsmooth invex at x ∈ Rn if there exists η such that for every y ∈ Rn, φ(y) − φ(x) − φ°(x; η(y, x)) ∈ K, where φ°(x; η(y, x)) = (φ1°(x; η(y, x)), . . . , φp°(x; η(y, x))).

Definition 2.5 [28] φ is said to be K-nonsmooth quasi-invex at x ∈ Rn if there exists η such that for every y ∈ Rn, φ(y) − φ(x) ∉ intK ⇒ −φ°(x; η(y, x)) ∈ K.

Definition 2.6 [28] φ is said to be K-nonsmooth pseudo-invex at x ∈ Rn if there exists η such that for every y ∈ Rn, −φ°(x; η(y, x)) ∉ intK ⇒ −(φ(y) − φ(x)) ∉ intK.

Definition 2.7 [28] φ is said to be strictly K-nonsmooth pseudo-invex at x ∈ Rn if there exists η such that for every y ∈ Rn, −φ°(x; η(y, x)) ∉ intK ⇒ −(φ(y) − φ(x)) ∉ K.
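To make Definition 2.3 concrete, the membership condition can be tested numerically. The following sanity check is our own illustration with our own choices of φ, K = R+² and η (none of them come from the paper):

```python
# Our own sanity check (not from the paper): verify numerically that
# phi(y) = (y**2, y) is K-generalized invex with K = R_+^2 and
# eta(y, x) = y - x, i.e. phi(y) - phi(x) - xi*eta(y, x) ∈ K for the
# (unique) gradient xi = (2x, 1) at every sampled pair (x, y).
import random

def phi(y):
    return (y * y, y)

def in_K(z, tol=1e-9):                   # membership in K = R_+^2
    return all(c >= -tol for c in z)

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    eta = y - x
    xi = (2 * x, 1.0)                    # ∂phi(x) = {(2x, 1)} here
    gap = tuple(phi(y)[i] - phi(x)[i] - xi[i] * eta for i in range(2))
    assert in_K(gap), (x, y, gap)        # gap = ((y - x)**2, 0) ∈ K
print("K-generalized invexity holds on all sampled pairs")
```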


3 Necessary optimality conditions for (MOP)

In this section, we present necessary optimality conditions for (MOP). Firstly, we quote the following lemmas, which will be used in the sequel.

Lemma 3.1 (Semi-infinite Gordan type theorem) [12] Let P be a nonempty compact subset of Rn and let C be a convex cone in Rn. Then exactly one of the following two systems is consistent:

(i) ∃x ∈ C, max_{p∈P} pᵀx < 0;
(ii) 0 ∈ −C+ + co(P).

Lemma 3.2 [14] Let ϕ : Rn → Rm be K-generalized invex at x0 ∈ Rn with respect to η : Rn × Rn → Rn, where K is a closed convex cone in Rm with nonempty interior. Then exactly one of the following two systems is consistent:

(i) ∃x ∈ Rn, ϕ(x) − ϕ(x0) ∈ −intK;
(ii) ∃p ∈ K+\{0}, (p ∘ G)(Rn) ⊂ R+, where G(·) = ϕ(·) − ϕ(x0).

From Lemma 3.1 and Lemma 3.2, we have the following result for (MOP).

Lemma 3.3 Let f, g be K-generalized invex at F(x0) and Q-generalized invex at G(x0) with respect to η, respectively. If (MOP) attains a weak minimum at x0, then there exist τ̄ ∈ K+, λ̄ ∈ Q+, (τ̄, λ̄) ≠ 0, such that x0 is an optimal solution of the problem

min_{x∈Rs} [τ̄ ∘ (f ∘ F) + λ̄ ∘ (g ∘ G)](x)

and λ̄ᵀg(G(x0)) = 0.

Proof Since x0 is a weak minimum of (MOP), (f ∘ F)(x) − (f ∘ F)(x0) ∉ −intK, ∀x ∈ X0, i.e., there does not exist x ∈ Rs such that −[f(F(x)) − f(F(x0)), g(G(x))] ∈ int(K × Q). By the K-generalized invexity of f and the Q-generalized invexity of g with respect to η, (f, g) is K × Q-generalized invex with respect to η, and it follows from Lemma 3.2 that there exist τ̄ ∈ K+, λ̄ ∈ Q+, (τ̄, λ̄) ≠ 0, such that

τ̄ᵀ[(f ∘ F)(x) − (f ∘ F)(x0)] + λ̄ᵀ(g ∘ G)(x) ≥ 0, ∀x ∈ Rs,

i.e.,

τ̄ᵀ(f ∘ F)(x) + λ̄ᵀ(g ∘ G)(x) ≥ τ̄ᵀ(f ∘ F)(x0), ∀x ∈ Rs.

Taking x = x0 in the above inequality gives λ̄ᵀ(g ∘ G)(x0) ≥ 0; on the other hand, since λ̄ ∈ Q+ and −(g ∘ G)(x0) ∈ Q, we get λ̄ᵀ(g ∘ G)(x0) ≤ 0. Hence λ̄ᵀ(g ∘ G)(x0) = 0. Then

τ̄ᵀ(f ∘ F)(x) + λ̄ᵀ(g ∘ G)(x) ≥ τ̄ᵀ(f ∘ F)(x0) + λ̄ᵀ(g ∘ G)(x0), ∀x ∈ Rs,

which implies that x0 is optimal for the problem min_{x∈Rs} [τ̄ ∘ (f ∘ F) + λ̄ ∘ (g ∘ G)](x). □
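Lemma 3.1 is a theorem of the alternative: exactly one of the two systems is solvable. The toy script below (entirely our own illustration; grid search stands in for exact reasoning) exercises both branches with C = R+² and two-point sets P:

```python
# Our own toy illustration of the Gordan-type alternative (Lemma 3.1) with
# C = R_+^2 (so C+ = R_+^2) and finite P: exactly one of
#   (i)  there is x in C with max_{p in P} <p, x> < 0,
#   (ii) 0 in -C+ + co(P), i.e. some convex combination of P lies in R_+^2,
# holds for each P.
def system_i_holds(P, grid=50):
    # brute-force search for x in [0, 1]^2 subset of C with max_p <p, x> < 0
    pts = [i / grid for i in range(grid + 1)]
    return any(max(p[0] * x1 + p[1] * x2 for p in P) < 0
               for x1 in pts for x2 in pts)

def system_ii_holds(P, grid=200):
    # P has two points here, so co(P) is a segment: test t*p1 + (1-t)*p2 >= 0
    (a1, a2), (b1, b2) = P
    return any(t * a1 + (1 - t) * b1 >= 0 and t * a2 + (1 - t) * b2 >= 0
               for t in (i / grid for i in range(grid + 1)))

for P in [((1.0, 0.0), (0.0, 1.0)), ((-1.0, -1.0), (1.0, -2.0))]:
    print(P, system_i_holds(P), system_ii_holds(P))
# expected: first P -> False, True; second P -> True, False
```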


Next, we establish Fritz John necessary optimality conditions for (MOP), which extend the standard necessary conditions presented by Fritz John [10] and Bazaraa et al. [20] under the assumption of continuous differentiability. It is worth noting that the following optimality conditions are not directly obtained by the chain rule in [4]. The approach is due to Jeyakumar [13], who used the scheme of null space condition and upper convex approximation, which paves the way for studying general composite nonlinear programming problems.

Theorem 3.1 (Fritz John optimality conditions) Let f, g be K-generalized invex at F(x0) and Q-generalized invex at G(x0) with respect to η, respectively. If (MOP) attains a weak minimum at x0, then there exist τ̄ ∈ K+, λ̄ ∈ Q+, (τ̄, λ̄) ≠ 0, such that

0 ∈ ∂(τ̄ ∘ f)(F(x0)) · F′(x0) + ∂(λ̄ ∘ g)(G(x0)) · G′(x0),   (3.1)
λ̄ᵀg(G(x0)) = 0.   (3.2)

Proof Since x0 is a weak minimum of (MOP), by Lemma 3.3 there exist τ̄ ∈ K+, λ̄ ∈ Q+, (τ̄, λ̄) ≠ 0, such that x0 is optimal for the problem

min_{x∈Rs} [τ̄ ∘ (f ∘ F) + λ̄ ∘ (g ∘ G)](x)

and λ̄ᵀg(G(x0)) = 0. We define

h_k(x) = (τ̄ ∘ f)(F(x)) if k = 1,  h_k(x) = (λ̄ ∘ g)(G(x)) if k = 2.

Suppose that the following system has a solution:

d ∈ Rs − x0,  π^1_{x0}(d) + π^2_{x0}(d) < 0,

where π^k_{x0}(d) is given by

π^k_{x0}(d) = max{vᵀF′(x0)d : v ∈ ∂(τ̄ ∘ f)(F(x0))} if k = 1,
π^k_{x0}(d) = max{wᵀG′(x0)d : w ∈ ∂(λ̄ ∘ g)(G(x0))} if k = 2.

It follows from Corollary 2.1 of [15] that π^k_{x0}(d) is an upper convex approximation of h_k at x0. Then the system

d ∈ Rs − x0,  h1⁺(x0; d) + h2⁺(x0; d) < 0

has a solution. Since (τ ∘ f) ∘ F (∀τ ∈ K+) and (λ ∘ g) ∘ G (∀λ ∈ Q+) are locally Lipschitz, h1 and h2 are locally Lipschitz. Then there exists α0 > 0 such that x0 + αd ∈ Rs and

h1(x0 + αd) + h2(x0 + αd) < h1(x0) + h2(x0), ∀α : 0 < α < α0.

This contradicts Lemma 3.3. Hence the system π^1_{x0}(d) + π^2_{x0}(d) < 0, d ∈ Rs − x0, has no solution, i.e., the following system has no solution:

max{σᵀd : σ ∈ ∂(τ̄ ∘ f)(F(x0)) · F′(x0) + ∂(λ̄ ∘ g)(G(x0)) · G′(x0)} < 0,  d ∈ Rs − x0.

By Lemma 3.1,

0 ∈ ∂(τ̄ ∘ f)(F(x0)) · F′(x0) + ∂(λ̄ ∘ g)(G(x0)) · G′(x0) − (Rs − x0)+,


i.e.,

0 ∈ ∂(τ̄ ∘ f)(F(x0)) · F′(x0) + ∂(λ̄ ∘ g)(G(x0)) · G′(x0). □

According to Kuhn and Tucker [11] and Bazaraa et al. [20], we need constraint qualifications to guarantee that the multiplier of the objective function is nonzero; this means that the Fritz John conditions can be strengthened to Karush–Kuhn–Tucker conditions. Here, we consider the generalized Slater constraint qualification of [24].

Theorem 3.2 (Karush–Kuhn–Tucker optimality conditions) Let f, g be K-generalized invex at F(x0) and Q-generalized invex at G(x0) with respect to η : Rn × Rn → Rn, respectively. Suppose that the generalized Slater constraint qualification is satisfied. If (MOP) attains a weak minimum at x0, then there exist 0 ≠ τ̄ ∈ K+, λ̄ ∈ Q+ such that (3.1) and (3.2) hold.

Proof Since x0 is a weak minimum of (MOP), by Theorem 3.1 there exist τ̄ ∈ K+, λ̄ ∈ Q+, (τ̄, λ̄) ≠ 0, such that (3.1) and (3.2) hold. Assume that τ̄ = 0. From the proof of Theorem 3.1, we get

λ̄ᵀ(g ∘ G)(x) ≥ 0, ∀x ∈ Rs.   (3.3)

By the generalized Slater constraint qualification, there exists x̄ ∈ Rs such that −g(G(x̄)) ∈ intQ. Since λ̄ ≠ 0, it follows that λ̄ᵀg(G(x̄)) < 0, which contradicts (3.3). □

4 Sufficient optimality conditions for (MOP)

In order to establish sufficient optimality conditions for convex composite single objective optimization problems, Jeyakumar introduced the null space condition (NC) in [13]: for each x, a ∈ X (X a Banach space), there exists μ(x, a) ∈ X such that

K(x) − K(a) = A⁰_{x,a}(μ(x, a)),

where K(x) = (F1(x), . . . , Fp(x), G1(x), . . . , Gm(x)) and

A⁰_{x,a}(μ(x, a)) = (F1′(a)μ(x, a), . . . , Fp′(a)μ(x, a), G1′(a)μ(x, a), . . . , Gm′(a)μ(x, a)).

However, sufficient optimality conditions were obtained by Jeyakumar in [13] only in the case where the objective function and the constraints share the same inner function. For studying convex composite multiobjective programming problems with different inner functions, Jeyakumar and Yang generalized the null space condition in [15] as follows, called the generalized null space condition (GNC): for each x, a ∈ X, there exist real constants αi(x, a) > 0 (i = 1, . . . , p), βj(x, a) > 0 (j = 1, . . . , m) and μ(x, a) ∈ X such that

K(x) − K(a) = A_{x,a}(μ(x, a)),

where K(x) = (F1(x), . . . , Fp(x), G1(x), . . . , Gm(x)) and

A_{x,a}(μ(x, a)) = (α1(x, a)F1′(a)μ(x, a), . . . , αp(x, a)Fp′(a)μ(x, a), β1(x, a)G1′(a)μ(x, a), . . . , βm(x, a)Gm′(a)μ(x, a)).


Jeyakumar and Yang also pointed out that the generalized null space condition (GNC) is weaker than the null space condition (NC), and that GNC allows one to treat wider classes of nonconvex problems than before. To investigate the convex composite multiobjective problem on a convex set C, Jeyakumar and Yang restricted the generalized null space condition to the set C and then derived sufficient optimality conditions for efficient solutions and properly efficient solutions in [15].

In order to deal with the nonsmooth composite multiobjective problem (MOP) under the assumption of K-generalized invexity, we introduce the concept of the η-generalized null space condition (η-GNC) as a generalization of the generalized null space condition (GNC): for each x, a ∈ C (∅ ≠ C ⊂ Rs), there exist αi(x, a) > 0 (i = 1, . . . , p), βj(x, a) > 0 (j = 1, . . . , m) and μ(x, a) ∈ (C − a) such that

η(K(x), K(a)) = A_{x,a}(μ(x, a)),

where K(x) = (F1(x), . . . , Fp(x), G1(x), . . . , Gm(x)) and

A_{x,a}(μ(x, a)) = (α1(x, a)F1′(a)μ(x, a), . . . , αp(x, a)Fp′(a)μ(x, a), β1(x, a)G1′(a)μ(x, a), . . . , βm(x, a)Gm′(a)μ(x, a)).

Remark 4.1 Clearly, GNC implies η-GNC (take η(u, v) = u − v), but the converse is not necessarily true, as the following example shows.

Example 4.1 Let C = R+ and F : R → R2, where F(x) = (x², x), ∀x ∈ R. Define η : R2 × R2 → R2 by η(u, v) = v, ∀u, v ∈ R2. Then, for all x, a ∈ C, there exist α1(x, a) = 1/2, α2(x, a) = 1 and μ(x, a) = a ∈ (C − a) such that

η(F(x), F(a)) = (a², a) = ((1/2) · 2a · a, 1 · 1 · a) = (α1(x, a)F1′(a)μ(x, a), α2(x, a)F2′(a)μ(x, a)).

Therefore, the η-generalized null space condition (η-GNC) holds. However, GNC does not hold at the point a = 0, because for all 0 ≠ x ∈ C,

F(x) − F(0) = (x², x) ≠ (α1 · F1′(0) · μ, α2 · F2′(0) · μ) = (0, α2μ), ∀α1, α2 > 0, ∀μ ∈ (C − 0),

since the first component on the right-hand side is always 0 while x² > 0.
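The computations in Example 4.1 can be replayed mechanically. The following companion script is our own check, not part of the paper:

```python
# Our own numerical check of Example 4.1: with F(x) = (x**2, x),
# F'(a) = (2a, 1), eta(u, v) = v, alpha1 = 1/2, alpha2 = 1 and mu(x, a) = a,
# the eta-GNC identity eta(F(x), F(a)) = (a1*F1'(a)*mu, a2*F2'(a)*mu) holds
# for every a in C = R_+ (the left side equals F(a), independent of x),
# while GNC fails at a = 0 since a1*F1'(0)*mu = 0 can never equal x**2 > 0.
def F(x):
    return (x * x, x)

for a in (0.0, 0.5, 1.0, 3.0):
    lhs = F(a)                                  # eta(F(x), F(a)) = F(a)
    rhs = (0.5 * (2 * a) * a, 1.0 * 1.0 * a)    # (a1 F1'(a) mu, a2 F2'(a) mu)
    assert lhs == rhs, (a, lhs, rhs)
print("eta-GNC identity verified on sampled points of C")
```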

Now we present sufficient optimality conditions under the assumptions of η-GNC and cone generalized invexity.

Theorem 4.1 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-generalized invex at F(x0) and g be Q-generalized invex at G(x0) with respect to η : Rn × Rn → Rn. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a weak minimum of (MOP).

Proof By the necessary optimality conditions (KKT), there exist 0 ≠ τ̄ ∈ K+, λ̄ ∈ Q+ such that (3.1) and (3.2) hold. From (3.1), there exist v̄ ∈ ∂(τ̄ ∘ f)(F(x0)), w̄ ∈ ∂(λ̄ ∘ g)(G(x0)) such that

v̄ᵀF′(x0) + w̄ᵀG′(x0) = 0.   (4.1)

By the Q-generalized invexity of g at G(x0) with respect to η,

g(G(x)) − g(G(x0)) − ζη(G(x), G(x0)) ∈ Q, ∀ζ ∈ ∂g(G(x0)), ∀x ∈ X0,

and it follows from λ̄ ∈ Q+ that

λ̄ᵀ[g(G(x)) − g(G(x0))] − λ̄ᵀζη(G(x), G(x0)) ≥ 0, ∀ζ ∈ ∂g(G(x0)), ∀x ∈ X0.   (4.2)

From the chain rule for subdifferentials in [4], ∂(λ ∘ g)(G(x0)) = λᵀ∂g(G(x0)), ∀λ ∈ Q+. It follows that there exists ζ̄ ∈ ∂g(G(x0)) such that w̄ = λ̄ᵀζ̄. Since λ̄ᵀg(G(x)) ≤ 0 for every x ∈ X0 and λ̄ᵀg(G(x0)) = 0 by (3.2), we can get from (4.2) that

0 ≥ λ̄ᵀ[g(G(x)) − g(G(x0))] ≥ w̄η(G(x), G(x0)).   (4.3)

Now, by η-GNC, there exist α(x, x0), β(x, x0) > 0 and μ(x, x0) ∈ Rs such that

η(F(x), F(x0)) = α(x, x0)F′(x0)μ(x, x0),
η(G(x), G(x0)) = β(x, x0)G′(x0)μ(x, x0).

Consequently,

(1/α(x, x0)) v̄ᵀη(F(x), F(x0)) + (1/β(x, x0)) w̄ᵀη(G(x), G(x0)) = [v̄ᵀF′(x0) + w̄ᵀG′(x0)]μ(x, x0) = 0.

Combined with (4.3), this yields

v̄ᵀη(F(x), F(x0)) ≥ 0.   (4.4)

By the K-generalized invexity of f at F(x0) with respect to η,

f(F(x)) − f(F(x0)) − ξη(F(x), F(x0)) ∈ K, ∀ξ ∈ ∂f(F(x0)), ∀x ∈ X0.

Then, from τ̄ ∈ K+,

τ̄ᵀ[f(F(x)) − f(F(x0))] ≥ τ̄ᵀξη(F(x), F(x0)), ∀ξ ∈ ∂f(F(x0)), ∀x ∈ X0.   (4.5)

From (3.1), there exists ξ̄ ∈ ∂f(F(x0)) such that v̄ = τ̄ᵀξ̄. Substituting it into (4.5),

τ̄ᵀ[f(F(x)) − f(F(x0))] ≥ v̄ᵀη(F(x), F(x0)), ∀x ∈ X0.

Combined with (4.4), this yields

τ̄ᵀ[f(F(x)) − f(F(x0))] ≥ 0, ∀x ∈ X0.

By Lemma 3.2,

f(F(x)) − f(F(x0)) ∉ −intK, ∀x ∈ X0,

which means that x0 is a weak minimum of (MOP). □

Theorem 4.2 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-generalized invex at F(x0) and g be Q-generalized invex at G(x0) with respect to η : Rn × Rn → Rn. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0 with τ̄ ∈ Ks+. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a Pareto minimum of (MOP).


Proof From the optimality conditions (KKT), there exist τ̄ ∈ Ks+, λ̄ ∈ Q+ such that (3.1) and (3.2) hold. Thus there exist v̄ ∈ ∂(τ̄ ∘ f)(F(x0)), w̄ ∈ ∂(λ̄ ∘ g)(G(x0)) such that (4.1) holds. Suppose that x0 is not a Pareto minimum of (MOP). Then there exists x̂ ∈ Rs such that

f(F(x̂)) − f(F(x0)) ∈ −K\{0},   (4.6)
−g(G(x̂)) ∈ Q.   (4.7)

Since f is K-generalized invex at F(x0) and g is Q-generalized invex at G(x0) with respect to η,

f(F(x̂)) − f(F(x0)) − ξη(F(x̂), F(x0)) ∈ K, ∀ξ ∈ ∂f(F(x0)),   (4.8)
g(G(x̂)) − g(G(x0)) − ζη(G(x̂), G(x0)) ∈ Q, ∀ζ ∈ ∂g(G(x0)).   (4.9)

By η-GNC,

η(F(x̂), F(x0)) = α(x̂, x0)F′(x0)μ(x̂, x0),   (4.10)
η(G(x̂), G(x0)) = β(x̂, x0)G′(x0)μ(x̂, x0).   (4.11)

Since τ̄ ∈ Ks+ and λ̄ ∈ Q+, by (3.2) and (4.6)–(4.11) we obtain

α(x̂, x0)τ̄ᵀξF′(x0)μ(x̂, x0) < 0, ∀ξ ∈ ∂f(F(x0)),   (4.12)
β(x̂, x0)λ̄ᵀζG′(x0)μ(x̂, x0) ≤ 0, ∀ζ ∈ ∂g(G(x0)).   (4.13)

From the chain rule of [4], there exist ξ̄ ∈ ∂f(F(x0)) and ζ̄ ∈ ∂g(G(x0)) such that

τ̄ᵀξ̄ = v̄ ∈ ∂(τ̄ ∘ f)(F(x0)),  λ̄ᵀζ̄ = w̄ ∈ ∂(λ̄ ∘ g)(G(x0)).   (4.14)

It follows from (4.12)–(4.14) that

α(x̂, x0)v̄ᵀF′(x0)μ(x̂, x0) < 0,  β(x̂, x0)w̄ᵀG′(x0)μ(x̂, x0) ≤ 0.

Then v̄ᵀF′(x0)μ(x̂, x0) + w̄ᵀG′(x0)μ(x̂, x0) < 0, which contradicts (4.1). Hence x0 is a Pareto minimum of (MOP). □

Next, we present sufficient optimality conditions for (MOP) under cone-nonsmooth pseudo-invexity and cone-nonsmooth quasi-invexity.

Theorem 4.3 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-nonsmooth pseudo-invex at F(x0) and g be Q-nonsmooth quasi-invex at G(x0) with respect to η : Rn × Rn → Rn. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a weak minimum of (MOP).

Proof From the optimality conditions (KKT), there exist 0 ≠ τ̄ ∈ K+, λ̄ ∈ Q+ such that (3.1) and (3.2) hold. By (3.1), there exist v̄ ∈ ∂(τ̄ ∘ f)(F(x0)), w̄ ∈ ∂(λ̄ ∘ g)(G(x0)) such that (4.1) holds. Suppose that x0 is not a weak minimum of (MOP). Then there exists x̂ ∈ Rs such that

f(F(x̂)) − f(F(x0)) ∈ −intK,   (4.15)
−g(G(x̂)) ∈ Q.   (4.16)


By the K-nonsmooth pseudo-invexity of f at F(x0) and (4.15),

−f°(F(x0); η(F(x̂), F(x0))) ∈ intK.

Since 0 ≠ τ̄ ∈ K+, τ̄ᵀf°(F(x0); η(F(x̂), F(x0))) < 0, which is equivalent to

τ̄ᵀξη(F(x̂), F(x0)) < 0, ∀ξ ∈ ∂f(F(x0)).   (4.17)

By η-GNC, we obtain

α(x̂, x0)τ̄ᵀξF′(x0)μ(x̂, x0) < 0, ∀ξ ∈ ∂f(F(x0)).   (4.18)

By the chain rule of [4], there exists ξ̄ ∈ ∂f(F(x0)) such that v̄ = τ̄ᵀξ̄; taking ξ = ξ̄ in (4.18) gives α(x̂, x0)v̄ᵀF′(x0)μ(x̂, x0) < 0, i.e.,

v̄ᵀF′(x0)μ(x̂, x0) < 0.   (4.19)

On the other hand, since λ̄ ∈ Q+, (4.16) gives λ̄ᵀg(G(x̂)) ≤ 0. Combined with (3.2), this yields λ̄ᵀ[g(G(x̂)) − g(G(x0))] ≤ 0. Now we claim that

λ̄ᵀg°(G(x0); η(G(x̂), G(x0))) ≤ 0.   (4.20)

If λ̄ = 0, then (4.20) holds trivially. Suppose λ̄ ≠ 0; then g(G(x̂)) − g(G(x0)) ∉ intQ. Since g is Q-nonsmooth quasi-invex at G(x0), g°(G(x0); η(G(x̂), G(x0))) ∈ −Q, which implies that (4.20) holds and is equivalent to

λ̄ᵀζη(G(x̂), G(x0)) ≤ 0, ∀ζ ∈ ∂g(G(x0)).

By the η-generalized null space condition,

β(x̂, x0)λ̄ᵀζG′(x0)μ(x̂, x0) ≤ 0, ∀ζ ∈ ∂g(G(x0)).

Again by the chain rule, there exists ζ̄ ∈ ∂g(G(x0)) such that w̄ = λ̄ᵀζ̄, so β(x̂, x0)w̄ᵀG′(x0)μ(x̂, x0) ≤ 0, i.e.,

w̄ᵀG′(x0)μ(x̂, x0) ≤ 0.

From (4.1), we then have v̄ᵀF′(x0)μ(x̂, x0) ≥ 0, which contradicts (4.19). Hence x0 is a weak minimum of (MOP). □

Theorem 4.4 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be strictly K-nonsmooth pseudo-invex at F(x0) and g be Q-nonsmooth quasi-invex at G(x0) with respect to η : Rn × Rn → Rn. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a Pareto minimum of (MOP).

Proof The proof is similar to that of Theorem 4.2. □

From Theorem 4.3 and Theorem 4.4, we can obtain the following result.


Theorem 4.5 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-generalized invex at F(x0) and g be Q-nonsmooth quasi-invex at G(x0) with respect to the same η : Rn × Rn → Rn. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0 with τ̄ ∈ Ks+. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a Pareto minimum of (MOP).

Proof The proof is also similar to that of Theorem 4.2. □

Furthermore, we point out that the sufficient optimality conditions for a Pareto minimum of (MOP) are also sufficient optimality conditions for a Benson's proper minimum of (MOP).

Proposition 4.1 [26] Let τ̄ ∈ Ks+. If x0 is an optimal solution of the problem

min_{x∈X0} (τ̄ ∘ f)(F(x)),

then x0 is a Benson's proper minimum of (MOP).

Now, we present the results for Benson's proper minimum of (MOP).

Theorem 4.6 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-generalized invex at F(x0) and g be Q-generalized invex at G(x0) with respect to η. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0 with τ̄ ∈ Ks+. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a Benson's proper minimum of (MOP).

Proof The proof follows from Proposition 4.1 and Theorem 4.3 immediately. □

Theorem 4.7 (Karush–Kuhn–Tucker sufficient optimality conditions) Let f be K-generalized invex at F(x0) and g be Q-nonsmooth quasi-invex at G(x0) with respect to η. Suppose that the necessary optimality conditions (KKT) hold at x0 ∈ X0 with τ̄ ∈ Ks+. If η-GNC holds with αi(x, x0) = α(x, x0) > 0, βj(x, x0) = β(x, x0) > 0, ∀i, j, ∀x ∈ X0, then x0 is a Benson's proper minimum of (MOP).

Proof The proof is similar to that of Theorem 4.6. □

5 Examples

In this section, two examples are given to illustrate Theorem 4.2 and Theorem 4.4, respectively.

Example 5.1 Consider the problem

(MOP)   K-min  (f ∘ F)(x) = (f1(F(x)), f2(F(x)))
        s.t.   −(g ∘ G)(x) = (−g1(G(x)), −g2(G(x))) ∈ Q,

where K = {(x1, x2) ∈ R2 : x1 ≤ x2, x2 ≤ 0}, Q = {(x1, x2) ∈ R2 : x1 ≥ x2, x2 ≥ 0}, and

f = (f1, f2) : R2 → R2,  f1(y1, y2) = −|y1|,  f2(y1, y2) = y1 + y2;
g = (g1, g2) : R2 → R2,  g1(y1, y2) = −(1/2)y1,  g2(y1, y2) = −(1/4)y2;
F : R2 → R2,  F(x) = F(x1, x2) = (2x1, x2);
G : R2 → R2,  G(x) = G(x1, x2) = ((1/2)x1, x2).

(i) The set of feasible solutions of (MOP) is given by X0 = {(x1, x2) ∈ R2 : x2 ≥ 0, x1 ≥ x2}.

(ii) Let η : R2 × R2 → R2 be defined as η(x, z) = x + z. Then η-GNC holds for all x, y ∈ X0, f is K-generalized invex at F(0) and g is Q-generalized invex at G(0) with respect to η. In fact, for each x, y ∈ X0, there exist α1 = α2 = β1 = β2 = 1 and μ(x, y) = x + y such that

η(F(x), F(y)) = F(x) + F(y) = (2x1 + 2y1, x2 + y2) = (F1′(y)(x + y), F2′(y)(x + y)),
η(G(x), G(y)) = G(x) + G(y) = ((1/2)x1 + (1/2)y1, x2 + y2) = (G1′(y)(x + y), G2′(y)(x + y)),

which imply that the η-generalized null space condition holds for all x, y ∈ X0. For each z ∈ R2,

f(z) − f(F(0)) − ξη(z, F(0)) = (f1(z), f2(z)) − (ξ1z, ξ2z) = (−|z1| − ξ1z, 0) ∈ K,

where ξ = (ξ1, ξ2) ∈ ∂f(F(0)), ξ1 ∈ [−1, 1] × {0}, ξ2 = (1, 1);

g(z) − g(G(0)) − ζη(z, G(0)) = (g1(z), g2(z)) − (ζ1z, ζ2z) = (0, 0) ∈ Q,

where ζ = (ζ1, ζ2) ∈ ∂g(G(0)), ζ1 = (−1/2, 0), ζ2 = (0, −1/4). That is to say, f is K-generalized invex at F(0) and g is Q-generalized invex at G(0) with respect to η.

(iii) It can be seen that there exist τ̄ = (−1, 1/2) ∈ Ks+ = {(x1, x2) ∈ R2 : x1 < 0, −x1 > x2} and λ̄ = (1/2, 2) ∈ Q+ = {(x1, x2) ∈ R2 : x1 ≥ 0, −x1 ≤ x2} such that

∂(τ̄ ∘ f)(F(0)) = [−1/2, 3/2] × {1/2},  ∂(λ̄ ∘ g)(G(0)) = {(−1/4, −1/2)},
0 ∈ ∂(τ̄ ∘ f)(F(0)) · F′(0) + ∂(λ̄ ∘ g)(G(0)) · G′(0),

and λ̄ᵀg(G(0)) = 0. Therefore, (3.1) and (3.2) hold at 0.

From the above, all assumptions of Theorem 4.2 are satisfied. Then x0 = (0, 0) is a Pareto minimum of (MOP).
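As a companion to Example 5.1 (our own check, not part of the paper), both the KKT identity (3.1) and the Pareto claim can be tested numerically; the element v̄ = (1/16, 1/2) of ∂(τ̄ ∘ f)(F(0)) used below is our own choice:

```python
# Our own numerical companion to Example 5.1: verify the KKT identity (3.1)
# with one admissible multiplier pair, then confirm by sampling that no
# feasible x dominates x0 = (0, 0) in the order induced by
# K = {(x1, x2) : x1 <= x2 <= 0}, i.e. x0 is a Pareto minimum.
import random

def fF(x):  # (f o F)(x) with F(x) = (2 x1, x2), f = (-|y1|, y1 + y2)
    y = (2 * x[0], x[1])
    return (-abs(y[0]), y[0] + y[1])

# v in d(tau o f)(F(0)) = [-1/2, 3/2] x {1/2}, w = (-1/4, -1/2),
# F'(0) = diag(2, 1), G'(0) = diag(1/2, 1)
v, w = (1.0 / 16, 0.5), (-0.25, -0.5)
vF = (2 * v[0], 1 * v[1])
wG = (0.5 * w[0], 1 * w[1])
print("v F'(0) + w G'(0) =", (vF[0] + wG[0], vF[1] + wG[1]))  # (0.0, 0.0)

def in_minus_K(d, tol=1e-12):  # -K = {(a, b) : a >= b >= 0}
    return d[0] >= d[1] - tol and d[1] >= -tol

random.seed(1)
for _ in range(100_000):
    x2 = random.uniform(0, 5)
    x1 = random.uniform(x2, x2 + 5)  # feasible: x1 >= x2 >= 0
    d = tuple(a - b for a, b in zip(fF((x1, x2)), fF((0.0, 0.0))))
    assert not (in_minus_K(d) and d != (0.0, 0.0)), (x1, x2, d)
print("no sampled feasible point dominates (0, 0)")
```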


Example 5.2 Consider the problem

(MOP)   K-min  (f ∘ F)(x) = (f1(F(x)), f2(F(x)))
        s.t.   −(g ∘ G)(x) = (−g1(G(x)), −g2(G(x))) ∈ Q,

where K = {(x1, x2) ∈ R2 : x1 ≤ x2, x2 ≤ 0}, Q = {(x1, x2) ∈ R2 : x1 ≥ x2, x2 ≥ 0}, and

f = (f1, f2) : R2 → R2,  f1(y1, y2) = −|y1|,  f2(y1, y2) = y2 − 1;
g = (g1, g2) : R2 → R2,
g1(y1, y2) = 0 if y1 − y2 ≥ 0,  g1(y1, y2) = −y2 if y1 − y2 < 0;
g2(y1, y2) = 0 if y1 − y2 ≥ 0,  g2(y1, y2) = −y1 if y1 − y2 < 0;
F : R2 → R2,  F(x) = F(x1, x2) = (2x1, x2);
G : R2 → R2,  G(x) = G(x1, x2) = ((1/2)x1, x2).

(i) The set of feasible solutions of (MOP) is given by X0 = {(x1, x2) ∈ R2 : x2 ≥ 0, x1 ≥ x2}.

(ii) Let η : R2 × R2 → R2 be defined as η(x, z) = x + z. Then η-GNC holds for all x, z ∈ X0, f is strictly K-nonsmooth pseudo-invex at F(0) and g is Q-nonsmooth quasi-invex at G(0) with respect to η. In fact, for each x, z ∈ X0, there exist α1 = α2 = β1 = β2 = 1 and μ(x, z) = x + z such that

η(F(x), F(z)) = F(x) + F(z) = (2x1 + 2z1, x2 + z2) = (1 · F1′(z)(x + z), 1 · F2′(z)(x + z)),
η(G(x), G(z)) = G(x) + G(z) = ((1/2)x1 + (1/2)z1, x2 + z2) = (1 · G1′(z)(x + z), 1 · G2′(z)(x + z)),

which imply that the η-generalized null space condition holds for all x, z ∈ X0. For each y = (y1, y2) ∈ R2,

−f°(F(0); η(y, F(0))) = (−|y1|, −y2) ∉ intK ⇒ −(f(y) − f(F(0))) = (|y1|, −y2) ∉ K;

g(y) − g(G(0)) = (0, 0) if y1 − y2 ≥ 0, and (−y2, −y1) if y1 − y2 < 0, which in either case does not belong to intQ, and

−g°(G(0); η(y, G(0))) = (0, 0) if y1 − y2 ≥ 0, and (y2, y1) if y1 − y2 < 0, which belongs to Q.

That is to say, f is strictly K-nonsmooth pseudo-invex at F(0) and g is Q-nonsmooth quasi-invex at G(0) with respect to η.

(iii) It can be seen that there exist τ̄ = (−1, 1/2) ∈ Ks+ = {(x1, x2) ∈ R2 : x1 < 0, −x1 > x2} and λ̄ = (1/2, 2) ∈ Q+ = {(x1, x2) ∈ R2 : x1 ≥ 0, −x1 ≤ x2} such that

∂(τ̄ ∘ f)(F(0)) = [−1, 1] × {1/2},  ∂(λ̄ ∘ g)(G(0)) = co{(−2, −1/2), (0, 0)},
0 ∈ ∂(τ̄ ∘ f)(F(0)) · F′(0) + ∂(λ̄ ∘ g)(G(0)) · G′(0),

and λ̄ᵀg(G(0)) = 0. Therefore, (3.1) and (3.2) hold at 0.
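Analogously, the multiplier identity in part (iii) of Example 5.2 can be verified by direct arithmetic; the subgradient elements chosen below are our own selections from the stated sets:

```python
# Our own arithmetic check of the KKT identity (3.1) in Example 5.2: take
# v = (1/2, 1/2) in d(tau o f)(F(0)) = [-1, 1] x {1/2} and w = (-2, -1/2)
# in d(lambda o g)(G(0)) = co{(-2, -1/2), (0, 0)}, with F'(0) = diag(2, 1)
# and G'(0) = diag(1/2, 1).
v, w = (0.5, 0.5), (-2.0, -0.5)
vF = (2.0 * v[0], 1.0 * v[1])   # v F'(0) = (1.0, 0.5)
wG = (0.5 * w[0], 1.0 * w[1])   # w G'(0) = (-1.0, -0.5)
print(tuple(a + b for a, b in zip(vF, wG)))  # -> (0.0, 0.0)
```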


From the above, all assumptions of Theorem 4.4 are satisfied. Then x0 = (0, 0) is a Pareto minimum of (MOP).

Acknowledgments This work is partially supported by the National Science Foundation of China (Grants 10831009, 11126348, 11001289), the Special Fund of Chongqing Key Laboratory (CSTC, 2011KLORSE02) and the Education Committee Research Foundation of Chongqing (KJ110625). The authors are grateful to Professor Xin Min Yang for his teaching and useful comments on the original version of this paper, and also thank the anonymous reviewers for their valuable comments and suggestions, which have improved the presentation of the paper.

References

1. Ioffe, A.D.: Necessary and sufficient conditions for a local minimum 2: conditions of Levitin–Miljutin–Osmolovskii type. SIAM J. Control Optim. 17, 251–265 (1979)
2. Fletcher, R. (ed.): Practical Methods of Optimization, vol. 2: Constrained Optimization. Wiley, New York (1981)
3. Ben-Tal, A., Zowe, J.: Necessary and sufficient optimality conditions for a class of nonsmooth minimization problems. Math. Program. 24, 70–91 (1982)
4. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
5. Burke, J.V.: Descent methods for composite nondifferentiable optimization problems. Math. Program. 33, 260–279 (1985)
6. Rockafellar, R.T.: First and second order epi-differentiability in nonlinear programming. Trans. Am. Math. Soc. 307, 75–108 (1988)
7. Schy, A., Giesy, D.P.: Multicriteria optimization techniques for design of aircraft control systems. In: Stadler, W. (ed.) Multicriteria Optimization in Engineering and in the Sciences. Plenum Press, New York (1988)
8. Stadler, W. (ed.): Multicriteria Optimization in Engineering and in the Sciences. Plenum Press, New York (1988)
9. Stadler, W.: Multicriteria optimization in mechanics: a survey. Appl. Mech. Rev. 37, 277–286 (1984)
10. John, F.: Extremum problems with inequalities as subsidiary conditions. In: Friedrichs, K.O., Neugebauer, O.E., Stoker, J.J. (eds.) Studies and Essays, Courant Anniversary Volume. Interscience, New York (1948)
11. Kuhn, H., Tucker, A.: Nonlinear programming. In: Neyman, J. (ed.) Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, Berkeley (1951)
12. Jeyakumar, V.: On optimality conditions in nonsmooth inequality constrained minimization. Numer. Funct. Anal. Optim. 9, 535–546 (1987)
13. Jeyakumar, V.: Composite nonsmooth programming with Gâteaux differentiability. SIAM J. Optim. 1, 30–41 (1991)
14. Craven, B.D., Yang, X.Q.: A nonsmooth version of alternative theorem and nonsmooth multiobjective programming. Util. Math. 40, 117–128 (1991)
15. Jeyakumar, V., Yang, X.Q.: Convex composite multiobjective nonsmooth programming. Math. Program. 59, 325–343 (1993)
16. Mishra, S.K., Mukherjee, R.N.: Generalized convex composite multiobjective nonsmooth programming and conditional proper efficiency. Optimization 34, 53–66 (1995)
17. Mishra, S.K.: Lagrange multipliers saddle points and scalarizations in composite multiobjective nonsmooth programming. Optimization 38, 93–105 (1996)
18. Reddy, L.V., Mukherjee, R.N.: Composite nonsmooth multiobjective programs with V-ρ-invexity. J. Math. Anal. Appl. 235, 567–577 (1999)
19. Wang, X.L., Yan, B.Z.: The vector saddle point and its properties for V-ρ-invex programming. J. Inn. Mong. Univ. Natly. 21, 264–266 (2006)
20. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 3rd edn. Wiley, Hoboken (2006)
21. Hanson, M.A.: On sufficiency of the Kuhn–Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)
22. Craven, B.D.: Duality for generalized convex fractional programs. In: Schaible, S., Ziemba, W.T. (eds.) Generalized Concavity in Optimization and Economics. Academic Press, New York (1981)
23. Craven, B.D.: Invex functions and constrained local minima. Bull. Aust. Math. Soc. 24, 357–366 (1981)
24. Weir, T., Jeyakumar, V.: A class of nonconvex functions and mathematical programming. Bull. Aust. Math. Soc. 38, 177–189 (1988)
25. Craven, B.D.: Nonsmooth multiobjective programming. Numer. Funct. Anal. Optim. 10, 49–64 (1989)
26. Khurana, S.: Symmetric duality in multiobjective programming involving generalized cone-invex functions. Euro. J. Oper. Res. 165, 592–597 (2005)
27. Yen, N.D., Sach, P.H.: On locally Lipschitz vector valued invex functions. Bull. Aust. Math. Soc. 47, 259–271 (1993)
28. Suneja, S.K., Khurana, S., Vani: Generalized nonsmooth invexity over cones in vector optimization. Euro. J. Oper. Res. 186, 28–40 (2008)