The structure of transformation (A1) does not provide for meeting, in the optimum basis (A5), the upper bounds x̄_k ≤ e_k z_b. These constraints therefore have to be reduced to the form x̄_k ≥ 0 by using the transformations x = x − e_k x̄_k, i = 1(k); v = v − ē_k ē_k; z = z − ē_k, which are performed on the constraints of the given group before transformation (A1). The occurrence of some constraints S'_k ≤ S'_zb in the optimum basis means that the respective columns are virtually not used in transformation (A1).

LITERATURE CITED

1. L. S. Lasdon, Optimization of Large Systems [in Russian], Nauka, Moscow (1975).
2. Yu. M. Makarenko, "X-matrix simplex method," Dokl. Akad. Nauk SSSR, 259, No. 1, 30-33 (1981).

METHOD OF GENERALIZED GRADIENT DESCENT

V. I. Norkin

UDC 519.853.6

In the practical application of any optimization method there are inevitably errors in the quantities occurring in it, so one must be certain that for sufficiently small errors the optimizing sequence ends up in a small neighborhood of a solution; the method is then said to be stable. One can distinguish the stability of the method with respect to errors in the input information, i.e., in the values of the functions and their gradients (external stability), and with respect to possible errors in all the quantities occurring in the formulation of the method (internal stability). The problem of external stability has been studied in many papers, cf., e.g., [1-8]. In the present paper we investigate the internal (computational) stability of the method of generalized gradient descent (GGD) for the minimization of nonconvex nonsmooth functions under constraints.

Generalized-Differentiable Functions

Definition 1. The function f: R^n → R^1 is called generalized-differentiable (GD) at the point x if in some neighborhood of x there is defined a multivalued map G_f: y → G_f(y), upper semicontinuous at this point, such that the sets G_f(y) are bounded, convex, and closed, and one also has the expansion

f(y) = f(x) + (g, y − x) + o(x, y, g),   (1)

where g ∈ G_f(y), (g, y − x) is the scalar product of g and y − x, and o(x, y^k, g^k)/‖y^k − x‖ → 0 for any sequences y^k → x and g^k ∈ G_f(y^k); the latter condition is equivalent to the fact that

limsup_{y→x} sup_{g ∈ G_f(y)} |o(x, y, g)| / ‖y − x‖ = 0.
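As a quick numerical illustration (the example is ours, not the paper's), expansion (1) can be checked for the GD-function f(x) = |x| at x = 0 with the pseudogradient selection g = sign(y): the remainder o(x, y, g) vanishes identically, so the ratio o/‖y − x‖ trivially tends to zero.

```python
import numpy as np

# A minimal check (illustrative, not from the paper) of expansion (1)
# for the GD-function f(x) = |x| at x = 0, using the pseudogradient
# selection g = sign(y) for y != 0 (any g in [-1, 1] is valid at 0).

def f(x):
    return abs(x)

def pseudogradient(y):
    # one element of G_f(y) for f = |.|
    return float(np.sign(y))

x = 0.0
ratios = []
for y in (0.1, -0.1, 1e-3, -1e-3, 1e-6):
    g = pseudogradient(y)
    o = f(y) - f(x) - g * (y - x)       # remainder in expansion (1)
    ratios.append(abs(o) / abs(y - x))  # must tend to 0 as y -> x

print(ratios)  # all zero here, since |y| = sign(y) * y for y != 0
```

For this particular choice of G_f the remainder is exactly zero everywhere, which is the simplest way a map can satisfy Definition 1.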

A function is called generalized-differentiable in a domain if it is generalized-differentiable at each point of the domain. Vectors g ∈ G_f(y) are called pseudogradients (or generalized gradients) of the function f at the point y.

Remark 1. Properties of GD-functions are studied in [9, 10]. We note that the map G_f is not uniquely determined by Definition 1 [10]. A whole class of pseudogradient mappings G_f preserving the expansion (1) satisfies Definition 1. Formulas for calculating pseudogradients of composite GD-functions [9], which essentially give one particular pseudogradient map of the corresponding composite function from the whole class, are based on this. Any G_f satisfying Definition 1 can be used in methods of minimization of GD-functions. One can show that the minimal (with respect to inclusion) pseudogradient map of a GD-function coincides with the Clarke generalized-gradient map of this function [11]. The class of GD-functions contains continuously differentiable, convex and concave, weakly convex and weakly concave [6] functions and is closed with respect to the finite operations of maximum, minimum, and composition. In [12], along the way to generalizing Definition 1, a new definition of differentiability of locally Lipschitz mappings is given, but its relation to the definition of GD-functions is not entirely clear.

Translated from Kibernetika, No. 4, pp. 65-72, July-August, 1985. Original article submitted February 16, 1982. 0011-4235/85/2104-0495$09.50 © 1986 Plenum Publishing Corporation.

Local Minimizing Property of the Method of GGD

We consider the extremal problem

minimize f(x) subject to h(x) ≤ 0, x ∈ R^n,   (2)

where f(x) and h(x) are GD-functions and G_f(x) and G_h(x) are their pseudogradient sets. We define the multivalued map x → G(x):

         { G_f(x),                h(x) < 0,
  G(x) = { co{G_f(x) ∪ G_h(x)},  h(x) = 0,   (3)
         { G_h(x),                h(x) > 0.

The map G(x) is upper semicontinuous. If the point x* is a local extremum of problem (2), then 0 ∈ G(x*), h(x*) ≤ 0.
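As a hedged one-dimensional illustration (the example is ours, not the paper's), the necessary condition 0 ∈ G(x*) can be verified directly for f(x) = x, h(x) = −x: the local minimum of (2) is x* = 0, where h(x*) = 0, so (3) gives G(x*) = co{f′(0), h′(0)} = co{1, −1} = [−1, 1], which contains 0.

```python
# Checking 0 ∈ G(x*) from (3) for the illustrative problem
# minimize f(x) = x subject to h(x) = -x <= 0 (i.e., x >= 0, x* = 0).
grad_f = 1.0    # f'(x*) = 1
grad_h = -1.0   # h'(x*) = -1
# At x* the constraint is active, h(x*) = 0, so by (3)
# G(x*) = co{grad_f, grad_h}, a closed interval in one dimension.
lo, hi = min(grad_f, grad_h), max(grad_f, grad_h)
contains_zero = lo <= 0.0 <= hi
print(contains_zero)  # True: x* satisfies the necessary condition
```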

The algorithm of GGD for the solution of problem (2) has the form

x^{k+1} = x^k − ρ_k g^k,  x^0 ∈ R^n,  g^k ∈ G(x^k),  k ≥ 0,   (4)

where the step multipliers satisfy ρ_k ≥ 0, Σ_{k=0}^∞ ρ_k = ∞.
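Iteration (4) together with the switching map (3) can be sketched as follows; the concrete toy problem and all names below are illustrative assumptions, not taken from the paper. We minimize f(x) = |x1| + |x2| subject to h(x) = 1 − x1 − x2 ≤ 0, whose optimal value f* = 1 is attained on the segment x1 + x2 = 1, x1, x2 ≥ 0.

```python
import numpy as np

# A minimal sketch of GGD (4) with a selection from the map (3),
# on an illustrative toy problem (not from the paper):
# minimize f(x) = |x1| + |x2| subject to h(x) = 1 - x1 - x2 <= 0.

def f(x):
    return abs(x[0]) + abs(x[1])

def h(x):
    return 1.0 - x[0] - x[1]

def G_f(x):
    # one pseudogradient of f (componentwise sign)
    return np.sign(x)

def G_h(x):
    return np.array([-1.0, -1.0])

def G(x):
    # a selection from the map (3)
    if h(x) < 0.0:
        return G_f(x)
    if h(x) > 0.0:
        return G_h(x)
    return 0.5 * (G_f(x) + G_h(x))  # a point of co{G_f(x) ∪ G_h(x)}

x = np.array([-0.3, 0.2])
for k in range(5000):
    rho = 1.0 / (k + 2)             # rho_k >= 0, sum of rho_k diverges
    x = x - rho * G(x)              # iteration (4)

print(f(x), h(x))  # f(x) close to the optimal value 1, h(x) close to 0
```

The iterates cross the constraint boundary back and forth with shrinking amplitude, which is the typical behavior of GGD with a diminishing, divergent-sum step sequence.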
