PARAMETERIZED MINIMAX PROBLEM: ON LIPSCHITZ-LIKE DEPENDENCE OF THE SOLUTION WITH RESPECT TO THE PARAMETER∗

MARC QUINCAMPOIX† AND NADIA ZLATEVA‡

Abstract. We study Lipschitz continuity with respect to the parameter of the set of solutions of a parameterized minimax problem on a product Banach space. We present a sufficient condition ensuring that the map which to any value of the parameter assigns the set of solutions of the problem (possibly multi-valued and unbounded) possesses the Lipschitz-like property introduced by J.-P. Aubin.

Key words. Parameterized minimax problem, saddle point, set-valued map, Aubin property, pseudo-Lipschitz continuity

AMS subject classifications. 90C31, 90C47, 46N10

1. Introduction. Consider the parameterized minimax problem

M(λ)    inf_{x∈K} sup_{y∈L} f(x, y, λ),

where λ ∈ Λ is a parameter. Here K and L are non-empty closed subsets of the Banach spaces X and Y, respectively; {f(·, ·, λ) : X × Y → R, λ ∈ Λ} is a family of real-valued functions parameterized by λ ∈ Λ, where Λ is a subset of the Banach space Z. A saddle point of f(·, ·, λ) on K × L is any point (x̄, ȳ) ∈ K × L that satisfies

f(x̄, y, λ) ≤ f(x̄, ȳ, λ) ≤ f(x, ȳ, λ),    ∀x ∈ K, ∀y ∈ L.

A saddle point (x̄, ȳ) of f(·, ·, λ) on K × L can be considered as a solution of the minimax problem M(λ), since (x̄, ȳ) ∈ K × L and f(x̄, ȳ, λ) = inf_{x∈K} sup_{y∈L} f(x, y, λ).

Let us denote the (possibly empty) set of all saddle points of the function f(·, ·, λ) on K × L by

(1.1)    S(λ) := {(x̄, ȳ) ∈ K × L : f(x̄, y, λ) ≤ f(x̄, ȳ, λ) ≤ f(x, ȳ, λ), ∀x ∈ K, ∀y ∈ L}.

That S(λ) is non-empty can be ensured in several cases. For example, if K and L are convex sets, f(x, y, λ) is convex and lower semicontinuous in x, concave and upper semicontinuous in y, and there are x0 ∈ K and y0 ∈ L such that f(·, y0, λ) is inf-compact and f(x0, ·, λ) is sup-compact, then S(λ) ≠ ∅ by a minimax result due to Hartung [17], Theorem 1 (see also [3], Theorem 6.2.8). When, moreover, f(x, y, λ) is strictly convex in x and strictly concave in y, then S(λ) is a singleton.

In the present work we presume the existence of saddle points for M(λ) and focus our attention on studying the Lipschitz-like dependence of the solution set S(λ) on the parameter λ. That is, we find sufficient conditions for Lipschitz-like continuity of the set-valued map S : λ ⇒ S(λ) from Λ to non-empty subsets of K × L.

∗ The research has been supported by the European Community's Human Potential Programme HPRN-CT-2002-00281 (Evolution Equations).
† Laboratoire de Mathématiques, UMR CNRS 6205, Université de Bretagne Occidentale, 6, avenue Victor Le Gorgeu, 29200 Brest, France ([email protected]).
‡ Faculty of Mathematics and Informatics, St. Kliment Ohridski University of Sofia, 5, James Bourchier Blvd, 1164 Sofia, Bulgaria ([email protected]).


Of course, when the map S is single-valued, Lipschitz continuity is understood in the classical sense. However, the map S could be multi-valued. Moreover, its values S(λ) could be unbounded sets. A notion of Lipschitz-like continuity very appropriate for such a case is due to J.-P. Aubin [1, 2]: the multi-valued map S : Λ ⇒ X has the Aubin property, or is Aubin continuous, near (λ̄, x̄) ∈ gph S, if there are a positive constant κ and neighbourhoods U of x̄ and V of λ̄ such that

(1.2)    e(S(λ) ∩ U, S(µ)) ≤ κ‖λ − µ‖,    ∀λ, µ ∈ Λ ∩ V,

where e(A, B) := sup_{x∈A} d(x, B) is the excess from the set A to the set B, with e(∅, B) := +∞. S is said to be Aubin continuous if S is Aubin continuous near any point (λ̄, x̄) ∈ gph S. For various applications of Aubin continuity in the field of non-linear analysis and optimization the reader is referred, e.g., to [1, 2, 4, 23]. The Aubin property of a map S near (λ̄, x̄) is known to be equivalent to the metric regularity of S⁻¹ near (x̄, λ̄); it was originally introduced in [2] under the name of pseudo-Lipschitz continuity. For bibliographical details see [23]. Whenever S is locally bounded, Aubin continuity coincides with the classical notion of Lipschitz continuity of set-valued maps [4, 23],

e(S(λ), S(µ)) ≤ κ‖λ − µ‖,    ∀λ, µ,

but the Aubin property works without any boundedness imposed on the values of S. The Aubin property is in fact a Lipschitzean property localized in the range space as well as in the domain space.

In the present paper we establish a quite general sufficient condition for Aubin continuity of the saddle point map S : λ ⇒ S(λ) arising from the parameterized minimax problem M(λ). Examples illustrating this condition are presented. Several corollaries related to the case of convex-concave smooth data are also sketched.

The paper is organized as follows. In Section 2, after a short subsection devoted to preliminaries, we formulate and prove a sufficient condition for Aubin continuity of the solution map S : Λ ⇒ X of a parameterized minimization problem

P(λ)    inf_{x∈K} f(x, λ).
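To make the excess e(A, B) and the Aubin estimate (1.2) concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): it approximates the excess between finite samples of the sets S(λ) = {(x1, x2) : x1 − x2 = λ}, the solution map that reappears in Example 2.5 below, and compares it with κ‖λ − µ‖.

```python
# Minimal numerical sketch (illustrative only): excess between finite samples of two
# sets, and a check of an Aubin-type estimate for the map S(lam) = {x1 - x2 = lam}.
import numpy as np

def excess(A, B):
    """e(A, B) = sup_{a in A} inf_{b in B} ||a - b|| for finite point arrays;
    the paper's convention is e(empty set, B) = +infinity."""
    if len(A) == 0:
        return float("inf")
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).max()

def sample_S(lam, window=5.0, n=801):
    # sample the line x1 - x2 = lam on a bounded window (playing the role of U)
    t = np.linspace(-window, window, n)
    return np.column_stack([t, t - lam])

lam, mu = 0.3, -0.2
print(excess(sample_S(lam), sample_S(mu)), abs(lam - mu) / np.sqrt(2))
# the two numbers agree: e(S(lam) ∩ U, S(mu)) = |lam - mu| / sqrt(2) <= kappa |lam - mu|
```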

Many authors study the Lipschitz-like dependence on λ of the solutions of the associated generalized Euler equation 0 ∈ ∇_x f(x, λ) + N_K(x); see [15, 7, 25] and the references therein for recent developments. Here we do not follow that approach, because the map St : λ ⇒ St(λ), which to any λ assigns the set St(λ) of solutions of the generalized Euler equation, does not inherit the Aubin continuity property from S (see Example 2.6 for a parameterized problem such that the corresponding S is Aubin continuous while St is not). In Section 3 we present our main result (Theorem 3.2), which is a sufficient condition for Aubin continuity of the saddle point map S : Λ ⇒ X × Y of a parameterized minimax problem M(λ).


It is clear that the results of Section 2 are contained in the more general framework of Section 3. Nevertheless, we think that presenting the proof of the former simple case will help the understanding of the more technical proof of the latter general case. Section 4 relates the obtained results to some questions in the field of two-player zero-sum differential games.

2. Parameterized minimization problem.

2.1. Preliminaries. As already said, X stands for a Banach space. We denote its norm by ‖·‖, and its open unit ball by B°. The dual space is denoted by X*, while for the duality brackets the notation ⟨·, ·⟩ is used. For C ⊂ X the distance function to C is d(x, C) := inf_{c∈C} ‖x − c‖ if C ≠ ∅, and d(x, C) := +∞ if C = ∅.

A function f : X → R is Gâteaux differentiable at x ∈ X if there exists ∇f(x) ∈ X*, called the Gâteaux derivative of f at x, such that for any h ∈ X,

lim_{t→0} [f(x + th) − f(x)]/t = ⟨∇f(x), h⟩.

Also, f is said to be strictly differentiable at x̄ whenever

lim_{x→x̄, t→0} [f(x + th) − f(x)]/t = ⟨∇f(x̄), h⟩.

Given an open set U ⊂ X we denote by C^{1,α}(U) the class of all Gâteaux differentiable functions f : U → R such that ∇f : U → X* is α-Hölder on U, that is, for some constant L > 0,

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖^α,    ∀x, y ∈ U.

Let Z be a Banach space, whose norm is also denoted by ‖·‖. Let S be a map from Λ ⊂ Z to X. If not stated otherwise, a map means a set-valued map. To emphasize the multivaluedness we write S : Z ⇒ X. The inverse S⁻¹ : X ⇒ Z of S is defined by λ ∈ S⁻¹(x) ⟺ x ∈ S(λ). The graph, domain and range of S are given by

gph S := {(λ, x) | x ∈ S(λ)},    dom S := {λ | S(λ) ≠ ∅},    rge S := dom S⁻¹,

respectively. Any product space X × Z of Banach spaces X and Z is considered with the supremum norm ‖(x, z)‖ := max{‖x‖, ‖z‖}.

2.2. Assumptions. Let {f(·, λ) : X → R, λ ∈ Λ} be a family of functions parameterized by λ ∈ Λ ⊂ Z. We look for sufficient conditions ensuring Aubin continuity of the solutions of the parameterized family of constrained minimization problems

P(λ)    inf_{x∈K} f(x, λ),

where K is a given non-empty closed set in X. For λ ∈ Λ, the (possibly empty) set of solutions of the minimization problem P(λ) is denoted by

S(λ) := {x ∈ K : f(x, λ) = inf_{x∈K} f(x, λ)},

and its optimal value by

m(λ) := inf_{x∈K} f(x, λ).

It is well-known that even for a smooth parameterized problem P(λ) the solution map S : Λ ⇒ X may fail to be Lipschitz continuous. For example, for f(x, λ) = ¼x⁴ − λx, where x, λ ∈ R, and K = [−1, 1], we see that for λ ∈ (−1, 1) the solution is S(λ) = {∛λ}, and it is not Lipschitz continuous at λ = 0 ([7], Example 4.31). Hence, to establish Lipschitz behaviour of S one needs something more than the standard requirements. We now turn to the relevant analysis of P(λ).

Definition 2.1. Let X and Z be Banach spaces and let U ⊂ X, V ⊂ Z be non-empty. We denote by L^{α,β}(U; V), α, β ∈ [0, 1], the class of all functions g : U × U × V → R for which there exists a constant k_g > 0 such that for all x, x′ ∈ U and all λ, λ′ ∈ V,

|g(x, x′, λ) − g(x, x′, λ′)| ≤ k_g ‖x − x′‖^α ‖λ − λ′‖^β.

For example, g ∈ L^{1,1}(U; V) means that g(x, x′, ·) is Lipschitz on V and its best Lipschitz constant L(x, x′) satisfies L(x, x′) ≤ k‖x − x′‖ for some positive constant k and all x, x′ ∈ U.

With the parameterized family of functions {f(·, λ), λ ∈ Λ} one may associate two difference functions: the function f1 : X × X × Λ → R defined by f1(x, x′, λ) := f(x, λ) − f(x′, λ), and the function f2 : Λ × Λ × X → R defined by f2(λ, λ′, x) := f(x, λ) − f(x, λ′). The above notions are linked through the following:

Proposition 2.2. For any U ⊂ X, V ⊂ Z the function f1 ∈ L^{α,β}(U; V) if and only if f2 ∈ L^{β,α}(V; U).

Proof. Let f1 ∈ L^{α,β}(U; V). Take any x, x′ ∈ U and any λ, λ′ ∈ V. Since

f2(λ, λ′, x) − f2(λ, λ′, x′) = [f(x, λ) − f(x, λ′)] − [f(x′, λ) − f(x′, λ′)]
    = [f(x, λ) − f(x′, λ)] − [f(x, λ′) − f(x′, λ′)]
    = f1(x, x′, λ) − f1(x, x′, λ′) ≤ k_{f1} ‖x − x′‖^α ‖λ − λ′‖^β,

one can take k_{f2} := k_{f1} to conclude that f2 ∈ L^{β,α}(V; U). The proof of the other direction is similar.

We are ready to present the sufficient condition for Aubin continuity of the solution map. Given (λ̄, x̄) ∈ gph S, consider the following local assumption A at (λ̄, x̄):

(A)  There exist neighbourhoods U of x̄ and V of λ̄ such that
  1. S(λ) ∩ U ≠ ∅ for all λ ∈ Λ ∩ V;
  and there exist constants c > 0 and α ∈ [0, 1] such that
  2. f(x, λ′) ≥ m(λ′) + c d^{1+α}(x, S(λ′)),    ∀λ, λ′ ∈ Λ ∩ V, ∀x ∈ S(λ) ∩ U;
  3. f1 ∈ L^{α,1}(K; Λ ∩ V).
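The counterexample above and the role played by the growth assumption A2 can be checked numerically. The following sketch (our own illustration; the grid minimization is only an approximation of P(λ)) shows that for f(x, λ) = ¼x⁴ − λx the ratio d(S(0), S(λ))/|λ| blows up, that the difference function f1 nevertheless satisfies A3 with α = 1, and that it is A2 which degenerates at λ̄ = 0.

```python
# Numerical sketch (illustrative only): f(x, lam) = x**4/4 - lam*x on K = [-1, 1].
# The solution is S(lam) = {lam**(1/3)}, so d(S(0), S(lam)) / |lam| is unbounded,
# although A3 holds with alpha = 1; what fails is the growth condition A2.
import numpy as np

f = lambda x, lam: 0.25 * x**4 - lam * x
x_grid = np.linspace(-1.0, 1.0, 200001)

def argmin_on_K(lam):
    return x_grid[np.argmin(f(x_grid, lam))]

for lam in [1e-1, 1e-2, 1e-3]:
    x_lam = argmin_on_K(lam)
    print(lam, abs(x_lam - argmin_on_K(0.0)) / lam)   # ~ lam**(-2/3): blows up

# A3 holds with k_f1 = 1:  f1(x, x', lam) - f1(x, x', mu) = (mu - lam) * (x - x').
# A2 with alpha = 1 fails at lam_bar = 0: the growth constant degenerates, since
# (f(x_lam, 0) - m(0)) / d(x_lam, S(0))**2 = x_lam**2 / 4 -> 0 as lam -> 0.
for lam in [1e-1, 1e-2, 1e-3]:
    x_lam = argmin_on_K(lam)
    print(lam, (f(x_lam, 0.0) - f(argmin_on_K(0.0), 0.0)) / (x_lam - argmin_on_K(0.0))**2)
```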


By Proposition 2.2 it is clear that assumption A3 could be replaced with f2 ∈ L^{1,α}(Λ ∩ V; K). It is clear that A1 implies that m(λ) is finite for all λ ∈ Λ ∩ V.

In the case α = 1, assumption A2 can be considered as a relaxed (in x), uniform (in λ′) version of the so-called second order growth condition. One says that the second order growth condition holds for the problem

inf_{x∈K} f(x)

in a neighbourhood N of the solution set S0 if there exists a constant c > 0 such that

(2.1)    f(x) ≥ inf_K f + c d²(x, S0),    ∀x ∈ K ∩ N.

This condition is involved in a number of works (see [5, 6, 7, 19, 24]) in order to ensure Lipschitz stability of the solution map S of a constrained minimization problem. Let us recall that S is said to be Lipschitz stable or, equivalently, upper Lipschitz at a point λ̄ ∈ Λ if there exist a constant κ > 0 and a neighbourhood V of λ̄ such that

e(S(λ), S(λ̄)) ≤ κ‖λ − λ̄‖,    ∀λ ∈ Λ ∩ V.
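The following short sketch (our own illustration) contrasts the second order growth condition (2.1) for f(x) = x², where it holds with c = 1, with f(x) = x⁴/4, where it fails near the solution set {0}; the latter is the unperturbed problem of the example preceding Definition 2.1, which is consistent with the failure of Lipschitz continuity observed there.

```python
# Sketch (illustrative only): (2.1) holds for f(x) = x**2 with S0 = {0} and c = 1,
# but fails for f(x) = x**4/4 near its solution set {0}.
import numpy as np

x = np.linspace(-1, 1, 4001)
x = x[x != 0.0]                      # growth ratios are evaluated off the solution set {0}
print((x**2 / x**2).min())           # = 1.0  -> (2.1) holds with c = 1
print((0.25 * x**4 / x**2).min())    # -> ~0 as the grid refines: no c > 0 works
```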

Let us note that Lipschitz stability is a point property – it holds for S at a fixed point λ̄ – while the Aubin continuity we wish to obtain is a local property – it holds uniformly at all points µ in some neighbourhood V of the reference point λ̄. Obviously, Aubin continuity of S near (λ̄, x̄) implies Lipschitz stability of S ∩ U at λ̄, while the converse is not always true. A stronger version of the uniform second order growth condition than A2 with α = 1 is given in [7], Definition 5.16. It implies single-valuedness and local Lipschitz continuity of S (cf. [7], Theorem 5.17 and Remark 5.19). In contrast, assumption A2 implies neither single-valuedness nor local boundedness of the solution map S (see Example 2.6).

We now give a few examples of parameterized families of functions {f(·, λ), λ ∈ Λ} for which A holds, thus showing the consistency of our main assumption. Obviously, compactness of K and lower semicontinuity of f(·, λ) on K are sufficient to ensure A1. A2 with α = 1 is satisfied at any (λ̄, x̄) ∈ gph S provided that, for example, U = X, V = Z, K ⊂ X is a non-empty closed convex set, and the functions f(·, λ) are lower semicontinuous and, uniformly in λ ∈ Λ, strongly convex on K, that is, for some constant c > 0 the inequality

f(tx′ + (1 − t)x″, λ) ≤ tf(x′, λ) + (1 − t)f(x″, λ) − ct(1 − t)‖x′ − x″‖²

holds for every t ∈ [0, 1], every x′, x″ ∈ K and every λ ∈ Λ.

Lemma 2.3 below provides examples of parameterized families of functions satisfying A3. However, we need a few more definitions before stating this lemma. Recall that the Clarke generalized derivative of a Lipschitz function f : X → R at a point x̄ ∈ X in the direction h ∈ X is

f°(x̄; h) := lim sup_{x→x̄, t↓0} [f(x + th) − f(x)]/t,

and the Clarke subdifferential of f at x̄ is the non-empty w*-compact set

∂f(x̄) := {x* ∈ X* : ⟨x*, h⟩ ≤ f°(x̄; h), ∀h ∈ X},

see [12]. It is well-known that for any h ∈ X there exists some x* ∈ ∂f(x̄) such that ⟨x*, h⟩ = f°(x̄; h). A Lipschitz function f : U → R is said to be regular on an open set U ⊂ X if for any h ∈ X and any x ∈ U its directional derivative

f′(x; h) := lim_{t↓0} [f(x + th) − f(x)]/t

exists and is equal to f°(x; h). Convex continuous functions and strictly differentiable functions are examples of regular functions.

Let f(x, λ) be a bivariate function which is Lipschitz in each variable. Denote by f°_x(x, λ; h) and f′_x(x, λ; h) the generalized derivative and the directional derivative of f(·, λ) at x in the direction h, respectively. Also, denote by ∂_x f(x, λ) the partial Clarke subdifferential of f(·, λ) at x, and by ∂_λ f(x, λ) the partial Clarke subdifferential of f(x, ·) at λ.

Lemma 2.3. Let (λ̄, x̄) ∈ gph S and let U ⊂ X and V ⊂ Z be convex neighbourhoods of K and λ̄, respectively. Consider the conditions:

(F1)  for λ ∈ Λ ∩ V, f(·, λ) is Lipschitz and regular on U, and ∂_x f(x, ·) : Λ ∩ V → X* is a k-Lipschitz map on Λ ∩ V with k that does not depend on x ∈ U;

(F2)  for x ∈ K, f(x, ·) is Lipschitz and regular on V, and ∂_λ f(·, λ) : K → Z* is a k-Lipschitz map on K with k that does not depend on λ ∈ V.

If f satisfies F1 or F2, then A3 holds with α = 1.

Proof. Let f satisfy F1. Fix x, y ∈ K and λ, µ ∈ Λ ∩ V. Consider the function r(t) := f(y + t(x − y), λ), which is well-defined on an open interval I containing [0, 1]. Since the function f(·, λ) is assumed to be Lipschitz on U, r is Lipschitz on I. By Rademacher's theorem, for almost all t ∈ [0, 1] there exists

r′(t) = lim_{s→0} [r(t + s) − r(t)]/s = lim_{s↓0} [f(y + t(x − y) + s(x − y), λ) − f(y + t(x − y), λ)]/s
      = f′_x(y + t(x − y), λ; x − y) = f°_x(y + t(x − y), λ; x − y).

The last equality holds because f(·, λ) is regular on U. Hence,

(2.2)    f(x, λ) − f(y, λ) = r(1) − r(0) = ∫₀¹ r′(t) dt = ∫₀¹ f°_x(y + t(x − y), λ; x − y) dt.

Similarly,

(2.3)    f(x, µ) − f(y, µ) = ∫₀¹ f°_x(y + t(x − y), µ; x − y) dt.

There exists x*_λ(t) ∈ ∂_x f(y + t(x − y), λ) such that f°_x(y + t(x − y), λ; x − y) = ⟨x*_λ(t), x − y⟩, so (2.2) becomes

(2.4)    f(x, λ) − f(y, λ) = ∫₀¹ ⟨x*_λ(t), x − y⟩ dt.

Since x*_λ(t) ∈ ∂_x f(y + t(x − y), λ) and the multi-valued map ∂_x f(x, ·) : Λ ∩ V → X* is k-Lipschitz continuous with w*-compact images, there is x*_µ(t) ∈ ∂_x f(y + t(x − y), µ) such that ‖x*_λ(t) − x*_µ(t)‖ ≤ k‖λ − µ‖. Note that k depends neither on t ∈ [0, 1] nor on x, y ∈ U. Let us use these facts to estimate f1(x, y, λ) − f1(x, y, µ). From (2.4) we get

f(x, λ) − f(y, λ) = ∫₀¹ ⟨x*_λ(t) − x*_µ(t), x − y⟩ dt + ∫₀¹ ⟨x*_µ(t), x − y⟩ dt
    ≤ ∫₀¹ ‖x*_λ(t) − x*_µ(t)‖ ‖x − y‖ dt + ∫₀¹ ⟨x*_µ(t), x − y⟩ dt
    ≤ k‖λ − µ‖ ‖x − y‖ + ∫₀¹ ⟨x*_µ(t), x − y⟩ dt.

Since x*_µ(t) ∈ ∂_x f(y + t(x − y), µ), it holds that ⟨x*_µ(t), x − y⟩ ≤ f°_x(y + t(x − y), µ; x − y), and by (2.3) we have

∫₀¹ ⟨x*_µ(t), x − y⟩ dt ≤ ∫₀¹ f°_x(y + t(x − y), µ; x − y) dt = f(x, µ) − f(y, µ).

Hence,

f(x, λ) − f(y, λ) ≤ f(x, µ) − f(y, µ) + k‖λ − µ‖‖x − y‖,

that is, f1(x, y, λ) ≤ f1(x, y, µ) + k‖λ − µ‖‖x − y‖, or

f1(x, y, λ) − f1(x, y, µ) ≤ k‖λ − µ‖‖x − y‖,

which means that f1 ∈ L^{1,1}(K; Λ ∩ V).

If f satisfies F2, then by the same reasoning one obtains that f2 ∈ L^{1,1}(Λ ∩ V; K), and by Proposition 2.2, A3 holds.

It is interesting to note here that the regularity (in particular, the differentiability) can be required either in the argument x – as in F1 – or in the parameter λ – as in F2. It is clear that both F1 and F2 hold whenever f ∈ C^{1,1}(U × V).

2.3. Lipschitz-like continuity of the solution map. Here we prove that, given (λ̄, x̄) ∈ gph S, assumption A is sufficient to ensure Aubin continuity of the solution map S near (λ̄, x̄).

Proposition 2.4. Assume that X and Z are Banach spaces and consider a family of constrained minimization problems P(λ) parameterized by λ ∈ Λ, a non-empty subset of Z. If for some (λ̄, x̄) ∈ gph S assumption A holds, then

(2.5)    e(S(λ) ∩ U, S(µ)) ≤ (k_{f1}/c) ‖λ − µ‖,    ∀λ, µ ∈ Λ ∩ V

and the solution map S is Aubin continuous near (λ̄, x̄) ∈ gph S.

Proof. Take any λ ∈ Λ ∩ V and any x_λ ∈ S(λ) ∩ U (which is a non-empty set thanks to A1). By A2, for arbitrary µ ∈ Λ ∩ V,

(2.6)    f(x_λ, µ) ≥ m(µ) + c d^{1+α}(x_λ, S(µ)).

Since by A1 the set S(µ) ∩ U is non-empty, for any ε > 0 there exists some x^ε_µ ∈ S(µ) such that

(2.7)    ‖x_λ − x^ε_µ‖ ≤ d(x_λ, S(µ)) + ε.

As x^ε_µ ∈ S(µ) we have m(µ) = f(x^ε_µ, µ), and inequality (2.6) reads

(2.8)    f(x_λ, µ) ≥ f(x^ε_µ, µ) + c d^{1+α}(x_λ, S(µ)).

Since x_λ ∈ S(λ), we have that

(2.9)    f(x^ε_µ, λ) ≥ f(x_λ, λ).

By adding (2.8) and (2.9) and rearranging, we obtain

[f(x_λ, µ) − f(x^ε_µ, µ)] − [f(x_λ, λ) − f(x^ε_µ, λ)] ≥ c d^{1+α}(x_λ, S(µ)).

That is,

(2.10)    f1(x_λ, x^ε_µ, µ) − f1(x_λ, x^ε_µ, λ) ≥ c d^{1+α}(x_λ, S(µ)).

Using A3, that is f1 ∈ L^{α,1}(K; Λ ∩ V), we estimate the left hand side of (2.10):

f1(x_λ, x^ε_µ, µ) − f1(x_λ, x^ε_µ, λ) ≤ k_{f1} ‖x_λ − x^ε_µ‖^α ‖λ − µ‖.

Hence, we have that

k_{f1} ‖λ − µ‖ ‖x_λ − x^ε_µ‖^α ≥ c d^{1+α}(x_λ, S(µ)).

From this and (2.7) it follows that

k_{f1} ‖λ − µ‖ [d(x_λ, S(µ)) + ε]^α ≥ c d^{1+α}(x_λ, S(µ)).

Letting ε ↓ 0 and then dividing by d^α(x_λ, S(µ)) > 0 (if it equals 0, the inequality below is trivial), we obtain k_{f1} ‖λ − µ‖ ≥ c d(x_λ, S(µ)), or

d(x_λ, S(µ)) ≤ (k_{f1}/c) ‖λ − µ‖.

As x_λ was an arbitrary point in S(λ) ∩ U, the latter yields

e(S(λ) ∩ U, S(µ)) ≤ (k_{f1}/c) ‖λ − µ‖,

completing the proof.

2.4. Examples and corollaries. The following is a basic example of a nonsmooth parameterized minimization problem with a Lipschitz continuous solution map with unbounded values. We show that it is within the scope of Proposition 2.4.

Example 2.5. Let K = R² and

f(x1, x2, λ) := |x1 − x2 − λ|,

x1, x2, λ ∈ R. Consider the parameterized family of unconstrained minimization problems over the plane

P(λ)    inf_{x1,x2} f(x1, x2, λ).

Then the solution map S : λ ⇒ S(λ) is Lipschitz continuous.

Proof. Obviously, for any λ ∈ R the solution set consists of a single line, i.e. S(λ) = {(x1, x2) : x1 − x2 = λ}. Moreover, for λ and µ the solution sets S(λ) and S(µ) are parallel lines. The distance between S(λ) and S(µ) is the distance from any point (x1, x2) ∈ S(λ) to the line x1 − x2 = µ, which is equal to |x1 − x2 − µ|/√2 = |λ − µ|/√2, so the map S is Lipschitz continuous with Lipschitz constant 1/√2.

Note that the sufficient condition A holds. Indeed: A1 holds with U ≡ R²; A2 holds with α = 0, c = √2, U = R² and V = R; A3 holds because f1 ∈ L^{0,1}(R²; R) with k_{f1} = 2. The Lipschitz constant provided by Proposition 2.4 is k_{f1}/c = √2.

The next example shows that studying the generalized Euler equation may sometimes be inadequate for obtaining Aubin continuity of the solution map. This is because the set of stationary points may be larger than the set of minima.

Example 2.6. Let K = R² and

f(x1, x2, λ) := (x1 + λx2 − 1)²(x2 + λx1 + 1)²,

x1, x2, λ ∈ R. Consider the parameterized family of unconstrained minimization problems over the plane

P(λ)    inf_{x1,x2} f(x1, x2, λ).

Then at the point λ = 1 the set of solutions S(λ) is smaller than the set of stationary points St(λ) := {x ∈ R² : 0 ∈ ∇_x f(x, λ)}. Moreover, the map S is Aubin continuous near any point of its graph, while St is not Aubin continuous near the point (λ̄, x̄) ∈ gph St where λ̄ = 1 and x̄ = (0, 0).

Proof. Straightforward computations show that for any λ ∈ R the solution set

S(λ) = {(x1, x2) : x2 + λx1 = −1, or x1 + λx2 = 1}

is the union of two lines – the line p1(λ) with equation x2 + λx1 = −1 and the line p2(λ) with equation x1 + λx2 = 1. Because

∇_x f(x, λ) = [2(x1 + λx2 − 1)(x2 + λx1 + 1)(2λx1 + (1 + λ²)x2 + 1 − λ), 2(x1 + λx2 − 1)(x2 + λx1 + 1)((1 + λ²)x1 + 2λx2 + λ − 1)],

the set of stationary points at λ = 1 consists of three parallel lines,

St(1) = {(x1, x2) : x1 + x2 = 1, or x1 + x2 = −1, or x1 + x2 = 0},

while for λ ≠ 1, St(λ) ≡ S(λ).

It is not difficult to see that S is Aubin continuous near an arbitrary point (λ, x) ∈ gph S (we note, by the way, that S is not Lipschitz continuous). Indeed, fix λ̃ ∈ R and take x̃ = (x̃1, x̃2) ∈ S(λ̃) = p1(λ̃) ∪ p2(λ̃). Obviously, x̃ ≠ 0. Take λ such that |λ − λ̃| < 1/2. If x̃ ∈ p1(λ̃) then

d(x̃, S(λ)) ≤ d(x̃, p1(λ)) = |(λ − λ̃)x̃1| / √(1 + λ²) ≤ |λ − λ̃| |x̃1|,

and if x̃ ∈ p2(λ̃) then

d(x̃, S(λ)) ≤ d(x̃, p2(λ)) = |(λ − λ̃)x̃2| / √(1 + λ²) ≤ |λ − λ̃| |x̃2|,

which yields

d(x̃, S(λ)) ≤ |λ − λ̃| max{|x̃1|, |x̃2|} ≤ |λ − λ̃| ‖x̃‖ < ‖x̃‖/2.

This implies that for all λ such that |λ − λ̃| < 1/2 the intersection of S(λ) with the neighbourhood U := x̃ + ‖x̃‖B° is non-empty.

Take x = (x1, x2) ∈ S(λ) ∩ U and µ such that |µ − λ̃| < 1/2. Similarly we get

d(x, S(µ)) ≤ |λ − µ| ‖x‖ ≤ |λ − µ| [‖x − x̃‖ + ‖x̃‖] ≤ 2‖x̃‖ |λ − µ|.

Hence,

e(S(λ) ∩ U, S(µ)) ≤ 2‖x̃‖ |λ − µ|,    ∀λ, µ ∈ λ̃ + ½B°,

˜ x which means that S is Aubin continuous near (λ, ˜) ∈ gph S. In contrast, St is not Aubin continuous near the point (λ, x) ∈ gph St where λ = 1 and x = (0, 0). Indeed, if St is Aubin continuous near that point then d(x, St(λ)) tends to zero as λ tends to 1. But the distance d(x, St(λ)) = min{d(x, p1 (λ)), d(x, p2 (λ))} = √

1 1 + λ2

1 tends to √ as λ tends to 1, which means that St is not Aubin continuous near 2 (λ, x) ∈ gph St. As immediate consequence of Proposition 2.4 we get Corollary 2.7. Let for the parameterized family of minimization problems P (λ) the following assumption hold  for all λ ∈ Λ, all x ∈ K and some c > 0      1. S(λ) 6= ∅; (A0 )  2. f (x, λ) ≥ m(λ) + cd2 (x, S(λ));     3. f ∈ C 1,1 (X × Z). Then the solution map S : Λ ⇒ X is Lipschitz continuous on Λ. In a Banach space E with separable dual E ∗ the notion of a second order subdifferential for a function f ∈ C 1,1 (E) is introduced in [16] (see also the former work [18]

PARAMETERIZED MINIMAX PROBLEM

11

for the finite dimensional case). For any x ∈ E the second order subdifferential ∂ 2 f (x) of f at x is is a non-empty, convex and w∗ compact set in L(E × E) (the Banach space of all bilinear continuous functionals M : E × E → R with the norm kM k := supkh1 k=kh2 k=1 | M [h1 , h2 ] |) which is singleton exactly when f is twice strictly Gˆateaux differentiable at x. Setting a simple condition on the second subdifferential is sufficient to get a family of functions satisfying assumption A0 in the above corollary. Indeed, let E be a Banach space with separable dual. Let in the parameterized family of minimization problems P (λ), f ∈ C 1,1 (E × Z), and let the constraint set K be closed and convex. If there exist c > 0 with (2.11) hM (y−x), y−xi≥cky − xk2 for all λ ∈ Λ, x, y ∈ K, M ∈ ∂ 2 f (·, λ)(x), then, the solution map S : Λ → E will be single-valued and Lipschitz continuous on Λ. It is easily seen that (2.11) implies uniformly on λ ∈ Λ strong convexity of f (·, λ) on K. By this and continuity of f (·, λ), for every λ the infimum of f (·, λ) is attained at unique xλ ∈ K and A0 1 holds. For any x ∈ K and λ ∈ Λ there exists some zλ ∈ K and Mzλ ∈ ∂ 2 f (·, λ)(zλ ) with 1 f (x, λ) = f (xλ , λ) + h∇x f (xλ , λ), x − xλ i + hMzλ (x − xλ ), x − xλ i 2 (see [16]). Since xλ is a minimum point for f (·, λ) on K and K is convex, then for all x ∈ K, h∇x f (xλ , λ), x − xλ i ≥ 0 and from above equality and (2.11) 1 f (x, λ) ≥ m(λ) + ckx − xλ k2 , 2 so A0 2 holds with α = 1. We will use Corollary 2.7 to obtain existence and Lipschitz continuity of the optimal solution for linearly perturbed optimization problem, assuming slightly weaker version (see (2.12) below) of the uniform second order growth condition (Definition 5.19 in [7]), and C 1,1 data. In this way we extend [7], Theorem 5.17 (see also [7], Remark 5.19), where C 2 data are assumed. Recall that the Banach space X has Radon-Nikodym property (RNP) if for every bounded set C and every ε > 0, there exists an x ∈ C that does not belong to the closed convex hull of C \ {x + εB ◦ }. All Banach spaces which have separable dual and all reflexive Banach spaces have RNP. In [13], p.157 there is a long list of equivalent definitions of RNP. A good introductory survey on RNP is [14]. Efficient tool in dealing with minimization problems on Banach space X with RNP is Stegall’s variational principle [26] (see also [21], Theorem 5.15): Let C ⊂ X be a non-empty closed and bounded convex set and let f : C → R ∪ {+∞} be a lower semicontinuous function, bounded below on C, then for every ε > 0, there exists x∗ ∈ X ∗ with kx∗ k ≤ ε such that f + x∗ attains its strong minimum on C. Let us remind that x0 ∈ C is said to be a strong minimum for function g : C → R ∪ {+∞} on the set C if g(x0 ) = inf g and kxn − x0 k → 0 whenever g(xn ) → g(x0 ). C
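A one-dimensional toy illustration of Stegall's principle and of the notion of a strong minimum may help here (our own example, not from the paper; in finite dimensions the RNP is automatic): f(x) = −|x| on C = [−1, 1] attains its minimum at both endpoints, hence has no strong minimum, while the linear perturbation x ↦ εx with any ε > 0 produces a unique strong minimum at −1.

```python
# Toy illustration of a strong minimum obtained by a linear perturbation
# (Stegall's principle in one dimension; our own example, not from the paper).
import numpy as np

C = np.linspace(-1.0, 1.0, 100001)
f = lambda x: -np.abs(x)

print(C[np.abs(f(C) - f(C).min()) < 1e-12])      # both -1.0 and 1.0 minimize f: no strong minimum
eps = 0.2                                        # Stegall: some x* with ||x*|| <= eps works; here x*(x) = eps*x
g = f(C) + eps * C
print(C[np.argmin(g)])                           # unique minimizer -1.0 of the perturbed function
for delta in (0.3, 0.1, 0.01):                   # near-minimizing points are forced towards -1: strong minimum
    s = C[g <= g.min() + delta]
    print(delta, float(s.min()), float(s.max()))
```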

Corollary 2.8. Let the Banach space X have the Radon-Nikodym property. Consider the parameterized family of minimization problems P(λ), where the parameter space is X* and f : X × X* → R is defined by

f(x, λ) := f(x) + ⟨λ, x⟩.

Assume that the constraint set K is closed and convex, f ∈ C^{1,1}(X), and S(0) is non-empty. Suppose that there exist a neighbourhood V of the origin 0 of X* and a constant c > 0 such that for all λ ∈ V and all x_λ ∈ S(λ) it holds that

(2.12)    f(x, λ) ≥ f(x_λ, λ) + c‖x − x_λ‖²,    ∀x ∈ K.

Then there exists a neighbourhood W of the origin 0 of X* such that S(λ) is single-valued and Lipschitz continuous on W.

Proof. From (2.12) it is clear that S(λ) contains at most one point for λ ∈ V. We will show that S(λ) is non-empty for λ belonging to some neighbourhood of 0.

Fix γ > 0 such that W := 2γB° ⊂ V. Given λ ∈ γB°, let ε_k be a sequence of positive numbers less than γ, tending to zero. Thanks to (2.12) with λ = 0, f(·, λ) is bounded below on K. If K is bounded, we could apply directly Stegall's variational principle to the function f(·, λ) : K → R and ε_k to find x*_k ∈ X* with ‖x*_k‖ ≤ ε_k and a strong minimum x_k of f(·, λ) + x*_k on K. If K is not bounded, a variant of Stegall's variational principle still holds thanks to (2.12). Indeed, (2.12) for λ = 0 reads

f(x) ≥ f(x0) + c‖x − x0‖²,    ∀x ∈ K,

which yields that for all x ∈ K,

(2.13)    f(x, λ) ≥ f(x0) + ⟨λ, x⟩ + c‖x − x0‖² = f(x0, λ) + ⟨λ, x − x0⟩ + c‖x − x0‖²
        ≥ f(x0, λ) + ‖x − x0‖[c‖x − x0‖ − ‖λ‖] ≥ f(x0, λ) + ‖x − x0‖[c‖x − x0‖ − γ].

Set r := 3γ/c. Now we apply Stegall's variational principle to the function f(·, λ) on the closed bounded set K ∩ (x0 + rB) and ε_k. Thus, there exist x*_k ∈ X*, ‖x*_k‖ < ε_k, and a point x_k ∈ K ∩ (x0 + rB) such that f(·, λ) + x*_k attains a strong minimum on K ∩ (x0 + rB) at x_k. Moreover, x_k is a strong minimum of f(·, λ) + x*_k on K. Indeed, if we assume that x ∈ K is such that

f(x, λ) + ⟨x*_k, x⟩ ≤ f(x_k, λ) + ⟨x*_k, x_k⟩ = inf_{K ∩ (x0 + rB)} (f(·, λ) + x*_k) ≤ f(x0, λ) + ⟨x*_k, x0⟩,

then by (2.13) we will have

‖x − x0‖[c‖x − x0‖ − γ] ≤ ‖x*_k‖‖x − x0‖ ≤ ε_k‖x − x0‖ < γ‖x − x0‖,

or ‖x − x0‖ ≤

2γ/c.

3. Parameterized minimax problem. Given (λ̄, x̄, ȳ) ∈ gph S, consider the following local assumption A at (λ̄, x̄, ȳ):

(A)  There exist neighbourhoods U of x̄, W of ȳ and V of λ̄ such that
  1. S(λ) ∩ [U × W] ≠ ∅ for all λ ∈ Λ ∩ V;
  and there exist constants c > 0 and α ∈ [0, 1] such that
  2. f(x, y′, λ′) ≥ m(λ′) + c d^{1+α}(x, π_X S(λ′)),
     f(x′, y, λ′) ≤ m(λ′) − c d^{1+α}(y, π_Y S(λ′)),
     ∀λ, λ′ ∈ Λ ∩ V, ∀(x, y) ∈ S(λ) ∩ [U × W], ∀(x′, y′) ∈ S(λ′);
  3. f1 ∈ L^{α,1}_{L∩W}(K; Λ ∩ V) and f2 ∈ L^{α,1}_{K∩U}(L; Λ ∩ V).

Clearly, condition A3 could be replaced by f3 ∈ L^{1,α}_{L∩W}(Λ ∩ V; K) ∩ L^{1,α}_{K∩U}(Λ ∩ V; L).
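Assumption A can be verified explicitly for simple convex-concave families. The sketch below (our own illustrative choice of f, not from the paper) uses f(x, y, λ) = x² − y² + λ(x − y) on K = L = [−2, 2], for which S(λ) = {(−λ/2, −λ/2)}, assumption A holds with α = 1, c = 1 and k_{f1} = k_{f2} = 1, and it checks numerically that the saddle point map is Lipschitz with a constant well below the bound 2k/c = 2 provided by Theorem 3.2 below.

```python
# Sketch (illustrative family, not from the paper): saddle points of
# f(x, y, lam) = x**2 - y**2 + lam*(x - y) on K = L = [-2, 2], computed by grid search.
import numpy as np

grid = np.linspace(-2.0, 2.0, 801)
f = lambda x, y, lam: x**2 - y**2 + lam * (x - y)

def saddle(lam):
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    F = f(X, Y, lam)
    x_bar = grid[np.argmin(F.max(axis=1))]   # minimax component
    y_bar = grid[np.argmax(F.min(axis=0))]   # maximin component
    return np.array([x_bar, y_bar])          # here both equal -lam/2

for lam, mu in [(0.0, 0.4), (0.4, 0.5), (-0.3, 0.3)]:
    gap = np.max(np.abs(saddle(lam) - saddle(mu)))
    print(gap / abs(lam - mu))                # ~ 0.5, well below the bound 2k/c = 2
```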

A1 implies that m(λ) is finite for λ ∈ Λ ∩ V. We show the consistency of our main hypothesis by giving some examples of parameterized families of functions {f(·, ·, λ), λ ∈ Λ} for which A is satisfied.

One gets a parameterized family of functions {f(·, ·, λ), λ ∈ Λ} satisfying A2, for example, by assuming that K ⊂ X and L ⊂ Y are non-empty closed convex sets; the function f(·, y, λ) is lower semicontinuous and, uniformly in (y, λ) ∈ L × Λ, strongly convex on K, i.e. such that for some constant c > 0 the inequality

f(tx + (1 − t)x′, y, λ) ≤ tf(x, y, λ) + (1 − t)f(x′, y, λ) − ct(1 − t)‖x − x′‖²

holds for every t ∈ [0, 1], every x, x′ ∈ K, and every (y, λ) ∈ L × Λ; and the function f(x, ·, λ) is upper semicontinuous and, uniformly in (x, λ) ∈ K × Λ, strongly concave on L, i.e. such that the inequality

f(x, ty + (1 − t)y′, λ) ≥ tf(x, y, λ) + (1 − t)f(x, y′, λ) + ct(1 − t)‖y − y′‖²

holds for every t ∈ [0, 1], every y, y′ ∈ L, and every (x, λ) ∈ K × Λ. Then it is routine to see that A2 holds at any (λ̄, x̄, ȳ) ∈ gph S with α = 1, V = Z, U = X, and W = Y.

Examples of parameterized families of functions satisfying A3 are given by the following

Lemma 3.1. Let (λ̄, x̄, ȳ) ∈ gph S and let U ⊂ X, W ⊂ Y and V ⊂ Z be convex neighbourhoods of K, L and λ̄, respectively. Consider the conditions:

(F1)  for any (y, λ) ∈ L × [Λ ∩ V], f(·, y, λ) is a Lipschitz and regular function on U, and ∂_x f(x, y, ·) : Λ ∩ V → X* is a k-Lipschitz map on Λ ∩ V with k that does not depend on (x, y) ∈ K × L;

(F2)  for any (x, λ) ∈ K × [Λ ∩ V], f(x, ·, λ) is a Lipschitz and regular function on W, and ∂_y f(x, y, ·) : Λ ∩ V → Y* is a k-Lipschitz map on Λ ∩ V with k that does not depend on (x, y) ∈ K × L;

(F3)  for any (x, y) ∈ K × L, f(x, y, ·) is a Lipschitz and regular function on V, and ∂_λ f(·, y, λ) : K → Z* is a k-Lipschitz map on K with k that does not depend on (y, λ) ∈ L × [Λ ∩ V];

(F4)  for any (x, y) ∈ K × L, f(x, y, ·) is a Lipschitz and regular function on V, and ∂_λ f(x, ·, λ) : L → Z* is a k-Lipschitz map on L with k that does not depend on (x, λ) ∈ K × [Λ ∩ V].

If f satisfies F1–F2 or F3–F4, then A3 holds with α = 1.

Proof. We follow the same steps as in the proof of Lemma 2.3. If f satisfies F1, then f1 ∈ L^{1,1}_L(K; Λ ∩ V). If f satisfies F2, then f2 ∈ L^{1,1}_K(L; Λ ∩ V). If f satisfies F3, then f3 ∈ L^{1,1}_L(Λ ∩ V; K). If f satisfies F4, then f3 ∈ L^{1,1}_K(Λ ∩ V; L).

Obviously, if f ∈ C^{1,1}(U × W × V) then F1 to F4 hold.

3.2. Lipschitz-like continuity of the saddle point map. Here we prove that assumption A is sufficient for Aubin continuity of the saddle point map S. Let us note that the result cannot be derived (or at least not in an obvious manner) from the minimization case alone. Indeed, if f(x, y, λ) satisfies A, then the function sup_{y∈L} f(x, y, λ) satisfies A2, but A3 for sup_{y∈L} f(x, y, λ) cannot be derived from A3, since the differences of suprema involved do not lend themselves to rearrangements.

Theorem 3.2. Assume that for the parameterized family of minimax problems M(λ) the assumption A holds at some (λ̄, x̄, ȳ) ∈ gph S. Then for all λ, µ ∈ Λ ∩ V,

(3.2)    e(S(λ) ∩ [U × W], S(µ)) ≤ (2k/c)‖λ − µ‖,

where k := max{k_{f1}, k_{f2}}; hence the saddle point map S : Λ ⇒ X × Y is Aubin continuous near (λ̄, x̄, ȳ) ∈ gph S.

Proof. By A1, for all λ ∈ Λ ∩ V the set S(λ) ∩ [U × W] is non-empty. Fix λ ∈ Λ ∩ V and take some (x_λ, y_λ) ∈ S(λ) ∩ [U × W]. Pick any other µ ∈ Λ ∩ V. Since S(µ) is a non-empty set, for any ε > 0 we find some x^ε_µ ∈ π_X S(µ) such that ‖x_λ − x^ε_µ‖ ≤ d(x_λ, π_X S(µ)) + ε. Similarly, there is y^ε_µ ∈ π_Y S(µ) such that ‖y_λ − y^ε_µ‖ ≤ d(y_λ, π_Y S(µ)) + ε. By the product form of the saddle point set, (x^ε_µ, y^ε_µ) ∈ S(µ).

The first inequality of A2 for (x_λ, y_λ) ∈ S(λ) ∩ [U × W] and (x^ε_µ, y^ε_µ) ∈ S(µ) reads

(3.3)    f(x_λ, y^ε_µ, µ) ≥ m(µ) + c d^{1+α}(x_λ, π_X S(µ)),

in particular

(3.4)    f(x_λ, y^ε_µ, µ) ≥ m(µ),

while the second inequality of A2 states

(3.5)    m(µ) ≥ f(x^ε_µ, y_λ, µ) + c d^{1+α}(y_λ, π_Y S(µ)),

in particular

(3.6)    m(µ) ≥ f(x^ε_µ, y_λ, µ).

Combining (3.3) with (3.6), and (3.4) with (3.5), we get

f(x_λ, y^ε_µ, µ) ≥ f(x^ε_µ, y_λ, µ) + c d^{1+α}(x_λ, π_X S(µ)),
f(x_λ, y^ε_µ, µ) ≥ f(x^ε_µ, y_λ, µ) + c d^{1+α}(y_λ, π_Y S(µ)),

which yields

f(x_λ, y^ε_µ, µ) − f(x^ε_µ, y_λ, µ) ≥ c [max{d(x_λ, π_X S(µ)), d(y_λ, π_Y S(µ))}]^{1+α}.

By the definition of the supremum norm and since S(µ) is a product set, it is obvious that

(3.7)    d((x_λ, y_λ), S(µ)) = d((x_λ, y_λ), π_X S(µ) × π_Y S(µ)) = max{d(x_λ, π_X S(µ)), d(y_λ, π_Y S(µ))},

and the above inequality can be rewritten as

f(x_λ, y^ε_µ, µ) − f(x^ε_µ, y_λ, µ) ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

We transform the left hand side to get

f(x_λ, y^ε_µ, µ) − f(x_λ, y_λ, µ) + f(x_λ, y_λ, µ) − f(x^ε_µ, y_λ, µ) ≥ c d^{1+α}((x_λ, y_λ), S(µ)),

which is

(3.8)    −f2(x_λ, y_λ, y^ε_µ, µ) − f1(x^ε_µ, x_λ, y_λ, µ) ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

On the other hand, since (x_λ, y_λ) ∈ S(λ), the saddle point inequalities give

f(x, y_λ, λ) ≥ f(x_λ, y_λ, λ) ≥ f(x_λ, y, λ),    ∀x ∈ K, ∀y ∈ L.

In particular, for x = xεµ ∈ K we have f (xεµ , yλ , λ) ≥ f (xλ , yλ , λ), which is (3.9)

f1 (xεµ , xλ , yλ , λ) ≥ 0,

and for y = yµε ∈ L we get f (xλ , yλ , λ) ≥ f (xλ , yµε , λ), which is (3.10)

f2 (xλ , yλ , yµε , λ) ≥ 0.


Adding the inequalities (3.8), (3.9) and (3.10) and rearranging we obtain

(3.11)    [f1(x^ε_µ, x_λ, y_λ, λ) − f1(x^ε_µ, x_λ, y_λ, µ)] + [f2(x_λ, y_λ, y^ε_µ, λ) − f2(x_λ, y_λ, y^ε_µ, µ)] ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

Since by A3, f1 ∈ L^{α,1}_{L∩W}(K; Λ ∩ V), the term in first brackets in (3.11) is estimated by

(3.12)    f1(x^ε_µ, x_λ, y_λ, µ) − f1(x^ε_µ, x_λ, y_λ, λ) ≤ k_{f1} ‖x^ε_µ − x_λ‖^α ‖λ − µ‖,

and since f2 ∈ L^{α,1}_{K∩U}(L; Λ ∩ V), the term in second brackets in (3.11) is estimated by

(3.13)    f2(x_λ, y_λ, y^ε_µ, λ) − f2(x_λ, y_λ, y^ε_µ, µ) ≤ k_{f2} ‖y_λ − y^ε_µ‖^α ‖λ − µ‖.

Using (3.13) and (3.12) in (3.11) and setting k := max{k_{f1}, k_{f2}}, we get

k‖λ − µ‖ [‖x_λ − x^ε_µ‖^α + ‖y_λ − y^ε_µ‖^α] ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

By the choice of x^ε_µ and y^ε_µ, we have that

k‖λ − µ‖ [(d(x_λ, π_X S(µ)) + ε)^α + (d(y_λ, π_Y S(µ)) + ε)^α] ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

Passing to limit ε ↓ 0 we obtain

(3.14)    k‖λ − µ‖ [d^α(x_λ, π_X S(µ)) + d^α(y_λ, π_Y S(µ))] ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

By (3.7) we get

d^α(y_λ, π_Y S(µ)) + d^α(x_λ, π_X S(µ)) ≤ 2 [max{d(y_λ, π_Y S(µ)), d(x_λ, π_X S(µ))}]^α = 2 d^α((x_λ, y_λ), S(µ)),

and from (3.14) we obtain

2k‖λ − µ‖ d^α((x_λ, y_λ), S(µ)) ≥ c d^{1+α}((x_λ, y_λ), S(µ)).

This yields

(2k/c)‖λ − µ‖ ≥ d((x_λ, y_λ), S(µ)),

and since (x_λ, y_λ) was an arbitrary element of S(λ) ∩ [U × W], the latter implies

e(S(λ) ∩ [U × W], S(µ)) ≤ (2k/c)‖λ − µ‖.

The proof is completed.

As an immediate consequence of Theorem 3.2 and Lemma 3.1 one deduces

Corollary 3.3. Let for the parameterized family of minimax problems M(λ) the following assumption hold:

(A′)  1. S(λ) ≠ ∅ for any λ ∈ Λ;
  2. for some constant c > 0 and all λ ∈ Λ, (x, y) ∈ S(λ), (x′, y′) ∈ K × L:
     f(x′, y, λ) ≥ m(λ) + c d²(x′, π_X S(λ)),
     f(x, y′, λ) ≤ m(λ) − c d²(y′, π_Y S(λ));
  3. f ∈ C^{1,1}(X × Y × Z).


Then the saddle point map S : Λ → X × Y is single-valued and Lipschitz continuous.

As we pointed out after Corollary 2.7, we could deduce the single-valuedness and Lipschitz continuity of the saddle point map S when X and Y have separable duals, the sets K and L are convex, f ∈ C^{1,1}(X × Y × Z), and there exists a constant c > 0 such that for all λ ∈ Λ, x, z ∈ K, y, w ∈ L,

⟨M(z − x), z − x⟩ ≥ c‖z − x‖²,    ⟨N(w − y), w − y⟩ ≤ −c‖w − y‖²

for all M ∈ ∂²f(·, y, λ)(x) and all N ∈ ∂²f(x, ·, λ)(y).

4. Lipschitz continuity of the saddle point map in the context of two-player zero-sum differential games. In this section we briefly consider a differential game for which our result might be of relevance. In differential games, open-loop strategies are of low interest in many examples. One major reason is that differential games with open-loop strategies do not satisfy, in general, the dynamic programming principle [11, 9, 22]. It is now well known that to solve many problems in differential games – existence of a value, characterization of the game through Hamilton-Jacobi equations – one needs a more general class of strategies which contains the feedback strategies¹. Such a class of strategies is the class of non-anticipative strategies introduced by Elliot-Roxin-Varaiya-Kalton (cf. for instance [10]); another possible class of strategies are the positional strategies discussed in [20]. Nevertheless, although the class of non-anticipative strategies is nice enough to prove existence of the value, it is hard to implement for the players. So it is important to know when the non-anticipative strategies giving the value of the game can be reduced to feedback strategies. We explain in this part how the main result of the paper can lead to a partial answer to this question.

We consider the following differential game with dynamics described by the differential equation

(4.1)    x′(t) = f(x(t), u(t)),    y′(t) = g(y(t), v(t)),    u(t) ∈ U, v(t) ∈ V,

where f : Rⁿ × U → Rⁿ and g : Rⁿ × V → Rⁿ are (globally) Lipschitz, and U ⊂ Rᵏ, V ⊂ Rˡ are the control sets of the players. The first player – Ursula, playing with u – wants to minimize a given cost. The goal of the second player – Victor, playing with v – is to maximize the cost

J(x0, y0, u(·), v(·)) := ∫₀^∞ e^{−rt} l(x(t), y(t)) dt,

where (x(·), y(·)) is the unique solution starting at t = 0 from (x0, y0) and r > 0 is fixed. Observe that the game is of a separable form, i.e. each player acts in his own dynamics. This is the case, for instance, for pursuit games. Moreover, the integral cost does not depend directly on the controls but only on the trajectories.

We work here in the framework of non-anticipative strategies (also called Varaiya-Roxin-Elliot-Kalton strategies). Let

(4.2)    𝒰 := L¹([0, +∞[, U),    𝒱 := L¹([0, +∞[, V)

1 It has been shown in [8] that the class of regular feedback is not rich enough to solve differential games at a satisfactory level of generality.


be the sets of time-measurable controls of the first (Ursula) and the second (Victor) player, respectively. We denote by t ↦ (x(t, x0, u(t)), y(t, y0, v(t))) the solution of (4.1) starting at t = 0 from (x0, y0).

Definition 4.1 (Non-anticipative strategies). A map α : 𝒱 → 𝒰 is a non-anticipative strategy (for Ursula) if it satisfies the following condition: for any s ≥ 0 and for any v1(·) and v2(·) belonging to 𝒱 which coincide almost everywhere on [0, s], the images α(v1(·)) and α(v2(·)) coincide almost everywhere on [0, s]. Non-anticipative strategies β : 𝒰 → 𝒱 (for Victor) are defined in the symmetric way.

Assume now that f and g are continuous and Lipschitz with respect to x and y. Then we know that the game has a value (cf. [11]), namely

V(x0, y0) = inf_α sup_{v∈𝒱} J(x0, y0, α(v(·)), v(·)) = sup_β inf_{u∈𝒰} J(x0, y0, u(·), β(u(·))).

Let us denote by R(t) the attainable set of the dynamics (4.1) at moment t, i.e.

R(t) = {(x(t), y(t)) ∈ Rⁿ × Rⁿ : ∃ u ∈ 𝒰, v ∈ 𝒱 such that (x(·), y(·)) is the solution of (4.1) starting at t = 0 from (x0, y0)}.

Now, suppose that U and V are convex and compact. A saddle point of the function l(·, ·) on R(t) is any point (x̄, ȳ) ∈ R(t) that satisfies

l(x̄, y) ≤ l(x̄, ȳ) ≤ l(x, ȳ),    ∀(x, y) ∈ R(t),

and, because e^{−rt} > 0, the saddle points of l(·, ·) on R(t) are the same as the saddle points of e^{−rt} l(·, ·) on R(t). Let us denote the (possibly empty) set of all saddle points of the function l(·, ·) on R(t) by

S(t) := {(x̄, ȳ) ∈ R(t) : l(x̄, y) ≤ l(x̄, ȳ) ≤ l(x, ȳ), ∀(x, y) ∈ R(t)},

and set

m(t) := inf_{x∈π_X R(t)} sup_{y∈π_Y R(t)} l(x, y).
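Before stating the assumption, the feedback construction discussed at the end of this section can be previewed on a toy example (our own illustration, not from the paper): for the simplest dynamics x′ = u, y′ = v with u, v ∈ [−1, 1], the attainable set is the product of two intervals, and for l(x, y) = x² − y² the saddle point of l on R(t) is the pair of projections of 0 onto these intervals; it is 1-Lipschitz in t, so the resulting saddle-point trajectory respects the control bounds.

```python
# Toy sketch (illustrative only): dynamics x' = u, y' = v with u, v in [-1, 1], so
# R(t) = [x0 - t, x0 + t] x [y0 - t, y0 + t]; for l(x, y) = x**2 - y**2 the saddle
# point of l on R(t) is the pair of projections of 0 onto the two intervals.
import numpy as np

x0, y0 = 2.0, -3.0
clamp = lambda v, lo, hi: min(max(v, lo), hi)

def saddle_on_R(t):
    x_bar = clamp(0.0, x0 - t, x0 + t)   # argmin of x**2 on [x0 - t, x0 + t]
    y_bar = clamp(0.0, y0 - t, y0 + t)   # argmax of -y**2 on [y0 - t, y0 + t]
    return x_bar, y_bar

ts = np.linspace(0.0, 5.0, 501)
path = np.array([saddle_on_R(t) for t in ts])
speed = np.abs(np.diff(path, axis=0)) / np.diff(ts)[:, None]
print(speed.max())   # <= 1: the saddle-point trajectory is admissible for the game
```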

Let us suppose that the family of functions {e^{−rt} l(·, ·), t ∈ [0, ∞)}, parameterized by t, satisfies the assumption in a slightly stronger form than assumption A, namely:

  1. S(t) ≠ ∅, ∀t ≥ 0;
  and there exist constants k, c > 0 and α ∈ [0, 1] such that ∀t, t′ ≥ 0, ∀(x, y) ∈ S(t), ∀(x′, y′) ∈ S(t′) it holds:
  2. l(x′, y) ≥ m(t) + c‖x′ − x‖^{1+α},
     l(x, y′) ≤ m(t) − c‖y′ − y‖^{1+α};
  3. |l(x, y) − l(x′, y)| ≤ k‖x − x′‖^α,
     |l(x, y) − l(x, y′)| ≤ k‖y − y′‖^α.

This assumption guarantees that for any t ∈ [0, ∞) the saddle point map S(t) is single-valued and Lipschitz continuous, i.e. for all positive t the function e^{−rt} l(·, ·) has a saddle point (x̄(t), ȳ(t)) on the attainable set R(t) of the dynamics (4.1), which depends in a Lipschitz way on t. Hence, if furthermore we can prove that the so obtained single-valued saddle point map generates a trajectory (x̄(·), ȳ(·)) of (4.1), we have found an optimal feedback strategy of the game. Under the above assumptions, when k = l = n and f(x, u) = u, g(y, v) = v, the saddle point is Lipschitz with respect to t, so the corresponding controls belong to 𝒰 and 𝒱 and, hence, they generate a trajectory of the differential game.

Acknowledgement. The authors express their gratitude to the anonymous referees for their valuable comments and suggestions.

REFERENCES

[1] J.-P. Aubin, Comportement lipschitzien des solutions de problèmes de minimisation convexes, Compt. Rend. Acad. Sci. Paris, Sér. I, 295 (1982), pp. 235–238.
[2] J.-P. Aubin, Lipschitz behavior of solutions to convex minimization problems, Math. Oper. Res., 9 (1984), pp. 87–111.
[3] J.-P. Aubin and I. Ekeland, Applied Nonlinear Analysis, John Wiley & Sons, New York, 1984.
[4] J.-P. Aubin and H. Frankowska, Set-Valued Analysis, Birkhäuser, 1990.
[5] J. Bonnans and A. Ioffe, Quadratic growth and stability in convex programming problems with multiple solutions, J. Convex Anal., 2 (1995), No. 1-2, pp. 41–57.
[6] J. Bonnans and A. Shapiro, Optimization problems with perturbations: A guided tour, SIAM Rev., 40 (1998), No. 2, pp. 228–264.
[7] J. Bonnans and A. Shapiro, Perturbation Analysis of Optimization Problems, Springer Series in Operations Research, Springer, New York, 2000.
[8] P. Cardaliaguet, A differential game with two players and one target, SIAM J. Control Optim., 34 (1996), No. 4, pp. 1441–1460.
[9] P. Cardaliaguet, M. Quincampoix and P. Saint-Pierre, Numerical methods for differential games, in: Stochastic and Differential Games: Theory and Numerical Methods, Annals of the International Society of Dynamic Games (M. Bardi, T. E. S. Raghavan and T. Parthasarathy, eds.), Birkhäuser, 1999, pp. 177–247.
[10] P. Cardaliaguet, M. Quincampoix and P. Saint-Pierre, Pursuit differential games with state constraints, SIAM J. Control Optim., 39 (2001), No. 5, pp. 1615–1632.
[11] P. Cardaliaguet, M. Quincampoix and P. Saint-Pierre, Differential games through viability theory: Old and recent results, Annals of the International Society of Dynamic Games, Birkhäuser, 9 (2007), pp. 2–23.
[12] F. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, 1983.
[13] J. Diestel and J. J. Uhl, Vector Measures, Mathematical Surveys, No. 15, AMS, Providence, R.I., 1977.
[14] J. Diestel and J. J. Uhl, The Radon-Nikodym theorem for Banach space valued measures, Rocky Mt. J. Math., 6 (1976), pp. 1–46.
[15] A. L. Dontchev and R. T. Rockafellar, Ample parameterization of variational inclusions, SIAM J. Optim., 12 (2001), No. 1, pp. 170–187.
[16] P. G. Georgiev and N. P. Zlateva, Second-order subdifferentials of C^{1,1} functions and optimality conditions, Set-Valued Anal., 4 (1996), No. 2, pp. 101–117.
[17] J. Hartung, An extension of Sion's minimax theorem with an application to a method for constrained games, Pacific J. Math., 103 (1982), No. 2, pp. 401–408.
[18] J.-B. Hiriart-Urruty, J.-J. Strodiot and V. H. Nguyen, Generalized Hessian matrix and second-order optimality conditions for problems with C^{1,1} data, Appl. Math. Optimization, 11 (1984), pp. 43–56.
[19] D. Klatte and R. Henrion, Regularity and stability in nonlinear semi-infinite optimization, in: Semi-Infinite Programming (R. Reemtsen et al., eds.), Workshop, Cottbus, Germany, September 1996, Kluwer Academic Publishers, Boston; Nonconvex Optim. Appl., 25 (1998), pp. 69–102.
[20] N. N. Krasovskii and A. I. Subbotin, Game-Theoretical Control Problems, Springer-Verlag, New York, 1988.
[21] R. R. Phelps, Convex Functions, Monotone Operators and Differentiability, 2nd ed., Lecture Notes in Mathematics 1364, Springer-Verlag, Berlin, 1993.
[22] S. Plaskacz and M. Quincampoix, Value-functions for differential games and control systems with discontinuous terminal cost, SIAM J. Control Optim., 39 (2001), No. 5, pp. 1485–1498.
[23] R. T. Rockafellar and R. Wets, Variational Analysis, Springer, Berlin, 1998.
[24] A. Shapiro, On Lipschitzian stability of optimal solutions of parametrized semi-infinite programs, Math. Oper. Res., 19 (1994), No. 3, pp. 743–752.
[25] A. Shapiro, Sensitivity analysis of parameterized variational inequalities, Math. Oper. Res., 30 (2005), No. 1, pp. 109–126.
[26] C. Stegall, Optimization of functions on certain subsets of Banach spaces, Math. Ann., 236 (1978), pp. 171–176.