Pre-publicaciones del Seminario Matemático “García de Galdeano”, Universidad de Zaragoza, n. 34 (2005)

Asymptotic behaviour of averages of k-dimensional marginals of measures on $\mathbb{R}^n$

Jesús Bastero and Julio Bernués

ASYMPTOTIC BEHAVIOUR OF AVERAGES OF k-DIMENSIONAL MARGINALS OF MEASURES ON $\mathbb{R}^n$

JESÚS BASTERO AND JULIO BERNUÉS

Abstract. We study the asymptotic behaviour, as $n\to\infty$, of the Lebesgue measure of the set $\{x\in K ;\ |P_E(x)|\le t\}$ for a random $k$-dimensional subspace $E\subset\mathbb{R}^n$ and $K\subset\mathbb{R}^n$ in a certain class of isotropic bodies. For $k$ growing slowly to infinity, we prove it to be close to the suitably normalised Gaussian measure in $\mathbb{R}^k$ of a $t$-dilate of the Euclidean unit ball. Some of the results hold for a wider class of probabilities on $\mathbb{R}^n$.

1. Preliminaries and Notation

Let $E$ be a $k$-dimensional subspace of $\mathbb{R}^n$, $1\le k\le n$, and denote by $P_E$ the orthogonal projection onto $E$. For any Borel probability $\mathbb{P}$ on $\mathbb{R}^n$, its marginal probability on $E$ is defined as $\mathbb{P}_E(A):=\mathbb{P}(A+E^{\perp})=\mathbb{P}\{x\in\mathbb{R}^n \mid P_E(x)\in A\}$, $A\subseteq E$ Borel. A Borel probability $\mathbb{P}$ is isotropic if $\int_{\mathbb{R}^n}x\,d\mathbb{P}(x)=0$ and its covariance matrix is a multiple of the identity. A convex body $K$ of volume 1 is isotropic if the uniform measure on $K$ is; in this case, that multiple of the identity is denoted by $L_K^2$.

In [Kl2] the author solved the so-called central limit problem for convex bodies (posed in [ABP], [BV] for $k=1$ and considered in [BK], [BHVV], [KL], [MM], [Mi], [Wo]). He showed that every isotropic convex body $K$ (and, more generally, every isotropic log-concave probability measure) has the property that most of its $k$-dimensional marginal distributions are approximately Gaussian, with respect to the total variation metric, provided that $k\le c\log n/(\log\log n)^2$. We extend the ideas in [BK] and solve the difficulties appearing in that paper for $s=0$. We study, for a $k$-dimensional subspace $E\subset\mathbb{R}^n$, the function
\[ (1.1)\qquad F_K(t,E):=\bigl|\{x\in K ;\ |P_E(x)|\le t\}\bigr| \]
and its average over $E\in G_{n,k}$,
\[ (1.2)\qquad F_K^k(t):=\int_{G_{n,k}}F_K(t,E)\,d\nu(E). \]
When $k$ increases very fast to infinity, $k=n-\ell$ with $\ell$ fixed, or $k=(1-\lambda)n$, $0<\lambda<1$, we cannot expect a Gaussian behaviour. We obtain upper bounds for the average marginal density (Proposition 4.7) which, in some cases, are shown to be sharp. Such upper bounds are also needed in the first part of the section (Lemma 4.5).

Next we introduce some notation and definitions. We denote by $D_n$ the Euclidean unit ball in $\mathbb{R}^n$ and by $\omega_n$ its Lebesgue measure. The area measure of the unit sphere $S^{n-1}$ is $|S^{n-1}|=n\,\omega_n$. The letters $c,C,c_1,\dots$ denote absolute numerical constants whose value may change from line to line. The elements of the orthogonal group $O(n)$ are denoted by $U=(\xi_1\dots\xi_n)$, so the columns $(\xi_i)$ form an orthonormal basis of $\mathbb{R}^n$, and $dU$ is the Haar probability on $O(n)$. The Haar probability on $S^{n-1}$ is denoted by $\sigma_{n-1}$.

Let $\mathbb{P}$ be a Borel probability on $\mathbb{R}^n$. We introduce the following parameters:
\[ M_{\mathbb{P}}:=\sup_{t>0}\frac{\mathbb{P}(tD_n)}{|tD_n|},\qquad M_2^2=M_2^2(\mathbb{P}):=\frac1n\int_{\mathbb{R}^n}|x|^2\,d\mathbb{P}(x) \]

and
\[ \sigma_{\mathbb{P}}^2:=n\left(\frac{\int_{\mathbb{R}^n}|x|^4\,d\mathbb{P}(x)}{\bigl(\int_{\mathbb{R}^n}|x|^2\,d\mathbb{P}(x)\bigr)^2}-1\right)=\frac{\operatorname{Var}(|x|^2)}{n\,M_2^4(\mathbb{P})}. \]

When $\mathbb{P}$ is the uniform measure on $K$ we change the notation accordingly, that is, $\sigma_{\mathbb{P}}$ to $\sigma_K$ and so on.

Remark 1.1. $\sigma_{\mathbb{P}}$ is a concentration parameter. Chebyshev's inequality implies (see [ABP])
\[ \mathbb{P}\bigl\{x\in\mathbb{R}^n ;\ \bigl||x|^2-nM_2^2(\mathbb{P})\bigr|>\varepsilon nM_2^2(\mathbb{P})\bigr\}\le\frac{\sigma_{\mathbb{P}}^2}{n\varepsilon^2}. \]
For $\mathbb{P}$ the uniform measure on an isotropic convex body $K$, the parameter $\sigma_K$ is conjectured to be bounded by an absolute constant (the Variance Hypothesis). When $\mathbb{P}$ has density $f$, $M_{\mathbb{P}}$ is the Hardy–Littlewood maximal function of $f$ at the origin. It is finite, for instance, when the origin is a Lebesgue point of $f$ (by the Lebesgue differentiation theorem) or when $f$ is bounded, in which case $M_{\mathbb{P}}\le\|f\|_\infty$ (the supremum norm of $f$). Also observe that $M_{\mathbb{P}}<\infty$ implies $\mathbb{P}(\{0\})=0$.
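The parameters above are easy to evaluate numerically. As an illustration (not from the paper), the following sketch estimates $M_2^2$ and $\sigma_{\mathbb{P}}^2$ for the uniform measure on the volume-1 cube, where the exact values are $M_2^2=1/12$ and $\sigma_K^2=4/5$ (so the Variance Hypothesis holds trivially for the cube), and checks the Chebyshev bound of Remark 1.1; the sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 100, 200_000

# Uniform measure on the volume-1 cube [-1/2, 1/2]^n (isotropic, L_K^2 = 1/12).
x = rng.uniform(-0.5, 0.5, size=(N, n))
r2 = (x ** 2).sum(axis=1)                 # |x|^2 for each sample

M2sq = r2.mean() / n                      # estimate of M_2^2 (exact value 1/12)
sigma2 = r2.var() / (n * M2sq ** 2)       # estimate of sigma_K^2 (exact value 4/5)
print(M2sq, sigma2)

# Thin-shell bound of Remark 1.1:
#   P{ | |x|^2 - n M_2^2 | > eps * n * M_2^2 } <= sigma_K^2 / (n eps^2)
eps = 0.1
lhs = (np.abs(r2 - n * M2sq) > eps * n * M2sq).mean()
rhs = sigma2 / (n * eps ** 2)
assert lhs <= rhs
```

The empirical frequency `lhs` is far below the Chebyshev bound `rhs`, reflecting that $|x|^2$ concentrates much more strongly than second-moment information alone predicts.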

Remark 1.2. For $M_{\mathbb{P}}$ and $M_2(\mathbb{P})$ finite, the parameter $M_2(\mathbb{P})M_{\mathbb{P}}^{1/n}$ plays an important role. In the particular case of $\mathbb{P}$ being the uniform measure of an isotropic convex body $K$, this constant is $L_K$ (since $M_2(\mathbb{P})=L_K$ and $M_{\mathbb{P}}=1$). If $\mathbb{P}$ has density $f$, an even log-concave function, the constant $M_2(\mathbb{P})M_{\mathbb{P}}^{1/n}$ is the isotropy constant of the function, since $M_{\mathbb{P}}=f(0)$ (see [B]).

The following fact, due to Hensley [H], whose proof follows from the results in [B], Lemma 6, will be used extensively throughout the paper:

Lemma 1.3. There exists an absolute constant $c>0$ such that for any probability $\mathbb{P}$ on $\mathbb{R}^n$, $M_2(\mathbb{P})M_{\mathbb{P}}^{1/n}\ge c$.

We finish this section with some

1.1. Technical preliminaries. Let $\mathbb{P}$ be a Borel probability on $\mathbb{R}^n$ with $M_{\mathbb{P}}<\infty$, $M_2=M_2(\mathbb{P})<\infty$. Denote
\[ \gamma_{\mathbb{P}}^k(s)=\frac{1}{(2\pi)^{k/2}M_2^k}\,e^{-\frac{|s|^2}{2M_2^2}},\quad s\in\mathbb{R}^k,\qquad \Gamma_{\mathbb{P}}^k(t)=\int_{|s|\le t}\gamma_{\mathbb{P}}^k(s)\,ds,\quad t\ge 0. \]
In the next three lemmas we state some inequalities that will be useful in the sequel. Given $g,h\colon[0,\infty)\to\mathbb{R}$, write $g\sim h$ if $c_1h(x)\le g(x)\le c_2h(x)$ for all $x\ge 0$.

Lemma 1.4. The following estimates are well known:
i) $\Gamma(x+1)=x^xe^{-x}\sqrt{2\pi x}\,\bigl(1+\frac{1}{12x}+O(\frac{1}{x^2})\bigr)$;
ii) $|S^{n-1}|=n\,\omega_n=\dfrac{2\pi^{n/2}}{\Gamma(\frac n2)}$, $\quad\omega_n\le\dfrac{c^n}{n^{n/2}}$, $\quad\omega_n^{1/n}\sim\dfrac{\sqrt{2\pi e}}{\sqrt n}$, and for $k=o(n)$,
\[ \frac{|S^{n-k-1}|}{|S^{n-1}|}\le C\,\frac{n^{k/2}}{(2\pi)^{k/2}}. \]
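The asymptotics of Lemma 1.4 ii) can be checked directly with log-gamma arithmetic (everything here is a numerical sketch, not part of the paper; the choice $n=10000$, $k=20$ is arbitrary):

```python
import math

def log_omega(n):          # log of the volume of the Euclidean unit ball D_n
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

def log_sphere(n):         # log |S^{n-1}| = log(n * omega_n)
    return math.log(n) + log_omega(n)

n, k = 10_000, 20

# omega_n^{1/n} ~ sqrt(2*pi*e)/sqrt(n): the ratio below tends to 1
ratio = math.exp(log_omega(n) / n) * math.sqrt(n) / math.sqrt(2 * math.pi * math.e)
print(ratio)

# |S^{n-k-1}|/|S^{n-1}| <= C * (n/(2*pi))^{k/2} for k = o(n):
# the normalized quotient below stays of order 1
q = math.exp(log_sphere(n - k) - log_sphere(n) - (k / 2) * math.log(n / (2 * math.pi)))
print(q)
```

Working with `math.lgamma` avoids the overflow that the raw Gamma values would cause at this size of $n$.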

Lemma 1.5.
i) $\displaystyle \frac{t^k\omega_k}{(\sqrt{2\pi}M_2)^k}\,e^{-\frac{t^2}{2M_2^2}}\le\Gamma_{\mathbb{P}}^k(t)\le\frac{t^k\omega_k}{(\sqrt{2\pi}M_2)^k}$, $\forall t\ge 0$;
ii) $\Gamma_{\mathbb{P}}^k(t)\ge 1-2^{k/2}e^{-\frac{t^2}{4M_2^2}}$, $\forall t\ge 0$;
iii) $\Gamma_{\mathbb{P}}^k(t+\delta)\le\bigl(1+\frac\delta t\bigr)^k\,\Gamma_{\mathbb{P}}^k(t)$, $\forall\delta,t>0$.

Proof. i) is straightforward. As for ii),
\[ 1-\Gamma_{\mathbb{P}}^k(t)=\int_{|s|\ge t}\gamma_{\mathbb{P}}^k(s)\,ds\le\frac{e^{-\frac{t^2}{4M_2^2}}}{(\sqrt{2\pi}M_2)^k}\int_{\mathbb{R}^k}e^{-\frac{|s|^2}{4M_2^2}}\,ds=2^{k/2}\,e^{-\frac{t^2}{4M_2^2}}. \]
iii) follows, since the density is radially decreasing, from
\[ \frac{\Gamma_{\mathbb{P}}^k(t+\delta)}{\Gamma_{\mathbb{P}}^k(t)}=1+\frac{\int_{t<|s|\le t+\delta}\gamma_{\mathbb{P}}^k(s)\,ds}{\int_{|s|\le t}\gamma_{\mathbb{P}}^k(s)\,ds}\le 1+\frac{(t+\delta)^k-t^k}{t^k}=\Bigl(1+\frac\delta t\Bigr)^k. \qquad\square \]

Lemma 1.6. There exists $C>0$ such that
i) $\Bigl|\bigl(1-\frac{2u}{n}\bigr)^{\frac{n-k-2}{2}}-e^{-u}\Bigr|\le C\,\dfrac{k}{n-k}$, $\forall u\in[0,\frac n2]$;
ii) $\Bigl|\dfrac{|S^{n-k-1}|}{|S^{n-1}|}\dfrac{(2\pi)^{k/2}}{n^{k/2}}-1\Bigr|\le C\,\dfrac{k^2}{n}$;
iii) $\Bigl|e^u\bigl(1-\frac{2u}{n}\bigr)^{\frac{n-k-2}{2}}-1\Bigr|\le 8\Bigl(\dfrac{ku}{n}+\dfrac{u^2}{n}\Bigr)$, $\forall u\in[0,\sqrt n/4]$, provided that $\frac{ku}{n}+\frac{u^2}{n}\le\frac18$.

Proof. The proof of i) is the same as in [BK]. As for ii), it is a consequence of the formula $|S^{n-1}|=\frac{2\pi^{n/2}}{\Gamma(\frac n2)}$ and the asymptotic formula for the Gamma function in Lemma 1.4. Let us show the proof of iii). Write $y=u+\frac{n-k-2}{2}\log\bigl(1-\frac{2u}{n}\bigr)$. We use the inequality $|e^y-1|\le 2|y|$, valid for $|y|\le 1$, and Taylor's formula with Lagrange's error term, $\log(1-x)=-x-\frac{x^2}{2(1-\xi)^2}$, $0<\xi<x\le 1$, with $x=2u/n$. Thus,
\[ \Bigl|e^u\Bigl(1-\frac{2u}{n}\Bigr)^{\frac{n-k-2}{2}}-1\Bigr|\le 2\Bigl|u+\frac{n-k-2}{2}\log\Bigl(1-\frac{2u}{n}\Bigr)\Bigr|\le 2\Bigl(\frac{(k+2)u}{n}+\frac{2(n-k-2)u^2}{(n-2u)^2}\Bigr)\le 8\Bigl(\frac{ku}{n}+\frac{u^2}{n}\Bigr) \]
for $u\le\sqrt n/4$, since then $(n-2u)^2\ge(n-\sqrt n/2)^2$.

$\square$

2. Average of k-dimensional marginals

Let $\mathbb{P}$ be a Borel probability on $\mathbb{R}^n$. For every $k\in\mathbb{N}$, $1\le k\le n$, we define the following average of $k$-marginals:
\[ A_k(\mathbb{P})(B)=\int_{O(n)}\mathbb{P}\bigl(U(B+\mathbb{R}^{n-k})\bigr)\,dU,\qquad B\subset\mathbb{R}^k. \]

$A_k(\mathbb{P})$ is a Borel probability on $\mathbb{R}^k$, invariant under the action of the orthogonal group in $\mathbb{R}^k$. Clearly, $A_k(A_n(\mathbb{P}))=A_k(\mathbb{P})$. The following proposition was considered in [BV], [BK] and [So] for the case $k=1$.

Proposition 2.1. Let $\mathbb{P}$ be a Borel probability on $\mathbb{R}^n$. Then, for all $1\le k<n$ and any Borel set $B\subset\mathbb{R}^k$ we have
\[ A_k(\mathbb{P})(B)=\mathbb{P}(\{0\})\,\delta_0(B)+\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_B\int_{\{|x|\ge|s|\}}\Bigl(1-\frac{|s|^2}{|x|^2}\Bigr)^{\frac{n-k-2}{2}}\frac{d\mathbb{P}(x)}{|x|^k}\,ds, \]
where $\delta_0$ is the Dirac measure at $0$. The density function of the absolutely continuous part is denoted by $s\in\mathbb{R}^k\mapsto\varphi_{\mathbb{P}}^k(s)$.

Proof. Since $A_k(A_n(\mathbb{P}))=A_k(\mathbb{P})$ and the function in the inner integral is radial, it is enough to prove the formula for probabilities $\mathbb{P}$ that are invariant under orthogonal transformations. First we consider the case $\mathbb{P}=\sigma_{n-1}$. It is enough to prove the equality for dilates of the Euclidean ball, that is, to show that
\[ A_k(\sigma_{n-1})(rD_k)=\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{rD_k}\int_{\{|x|\ge|s|\}}\Bigl(1-\frac{|s|^2}{|x|^2}\Bigr)^{\frac{n-k-2}{2}}\frac{d\sigma_{n-1}(x)}{|x|^k}\,ds=\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{rD_k}\bigl(1-|s|^2\bigr)^{\frac{n-k-2}{2}}\chi_{D_k}(s)\,ds. \]
If $r\ge 1$, then $A_k(\sigma_{n-1})(rD_k)=1=\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{D_k}(1-|s|^2)^{\frac{n-k-2}{2}}\,ds$. If $r<1$, after passing to polar coordinates the right hand side equals
\[ \frac{|S^{n-k-1}|\,|S^{k-1}|}{|S^{n-1}|}\int_0^r\bigl(1-t^2\bigr)^{\frac{n-k-2}{2}}t^{k-1}\,dt. \]
On the other hand, $A_k(\sigma_{n-1})(rD_k)=\sigma_{n-1}(rD_k\times\mathbb{R}^{n-k})$ and
\[ \sigma_{n-1}(rD_k\times\mathbb{R}^{n-k})=\frac{\omega_k\,|S^{n-k-1}|}{\omega_n\,n}\Bigl(r^k(1-r^2)^{\frac{n-k}{2}}+n\int_{\sqrt{1-r^2}}^1t^{n-k-1}(1-t^2)^{k/2}\,dt\Bigr). \]
Now the derivatives of the two expressions coincide, and we have the result. Observe that, by re-scaling, the formula also holds for the Haar probabilities on $\lambda S^{n-1}$, $\lambda>0$.

In the general case we use the fact that any probability $\mathbb{P}$ invariant under orthogonal transformations is, up to $\mathbb{P}(\{0\})$, the product measure of a positive measure on $(0,\infty)$ and the Haar measure on $S^{n-1}$, and so it can be approximated by convex combinations of Haar probabilities on $\lambda S^{n-1}$, $\lambda>0$. For $\lambda=0$, the associated probability is $\delta_0$. $\square$

Remark 2.2. If $\mathbb{P}$ is a probability with density $f$, that is $\mathbb{P}(C)=\int_Cf(x)\,dx$, then $A_k(\mathbb{P})(B)=\int_B\varphi_{\mathbb{P}}^k(s)\,ds$, where
\[ \varphi_{\mathbb{P}}^k(s)=\int_{O(n)}\int_{\mathbb{R}^{n-k}}f(s_1\xi_1+\dots+s_n\xi_n)\,ds_{k+1}\dots ds_n\,dU \]
and $s=(s_1,\dots,s_k)$. In the particular case $\mathbb{P}(C)=|K\cap C|$ for $K\subset\mathbb{R}^n$ a Borel set of volume 1, we have $A_k(\mathbb{P})(B)=\int_B\varphi_K^k(s)\,ds$, where
\[ \varphi_K^k(s)=\int_{O(n)}\bigl|\bigl(s_1\xi_1+\dots+s_k\xi_k+\{\xi_1,\dots,\xi_k\}^{\perp}\bigr)\cap K\bigr|_{n-k}\,dU. \]
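Proposition 2.1 can be sanity-checked numerically in the model case $\mathbb{P}=\sigma_{n-1}$. The sketch below (an illustration under arbitrarily chosen $n$, $k$, $r$, not part of the paper) compares a Monte Carlo estimate of $A_k(\sigma_{n-1})(rD_k)$ with the polar-coordinate form of the right hand side:

```python
import numpy as np
from math import lgamma, exp, log, pi

rng = np.random.default_rng(1)
n, k, r = 40, 3, 0.4

# Left-hand side: A_k(sigma_{n-1})(r D_k) = sigma_{n-1}(r D_k x R^{n-k}),
# i.e. the probability that the first k coordinates of a uniform point on
# S^{n-1} fall in the k-dimensional ball of radius r (rotation invariance
# makes the average over O(n) unnecessary here).
x = rng.standard_normal((400_000, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)
mc = (np.linalg.norm(x[:, :k], axis=1) <= r).mean()

# Right-hand side in polar coordinates:
# (|S^{k-1}||S^{n-k-1}|/|S^{n-1}|) * int_0^r t^{k-1} (1-t^2)^{(n-k-2)/2} dt
def log_sphere(m):        # log |S^{m-1}| = log( 2 pi^{m/2} / Gamma(m/2) )
    return log(2) + (m / 2) * log(pi) - lgamma(m / 2)

t = np.linspace(0.0, r, 20_001)
f = t ** (k - 1) * (1 - t ** 2) ** ((n - k - 2) / 2)
dt = t[1] - t[0]
integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dt          # trapezoidal rule
formula = exp(log_sphere(k) + log_sphere(n - k) - log_sphere(n)) * integral
print(mc, formula)   # the two values agree up to Monte Carlo error
```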

This integral, an average of sections by $(n-k)$-dimensional subspaces at distance $|s|$ from the origin, is the density function of a certain average of $k$-dimensional marginals of $K$ (further applications of this formula appear in [BBR]). The following proposition gives a more geometrical interpretation of this function.

Proposition 2.3. Let $K\subset\mathbb{R}^n$ be a Borel set of volume 1. Then, if $1\le k<n$ and $s\in\mathbb{R}^k$, we have
\[ \varphi_K^k(s)=\int_{S^{n-1}}\Bigl(\int_{G(\theta^{\perp},\,n-k)}\bigl|(|s|\theta+E)\cap K\bigr|_{n-k}\,d\nu(E)\Bigr)\,d\sigma_{n-1}(\theta), \]
where $G(\theta^{\perp},n-k)$ is the Grassmann manifold of the $(n-k)$-dimensional subspaces of the hyperplane $\theta^{\perp}$ and $d\nu(E)$ its Haar measure. That is, consider the sphere $|s|S^{n-1}$; for any $\theta\in S^{n-1}$ we first average over all the $(n-k)$-dimensional sections of $K$ at distance $|s|$ from the origin


in the direction $\theta$, that is, inside $|s|\theta+\theta^{\perp}$, and then we average over the sphere.

Proof. Since $\varphi_K^k(s)$ is radial,
\[ \varphi_K^k(s)=\int_{O(n)}\bigl|\bigl(|s|\xi_1+\{\xi_1,\dots,\xi_k\}^{\perp}\bigr)\cap K\bigr|_{n-k}\,dU. \]
Next we consider the following consequence of the conditional expectation theorem, as it appears in [Ko], Lemma 1: for any (say) continuous function $F$ on $O(n)$,
\[ \int_{O(n)}F(U)\,dU=\int_{G(n,k)}\int_{\xi_{k+1},\dots,\xi_n\in E^{\perp}}\int_{\xi_1,\dots,\xi_k\in E}F(U)\,dU_k\,dU_{n-k}\,d\nu(E), \]
where $dU_{n-k}$ and $dU_k$ are the Haar measures on $O(n-k)$ and $O(k)$. We apply this formula for $k=1$ and any continuous function, and we have in particular
\[ \int_{O(n)}F(\xi_1,\dots,\xi_n)\,dU=\int_{S^{n-1}}\Bigl(\int_{O(\xi_1^{\perp})}F(\xi_1,\xi_2,\dots,\xi_n)\,dU_1\Bigr)\,d\sigma_{n-1}(\theta), \]
where $O(\xi_1^{\perp})$ is the orthogonal group of the hyperplane $\xi_1^{\perp}$ and $dU_1$ its Haar measure (this formula can also be proved directly for any, say, continuous function $F$, by using the uniqueness of the Haar measure on $O(n)$). Applying Koldobsky's formula again, in the whole space $\xi_1^{\perp}$ and with $n-k$, to the function $F(\xi_1,\xi_2,\dots,\xi_n)=\bigl|(|s|\xi_1+\{\xi_1,\dots,\xi_k\}^{\perp})\cap K\bigr|_{n-k}$, we eventually get the result. $\square$

Remark 2.4. Let $E$ be a $k$-dimensional subspace of $\mathbb{R}^n$. We show some relations between the function $F_K(t,E):=\bigl|\{x\in K : |P_E(x)|\le t\}\bigr|$ (formula (1.1)) and the average marginal density $\varphi_K^k(s)$. Fix an orthonormal basis $\{\xi_1,\dots,\xi_k\}\subset\mathbb{R}^n$ of $E$. By Fubini's theorem we have
\[ F_K(t,E)=\int_{|s|\le t}\Bigl|\Bigl(\sum_{i=1}^ks_i\xi_i+E^{\perp}\Bigr)\cap K\Bigr|_{n-k}\,ds_1\dots ds_k. \]

We now integrate as $U=(\xi_1,\dots,\xi_k)$ runs over the orthogonal group $O(E)$, which allows us to express $F_K(t,E)$ as a convenient average of marginal densities:
\[ F_K(t,E)=\int_{O(E)}\int_{|s|\le t}\Bigl|\Bigl(\sum_{i=1}^ks_i\xi_i+E^{\perp}\Bigr)\cap K\Bigr|_{n-k}\,ds_1\dots ds_k\,dU=\int_{|s|\le t}\int_{O(E)}\Bigl|\Bigl(\sum_{i=1}^ks_i\xi_i+E^{\perp}\Bigr)\cap K\Bigr|_{n-k}\,dU\,ds_1\dots ds_k \]
(by the invariance under the orthogonal group)
\[ =\int_{|s|\le t}\int_{O(E)}\bigl|\bigl(|s|\xi_1+E^{\perp}\bigr)\cap K\bigr|_{n-k}\,dU\,ds_1\dots ds_k \]
(by using Lemma 1 in [Ko])
\[ =\int_{|s|\le t}\int_{S_E}\bigl|\bigl(|s|\theta+E^{\perp}\bigr)\cap K\bigr|_{n-k}\,d\sigma_E(\theta)\,ds_1\dots ds_k \]
(by passing to polar coordinates in $E$)
\[ =|S^{k-1}|\int_0^tr^{k-1}f_K(r,E)\,dr, \]
where $S_E=S^{n-1}\cap E$, $\sigma_E$ is its Haar probability and
\[ f_K(r,E)=\int_{S_E}\bigl|\bigl(r\theta+E^{\perp}\bigr)\cap K\bigr|_{n-k}\,d\sigma_E(\theta),\qquad r\ge 0. \]
Finally, observe that we also obtain formula (1.2):
\[ F_K^k(t)=\int_{G_{n,k}}F_K(t,E)\,d\nu(E)=\int_{\{|s|\le t\}}\varphi_K^k(s)\,ds. \]

Our last lemma provides bounds for $f_K(r,E)$ and $F_K(t,E)$ that will be useful in the next section.

Lemma 2.5. Let $K\subset\mathbb{R}^n$ be an isotropic convex body and $E\in G_{n,k}$. Denote $L_k=\sup\{L_M\mid M\subset\mathbb{R}^k\text{ isotropic}\}$. There exists an absolute constant $c_1>0$ such that
i) $f_K(r,E)\le e^kf_K(r_0,E)\le\bigl(\frac{c_1L_k}{L_K}\bigr)^k$, $\forall r\ge r_0\ge 0$;
ii) $F_K(t,E)\ge 1-\exp\bigl(-\frac{c_1t}{L_K\sqrt k}\bigr)$, $\forall t\ge 0$.

Proof. i) A result by Fradelizi, see [F], states that
\[ \bigl|\bigl(r\theta+E^{\perp}\bigr)\cap K\bigr|_{n-k}\le e^k\,\bigl|\bigl(r_0\theta+E^{\perp}\bigr)\cap K\bigr|_{n-k},\qquad\forall r\ge r_0\ge 0, \]
and the first inequality follows. As for the second inequality, it is a consequence of the previous one (for $r_0=0$) and a result by Ball and by Milman and Pajor, see [B], [MP], which states that $|E^{\perp}\cap K|_{n-k}\le\bigl(\frac{c_2L_k}{L_K}\bigr)^k$.


ii) It is a consequence of a more general result: let $T\colon\mathbb{R}^n\to\mathbb{R}^n$ be a linear map such that $\dim T(\mathbb{R}^n)=k$; then
\[ \bigl|\{x\in K ;\ |T(x)|\le t\}\bigr|\ge 1-\exp\Bigl(-\frac{c_1t}{L_K\|T\|_{HS}}\Bigr),\qquad\forall t\ge 0, \]
where $\|T\|_{HS}$ denotes the Hilbert–Schmidt norm. Indeed, Borell's inequality (see [MS]) states
\[ \Bigl|\Bigl\{x\in K ;\ \frac{|T(x)|}{\bigl(\int_K|Tx|^2\,dx\bigr)^{1/2}}>t\Bigr\}\Bigr|\le\exp(-c_1t),\qquad\forall t\ge 0. \]
We can suppose that $T=\sum_{j=1}^ku_j\otimes e_j$, where $\{u_j\}_{j=1}^k$ are $k$ vectors in $\mathbb{R}^n$ and $\{e_j\}_{j=1}^k$ is an orthonormal basis of the subspace $T(\mathbb{R}^n)$. Then
\[ \int_K|Tx|^2\,dx=\int_K\sum_{j=1}^k|\langle u_j,x\rangle|^2\,dx=\sum_{j=1}^k|u_j|^2\int_K\Bigl|\Bigl\langle\frac{u_j}{|u_j|},x\Bigr\rangle\Bigr|^2\,dx=L_K^2\,\|T\|_{HS}^2. \]

In our case simply take $T=P_E$. $\square$

3. Estimating the quotient.

Our aim is to estimate $\bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-1\bigr|$ for a random $k$-dimensional subspace $E\subset\mathbb{R}^n$. Some of the steps hold true for more general probabilities $\mathbb{P}$ and we will state them in full generality. The following hypothesis will be imposed on $\mathbb{P}$ throughout the section.

Concentration hypothesis ([So]):
\[ (3.3)\qquad \mathbb{P}\bigl\{x\in\mathbb{R}^n :\ \bigl||x|-\sqrt nM_2\bigr|>t\sqrt nM_2\bigr\}\le A\exp(-Bn^{\alpha}t^{\beta}) \]
for all $0\le t\le 1$ and for some constants $\alpha,\beta,A,B>0$.

3.1. Gaussian approximation of the average density and distribution. We first consider Gaussian approximation of the average density $\varphi_{\mathbb{P}}^k(s)$.

Theorem 3.1. Let $\mathbb{P}$ be a probability on $\mathbb{R}^n$ satisfying (3.3) with $M_2,M_{\mathbb{P}}<\infty$. Denote $h(n)=n^{\min\{\alpha,\frac{\alpha}{\beta},\frac12\}}$ and let $\tilde h(n)$ be such that $\tilde h(n)<c(B,\beta)h(n)$ with $c=c(B,\beta)=\min\{\frac18,(\frac B2)^{\min\{1,\frac1\beta\}}\}$. Then, if $k\le\frac{c\,\tilde h(n)}{\log(1+M_2M_{\mathbb{P}}^{1/n})}$, we have
\[ \sup_{|s|\le\sqrt{\tilde h(n)}\,M_2}\Bigl|\frac{\varphi_{\mathbb{P}}^k(s)}{\gamma_{\mathbb{P}}^k(s)}-1\Bigr|\le c_1\frac{\tilde h(n)}{h(n)} \]
for some constant $c_1=c_1(\alpha,A,\beta,B)>0$.


Proof. Recall that, by Proposition 2.1, $\varphi_{\mathbb{P}}^k(s)=\int_{\{|x|\ge|s|\}}g_{|s|}(|x|)\,d\mathbb{P}(x)$, where
\[ g_t(r)=\frac{|S^{n-k-1}|}{|S^{n-1}|}\,\frac{1}{r^k}\Bigl(1-\frac{t^2}{r^2}\Bigr)^{\frac{n-k-2}{2}},\qquad r\ge t>0. \]
Consider the image probability of $\mathbb{P}$ under the map $x\mapsto|x|$, that is, the probability on $[0,\infty)$, also denoted by $\mathbb{P}$, with distribution function $\mathbb{P}\{x\in\mathbb{R}^n\mid|x|\le r\}$. With this notation,
\[ \varphi_{\mathbb{P}}^k(s)=\int_{[|s|,\infty)}g_{|s|}(r)\,d\mathbb{P}(r). \]
In order to estimate the asymptotic behaviour of $\varphi_{\mathbb{P}}^k(s)$ as $n\to\infty$ we write
\[ (3.4)\qquad \varphi_{\mathbb{P}}^k(s)=g_{|s|}\bigl(\sqrt nM_2\bigr)\,\mathbb{P}\{|x|\ge|s|\}+\int_{[|s|,\infty)}\bigl(g_{|s|}(r)-g_{|s|}(\sqrt nM_2)\bigr)\,d\mathbb{P}(r). \]
Write $g_{|s|}(r)-g_{|s|}(\sqrt nM_2)=\int_{\sqrt nM_2}^rg_{|s|}'(u)\,du$. By using Fubini's theorem in (3.4), it is easy to see that
\[ \varphi_{\mathbb{P}}^k(s)-g_{|s|}\bigl(\sqrt nM_2\bigr)=-g_{|s|}(2\sqrt nM_2)\,\mathbb{P}\{2\sqrt nM_2\le|x|\}-\int_{|s|}^{\sqrt nM_2}g_{|s|}'(r)\,\mathbb{P}\{|x|\le r\}\,dr+\int_{\sqrt nM_2}^{2\sqrt nM_2}g_{|s|}'(r)\,\mathbb{P}\{|x|>r\}\,dr+\int_{[2\sqrt nM_2,\infty)}g_{|s|}(r)\,d\mathbb{P}(r). \]

The summands above are estimated with the help of the following three technical lemmas, which extend the ideas in [So] to general $k$ for $|s|$ far from the origin. The behaviour at the origin (included in Lemma 3.4) is estimated via the parameter $M_{\mathbb{P}}$.

Lemma 3.2. If $|s|\le\sqrt{\frac n2}\,M_2$, then
\[ \int_{[2\sqrt nM_2,\infty)}\frac{g_{|s|}(r)}{g_{|s|}(\sqrt nM_2)}\,d\mathbb{P}(r)\le\frac{A}{2^k}\exp\Bigl(\frac{|s|^2}{M_2^2}-Bn^{\alpha}\Bigr) \]
and
\[ \frac{g_{|s|}(2\sqrt nM_2)}{g_{|s|}(\sqrt nM_2)}\,\mathbb{P}\{2\sqrt nM_2\le|x|\}\le\frac{A}{2^k}\exp\Bigl(\frac{|s|^2}{M_2^2}-Bn^{\alpha}\Bigr). \]

Proof. Use the elementary inequalities $(1-x)^{-1}\le e^{2x}$, $0\le x\le\frac12$, and $1-x\le e^{-x}$, $x\ge 0$. $\square$


Lemma 3.3. If $\bigl(\frac{|s|^2}{M_2^2}\bigr)^{\max\{\beta,1\}}<\frac{Bn^{\alpha}}{2}$, then
\[ \int_{\sqrt nM_2}^{2\sqrt nM_2}\frac{|g_{|s|}'(r)|}{g_{|s|}(\sqrt nM_2)}\,\mathbb{P}\{|x|>r\}\,dr\le\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\,\frac{c(\beta)}{(Bn^{\alpha})^{1/\beta}}. \]

Proof. By straightforward computations and the inequalities $(1-x)^{-1}\le e^{2x}$, $0\le x\le\frac12$, and $1-x\le e^{-x}$, $x\ge 0$,
\[ \frac{|g_{|s|}'(r)|}{g_{|s|}(\sqrt nM_2)}\le\frac{|(n-2)|s|^2-kr^2|}{r^3}\,\exp\Bigl(\frac{(n-k-2)|s|^2}{nM_2^2}-\frac{(n-k-4)|s|^2}{2r^2}\Bigr). \]
For $r\in[\sqrt nM_2,2\sqrt nM_2]$,
\[ \frac{|(n-2)|s|^2-kr^2|}{r^3}\le\frac cr\,\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}. \]
After using such bounds, the change of variables $r=(1+u)\sqrt nM_2$ and the inequality $1-\frac{1}{(1+u)^2}\le 2u$, $u\ge 0$, yield
\[ \int_{\sqrt nM_2}^{2\sqrt nM_2}\frac{|g_{|s|}'(r)|}{g_{|s|}(\sqrt nM_2)}\,\mathbb{P}\{|x|>r\}\,dr\le c\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\int_0^1e^{\frac{2u|s|^2}{M_2^2}}\,\mathbb{P}\{|x|>(1+u)\sqrt nM_2\}\,du. \]

Now use the concentration hypothesis (3.3). The proof finishes by estimating the remaining integral with the aid of the following Claim (with $K=\frac{|s|^2}{M_2^2}$ and $L=Bn^{\alpha}$); see [So], Lemma 9.

Claim. Let $K,L>0$ be such that $K^{\max\{\beta,1\}}<L/2$. Then
\[ \int_0^1\exp\bigl(Ku-Lu^{\beta}\bigr)\,du\le\frac{c(\beta)}{L^{1/\beta}}. \qquad\square \]

Lemma 3.4. There exists $c>0$ such that whenever $|s|\le\sqrt{\frac n2}\,M_2$, $k<n/2$ and $\bigl(8k\log(cM_2M_{\mathbb{P}}^{1/n})\bigr)^{\max\{1,\beta\}}<\frac B2n^{\alpha}$,
\[ \int_{|s|}^{\sqrt nM_2}\frac{|g_{|s|}'(r)|}{g_{|s|}(\sqrt nM_2)}\,\mathbb{P}\{|x|\le r\}\,dr\le\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\,\frac{A\,c(\beta)}{(Bn^{\alpha})^{1/\beta}}+\frac{1}{2^n}. \]

Proof. Denote $\lambda:=(cM_2M_{\mathbb{P}}^{1/n})^{-2}$, with $c>0$ to be chosen below, and split the integral into two parts:
\[ \int_{|s|}^{\sqrt nM_2}=\int_{\max\{|s|,\lambda\sqrt nM_2\}}^{\sqrt nM_2}+\int_{|s|}^{\max\{|s|,\lambda\sqrt nM_2\}}=I_1+I_2. \]


By Hensley's Lemma 1.3 we may choose $c$ so that $\lambda<1$. It is easy to see that
\[ \frac{|g_{|s|}'(r)|}{g_{|s|}(\sqrt nM_2)}\le 2\,\frac{|(n-2)|s|^2-kr^2|}{r^{k+3}}\,(\sqrt nM_2)^k. \]
The change of variables $r=\sqrt nM_2\,u$ and the inequality $\frac{|(n-2)|s|^2-knM_2^2u^2|}{nM_2^2}\le\max\{\frac{|s|^2}{M_2^2},k\}$, $0\le u\le 1$, yield
\[ I_1\le 2\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\int_{\frac{\max\{|s|,\lambda\sqrt nM_2\}}{\sqrt nM_2}}^1\frac{\mathbb{P}\{|x|\le\sqrt nM_2\,u\}}{u^{k+3}}\,du. \]
Set $a=\frac{\max\{|s|,\lambda\sqrt nM_2\}}{\sqrt nM_2}$. We have $0\le a\le 1$. By the change of variables $u=1-v$ and the concentration hypothesis (3.3),
\[ I_1\le 2A\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\int_0^{1-a}\exp\bigl(-(k+3)\log(1-v)-Bn^{\alpha}v^{\beta}\bigr)\,dv. \]

Finally, use the inequalities
\[ -\log(1-v)\le\frac{-\log a}{1-a}\,v\le\log\Bigl(\frac3a\Bigr)v\le 2\log(cL_{\mathbb{P}})\,v,\qquad v\in[0,1-a], \]
where $L_{\mathbb{P}}:=M_2M_{\mathbb{P}}^{1/n}$, and the Claim above.

For the second integral $I_2$ we may suppose $|s|\le\lambda\sqrt nM_2$. Proceeding as before, we have
\[ I_2\le 2(\sqrt nM_2)^k\int_{|s|}^{\lambda\sqrt nM_2}\frac{|(n-2)|s|^2-kr^2|}{r^{k+3}}\,\mathbb{P}\{|x|\le r\}\,dr. \]
By the inequality $|(n-2)|s|^2-kr^2|\le nr^2$, the definition of $M_{\mathbb{P}}$ and $k<n/2$, we have
\[ I_2\le 2n(\sqrt nM_2)^kM_{\mathbb{P}}\,\omega_n\int_{|s|}^{\lambda\sqrt nM_2}r^{n-k-1}\,dr\le 4\bigl(L_{\mathbb{P}}\,\omega_n^{1/n}\sqrt n\,\lambda^{1/2}\bigr)^n, \]
since $\lambda^{n-k}<\lambda^{n/2}$. Finally, the sequence $\omega_n^{1/n}\sqrt n$ is bounded by an absolute constant, so we can choose $c>0$ in the definition of $\lambda$ so that $I_2\le 2^{-n}$. $\square$

End of proof of Theorem 3.1. Notice that the hypotheses of the lemmas are satisfied, and therefore
\[ \Bigl|\frac{\varphi_{\mathbb{P}}^k(s)}{g_{|s|}(\sqrt nM_2)}-1\Bigr|\le\frac{A}{2^{k-1}}\exp\Bigl(\frac{|s|^2}{M_2^2}-Bn^{\alpha}\Bigr)+\max\Bigl\{\frac{|s|^2}{M_2^2},k\Bigr\}\,\frac{c(A,B,\beta)}{n^{\alpha/\beta}}+\frac{2}{2^n}. \]


Finally, use the inequality
\[ \Bigl|\frac{\varphi_{\mathbb{P}}^k(s)}{\gamma_{\mathbb{P}}^k(s)}-1\Bigr|\le\frac{g_{|s|}(\sqrt nM_2)}{\gamma_{\mathbb{P}}^k(s)}\,\Bigl|\frac{\varphi_{\mathbb{P}}^k(s)}{g_{|s|}(\sqrt nM_2)}-1\Bigr|+\Bigl|\frac{g_{|s|}(\sqrt nM_2)}{\gamma_{\mathbb{P}}^k(s)}-1\Bigr|. \]
By Lemma 1.6 ii), iii) with $u=\frac{|s|^2}{2M_2^2}$, we have $\frac{g_{|s|}(\sqrt nM_2)}{\gamma_{\mathbb{P}}^k(s)}\le c_1$ and
\[ \Bigl|\frac{\varphi_{\mathbb{P}}^k(s)}{\gamma_{\mathbb{P}}^k(s)}-1\Bigr|\le c_1\Bigl[\exp\bigl(\tilde h(n)-Bn^{\alpha}\bigr)+\frac{1}{n^{\alpha/\beta}}+\frac{2}{2^n}+8\Bigl(\frac{k|s|^2}{2M_2^2}+\frac{|s|^4}{4M_2^4}\Bigr)\frac1n\Bigr]\le c_1\Bigl(\frac{\tilde h(n)}{h(n)}+\frac{\tilde h^2(n)}{n}+\frac{\tilde h(n)}{n^{\alpha/\beta}}\Bigr), \]
which finishes the proof. $\square$

For $\mathbb{P}$ the uniform measure on an isotropic convex body $K$ we obtain as a corollary:

Theorem 3.5. Let $K\subset\mathbb{R}^n$ be an isotropic convex body satisfying the concentration hypothesis (3.3). For some $c,c_1$ depending on the constants in (3.3):
1) If $k\le\frac{c\,\tilde h(n)}{\log(1+L_K)}$, then
\[ \sup_{|s|\in I}\Bigl|\frac{\varphi_K^k(s)}{\gamma_K^k(s)}-1\Bigr|\le c_1\frac{\tilde h(n)}{h(n)},\qquad\text{where } I=[0,L_K\sqrt{\tilde h(n)}]. \]
2) If $k\le\frac{c\,\tilde h(n)}{\log^2n}$, then
\[ \sup_{t\ge 0}\Bigl|\frac{F_K^k(t)}{\Gamma_K^k(t)}-1\Bigr|\le c_1\frac{\tilde h(n)}{h(n)}. \]

Proof. Statement 1) is a consequence of Theorem 3.1, since in our case $M_{\mathbb{P}}=1$ and $M_2=L_K$. Part 2) follows from 1). Indeed, by Lemmas 2.5 ii) and 1.5 ii),
\[ \bigl|F_K(t,E)-\Gamma_K^k(t)\bigr|=\bigl|(1-F_K(t,E))-(1-\Gamma_K^k(t))\bigr|\le e^{-\frac{c\,t}{\sqrt k\,L_K}}+2^{k/2}e^{-\frac{t^2}{4L_K^2}}. \]
Therefore, in the range $t\ge C\log n\,\sqrt k\,L_K$ (for suitable $C>0$) we trivially have $\bigl|F_K(t,E)-\Gamma_K^k(t)\bigr|\le\frac2n$ for every $k$-dimensional subspace $E$. For that range of $t$, Lemma 1.5 ii) gives $\Gamma_K^k(t)\ge c_0>0$ and so $\bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-1\bigr|\le\frac{c_1}{n}$. Finally, observe that $t\le C\log n\,\sqrt k\,L_K$ implies $t\le L_K\sqrt{\tilde h(n)}$, and so by integrating 1) and formula (1.2) we have the result. $\square$


Example. It is proved in [So] that the uniform probability on the unit ball of $\ell_p^n$, $p\ge 1$, verifies the concentration hypothesis (3.3) for $\alpha=\frac12\min\{p,2\}$ and $\beta=\min\{p,2\}$. So $h(n)=n^{1/2}$ and, by taking $\tilde h(n)=o(h(n))$, Theorem 3.5 1) implies that $\sup_{|s|\in I}\bigl|\frac{\varphi_K^k(s)}{\gamma_K^k(s)}-1\bigr|\to 0$ as $n\to\infty$ for $I=[0,o(n^{1/4})]$ and $k=o(n^{1/2})$ (since in this case $L_K$ is uniformly bounded by a constant depending only on $p$).

If we study the behaviour at $t=0$ of Theorem 3.5 2), we obtain the following strong form of reverse Hölder inequality in the spirit of [V].

Corollary 3.6. Let $K\subset\mathbb{R}^n$ be in the family of isotropic convex bodies satisfying (3.3). If $k=o\bigl(\frac{h(n)}{\log^2n}\bigr)$, then
\[ \Bigl(\int_K\frac{dx}{|x|^k}\Bigr)^{1/k}\Bigl(\int_K|x|^2\,dx\Bigr)^{1/2}\to 1\qquad\text{as } n\to\infty. \]

Proof. By Remark 2.4 and L'Hôpital's rule,
\[ \lim_{t\to 0^+}\frac{F_K(t,E)}{\Gamma_K^k(t)}=\lim_{t\to 0^+}\frac{f_K(t,E)}{(\sqrt{2\pi}L_K)^{-k}\,e^{-\frac{t^2}{2L_K^2}}}=(\sqrt{2\pi}L_K)^k\,|E^{\perp}\cap K|_{n-k}. \]
Therefore,
\[ \lim_{t\to 0^+}\frac{F_K^k(t)}{\Gamma_K^k(t)}=(\sqrt{2\pi}L_K)^k\int_{G_{n,n-k}}|E\cap K|_{n-k}\,d\nu. \]
But this is equal to
\[ (\sqrt{2\pi}L_K)^k\,\frac{\omega_{n-k}}{\omega_n}\,\tilde W_k(K)=(\sqrt{2\pi}L_K)^k\,\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_K\frac{dx}{|x|^k} \]
by the dual Kubota formula, where $\tilde W_k(K)$ denotes the $k$-th dual mixed volume of $K$ (see [BBR]). Since $L_K^2=\frac1n\int_K|x|^2\,dx$, we have
\[ \lim_{t\to 0^+}\frac{F_K^k(t)}{\Gamma_K^k(t)}=\frac{(2\pi)^{k/2}|S^{n-k-1}|}{n^{k/2}|S^{n-1}|}\Bigl(\int_K|x|^2\,dx\Bigr)^{k/2}\Bigl(\int_K\frac{dx}{|x|^k}\Bigr). \]
By Lemma 1.6 ii) and Theorem 3.5 2), the result follows. $\square$
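The Gaussian approximation of $F_K(t,E)$ studied in this section is easy to observe experimentally. The sketch below (an illustration with arbitrary parameters, not from the paper) takes $K$ the volume-1 cube, whose isotropy constant is $L_K=1/\sqrt{12}$, draws a random $E\in G_{n,k}$, and compares the empirical distribution of $|P_E(x)|$ with $\Gamma_K^k(t)$, computed as a regularized lower incomplete gamma function:

```python
import numpy as np
from math import sqrt, exp, log, lgamma

rng = np.random.default_rng(2)
n, k, N = 200, 3, 200_000
LK = 1 / sqrt(12)                       # isotropy constant of the volume-1 cube

# A random k-dimensional subspace E: orthonormal basis via QR of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

x = rng.uniform(-0.5, 0.5, size=(N, n))      # uniform samples in K = [-1/2, 1/2]^n
norms = np.linalg.norm(x @ Q, axis=1)        # |P_E(x)| for each sample

def Gamma_k(t):
    """Gamma_K^k(t): Gaussian measure (variance L_K^2 per coordinate) of t*D_k,
    i.e. the regularized lower incomplete gamma P(k/2, t^2/(2 L_K^2))."""
    s, a = t * t / (2 * LK * LK), k / 2
    term = total = 1.0 / a
    for i in range(1, 300):                  # convergent series for P(a, s)
        term *= s / (a + i)
        total += term
    return exp(-s + a * log(s) - lgamma(a)) * total

ts = np.linspace(0.2, 2.0, 10)
err = max(abs((norms <= t).mean() - Gamma_k(t)) for t in ts)
print(err)    # small: F_K(t, E) is close to Gamma_K^k(t) for a typical E
```

Here the deviation combines the genuine distance to the Gaussian (which the theorems of this section bound) with plain Monte Carlo noise.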

3.2. Gaussian behaviour of a typical subspace. The main tool of this subsection is the concentration of measure phenomenon on the space $G_{n,k}$, equipped with its Haar probability and the distance $\|P_{E_1}-P_{E_2}\|_{HS}$, $E_1,E_2\in G_{n,k}$, where $P_E$ is the orthogonal projection onto $E$. Recall that the modulus of continuity of a continuous $f\colon G_{n,k}\to\mathbb{R}$ is
\[ \omega(a)=\sup_{\|P_{E_1}-P_{E_2}\|_{HS}\le a}|f(E_1)-f(E_2)|,\qquad a>0. \]


Theorem 3.7 (Concentration of measure). Denote by $\nu$ the Haar probability on $G_{n,k}$ and let $f\colon G_{n,k}\to\mathbb{R}$ be continuous. There exist absolute constants $c_1,c_2>0$ such that for every $a>0$,
\[ \nu\bigl(\{E\in G_{n,k} ;\ |f(E)-\mathbb{E}(f(E))|>\omega(a)\}\bigr)\le c_1\exp(-c_2na^2). \]

Proof. The inequality above can be stated with $G_{n,k}$ equipped with the distance
\[ d(E_1,E_2)=\min\Bigl\{\Bigl(\sum_{j=1}^k|u_j-v_j|^2\Bigr)^{1/2}\ \Big|\ (u_j),(v_j)\text{ orthonormal bases of }E_1,E_2\Bigr\} \]
for $E_1,E_2\in G_{n,k}$ (see [MS]). In order to finish the proof we show that $\|P_{E_1}-P_{E_2}\|_{HS}\le\sqrt2\,d(E_1,E_2)$. Indeed, for any orthonormal bases $(u_j)$, $(v_j)$ of $E_1,E_2$ we write $P_{E_1}=\sum_{j=1}^ku_j\otimes u_j$ and $P_{E_2}=\sum_{i=1}^kv_i\otimes v_i$, and by definition
\[ \|P_{E_1}-P_{E_2}\|_{HS}^2=2k-2\sum_{i,j=1}^k\langle u_j,v_i\rangle^2\le 2\sum_{j=1}^k\bigl(1-\langle u_j,v_j\rangle^2\bigr)\le 2\sum_{j=1}^k|u_j-v_j|^2, \]
since $1-\langle u_j,v_j\rangle^2\le 2(1-\langle u_j,v_j\rangle)=|u_j-v_j|^2$. $\square$

We will compute the modulus of continuity of $E\mapsto\frac{F_K(t,E)}{\Gamma_K^k(t)}$:
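The Hilbert–Schmidt identity and the comparison with the basis distance used above are easy to verify numerically (an illustrative sketch with arbitrary dimensions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 4

def random_basis():
    # Orthonormal basis of a random k-dimensional subspace of R^n.
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return Q

U, V = random_basis(), random_basis()
P1, P2 = U @ U.T, V @ V.T

hs = np.linalg.norm(P1 - P2)                       # ||P_{E1} - P_{E2}||_HS
# identity: ||P_{E1} - P_{E2}||_HS^2 = 2k - 2 * sum_{i,j} <u_j, v_i>^2
identity = np.sqrt(2 * k - 2 * np.sum((U.T @ V) ** 2))
print(abs(hs - identity))                          # ~ 0 up to floating point

# bound ||P_{E1} - P_{E2}||_HS <= sqrt(2) * (sum_j |u_j - v_j|^2)^{1/2},
# valid for ANY pair of orthonormal bases (here the particular pair U, V)
d = np.linalg.norm(U - V)
assert hs <= np.sqrt(2) * d + 1e-9
```

Note that the $\sqrt2$ bound holds for every pair of bases, hence in particular for the minimizing pair defining $d(E_1,E_2)$.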

Lemma 3.8. Let $0<\varepsilon<1$, $t>0$ and $K\subset\mathbb{R}^n$ an isotropic convex body. Then for every $E_1,E_2\in G_{n,k}$ and some universal constant $c>0$ we have
\[ \Bigl|\frac{F_K(t,E_1)}{\Gamma_K^k(t)}-\frac{F_K(t,E_2)}{\Gamma_K^k(t)}\Bigr|\le\varepsilon \]
provided that $\|P_{E_1}-P_{E_2}\|_{HS}\le a$, where
\[ a=\begin{cases}\dfrac{c^k\varepsilon^2t^2}{L_K^2L_k^k}, & \text{if } t\le 2\sqrt k\,L_K;\\[2mm] \dfrac{c\varepsilon^2}{t^{k-1}}, & \text{otherwise.}\end{cases} \]

Proof. Let $0<\delta(<t)$ be fixed later. By the triangle inequality,
\[ F_K(t,E_2)-F_K(t,E_1)\le F_K(t+\delta,E_1)-F_K(t,E_1)+\bigl|\{x\in K,\ |(P_{E_1}-P_{E_2})(x)|\ge\delta\}\bigr|. \]
Let us estimate each summand. By Remark 2.4,
\[ F_K(t+\delta,E_1)-F_K(t,E_1)=|S^{k-1}|\int_t^{t+\delta}r^{k-1}f_K(r,E_1)\,dr, \]
so we can apply Lemma 2.5 i):
\[ F_K(t+\delta,E_1)-F_K(t,E_1)\le|S^{k-1}|\,c^k\,\frac{L_k^k}{L_K^k}\,\frac{(t+\delta)^k-t^k}{k}. \]


By the mean value theorem, $(t+\delta)^k-t^k\le k(t+\delta)^{k-1}\delta\le k2^{k-1}t^{k-1}\delta$, so
\[ F_K(t+\delta,E_1)-F_K(t,E_1)\le|S^{k-1}|\,c^k\,\frac{L_k^k}{L_K^k}\,t^{k-1}\delta. \]
Now we compute the second summand. Repeating the arguments of Lemma 2.5 ii) with $T=P_{E_1}-P_{E_2}$ we have
\[ \bigl|\{x\in K,\ |(P_{E_1}-P_{E_2})(x)|\ge\delta\}\bigr|\le\exp\Bigl(-\frac{c_1\delta}{L_K\|P_{E_1}-P_{E_2}\|_{HS}}\Bigr). \]
Putting the estimates together and exchanging $E_1$ and $E_2$, we conclude that
\[ (3.5)\qquad \Bigl|\frac{F_K(t,E_1)}{\Gamma_K^k(t)}-\frac{F_K(t,E_2)}{\Gamma_K^k(t)}\Bigr|\le\frac{|S^{k-1}|c^kL_k^k}{L_K^k\,\Gamma_K^k(t)}\,t^{k-1}\delta+\frac{\exp\bigl(-\frac{c_1\delta}{L_Ka}\bigr)}{\Gamma_K^k(t)}. \]
If $t\ge 2\sqrt k\,L_K$, Lemma 1.5 ii) gives $\Gamma_K^k(t)\ge c_0>0$. Take $\delta=\frac{c\,\varepsilon\,L_K}{t^{k-1}}$ ($c>0$ small enough). Substituting in (3.5), together with $|S^{k-1}|\le c_0^kk^{k/2}$ (Lemma 1.4 ii)), $L_k\le c_1k^{1/4}$ ([Kl2]) and $L_K\ge c_2$, we have
\[ \Bigl|\frac{F_K(t,E_1)}{\Gamma_K^k(t)}-\frac{F_K(t,E_2)}{\Gamma_K^k(t)}\Bigr|\le\frac\varepsilon2+\exp\Bigl(-\frac{c_3\varepsilon}{t^{k-1}a}\Bigr)\qquad\text{if }\|P_{E_1}-P_{E_2}\|_{HS}\le a. \]
Finally set $a=\frac{c_3\varepsilon^2}{t^{k-1}}$, so the second summand reads $\exp(-\frac1\varepsilon)\le\frac\varepsilon2$.

If $t\le 2\sqrt k\,L_K$, Lemma 1.5 i) implies
\[ \Gamma_K^k(t)\ge\frac{t^k\omega_k}{(\sqrt{2\pi}L_K)^k}\,e^{-2k}. \]
We substitute this estimate in (3.5), and so
\[ \Bigl|\frac{F_K(t,E_1)}{\Gamma_K^k(t)}-\frac{F_K(t,E_2)}{\Gamma_K^k(t)}\Bigr|\le c^kL_k^k\,\frac{k\delta}{t}+\frac{c^kL_K^k}{t^k\omega_k}\exp\Bigl(-\frac{c\delta}{L_Ka}\Bigr). \]
We take $\delta=\frac{\varepsilon t}{2c^kkL_k^k}$, so that the first summand equals $\frac\varepsilon2$. With this choice of $\delta$, writing $u=\frac{t}{2L_K}\in[0,\sqrt k]$, the second summand becomes
\[ \frac{c_1^k}{\omega_ku^k}\exp\Bigl(-\frac{\varepsilon u}{c_2^kkL_k^ka}\Bigr). \]
Finally set $a=\frac{c^k\varepsilon^2u}{k^{3/2}L_k^k}$ for an appropriately chosen $c>1$ and substitute in the previous formula:
\[ \frac{c_1^k}{\omega_ku^k}\exp\Bigl(-\frac{c\,k^{3/2}}{\varepsilon u}\Bigr)=:h(u). \]
The maximum value of $h$ is attained at $u_0$ with $h'(u_0)=0$, that is, $u_0=\frac{c\sqrt k}{\varepsilon}$, and
\[ h(u_0)=\frac{c_1^k\varepsilon^ke^{-k}}{\omega_kc^kk^{k/2}}\le\frac\varepsilon2. \qquad\square \]



Next we apply Theorem 3.7. Recall that $c_1\frac{\tilde h(n)}{h(n)}$ is the error term in Theorem 3.5.

Lemma 3.9. Let $0<\varepsilon<1$, $t>0$, $K\subset\mathbb{R}^n$ an isotropic convex body satisfying the concentration hypothesis (3.3), and $k\le\frac{c\,\tilde h(n)}{\log^2n}$. Then
\[ \nu\Bigl\{E\in G_{n,k} ;\ \Bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-1\Bigr|>\varepsilon+c_1\frac{\tilde h(n)}{h(n)}\Bigr\}\le c_1\exp(-c_2a^2n), \]
where
\[ a=\begin{cases}\dfrac{c^k\varepsilon^2t^2}{L_K^2L_k^k}, & \text{if } t\le 2\sqrt k\,L_K;\\[2mm] \dfrac{c\varepsilon^2}{t^{k-1}}, & \text{otherwise.}\end{cases} \]

Proof. Theorem 3.5 states that
\[ \Bigl|\frac{F_K^k(t)}{\Gamma_K^k(t)}-1\Bigr|\le c_1\frac{\tilde h(n)}{h(n)}. \]
Hence,
\[ \nu\Bigl\{E ;\ \Bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-1\Bigr|>\varepsilon+c_1\frac{\tilde h(n)}{h(n)}\Bigr\}\le\nu\Bigl\{E ;\ \Bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-\frac{F_K^k(t)}{\Gamma_K^k(t)}\Bigr|>\varepsilon\Bigr\}\le c_1\exp(-c_2na^2), \]
since Lemma 3.8 reads $\omega(a)\le\varepsilon$. $\square$

In our last result of the section we pass from Lemma 3.9, valid for any fixed $t$, to a statement that holds for every $t$ simultaneously.

Lemma 3.10. Let $0<\varepsilon<\frac12$, $t_0>0$, $K\subset\mathbb{R}^n$ an isotropic convex body satisfying (3.3) and $k\le\frac{\tilde h(n)}{c\log^2n}$. Suppose $c_1\frac{\tilde h(n)}{h(n)}\le\frac12$. Then
\[ \nu\Bigl\{E\in G_{n,k} ;\ \Bigl|\frac{F_K(t,E)}{\Gamma_K^k(t)}-1\Bigr|\le 2\varepsilon+2c_1\frac{\tilde h(n)}{h(n)},\ \forall t\ge t_0\Bigr\}\ge 1-N\exp(-c\,A^2n), \]
where
\[ N\sim\Bigl(\frac{c\log n\,\sqrt k\,L_K}{t_0}\Bigr)^{\frac{c_1k}{\varepsilon}}\qquad\text{and}\qquad A=\frac{c_2^k\varepsilon^2}{k^{k/2}}\min\Bigl\{\frac{t_0^2}{L_K^2},\frac{1}{L_K^{k-1}(\log n)^{k-1}}\Bigr\}. \]
Proof. By the arguments in the proof of Theorem 3.5 √ 2), we only need to compute the probability ∀t ∈ [t0 , T ] with T = C log n kLK . Pick 0 < t0 < t1 ≤ t2 ≤ · · · ≤ tN = T in the following way ¶ i µ Y c1 ² ε ti = t0 1+ ∼ t0 i k i = 1, . . . , N 8kj j=1

AVERAGES OF k-DIMENSIONAL MARGINALS

19

˜

h(n) Write η = 2ε + 2c1 h(n) , 0 < η < 2. By Lemma 3.9,

¯ ½ ¯ ¾ N X ¯ η ¯ FK (ti , E) ¯ ¯ − 1¯ > , for some i ≤ c1 exp −(c2 na2i ) ν E; ¯ k 2 ΓK (ti ) i=0 where

  ai =



ck ε2 t2i , L2K Lkk 2 cε , tk−1 i

√ if ti ≤ 2 kLK ; otherwise .

If t ∈ [ti , ti+1 ], the fact that ¯ ¯ ¯ FK (t, E) ¯ ¯ ¯>η − 1 ¯ Γk (t) ¯ K implies that either FK (ti+1 , E) > (1 + η)ΓkK (ti )

FK (ti , E) < (1 − η)ΓkK (ti+1 ).

or

Taking into account the choice of ti , Lemma 1.5 iii) (with t = ti , δ = ti+1 − ti ) reads ´k ΓkK (ti+1 ) ³ ti+1 ´k ³ ε η ≤ 1 + ≤ eε/8 ≤ 1 + ≤ ti 8k(j + 1) 4 ΓkK (ti ) and so, by the elementary inequalities (1 + η)(1 + η4 )−1 ≥ (1 + η2 ) and (1 − η)(1 + η4 ) < 1 − η2 we have that either η FK (ti+1 , E) > (1 + )ΓkK (ti+1 ) 2 thus,

or

η FK (ti , E) < (1 − )ΓkK (ti ) 2

¯ ¯ o ¯ FK (t, E) ¯ | ¯¯ k − 1¯¯ > η, for some t ∈ [t0 , T ] ≤ ΓK (t) ¯ ¯ o ¯ FK (ti , E) ¯ η | ¯¯ k − 1¯¯ > , for some i ≤ c1 N exp −(c2 nA2 ) 2 ΓK (ti )

n ν E ∈ Gn,k n ≤ 2 ν E ∈ Gn,k

where A = min ai . By definition, 1≤i≤N

√ c1 ² c log n kLK = T = tN ∼ t0 N k That is, N∼

³ c log n√kL ´ c1 k K

t0

ε

and

A≥

ck2 2 t20 1 ε min{ } , k−1 2 k/2 LK LK (log n)k−1 k ¤

´ J. BASTERO AND J. BERNUES

20

Theorem 3.11. Let K ⊂ Rn in the family of isotropic convex bodies satisfying LK ≤ c and condition (3.3). Then, for every 0 < ε < 1 and c1 ε log n 1 ≤ k ≤ (log we have log n)2 ¯ ¯ ½ ¾ ¯ FK (t, E) ¯ ¯ ¯ ν E ∈ Gn,k ; sup ¯ k − 1¯ ≤ ε ≥ 1 − exp(−c2 n0,9 ) ΓK (t) t≥ε where c1 , c2 depend only on c and on the constants in (3.3). Proof. By hypothesis,

k ε



c1 log n (log log n)2

and ε ≥

(log log n)2 c1 log n .

We can clearly ˜

h(n) ˜ ≤ ε. choose h(n) to fulfill the hypothesis of Lemma 3.10 and moreover c1 h(n) c3

Now, direct computation N ≤ n log log n and A2 ≥ result follows.

−c

4 log log4 n log log n n log3 n

and the ¤

4. Asymptotic results on the average density and distribution 4.1. Gaussian approximation of the average density and distribution. In this section we show that, for a range of k and a class of probabilities P, the average density is uniformly close to the Gaussian density. Furthermore, if P has exponential tails on half spaces (see definition below), we can also approximate the average distribution. Recall that FP (t, E) = P{x ∈ Rn : |PE (x)| ≤ t}. Definition 4.1. Let c > 0. Denote by Pc,n the set of Borel probabilities 1/n such that σP , M2 , MP ≤ c Theorem 4.2. Let k ≤

c



log n 1

(log log n) 2 +δ

, δ > 0. Then there exist c1 > 0 (de-

pending only on c and δ) such that ∀ P ∈ Pc,n , 1) ¯ ¯ ck1 k k/2 ¯ ¯ sup ¯ϕkP (s) − γPk (s)¯ ≤ 1/(k+3) n s∈Rk

c3 t Furthermore, if P satisfies P{x ∈ Rn ; |hθ, xi| > t} ≤ c2 exp − M , for some 2 c2 , c3 > 0 and all t > 0, θ ∈ S n−1 then 2) ¯ ¯ ck4 k k/2 ¯ k ¯ sup ¯FK (t) − ΓkP (t)¯ ≤ 1/(k+3) (log n)k n t≥0 for some c4 > 0 depending only on the constants.

Proof. Observe, by straightforward computation, that the bound on k insures that the error terms in parts 1) and 2) tend to 0 as n → ∞. The proof of 1) will be done in 3 steps. Step 3 takes care of very large values of |s|, Step 2 of values of |s| near, and including, the origin and Step 1 of the remaining case. Fix c0 > 0 small enough that will be chosen below. c0 is used to separate these three steps.


Step 1. Let $k=o(n)$. There exists a constant $C>0$ such that for $0<|s|\le\frac{c_0\sqrt n}{M_{\mathbb{P}}^{1/n}}$ and every Borel probability $\mathbb{P}$ we have
\[ \bigl|\varphi_{\mathbb{P}}^k(s)-\gamma_{\mathbb{P}}^k(s)\bigr|\le C^kk^{k/2}\Bigl(\frac{\sigma_{\mathbb{P}}M_2}{\sqrt n\,|s|^{k+1}}+\frac{1}{n\,M_{\mathbb{P}}^{k/n}}\Bigr). \]

Proof of Step 1. By formula (3.4),
\[ \varphi_{\mathbb{P}}^k(s)-\gamma_{\mathbb{P}}^k(s)=\bigl(g_{|s|}(\sqrt nM_2)-\gamma_{\mathbb{P}}^k(s)\bigr)-g_{|s|}(\sqrt nM_2)\,\mathbb{P}\{|x|<|s|\}+\int_{[|s|,\infty)}\bigl(g_{|s|}(r)-g_{|s|}(\sqrt nM_2)\bigr)\,d\mathbb{P}(r). \]
We compute the second and third summands with the aid of the following lemmas.

Lemma 4.3. Let $k=o(n)$. There exists an absolute constant $C>0$ such that
i) $\displaystyle\sup_{r\ge t}g_t(r)\le\frac{1}{t^k}\,\frac{C^kk^{k/2}}{(2\pi e)^{k/2}}$;
ii) $\displaystyle\sup_{r\ge t}|g_t'(r)|\le\frac{1}{\sqrt n\,t^{k+1}}\,\frac{C^kk^{(k+3)/2}}{(2\pi e)^{k/2}}$.

Proof. By Lemma 1.4, $\frac{|S^{n-k-1}|}{|S^{n-1}|}\le C\frac{n^{k/2}}{(2\pi)^{k/2}}$. Proceed as in [BK]. $\square$

Lemma 4.4. Let $k=o(n)$. There exists $C>0$ such that for all $s\in\mathbb{R}^k$ with $0<|s|\le\sqrt nM_2$:
i) $\displaystyle g_{|s|}\bigl(\sqrt nM_2\bigr)\,\mathbb{P}\{|x|<|s|\}\le\frac{C^kk^{k/2}}{(2\pi e)^{k/2}}\,M_{\mathbb{P}}\,\omega_n\,|s|^{n-k}$;
ii) $\displaystyle\int_{\{r\ge|s|\}}\bigl|g_{|s|}(r)-g_{|s|}(\sqrt nM_2)\bigr|\,d\mathbb{P}(r)\le\frac{C^kk^{k/2}\,\sigma_{\mathbb{P}}M_2}{\sqrt n\,|s|^{k+1}}$.

Proof. i) By Lemma 4.3, $g_{|s|}(\sqrt nM_2)\le\frac{C^kk^{k/2}}{|s|^k(2\pi e)^{k/2}}$ and, by the definition of $M_{\mathbb{P}}$, $\mathbb{P}\{|x|\le|s|\}\le M_{\mathbb{P}}\,\omega_n\,|s|^n$.
ii) By the mean value theorem,
\[ \int_{\{|x|\ge|s|\}}\bigl|g_{|s|}(|x|)-g_{|s|}(\sqrt nM_2)\bigr|\,d\mathbb{P}(x)\le\sup_{r\ge|s|}|g_{|s|}'(r)|\int_{\mathbb{R}^n}\bigl||x|-\sqrt nM_2\bigr|\,d\mathbb{P}(x). \]
Now use Lemma 4.3 and the inequality $\int_{\mathbb{R}^n}\bigl||x|-\sqrt nM_2\bigr|\,d\mathbb{P}(x)\le\sigma_{\mathbb{P}}M_2$ (see [BK]). $\square$

J. BASTERO AND J. BERNUÉS



Observe that, by suitably choosing $c_0$, we have: a) $|s|\le \frac{c_0\sqrt n}{M_P^{1/n}}$ implies $|s|\le\sqrt n\,M_2$, by Hensley's Lemma 1.3, and b) the second error term in the previous lemma absorbs the first one.

It remains to estimate the first summand, $\big|\gamma_P^k(s)-g_{|s|}(\sqrt n M_2)\big|$, where
$$g_{|s|}(\sqrt n M_2)=\frac{|S^{n-k-1}|}{|S^{n-1}|}\,\frac1{n^{k/2}M_2^k}\Big(1-\frac{|s|^2}{nM_2^2}\Big)^{\frac{n-k-2}2}.$$
Write $|s|^2=2M_2^2u$. Then $0<|s|\le\sqrt n\,M_2$ is equivalent to $0<u\le n/2$ and so, for such values of u, we need to estimate
$$\frac1{(2\pi)^{k/2}M_2^k}\,\bigg|\frac{|S^{n-k-1}|}{|S^{n-1}|}\,\frac{(2\pi)^{k/2}}{n^{k/2}}\Big(1-\frac{2u}n\Big)^{\frac{n-k-2}2}-e^{-u}\bigg|.$$
By Lemma 1.3 we have $\frac1{(2\pi)^{k/2}M_2^k}\le C^k M_P^{k/n}$. Finally, add and subtract $\frac{|S^{n-k-1}|}{|S^{n-1}|}\,\frac{(2\pi)^{k/2}}{n^{k/2}}\,e^{-u}$ and use Lemma 1.6 to conclude the proof of Step 1.

Step 2. Let $P\in\mathcal P_{c,n}$ and k = o(n). Then
$$\big|\varphi_P^k(s)-\gamma_P^k(s)\big|\le \frac{c_1^k k^{k/2}}{n^{1/(k+3)}}$$
for all $|s|\le \frac{c_0\sqrt n}{M_P^{1/n}}$ ($c_1$ depending only on c).

Proof of Step 2. By Lemma 1.3 we also have $M_2,\,M_P^{1/n}\ge c_2>0$. Let $(s_n)$ be a sequence such that $\sqrt n\,|s_n|^{k+1}=n^{\frac1{k+3}}$, or equivalently, $|s_n|=n^{-\frac1{2(k+3)}}$. For $|s|\ge|s_n|$ we have, by Step 1,
$$\big|\varphi_P^k(s)-\gamma_P^k(s)\big|\le C^k k^{k/2}\Big(\frac{\sigma_P M_2}{\sqrt n\,|s|^{k+1}}+\frac{1}{n\,M_P^{k/n}}\Big)\le c_1^k k^{k/2}\,n^{-\frac1{k+3}}.$$
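The choice of the threshold sequence $(s_n)$ can be checked directly. This small script is our own illustration, not from the paper; it confirms that $|s_n| = n^{-1/(2(k+3))}$ balances $\sqrt n\,|s_n|^{k+1}$ against $n^{1/(k+3)}$.

```python
import math

# Step 2 chooses |s_n| = n^{-1/(2(k+3))} exactly so that
# sqrt(n) * |s_n|^{k+1} = n^{1/(k+3)}.
for n, k in [(100, 2), (10 ** 4, 5), (10 ** 6, 8)]:
    s_n = n ** (-1 / (2 * (k + 3)))
    lhs = math.sqrt(n) * s_n ** (k + 1)
    rhs = n ** (1 / (k + 3))
    assert abs(lhs - rhs) / rhs < 1e-9
```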

If $0\le|s|\le|s_n|$, write
$$\big|\varphi_P^k(s)-\gamma_P^k(s)\big|\le \big|\varphi_P^k(s)-\varphi_P^k(s_n)\big|+\big|\varphi_P^k(s_n)-\gamma_P^k(s_n)\big|+\big|\gamma_P^k(s_n)-\gamma_P^k(s)\big|.$$
The second summand was estimated above. As for the third one, the inequality $|e^{-x}-e^{-y}|\le|x-y|$, $x\ge y>0$, implies
$$\big|\gamma_P^k(s)-\gamma_P^k(s_n)\big|\le \frac{|s_n|^2}{2\,(2\pi)^{k/2}M_2^{k+2}}\le \frac{c_1^k}{n^{\frac1{k+3}}}.$$
For the first summand we use the following lemma.

Lemma 4.5. Let n ≥ 2k. There exist $c_0, c_1>0$ such that for all $s\in\mathbb R^k$ with $|s|\le \frac{c_0\sqrt n}{M_P^{1/n}}$,
$$\big|\varphi_P^k(s)-\varphi_P^k(0)\big|\le c_1^k\,n^{k/2}\Big(M_P\,\omega_n\,|s|^{n-k}+M_P^{\frac{k+2}n}\,|s|^2\Big).$$


This finishes the proof of Step 2, since the estimate of the remaining first summand readily follows from
$$\big|\varphi_P^k(s)-\varphi_P^k(s_n)\big|\le \big|\varphi_P^k(s_n)-\varphi_P^k(0)\big|+\big|\varphi_P^k(s)-\varphi_P^k(0)\big|.$$

Proof of the Lemma. By definition, $\big|\varphi_P^k(s)-\varphi_P^k(0)\big|$ equals
$$\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{\{|x|\le|s|\}}\frac{dP(x)}{|x|^k}+\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{\{|x|\ge|s|\}}\frac1{|x|^k}\bigg(1-\Big(1-\frac{|s|^2}{|x|^2}\Big)^{\frac{n-k-2}2}\bigg)\,dP(x).$$

We estimate the first summand. By Fubini's theorem,
$$\int_{\{|x|\le|s|\}}\frac{dP(x)}{|x|^k}=\int_0^\infty P\Big\{|x|\le|s|,\ \frac1{|x|^k}>t\Big\}\,dt=\int_0^{\frac1{|s|^k}}+\int_{\frac1{|s|^k}}^{\infty}.$$
The first integral is equal to $\int_0^{1/|s|^k}P\{|x|\le|s|\}\,dt$ and, by the definition of $M_P$, it is bounded by $M_P\,\omega_n\,|s|^{n-k}$. The second integral is equal to
$$\int_{\frac1{|s|^k}}^{\infty}P\Big\{\frac1{|x|^k}>t\Big\}\,dt\le M_P\,\omega_n\int_{\frac1{|s|^k}}^{\infty}t^{-n/k}\,dt=\frac{k}{n-k}\,M_P\,\omega_n\,|s|^{n-k}.$$
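The closed form of the last tail integral is a standard power-tail computation. The following sketch is our own check (not from the paper); it compares the closed form against a crude midpoint-rule quadrature for one choice of parameters.

```python
import math

# Closed form used above:
#   int_{|s|^{-k}}^{infinity} t^{-n/k} dt = (k / (n - k)) * |s|^{n-k}
def tail_integral_numeric(a, p, steps=100000, cutoff=50.0):
    # midpoint rule on [a, cutoff*a]; for large p the remainder beyond is negligible
    h = (cutoff * a - a) / steps
    return sum(((a + (i + 0.5) * h) ** (-p)) * h for i in range(steps))

n, k, s = 20, 2, 0.7
a, p = s ** (-k), n / k
closed = (k / (n - k)) * s ** (n - k)
numeric = tail_integral_numeric(a, p)
assert closed > 0
assert abs(closed - numeric) / closed < 1e-4
```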

Therefore, by Lemma 1.4,
$$\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{\{|x|\le|s|\}}\frac{dP(x)}{|x|^k}\le \frac{n}{n-k}\,\frac{c^k n^{k/2}}{(2\pi)^{k/2}}\,M_P\,\omega_n\,|s|^{n-k}\le c^k n^{k/2}\,M_P\,\omega_n\,|s|^{n-k}.$$

Next we estimate the second summand. Using in the integrand the elementary inequality $|a^p-b^p|\le p\,|a-b|$, $a,b\in[0,1]$, with $p=\frac{n-k-2}2$, we conclude that the second summand is bounded by
$$\frac{|S^{n-k-1}|}{|S^{n-1}|}\,\frac{n-k-2}2\int_{\{|x|\ge|s|\}}\frac{|s|^2}{|x|^{k+2}}\,dP(x)=\frac{(n-k-2)\,|S^{n-k-1}|}{2\,|S^{n-k-3}|}\,|s|^2\,\varphi_P^{k+2}(0).$$

By Proposition 4.7 below we have, for $|s|\le c\,(\omega_n M_P)^{-1/n}\sim \frac{c_0\sqrt n}{M_P^{1/n}}$,
$$\varphi_P^{k+2}(0)\le c_1\,M_P^{\frac{k+2}n}\,\omega_{n-k-2}\,\omega_n^{\frac{k+2}n-1}\le c_1^k\,M_P^{\frac{k+2}n}.$$
Finally,
$$(n-k-2)\,\frac{|S^{n-k-1}|}{|S^{n-k-3}|}=\pi(n-k)\,\frac{\Gamma\big(\frac{n-k-2}2\big)}{\Gamma\big(\frac{n-k}2\big)}\le c_2$$
by Lemma 1.4, and putting the estimates together, the second summand is bounded by $c_1^k\,|s|^2\,M_P^{(k+2)/n}$, which finishes the proof of the lemma. □


Step 3. For every probability P with $M_P^{1/n}\le c$, $|s|\ge \frac{c_0\sqrt n}{M_P^{1/n}}$ and $k\le \frac{n}{\log n}$,
$$\big|\varphi_P^k(s)-\gamma_P^k(s)\big|\le \frac{c_1^k k^{k/2}}{n^{k/2}},$$
where $c_1>0$ depends only on c.

Proof of Step 3. By Lemma 1.3, $M_2\ge c_2>0$ (depending on c) and trivially
$$\gamma_P^k(s)\le c_1^k\exp(-c_3 n)\le \frac{c_1^k}{n^{k/2}}.$$
On the other hand, by Lemma 4.3,
$$\varphi_P^k(s)\le \max_{|s|\le|x|}g_{|s|}(x)\ P\{x:\ |x|\ge|s|\}\le \frac{C^k k^{k/2}}{(2\pi e)^{k/2}\,|s|^k}\le \frac{c_1^k k^{k/2}}{n^{k/2}}.$$

This finishes the proof of 1).

Now we prove 2). Let $t\le C\sqrt k\,M_2\log n$ (for a suitable C > 0). Integrating the result in 1) over $\{|s|\le t\}$ and using identity (1.2) and Lemma 1.4 we get
$$\big|F_P^k(t)-\Gamma_P^k(t)\big|\le \frac{c_1^k}{n^{1/(k+3)}}\,t^k\le \frac{c_2^k k^{k/2}}{n^{1/(k+3)}}\,(\log n)^k.$$
In the range $t\ge C\sqrt k\,M_2\log n$ we proceed as in Theorem 3.5 2). Observe that if we write $P_E(x)=\sum_{i=1}^k\langle x,u_i\rangle u_i$ for some orthonormal basis $(u_i)$ of E, then
$$1-F_P(t,E)=P\Big\{\sum_{i=1}^k|\langle x,u_i\rangle|^2>t^2\Big\}\le P\Big\{\sqrt k\,\max_{1\le i\le k}|\langle x,u_i\rangle|>t\Big\}.$$
And so, by the hypothesis, $1-F_P(t,E)\le c_2\,k\,\exp\big(-\frac{c_3 t}{\sqrt k\,M_2}\big)$. By this estimate and Lemma 1.5 ii),
$$\big|F_P(t,E)-\Gamma_P^k(t)\big|=\big|(1-F_P(t,E))-(1-\Gamma_P^k(t))\big|\le c_2\,k\,e^{-\frac{c_3 t}{\sqrt k\,M_2}}+2^{k/2}\,e^{-\frac{t^2}{4M_2^2}},$$
and we conclude, as in Theorem 3.5 2), that $\big|F_P(t,E)-\Gamma_P^k(t)\big|\le \frac{c_4^k k^{k/2}}{n^{1/(k+3)}}(\log n)^k$ for every k-dimensional subspace E. □

Remark 4.6. The hypotheses on $M_P$, $M_2$ and $\sigma_P$ are necessary, due to the behaviour at s = 0. Indeed, consider the probability $P=\frac12\sigma_{n-1}+\frac12\tilde\sigma_{n-1}$, where $\tilde\sigma_{n-1}$ is the Haar probability on $2S^{n-1}$. Straightforward computations show that $M_2\sim c\,n^{-1/2}$, $M_P^{1/n}\sim c\,n^{1/2}$ and $\sigma_P\sim c\sqrt n$, while $\big|\varphi_P^k(0)-\gamma_P^k(0)\big|\sim c\,n^{k/2}$, which tends to +∞ as n → ∞.
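The scalings claimed in Remark 4.6 for the two-sphere mixture can be reproduced numerically. The sketch below is our own illustration: it checks that $M_2\sqrt n=\sqrt{5/2}$ exactly, and that the lower bound $(2\omega_n)^{-1/n}$ for $M_P^{1/n}$ (coming from $P\{|x|\le 1\}=1/2$) grows like $\sqrt n$.

```python
import math

# P = (1/2) * (uniform on S^{n-1}) + (1/2) * (uniform on 2 S^{n-1}).
# Then E|x|^2 = (1 + 4)/2, so M_2 = sqrt(5/(2n)), while P{|x| <= 1} = 1/2
# forces M_P >= 1/(2 omega_n), i.e. M_P^{1/n} of order sqrt(n).
def log_omega(n):
    # log of omega_n = pi^{n/2} / Gamma(n/2 + 1), the unit-ball volume
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

for n in [100, 1000, 10000]:
    M2 = math.sqrt(5 / (2 * n))
    assert abs(M2 * math.sqrt(n) - math.sqrt(2.5)) < 1e-9
    # lower bound for M_P^{1/n}, normalised by sqrt(n): tends to 1/sqrt(2*pi*e)
    mp_root = math.exp((-math.log(2) - log_omega(n)) / n)
    assert 0.15 < mp_root / math.sqrt(n) < 0.35
```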

Examples. We give some examples with $\sigma_P$, $M_2$, $M_P^{1/n}$ uniformly bounded.


1. Let P be the uniform measure on K, the normalised unit ball of the space $\ell_p^n$, p > 0. Clearly $M_P=1$. The parameters $M_2$ ($=L_K$) and $\sigma_K$ are uniformly bounded in n, as shown in [ABP] (in that paper for p ≥ 1, but by similar arguments also for 0 < p < 1).

2. Let P be a Borel probability on R with finite fourth moment. Consider the product measure $\tilde P=P\otimes\cdots\otimes P$ on $\mathbb R^n$ and suppose $M_{\tilde P}=1$. A simple computation shows that $M_2(\tilde P)=M_2(P)$ and $\sigma_{\tilde P}=\sigma_P$.

3. Consider the density function on $\mathbb R^n$ given by f(|x|), where f : R → [0, ∞) is an even log-concave function. Then $M_P=f(0)$ and, by Lemma 2.6 in [Kl], $\sigma_P$ and $M_2$ are bounded by an absolute constant.

4. Let $f(x)=\exp(-a_p|x|^p)$, 0 < p < 1, be a density function on $\mathbb R^n$. Then $M_P=1$ and $\sigma_P$, $M_2$ are bounded by constants depending only on p.

4.2. Upper bounds for a fast growth of k. A Gaussian behaviour for large k is not expected: consider the case $K=\omega_n^{-1/n}D_n$. We have
$$\varphi_K^k(s)=\begin{cases}\omega_{n-k}\,\omega_n^{\frac{k-n}n}\Big(1-|s|^2\omega_n^{2/n}\Big)^{\frac{n-k}2-1}&\text{for }|s|\le\omega_n^{-1/n},\\ 0&\text{otherwise.}\end{cases}$$
For k = n − ℓ, ℓ fixed, or k = (1 − λ)n, 0 < λ < 1, the asymptotic behaviour of $\varphi_K^k(s)$ is the following:

- If k = n − ℓ, ℓ fixed, then the equivalence $\omega_{n-k}\,\omega_n^{\frac{k-n}n}=\omega_\ell\,\omega_n^{-\ell/n}\sim \omega_\ell\,n^{\ell/2}(2\pi e)^{-\ell/2}$ implies $\varphi_K^{n-\ell}(s)\,n^{-\ell/2}\to\omega_\ell\,(2\pi e)^{-\ell/2}$.

- If k = (1 − λ)n, 0 < λ < 1, we have $\omega_{n-k}\,\omega_n^{\frac{k-n}n}\sim \dfrac{c(\lambda)}{\lambda^{\lambda n/2}\,n^{(1-\lambda)/2}}$, which implies $\varphi_K^{(1-\lambda)n}(s)\,\lambda^{\lambda n/2}\,n^{(1-\lambda)/2}\to c(\lambda)\,e^{-\pi e\lambda|s|^2}$ for a constant c(λ) > 0 depending only on λ.
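The asymptotics for central sections of the normalised ball can be tested numerically. The sketch below is our own check (not part of the paper): for ℓ = 2 it evaluates $\varphi_K^{n-\ell}(0)\,n^{-\ell/2}$, using $\varphi_K^k(0)=\omega_{n-k}\,\omega_n^{(k-n)/n}$, and compares it with the limit $\omega_\ell\,(2\pi e)^{-\ell/2}$.

```python
import math

def log_omega(n):
    # log of the volume omega_n = pi^{n/2} / Gamma(n/2 + 1) of the unit ball
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

def phi_ball_at_0(n, k):
    # phi_K^k(0) = omega_{n-k} * omega_n^{(k-n)/n} for K the volume-one ball
    return math.exp(log_omega(n - k) + ((k - n) / n) * log_omega(n))

ell = 2
limit = math.exp(log_omega(ell)) * (2 * math.pi * math.e) ** (-ell / 2)
vals = [phi_ball_at_0(n, n - ell) * n ** (-ell / 2) for n in (100, 1000, 10000)]
# the normalised section function approaches omega_ell * (2*pi*e)^{-ell/2}
assert abs(vals[-1] - limit) / limit < 0.01
assert abs(vals[-1] - limit) < abs(vals[0] - limit)
```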

For general probabilities we find the following upper bounds on $\varphi_P^k(s)$.

Proposition 4.7. Let P be a probability measure on $\mathbb R^n$ with $M_P<\infty$. Then there exist numerical constants c, C > 0 such that:

i) If 1 ≤ k ≤ n − 2,
$$\varphi_P^k(s)\le C\,M_P^{k/n}\,\frac{\omega_{n-k}}{\omega_n^{1-\frac kn}}\Bigg(1-\frac kn\,\frac{|s|^{n-k}}{(\omega_n M_P)^{\frac kn-1}}\Bigg)$$
whenever $|s|\le\big(\frac kn\big)^{1/(n-k)}(\omega_n M_P)^{-1/n}$.

ii) If k = n − 1 and P has a bounded density f,
$$\varphi_P^{n-1}(s)\le C\,\|f\|_\infty$$
whenever $|s|\le c\,\sqrt n\,\|f\|_\infty^{-1/n}$.


Proof. i) Case 1 ≤ k ≤ n − 2. Recall
$$\varphi_P^k(s)=\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{\{|s|\le|x|\}}\Big(1-\frac{|s|^2}{|x|^2}\Big)^{\frac{n-k-2}2}\frac{dP(x)}{|x|^k}\le \frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{\{|s|\le|x|\}}\frac{dP(x)}{|x|^k}.$$

Let A ≥ |s| to be chosen later. Then
$$\varphi_P^k(s)\le \frac{|S^{n-k-1}|}{|S^{n-1}|}\Bigg(\int_{|s|\le|x|\le A}\frac{dP(x)}{|x|^k}+\int_{A\le|x|}\frac{dP(x)}{|x|^k}\Bigg)\le \frac{|S^{n-k-1}|}{|S^{n-1}|}\Bigg(\int_{|s|\le|x|\le A}\frac{dP(x)}{|x|^k}+\frac1{A^k}\Bigg).$$
Fix I > 1 and let $N_s$ be a natural number such that $\frac{A}{I^{N_s+1}}\le|s|<\frac{A}{I^{N_s}}$. Since $\int_{tD_n}dP(x)\le M_P\,t^n\,\omega_n$ for all t > 0, we have

$$\begin{aligned}\int_{|s|\le|x|\le A}\frac{dP(x)}{|x|^k}&\le \sum_{m=0}^{N_s}\int_{\frac A{I^{m+1}}\le|x|<\frac A{I^m}}\frac{dP(x)}{|x|^k}\le \sum_{m=0}^{N_s}\Big(\frac{I^{m+1}}A\Big)^k\int_{|x|\le \frac A{I^m}}dP(x)\\ &\le \sum_{m=0}^{N_s}\Big(\frac{I^{m+1}}A\Big)^k M_P\,\omega_n\Big(\frac A{I^m}\Big)^n = I^k A^{n-k}\,\omega_n M_P\sum_{m=0}^{N_s}\Big(\frac1{I^{n-k}}\Big)^m\\ &\le I^k A^{n-k}\,\omega_n M_P\Big(1-\Big(\frac1{I^{n-k}}\Big)^{N_s+1}\Big)\Big(1-\frac1{I^{n-k}}\Big)^{-1}\\ &\le I^k\Big(1-\frac1{I^{n-k}}\Big)^{-1}A^{n-k}\,\omega_n M_P\Big(1-\Big(\frac{|s|}{IA}\Big)^{n-k}\Big).\end{aligned}$$

We choose $I=(n/k)^{1/(n-k)}$ and get
$$\int_{|s|\le|x|\le A}\frac{dP(x)}{|x|^k}\le \frac k{n-k}\Big(\frac nk\Big)^{\frac n{n-k}}A^{n-k}\,\omega_n M_P\Bigg(1-\Big(\frac{|s|}A\Big)^{n-k}\frac kn\Bigg).$$
We now optimize by taking $A=(k/n)^{1/(n-k)}(\omega_n M_P)^{-1/n}$, whenever $|s|\le (k/n)^{1/(n-k)}(\omega_n M_P)^{-1/n}$, and we arrive at the result, taking also into account that $|S^{m-1}|=m\,\omega_m$.
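The shell decomposition above rests on a geometric-series identity that can be verified exactly. The following sketch is our own check, with arbitrary sample values of n, k, N and A.

```python
import math

# The dyadic-shell estimate in the proof of Proposition 4.7 i):
#   sum_{m=0}^{N} (I^{m+1}/A)^k * (A/I^m)^n
#     == I^k * A^{n-k} * sum_{m=0}^{N} I^{-m(n-k)}
#     == I^k * A^{n-k} * (1 - I^{-(N+1)(n-k)}) / (1 - I^{-(n-k)})
n, k, N, A = 12, 3, 7, 2.0
I = (n / k) ** (1 / (n - k))  # the choice of I made in the proof
lhs = sum((I ** (m + 1) / A) ** k * (A / I ** m) ** n for m in range(N + 1))
geom = (1 - I ** (-(N + 1) * (n - k))) / (1 - I ** (-(n - k)))
rhs = I ** k * A ** (n - k) * geom
assert lhs > 0
assert abs(lhs - rhs) / rhs < 1e-12
```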


ii) Case k = n − 1. We have
$$\varphi_P^{n-1}(s)=\frac2{|S^{n-1}|}\int_{\{|s|\le|x|\}}\frac{f(x)\,dx}{|x|^{n-2}\sqrt{|x|^2-|s|^2}}\le \frac2{|S^{n-1}|}\Bigg(\int_{|s|\le|x|\le|s|+A}+\int_{|s|+A\le|x|}\Bigg)\le \frac2{|S^{n-1}|}\Bigg(|S^{n-1}|\,\|f\|_\infty\sqrt{(|s|+A)^2-|s|^2}+\frac1{(|s|+A)^{n-2}\sqrt{2|s|A+A^2}}\Bigg).$$
Assume |s| ≤ A; then
$$\varphi_P^{n-1}(s)\le 2\Big(\sqrt3\,A\,\|f\|_\infty+\frac1{|S^{n-1}|\,A^{n-1}}\Big).$$
We optimise by taking $A=\Big(\dfrac{n-1}{\sqrt3\,|S^{n-1}|\,\|f\|_\infty}\Big)^{1/n}$ and then
$$\varphi_P^{n-1}(s)\le \frac{2\,(\sqrt3\,\|f\|_\infty)^{1-1/n}}{|S^{n-1}|^{1/n}}\Big((n-1)^{1/n}+(n-1)^{-(n-1)/n}\Big)\le 2\sqrt3\,\|f\|_\infty$$
whenever $|s|\le C\sqrt n\,\|f\|_\infty^{-1/n}$ for some absolute constant C > 0. □

Remark 4.8. Our result i) gives (assume $M_P=1$ for simplicity) an upper bound in the range $|s|\le\big(\frac kn\big)^{1/(n-k)}\omega_n^{-1/n}$ ($\le c\sqrt n$). Comparing with the trivial estimate
$$\varphi_P^k(s)\le \Big(1-\frac kn\Big)\,\omega_{n-k}\,\omega_n^{-1}\,\frac1{|s|^k},$$
we conclude that in the range $|s|\ge C\,\omega_n^{-1/n}$ ($\sim C\sqrt n$),
$$\varphi_P^k(s)\le \frac{\big(1-\frac kn\big)\,\omega_{n-k}}{C^k\,\omega_n^{1-\frac kn}}.$$
The computations at the beginning of this section show that for k = (1 − λ)n or k = n − ℓ, ℓ ≥ 2, the function $\varphi_P^k(s)$ is bounded in the range $|s|\ge C\sqrt n$ by $c_1e^{-cn}$. Therefore, in both cases the mass of $\varphi_P^k$ is concentrated on $|s|\le c\sqrt n$ (constants depending only on λ or ℓ, respectively).

Corollary 4.9. Under the hypothesis of Proposition 4.7 we have, in particular:

i) if k = (1 − λ)n, then $\varphi_P^{(1-\lambda)n}(s)\,\lambda^{\lambda n/2}\,n^{(1-\lambda)/2}\le C(\lambda)\,M_P^{1-\lambda}$ for all $s\in\mathbb R^k$ and some constant C(λ) > 0 depending only on λ;

ii) if k = n − ℓ, ℓ ≥ 2 fixed, then $\varphi_P^{n-\ell}(s)\,n^{-\ell/2}\le C(\ell)\,M_P^{1-\ell/n}$ for all $s\in\mathbb R^k$ and some constant C(ℓ) > 0 depending only on ℓ.
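The optimisation over A in the proof of Proposition 4.7 ii) is a one-dimensional convex trade-off. The sketch below is our own check, with arbitrary sample values of n and $\|f\|_\infty$; it verifies that the stated A is indeed the minimiser of the bound.

```python
import math

# The quantity minimised in case ii):
#   g(A) = 2*sqrt(3)*||f||_inf*A + 2 / (|S^{n-1}| * A^{n-1}),
# with the stated optimiser A* = ((n-1) / (sqrt(3)*|S^{n-1}|*||f||_inf))^{1/n}.
n, f_inf = 10, 3.0
S = 2 * math.pi ** (n / 2) / math.gamma(n / 2)  # |S^{n-1}|

def g(A):
    return 2 * math.sqrt(3) * f_inf * A + 2 / (S * A ** (n - 1))

A_star = ((n - 1) / (math.sqrt(3) * S * f_inf)) ** (1 / n)
# g is strictly convex on (0, inf), so A_star beats any perturbation of itself
for t in (0.8, 0.9, 1.1, 1.25):
    assert g(A_star) <= g(t * A_star)
```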


By comparing with the case of the Euclidean ball, we see that the bounds are sharp for all s in case ii), and also in the range |s| ≤ 1 (say) for all values of k.

Remark 4.10. We can improve the numerical constants for central sections of star-shaped bodies. Let 1 ≤ k ≤ n − 2 and let $K\subseteq\mathbb R^n$ be a star-shaped body of volume |K| = 1. Let $rD_n$ be the Euclidean ball of volume 1 (of radius $r=\omega_n^{-1/n}$). Then
$$\varphi_K^k(s)\le \varphi_K^k(0)\le \varphi_{rD_n}^k(0)=\omega_{n-k}\,\omega_n^{\frac{k-n}n},\qquad \forall s\in\mathbb R^k.$$
Indeed,
$$\varphi_K^k(s)=\frac{|S^{n-k-1}|}{|S^{n-1}|}\int_{K\cap\{|x|\ge|s|\}}\Big(1-\frac{|s|^2}{|x|^2}\Big)^{\frac{n-k-2}2}\frac{dx}{|x|^k}\le \frac{|S^{n-k-1}|}{|S^{n-1}|}\int_K\frac{dx}{|x|^k}=\frac{\omega_{n-k}}{\omega_n}\,\widetilde W_k(K)=\varphi_K^k(0),$$
where $\widetilde W_k(K)$ denotes the k-th dual mixed volume of K (see [BBR]). Now, by the dual Minkowski inequality, $\widetilde W_k(K)\le \omega_n^{k/n}$, since |K| = 1, and the result follows.

We acknowledge M. Romance for useful discussions in the preparation of the paper.

References

[ABP] M. Anttila, K. Ball and I. Perissinaki, The central limit problem for convex bodies, Trans. Amer. Math. Soc. 355 (2003), pp. 4723-4735.
[B] K. Ball, Logarithmic concave functions and sections of convex sets in R^n, Studia Math. 88 (1988), pp. 69-84.
[Bo] S.G. Bobkov, On concentration of distributions of random weighted sums, Ann. Probab. 31 (2003), pp. 195-215.
[BBR] J. Bastero, J. Bernués and M. Romance, From John to Gauss-John positions via dual mixed volumes, to appear in J. Math. Anal. Appl.
[BK] S.G. Bobkov and A. Koldobsky, On the central limit property of convex bodies, GAFA Seminar, Lecture Notes in Math. 1807 (2003), pp. 44-52.
[BV] U. Brehm and J. Voigt, Asymptotics of cross sections for convex bodies, Beiträge Algebra Geom. 41 (2000), pp. 437-454.
[BHVV] U. Brehm, P. Hinow, H. Vogt and J. Voigt, Moment inequalities and central limit properties of isotropic convex bodies, Math. Z. 240 (2002), pp. 37-51.
[DF] P. Diaconis and D. Freedman, Asymptotics of graphical projection pursuit, Ann. of Stat. 12 (1984), pp. 793-815.
[F] M. Fradelizi, Sections of convex bodies through their centroid, Arch. Math. 69 (1997), pp. 515-522.
[G] A. Giannopoulos, Notes on isotropic convex bodies, Institute of Mathematics, Polish Academy of Sciences, Warsaw (2003).
[H] D. Hensley, Slicing convex bodies - bounds for slice area in terms of the body's covariance, Proc. Amer. Math. Soc. 79(4) (1980), pp. 619-625.


[Kl] B. Klartag, On convex perturbations with a bounded isotropic constant, to appear in GAFA.
[Kl2] B. Klartag, A central limit theorem for convex sets, preprint.
[Ko] A. Koldobsky, A functional analytic approach to intersection bodies, Geom. Funct. Anal. 10 (2000), pp. 1507-1526.
[KL] A. Koldobsky and M. Lifshits, Average volume of sections of star bodies, GAFA Seminar, Lecture Notes in Math. 1745 (2000), pp. 119-146.
[M] M.W. Meckes, Gaussian marginals of probability measures with geometric symmetries, preprint.
[MM] E.S. Meckes and M.W. Meckes, The central limit problem for random vectors with symmetries, preprint.
[Mi] E. Milman, On gaussian marginals of uniformly convex bodies, preprint.
[MP] V. Milman and A. Pajor, Isotropic positions and inertia ellipsoids and zonoids of the unit ball of a normed n-dimensional space, GAFA Seminar, Lecture Notes in Math. 1376 (1989), pp. 64-104.
[MS] V. Milman and G. Schechtman, Asymptotic theory of finite dimensional normed spaces, Lecture Notes in Math. 1200, Springer (1986).
[NR] A. Naor and D. Romik, Projecting the surface measure of the sphere of $\ell_p^n$, Ann. Inst. H. Poincaré Probab. Statist. 39(2) (2003), pp. 241-261.
[So] S. Sodin, Tail-sensitive Gaussian asymptotics for marginals of concentrated measures in high dimension, preprint.
[Su] V.N. Sudakov, Typical distributions of linear functionals in finite dimensional spaces of higher dimensions, Soviet Math. Dokl. 19(6) (1978), pp. 1578-1582.
[V] J. Voigt, A concentration of mass property for isotropic convex bodies in high dimensions, Israel J. Math. 115 (2000), pp. 235-251.
[W] H. von Weizsäcker, Sudakov's typical marginals, random linear functionals and a conditional central limit theorem, Probab. Theory Relat. Fields 107 (1997), pp. 313-324.
[Wo] J.O. Wojtaszczyk, The square negative correlation property for generalized Orlicz balls, to appear in GAFA Seminar Notes, 2005.

Departamento de Matemáticas, Universidad de Zaragoza, 50009 Zaragoza, Spain.
E-mail address (Jesús Bastero): [email protected]
E-mail address (Julio Bernués): [email protected]