ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS


arXiv:1408.1360v1 [math.PR] 6 Aug 2014

F. GÖTZE¹ AND A. RESHETENKO²

Abstract. We study asymptotic expansions in free probability. In classes of classical limit theorems Edgeworth expansions can be obtained via an approach using sequences of “influence” functions of individual random elements described by vectors of real parameters (ε1, ..., εn), that is, by a sequence of functions hn(ε1, ..., εn; t), |εj| ≤ 1/√n, j = 1, ..., n (depending on a complex parameter t), which are smooth, symmetric, compatible and have vanishing first derivatives at zero. In this work we extend this approach to free probability. As the sequence of functions hn(ε1, ..., εn; t) we consider the Cauchy transforms of the sums Σ_{j=1}^n εj Xj, where (Xj)_{j=1}^n are free identically distributed random variables with compact support, and we derive Edgeworth type expansions for the densities and distributions of the sum (1/√n) Σ_{j=1}^n Xj within the interval (−2, 2).

Key words and phrases: Cauchy transform, free convolution, central limit theorem, asymptotic expansion.
¹Research supported by CRC 701. ²Research supported by IRTG 1132.

1. Introduction

Free probability theory was initiated by Voiculescu in the 1980s as a tool for understanding free group factors. The main concept of this theory is the notion of freeness, which is a counterpart of classical independence for non-commutative random variables. The distribution of the sum of two free random variables is uniquely determined by the distributions of the summands and is called the free convolution of the initial distributions. While classical convolutions are studied via Fourier transforms, free convolutions can be studied via Cauchy transforms. Numerous results concerning the distributional behaviour of sums of free random variables have been proved in recent years: free limit theorems [15, 18], the law of large numbers [5], the Berry-Esseen inequality [7, 13], the Edgeworth expansion in the free central limit theorem [9], etc. These results parallel the classical ones. On the other hand, some results in free probability theory have no counterparts in classical probability theory, for example the so-called superconvergence. This type of convergence appears in free limit theorems and is stronger than the usual convergence. In this paper we develop a technique which was described in [11]. This approach (see Section 4) was introduced as a tool to derive asymptotic expansions and estimates for the remainder term in a class of classical functional limit theorems in abstract spaces. It is based on Taylor expansions only and hence can be applied in free probability without additional modifications. We use this method to derive Edgeworth expansions for the distributions and densities of normalized sums of free, bounded, identically distributed random variables. The results we obtain extend those in [9]. The paper is organized as follows. In Section 2 we formulate and discuss the main results. Preliminaries are introduced in Section 3. In Section 4 we describe the general scheme. In


Section 5 we apply this general scheme to free probability. Section 6 is devoted to the proofs of the results. In the Appendix we provide formulations of some results from the literature, in particular a more detailed and revised version of the expansion scheme outlined in [11], for the reader's convenience. The results of this paper are part of the Ph.D. thesis of the second author (2014, University of Bielefeld).

2. Results

Denote by M the family of all Borel probability measures defined on the real line R. Let X1, X2, ... be free self-adjoint identically distributed random variables with distribution µ ∈ M. We always assume that µ has zero mean and unit variance. Let µn be the distribution of the normalized sum Sn := (1/√n) Σ_{j=1}^n Xj. In free probability the sequence of measures µn converges to the semicircle law ω. Moreover, µn is absolutely continuous with respect to the Lebesgue measure for sufficiently large n [19]. We denote by pµn the density of µn. Define the Cauchy transform of a measure µ:

    Gµ(z) = ∫_R µ(dx)/(z − x),   z ∈ C⁺,

where C⁺ denotes the upper half-plane. In [9] Chistyakov and Götze obtained a formal power expansion for the Cauchy transform of µn and Edgeworth type expansions for µn and pµn. Below we review these results. Assume that µ has compact support. Denote by Un(x) the Chebyshev polynomial of the second kind of degree n, given by the recurrence relation

    U0(x) = 1,   U1(x) = 2x,   U_{n+1}(x) = 2x Un(x) − U_{n−1}(x).   (2.1)

The formal expansion has the form

    Gµn(z) = Gω(z) + Σ_{k=1}^∞ Bk(Gω(z)) / n^{k/2},   (2.2)

where

    Bk(z) = Σ_{(p,m)} c_{p,m} z^p / (1/z − z)^m   (2.3)

with real coefficients c_{p,m} which depend on the free cumulants κ3, ..., κ_{k+2} and do not depend on n. The free cumulants will be defined in Section 3. The summation on the right-hand side of (2.3) is taken over a finite set of non-negative integer pairs (p, m). The coefficients c_{p,m} can be calculated explicitly. For the cases k = 1, 2 we have

    B1(z) = κ3 z³ / (1/z − z),
    B2(z) = (κ4 − κ3²) z⁴ / (1/z − z) + κ3² ( z⁵/(1/z − z)² + z²/(1/z − z)³ ).

Let us introduce some further notation. Denote by βq the qth absolute moment of µ, and assume that βq < ∞ for some q ≥ 2. Moreover, denote

    an := κ3/√n,   bn := (κ4 − κ3² + 1)/n,   dn := (κ4 − κ3² + 2)/n,   n ∈ N.
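As a quick numerical sanity check of the recurrence (2.1) (an illustrative aside, not part of the paper's argument; the function name is ours), one can iterate the recurrence and compare with the classical identity Un(cos t) = sin((n+1)t)/sin t:

```python
import math

def chebyshev_U(n, x):
    """Evaluate the Chebyshev polynomial of the second kind U_n(x) via the
    recurrence (2.1): U_0 = 1, U_1 = 2x, U_{n+1} = 2x U_n - U_{n-1}."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

# Classical identity: U_n(cos t) = sin((n+1) t) / sin t.
t = 0.7
for n in range(8):
    lhs = chebyshev_U(n, math.cos(t))
    rhs = math.sin((n + 1) * t) / math.sin(t)
    assert abs(lhs - rhs) < 1e-12
```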

Introduce the Lyapunov fractions

    Lqn := βq / n^{(q−2)/2}

and let

    ρq(µt) := ∫_{|x|>t} |x|^q µ(dx),   t > 0.

Denote q1 := min{q, 3}, q2 := min{q, 4}, q3 := min{q, 5}. For n ∈ N, set

    ηqs(n) := inf […]

[…]

0 L) and


we can define its functional inverse Kµ(z), satisfying Kµ(Gµ(z)) = z, which converges in a neighbourhood of zero. Let us introduce the function

    Rµ(z) = Kµ(z) − 1/z.   (3.3)

This function is called the R-transform and can be expressed as a formal power series:

    Rµ(z) = Σ_{l=0}^∞ κ_{l+1} z^l,

where the coefficients κk are called the free cumulants of the corresponding measure. In the case m1 = 0 and m2 = 1 we note that κ1 = 0, κ2 = 1, κ3 = m3, κ4 = m4 − 2, κ5 = m5 − 5m3. For cumulants of higher order the following inequalities have been established in [13]:

    |κl| ≤ (2L/(l − 1)) (4L)^{l−1},   l ≥ 2.   (3.4)

Voiculescu [17] proved that for two given compactly supported probability measures µ1 and µ2 the R-transform of the free convolution µ1 ⊞ µ2 is given by the formula

    R_{µ1⊞µ2}(z) = R_{µ1}(z) + R_{µ2}(z)   (3.5)

on the common domain of these functions. Moreover, (3.5) implies that the free convolution is commutative and associative. Next, we note some scaling properties of the Cauchy transform and the R-transform. We denote by Dtµ the dilation of a measure µ by the factor t: Dtµ(A) = µ(t⁻¹A) for measurable A ⊂ R. Then the Cauchy transform and the R-transform of the rescaled measure Dtµ are

    G_{Dtµ}(z) = t⁻¹ Gµ(t⁻¹z)   and   R_{Dtµ}(z) = t Rµ(tz).   (3.6)
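The cumulant identities quoted above (κ1 = 0, κ2 = 1, κ3 = m3, κ4 = m4 − 2, κ5 = m5 − 5m3 when m1 = 0, m2 = 1) can be checked against the standard free moment-cumulant recursion of free probability. The sketch below (illustrative, exact rational arithmetic; the function names are ours, not the paper's) computes free cumulants from moments:

```python
from fractions import Fraction

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` non-negative integers."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def free_cumulants(moments):
    """Free cumulants kappa_1..kappa_N from moments m_1..m_N via the recursion
        m_n = sum_{k=1}^{n} kappa_k * sum_{i_1+...+i_k = n-k} m_{i_1} ... m_{i_k},
    with m_0 = 1 (sum over non-negative compositions)."""
    m = [Fraction(1)] + [Fraction(x) for x in moments]
    kappa = [None]  # 1-based indexing
    for n in range(1, len(moments) + 1):
        total = Fraction(0)
        for k in range(1, n):
            s = Fraction(0)
            for comp in compositions(n - k, k):
                p = Fraction(1)
                for i in comp:
                    p *= m[i]
                s += p
            total += kappa[k] * s
        kappa.append(m[n] - total)
    return kappa[1:]
```

For any moment sequence with m1 = 0 and m2 = 1 the recursion reproduces exactly the identities stated in the text.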

Analytic approach to the definition of free convolution. Let us introduce the reciprocal Cauchy transform Fµ(z) = 1/Gµ(z), z ∈ C⁺, which is an analytic self-mapping of C⁺. The class of reciprocal Cauchy transforms can be described as a subclass of the Nevanlinna functions.

Definition 3.1. The Nevanlinna class is the class of analytic functions f : C⁺ → {z : ℑz ≥ 0} with the integral representation

    f(z) = a + bz + ∫_R (1 + tz)/(t − z) ρ(dt),   z ∈ C⁺,   (3.7)

where b ≥ 0, a ∈ R and ρ is a non-negative finite measure. From the integral representation (3.7) it follows that f(z) = (b + o(1))z for z ∈ C⁺ such that |ℜz| ≤ αℑz and |z| > β, for every α > 0 and some positive β = β(f, α). For more details about Nevanlinna functions we refer to [1], Section 3 and [2], Section 6.


The class of the reciprocal Cauchy transforms of all µ ∈ M is a subclass of the Nevanlinna functions f such that f(z)/z → 1 as z → ∞ non-tangentially to R. We will denote this class by F. It is easy to see that the reciprocal Cauchy transform Fµ admits the representation (3.7) with b = 1. The functions f ∈ F satisfy the inequality

    ℑf(z) ≥ ℑz,   z ∈ C⁺.

Chistyakov and Götze [8], Bercovici and Belinschi [4], and Belinschi [3] proved, using complex analytic methods, that for µ1, µ2 ∈ M the subordination functions Z1, Z2 ∈ F satisfy the following equations for z ∈ C⁺:

    z = Z1(z) + Z2(z) − F_{µ1}(Z1(z));   (3.8)
    F_{µ1⊞µ2}(z) = F_{µ1}(Z1(z)) = F_{µ2}(Z2(z)).   (3.9)

The next result is due to Belinschi [3] (see Theorem 3.3 and Theorem 4.1 there).

Theorem 3.2. Let µ1, µ2 be two Borel probability measures on R, neither of them a point mass. The following hold:
(1) The subordination functions from (3.8) and (3.9) have limits Zj(x) := lim_{y↓0} Zj(x + iy), j = 1, 2, x ∈ R.
(2) The absolutely continuous part of µ1 ⊞ µ2 is always nonzero, its density is analytic wherever positive and finite, and F_{µ1⊞µ2} extends analytically to a neighbourhood of every point where the density is positive and finite.

Semicircle law. The semicircle law plays a key role in free probability. The centered semicircle distribution of variance t is denoted by ωt and has the density

    p_{ωt}(x) = (1/(2πt)) √((4t − x²)₊),   x ∈ R,

where a₊ := max{a, 0}. We denote by ω the standard semicircle law, which has zero mean, unit variance and the density

    pω(x) = (1/(2π)) √((4 − x²)₊),   x ∈ R.

The Cauchy transform of ωt is given by

    G_{ωt}(z) = (z − √(z² − 4t))/(2t),   z ∈ C⁺.

The function √(z² − 4t) is double-valued and has branch points at z = ±2√t. We can define two single-valued analytic branches on the complex plane cut along the segment −2√t ≤ x ≤ 2√t of the real axis. Since the Cauchy transform has asymptotic behaviour 1/z at infinity, we choose the branch such that √(−1) = i on C⁺. The Cauchy transform G_{ωt}(z) has a continuous extension to C⁺ ∪ R which acts on R by

    G_{ωt}(x) = (x − i√(4t − x²))/(2t)  if |x| ≤ 2√t;
    G_{ωt}(x) = (x − √(x² − 4t))/(2t)   if |x| > 2√t.   (3.10)

We see that for each δ > 0 the function G_{ωt} can be continued analytically to the domain K = {x + iy : x ∈ (−2√t, 2√t), |y| < δ} and beyond to the whole Riemann surface. This analytic continuation is again denoted by G_{ωt}. It has the explicit formula G_{ωt}(z) = (z − i√(4t − z²))/(2t), where the branch of the square root on C⁺ is chosen such that √(−1) = i. The function Gω satisfies the functional equation

    Gω(z) + Fω(z) = z,   z ∈ C⁺ ∪ K.   (3.11)
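As an aside, the branch conventions above can be checked numerically (illustrative, our own code): the continuation satisfies the functional equation (3.11), and its boundary values on (−2, 2) recover the semicircle density via the Stieltjes inversion p(x) = −ℑG(x)/π:

```python
import cmath, math

def G_semicircle(z, t=1.0):
    """Analytic continuation G_{omega_t}(z) = (z - i*sqrt(4t - z^2)) / (2t),
    principal branch (so sqrt(-1) = i), valid near the bulk (-2*sqrt(t), 2*sqrt(t))."""
    return (z - 1j * cmath.sqrt(4 * t - z * z)) / (2 * t)

# Functional equation (3.11): G_omega(z) + 1/G_omega(z) = z  (since F_omega = 1/G_omega).
for z in (0.3 + 0.05j, -1.2 + 0.01j, 0.0 + 0.2j):
    G = G_semicircle(z)
    assert abs(G + 1 / G - z) < 1e-12

# Boundary values on (-2, 2): density p_omega(x) = -Im G_omega(x) / pi = sqrt(4-x^2)/(2*pi).
x = 0.8
p = -G_semicircle(complex(x, 0.0)).imag / math.pi
assert abs(p - math.sqrt(4 - x * x) / (2 * math.pi)) < 1e-12
```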

One can compute the R-transform of the semicircle law: Rω(z) = z.

Berry-Esseen type inequality. The Berry-Esseen type inequality in free probability was proved by Chistyakov and Götze [9]. Assume that µ has zero mean, unit variance and finite third absolute moment β3. Then there exists an absolute constant c > 0 such that

    sup_{x∈R} |µn((−∞, x)) − ω((−∞, x))| ≤ cβ3/√n,   n ∈ N.   (3.12)

4. A general scheme for asymptotic expansions

We denote a vector (ε1, ..., εn) ∈ Rⁿ by εn. Let us consider a sequence of functions hn(εn; t), where |εj| ≤ n^{−1/2}, j = 1, ..., n, and t ∈ A ⊆ R (or C). Assume that this sequence of functions satisfies the following conditions:

    hn(εn; t) is symmetric in all εj;   (4.1)

the sequence hn is compatible, which means

    h_{n+1}(ε1, ..., ε_{j−1}, 0, ε_{j+1}, ..., ε_{n+1}; t) = hn(ε1, ..., ε_{j−1}, ε_{j+1}, ..., ε_{n+1}; t),   j = 1, ..., n + 1;   (4.2)

and all first derivatives vanish at zero:

    (∂/∂εj) hn(εn; t)|_{εj=0} = 0,   j = 1, ..., n.   (4.3)

Let us denote by E^n_{m,s} (m ≥ n > s ≥ 3) the set of weight vectors ε_{m+s} where all but 2s components are equal to m^{−1/2} and the remaining 2s components are bounded by n^{−1/2}. Let α = (α1, ..., αm) denote an m-dimensional multi-index and set D^α = ∂^{α1}/∂ε1^{α1} ··· ∂^{αm}/∂εm^{αm}. Finally, we define

    d_{sr}(h, n) := sup{ |D^α h_{m+s}(ε_{m+s}; t)| : |α| = r, t ∈ A, ε_{m+s} ∈ E^n_{m,s}, m ≥ n }.
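A toy example may clarify conditions (4.1)-(4.3) (illustrative, not the functional used in the paper): hn(ε1, ..., εn; t) = Π_j (1 + εj² t) is symmetric, compatible (setting εj = 0 drops the factor) and has vanishing first derivatives at zero, and the limit along weights m^{−1/2} appearing in Proposition 4.1 below exists, with h∞(εs; t) = e^t Π_j (1 + εj² t):

```python
import math

def h(eps, t):
    """Toy sequence h_n(eps_1,...,eps_n; t) = prod_j (1 + eps_j^2 * t).
    Symmetric in the eps_j, compatible, with first derivatives vanishing at 0."""
    out = 1.0
    for e in eps:
        out *= 1.0 + e * e * t
    return out

def h_infinity(eps_s, t):
    """Limit of h_{m+s}(m^{-1/2},...,m^{-1/2}, eps_s; t) as m -> infinity:
    (1 + t/m)^m -> e^t, so h_inf(eps_s; t) = e^t * prod_j (1 + eps_j^2 t)."""
    return math.exp(t) * h(eps_s, t)

t, eps_s = 0.5, [0.1, -0.05, 0.02]
errors = []
for m in (10, 100, 1000, 10000):
    approx = h([m ** -0.5] * m + eps_s, t)
    errors.append(abs(approx - h_infinity(eps_s, t)))
# Here the error even decays like O(1/m), comfortably within an O(m^{-1/2}) bound.
assert errors[-1] < errors[0]
assert errors[-1] * 10000 < 1.0
```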

The following proposition from [11] shows that the limit

    h∞(εs; t) := lim_{m→∞} h_{m+s}(m^{−1/2}, ..., m^{−1/2}, εs; t),   |εj| ≤ n^{−1/2}, j = 1, ..., s,

exists.

Proposition 4.1. Assume h_{m+s}(ε_{m+s}; t), ε_{m+s} ∈ E^n_{m,s}, m ≥ n ≥ s ≥ 3, t ∈ A, satisfies conditions (4.1)-(4.3) and the condition d_{s3}(h, n) < ∞. Then the limit h∞(εs; t), |εj| ≤ n^{−1/2}, j = 1, ..., s, exists and the following estimate holds:

    |h_{n+s}(n^{−1/2}, ..., n^{−1/2}, εs; t) − h∞(εs; t)| ≤ c d_{s3}(h, n) n^{−1/2},

where c is an absolute constant.

We formulate an Edgeworth type expansion for hn(n^{−1/2}, ..., n^{−1/2}; t) in terms of derivatives of h∞(εs; t) with respect to εj, j = 1, ..., s, at εs = 0. Below we introduce all necessary notations.


We define “cumulant” differential operators κp(D) via the formal identity

    Σ_{p=2}^∞ p!⁻¹ ε^p κp(D) = ln( 1 + Σ_{p=2}^∞ p!⁻¹ ε^p D^p ).   (4.4)

Expanding the right-hand side of this identity as a formal power series in the formal variable ε, we obtain the definition of the cumulant operators κp(D). Here D^p denotes p-fold differentiation with respect to a single variable ε, and D^{p1} ··· D^{pr} = D^{(p1,...,pr)} denotes differentiation with respect to r different variables ε1, ..., εr at the point εr = 0. Since the operators are applied to symmetric functions at zero, κp(D) is unambiguously defined by (4.4). The first cumulant operators are κ2(D) = D², κ3(D) = D³, κ4(D) = D⁴ − 3D²D², etc. Then we define Edgeworth polynomial operators Pr(κ.) by means of the following formal series in κr and a formal variable ε:

    Σ_{r=0}^∞ ε^r Pr(κ.) = exp( Σ_{r=3}^∞ r!⁻¹ ε^{r−2} κr ),   (4.5)

which yields

    Pr(κ.) = Σ_{m=1}^r m!⁻¹ Σ_{(j1,...,jm)} (j1 + 2)!⁻¹ κ_{j1+2} ··· (jm + 2)!⁻¹ κ_{jm+2},   (4.6)

where the sum Σ_{(j1,...,jm)} means summation over all m-tuples of positive integers (j1, ..., jm) satisfying Σ_{q=1}^m jq = r, and κ. = (κ3, ..., κ_{r+2}). Replacing the variables κ. in Pr(·) by the differential operators κ.(D) := (κ3(D), ..., κ_{r+2}(D)) we obtain “Edgeworth” differential operators, say Pr(κ.(D)). The following theorem yields an asymptotic expansion for hn(n^{−1/2}, ..., n^{−1/2}; t) (for more details see [11]).

Theorem 4.2. Assume that h_{m+s}(ε_{m+s}; t), ε_{m+s} ∈ E^n_{m,s}, m ≥ n ≥ s ≥ 3, t ∈ A, fulfills conditions (4.1)-(4.3) together with

    d_{ss}(h, n) ≤ B,   (4.7)
    sup_{t∈A} sup_{ε_{m+s} ∈ E^n_{m,s}} |D^α h_{m+s}(ε_{m+s}; t)| ≤ B,   (4.8)

where α = (α1, ..., α_{s−2}) is such that αi ≥ 2, i = 1, ..., s − 2, and Σ_{i=1}^{s−2} (αi − 2) ≤ s − 2. Then

    | hn(n^{−1/2}, ..., n^{−1/2}; t) − Σ_{r=0}^{s−3} n^{−r/2} Pr(κ.(D)) h∞(εr; t)|_{εr=0} | ≤ cs B n^{−(s−2)/2},   (4.9)

where P0(κ.(D)) = 1, the Pr(κ.(D)) are given explicitly by (4.6), and cs is a constant depending only on s.
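The formal identity (4.4) can be expanded mechanically. As an illustrative check (our own code, exact rational arithmetic), one can represent each product of derivatives in distinct variables by a partition tuple, e.g. D⁴ by (4,) and D²D² by (2, 2), and expand ln(1 + Σ_{p≥2} ε^p D^p/p!). The expansion reproduces the operators κ2(D) = D², κ3(D) = D³ and κ4(D) = D⁴ − 3D²D² stated above:

```python
from fractions import Fraction

ORDER = 5  # expand up to eps^5

def series_mul(a, b):
    """Multiply formal series {power: {partition: coeff}}; a 'partition' is a
    sorted tuple of derivative orders, so D^3 D^2 is stored as (3, 2)."""
    out = {}
    for pa, ta in a.items():
        for pb, tb in b.items():
            if pa + pb > ORDER:
                continue
            tgt = out.setdefault(pa + pb, {})
            for parta, ca in ta.items():
                for partb, cb in tb.items():
                    part = tuple(sorted(parta + partb, reverse=True))
                    tgt[part] = tgt.get(part, Fraction(0)) + ca * cb
    return out

def factorial(n):
    f = 1
    for i in range(2, n + 1):
        f *= i
    return f

# u = sum_{p>=2} eps^p D^p / p!
u = {p: {(p,): Fraction(1, factorial(p))} for p in range(2, ORDER + 1)}

# log(1 + u) = u - u^2/2 + u^3/3 - ...  (u starts at eps^2, so low powers suffice)
log_series, power = {}, {0: {(): Fraction(1)}}
for k in range(1, ORDER // 2 + 1):
    power = series_mul(power, u)
    sign = Fraction((-1) ** (k + 1), k)
    for p, terms in power.items():
        tgt = log_series.setdefault(p, {})
        for part, c in terms.items():
            tgt[part] = tgt.get(part, Fraction(0)) + sign * c

# kappa_p(D) = p! * [eps^p] log(1 + u)
kappa = {p: {part: factorial(p) * c for part, c in log_series.get(p, {}).items()}
         for p in range(2, ORDER + 1)}
```

The same expansion gives the next operator, κ5(D) = D⁵ − 10D³D².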


The first four terms of the expansion (4.9) are

    hn(n^{−1/2}, ..., n^{−1/2}; t)
      = h∞(0; t) + (1/(6n^{1/2})) ∂³/∂ε1³ h∞(ε1; t)|_{ε1=0}
        + (1/n) [ (1/24)(∂⁴/∂ε1⁴ − 3 ∂²/∂ε1² ∂²/∂ε2²) + (1/72) ∂³/∂ε1³ ∂³/∂ε2³ ] h∞(ε2; t)|_{ε2=0}
        + (1/(48n^{3/2})) [ (2/5)(∂⁵/∂ε1⁵ − 10 ∂³/∂ε1³ ∂²/∂ε2²)
                            + (1/3) ∂³/∂ε1³ (∂⁴/∂ε2⁴ − 3 ∂²/∂ε2² ∂²/∂ε3²)
                            + (1/27) ∂³/∂ε1³ ∂³/∂ε2³ ∂³/∂ε3³ ] h∞(ε3; t)|_{ε3=0}
        + O(1/n²).   (4.10)

Remark 4.3. Conditions (4.7) and (4.8) guarantee that the functions

    g^α_{m+s}(ε_{m+s}; t) := D^α h_{m+s}(ε_{m+s}; t),

for α = (α1, ..., αr) with r ≤ s − 3, s ≥ 3, αi ≥ 2 for i = 1, ..., r, and Σ_{i=1}^r (αi − 2) = s − 3, satisfy the conditions of Proposition 4.1. In particular, by Proposition 4.1 the functions g^α_{m+s}(ε_{m+s}; t) converge to g^α_s(εs; t) as m → ∞, uniformly in εs with |εj| ≤ n^{−1/2}, n ≥ 1, j = 1, ..., s, and by Theorem A.1 (see Appendix) we conclude that g^α_s(εs; t) = D^α h∞(εs; t), r ≤ s − 3.

5. Application of the general scheme for asymptotic expansions

We want to apply the general scheme in order to compute the asymptotic expansions for pµn and µn. Assume that µ ∈ M is compactly supported with zero mean and unit variance. We set hn(n^{−1/2}, ..., n^{−1/2}; z) := Gµn(z), z ∈ K, where Gµn(z) is the extension defined in Corollary 2.1. Let us introduce two notations:

    µ̃_{m+s} := D_{ε1}µ ⊞ ··· ⊞ D_{ε_{m+s}}µ,   ε_{m+s} ∈ E^n_{m,s},  m ≥ n,
    µs^{(εs)} := D_{ε1}µ ⊞ ··· ⊞ D_{εs}µ,   |εj| ≤ n^{−1/2},  j = 1, ..., s.
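The measures just introduced are free convolutions of dilations D_{εj}µ. A quick numerical check of the scaling rule (3.6) for the Cauchy transform of a dilated measure (illustrative, for a discrete measure of our choosing):

```python
def cauchy_transform(atoms, weights, z):
    """G_mu(z) = sum_i w_i / (z - x_i) for a finitely supported measure mu."""
    return sum(w / (z - x) for x, w in zip(atoms, weights))

# D_t mu moves an atom x to t*x; scaling rule (3.6): G_{D_t mu}(z) = t^{-1} G_mu(t^{-1} z).
atoms, weights = [-1.0, 0.5, 2.0], [0.3, 0.3, 0.4]
t, z = 0.25, 0.4 + 1.1j   # small dilation factor, in the spirit of eps_j
lhs = cauchy_transform([t * x for x in atoms], weights, z)
rhs = cauchy_transform(atoms, weights, z / t) / t
assert abs(lhs - rhs) < 1e-12
```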

The following results follow from Theorem 6.8 and allow for an easy application of the expansion scheme.

Corollary 5.1. For every δ ∈ (0, 1/10) and n ≥ c(µ, s)δ⁻⁴ the Cauchy transform G_{ω⊞µs^{(εs)}} has an analytic continuation to K such that

    G_{ω⊞µs^{(εs)}}(z) = Gω(z) + l_{εs}(z),   z ∈ K,   (5.1)

where |l_{εs}(z)| ≤ c(s)/(n√δ) on K.

Remark 5.2. In (5.1) we understand Gω(z) as the analytic continuation of the corresponding Cauchy transform, defined in the following way:

    Gω(z) = (z − i√(4 − z²))/2,   z ∈ K.


Corollary 5.3. For each δ ∈ (0, 1/10) and m ≥ n ≥ c(µ, s)δ⁻⁴ the Cauchy transform Gµ̃_{m+s} has an analytic continuation to K such that

    Gµ̃_{m+s}(z) = G_{ω⊞µs^{(εs)}}(z) + l(z),   z ∈ K,   (5.2)

where |l(z)| ≤ (c(s)/√δ)(1/√m + 1/n) on K.

Corollary 5.4. For each δ ∈ (0, 1/10) and m ≥ n ≥ c(µ, s)δ⁻⁴ the analytic continuation of Gµ̃_{m+s}(z), z ∈ K, is a symmetric and compatible function of εj, j = 1, ..., m + s.

Theorem 5.5. For each δ ∈ (0, 1/10) and m ≥ n ≥ c(µ, s)δ⁻⁴ the analytic continuation of Gµ̃_{m+s}(z), z ∈ K, is a smooth function of the variables εj, j = 1, ..., 2s. (Here we mean those 2s variables which are not fixed but only bounded by n^{−1/2}.) Moreover, the following inequality holds:

    sup_{z∈K} sup_{ε_{m+s} ∈ E^n_{m,s}} |D^α Gµ̃_{m+s}(z)| ≤ c,   |α| ≤ s.

Theorem 5.6. For each δ ∈ (0, 1/10), m ≥ n ≥ c(µ, s)δ⁻⁴ and z ∈ K,

    (∂/∂εj) Gµ̃_{m+s}(z)|_{εj=0} = 0,   j = 1, ..., 2s.

In view of the above results, we can choose the sequence of extensions of the Cauchy transforms Gµ̃_{m+s}(z), z ∈ K, as the sequence of functionals h_{m+s}(ε_{m+s}; z), i.e.

    h_{m+s}(ε_{m+s}; z) := Gµ̃_{m+s}(z),   z ∈ K,  m ≥ n ≥ c(µ, s)δ⁻⁴,

and

    h∞(εs; z) := G_{ω⊞µs^{(εs)}}(z),   z ∈ K,  n ≥ c(µ, s)δ⁻⁴.

Now we can apply the general scheme and compute the expansion for Gµn in terms of derivatives of G_{ω⊞µs^{(εs)}} with respect to εj, j = 1, ..., s, at εs = 0.

6. Proofs of results

Positivity of the density of µ̃_{m+s}. Our aim is to find an interval where the density of µ̃_{m+s} is positive. The main idea is based on the Newton-Kantorovich theorem (see Theorem A.2). Recall the definition of the Lévy distance.

Definition 6.1. Let Q1(x) and Q2(x) be the cumulative distribution functions of two measures µ1 and µ2, respectively. The Lévy distance between these measures is defined by the formula

    dL(µ1, µ2) = inf{ s ≥ 0 : Q2(x − s) − s ≤ Q1(x) ≤ Q2(x + s) + s, ∀x ∈ R }.

The Lévy distance metrizes weak convergence on the space of probability measures. Let us consider a pair of measures ν1 and ν2. We can rewrite equations (3.8) and (3.9) as the system

    (z − Z1(z) − Z2(z))⁻¹ + G_{ν1}(Z1(z)) = 0,
    (z − Z1(z) − Z2(z))⁻¹ + G_{ν2}(Z2(z)) = 0,   (6.1)

where G_{ν1} and G_{ν2} are the Cauchy transforms of ν1 and ν2, respectively. Choose another pair of measures µ1 and µ2 such that the Lévy distance between νj and µj is sufficiently small for j = 1, 2. Then we can define subordination functions for the pair (µ1, µ2) as a solution of (6.1), with G_{ν1} and G_{ν2} replaced by the Cauchy transforms of µ1 and µ2. Denote these subordination functions by t01 and t02. Using the Newton-Kantorovich theorem (for a proof see [12]) one can show that the subordination functions Zj and t0j, j = 1, 2, are sufficiently close to each other. We can choose µ1 and µ2 to be equal, so that t01 = t02. Such a choice essentially simplifies the structure of equations (3.8) and (3.9). We need the following result by Voiculescu and Bercovici [6] about continuity of free convolution with respect to the Lévy distance.

Theorem 6.2. If µ1, µ2, ν1, ν2 ∈ M, then dL(µ1 ⊞ ν1, µ2 ⊞ ν2) ≤ dL(µ1, µ2) + dL(ν1, ν2).

Finally, let us prove some further results for this distance.

Lemma 6.3. Assume µ, ν ∈ M are measures with compact support, zero mean and unit variance, and µ is supported on the interval [−L, L]. Then
(1) dL(ν, ν ⊞ µs^{(εs)}) ≤ L Σ_{i=1}^s |εi|;
(2) dL(D_{ε1}µ, D_{ε2}µ) ≤ L|ε1 − ε2|.

Proof. First, we prove inequality (1). From Theorem 6.2 we get

    dL(ν, ν ⊞ µs^{(εs)}) ≤ dL(δ0, µs^{(εs)}) ≤ Σ_{i=1}^s dL(δ0, D_{εi}µ),

where δ0 is the Dirac measure at zero. We know that supp(µ) ⊂ [−L, L], hence

    dL(ν, ν ⊞ µs^{(εs)}) ≤ L Σ_{i=1}^s |εi|.

Now we prove inequality (2). Let Q(x) be the distribution function of µ. Then

    dL(D_{ε1}µ, D_{ε2}µ) = inf{ s ≥ 0 : Q((x − s)/ε1) − s ≤ Q(x/ε2) ≤ Q((x + s)/ε1) + s, x ∈ R }
                        ≤ inf{ s ≥ 0 : Q((x − s)/ε1) ≤ Q(x/ε2) ≤ Q((x + s)/ε1), x ∈ R }.

We consider two situations: ε1 > ε2 and ε1 < ε2 (the case ε1 = ε2 is trivial). Let ε1 > ε2. Since a distribution function is non-decreasing, we get

    inf{ s ≥ 0 : Q((x − s)/ε1) ≤ Q(x/ε2) ≤ Q((x + s)/ε1), x ∈ R }
      = max{ inf{ s ≥ 0 : Q((x − s)/ε1) ≤ Q(x/ε2) ≤ Q((x + s)/ε1), ε1L ≤ |x| },
             inf{ s ≥ 0 : Q((x − s)/ε1) ≤ Q(x/ε2) ≤ Q((x + s)/ε1), ε2L ≤ |x| ≤ ε1L },
             inf{ s ≥ 0 : Q((x − s)/ε1) ≤ Q(x/ε2) ≤ Q((x + s)/ε1), |x| ≤ ε2L } }.   (6.2)

We note that the first infimum in (6.2) is equal to zero. For the second term in (6.2) we consider x ≥ 0 (recall that µ has zero mean). But then the left inequality is trivial


and we only need to consider the right inequality which holds if s satisfies the inequality x/ε2 ≤ (x + s)/ε1 . Hence, (ε1 − ε2 )x ≤ ε2 s must hold for x ∈ [ε2 L, ε1 L]. To prove this, we consider the difference Q((x + s)/ε1 ) − Q(x/ε2 ). Since we have Q(x/ε2 ) = 1 for all x ≥ ε2 L, we can take s such that Q((ε2 L + s)/ε1 ) = 1, which implies Q((x + s)/ε1 ) = 1 for all x ≥ ε2 L. We see that we can set s = L(ε1 − ε2 ). For x < 0 the same arguments show that s = L(ε1 − ε2 ). For the third infimum in (6.2) we consider x ≥ 0 and the right inequality. If we set x = ε2 y, where y ∈ [0, L], then Q((x + s)/ε1 ) = Q((ε2 y + s)/ε1 ). We need s such that (ε2 y + s)/ε1 = y, hence s = (ε1 − ε2 )y, and we conclude that s = (ε1 − ε2 )L. For negative x the same arguments show that we can take s = (ε1 − ε2 )L and dL (Dε1 µ, Dε2 µ) ≤ (ε1 − ε2 )L. Let ε2 > ε1 . This case can be proved in the same way as the previous one and we obtain that s = (ε2 − ε1 )L. From these two cases we finally conclude that dL (Dε1 µ, Dε2 µ) ≤ |ε1 − ε2 |L. Thus the lemma is proved.


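Lemma 6.3(2) can be checked numerically (illustrative sketch, our own function names; the Lévy distance is approximated by a grid search over s) for the symmetric Bernoulli measure µ = (δ_{−1} + δ_1)/2, for which L = 1:

```python
def levy_distance(F, G, grid, s_grid):
    """Approximate Levy distance between CDFs F and G: the smallest s in s_grid
    with F(x-s)-s <= G(x) <= F(x+s)+s for every x on the grid."""
    for s in s_grid:
        if all(F(x - s) - s <= G(x) <= F(x + s) + s for x in grid):
            return s
    return float("inf")

def dilated_bernoulli_cdf(eps):
    """CDF of D_eps mu for mu = (delta_{-1} + delta_{1})/2, so supp(mu) = [-L, L], L = 1."""
    return lambda x: 0.0 if x < -eps else (0.5 if x < eps else 1.0)

L, e1, e2 = 1.0, 0.30, 0.22
grid = [k / 1000.0 for k in range(-1500, 1501)]
s_grid = [k / 1000.0 for k in range(0, 1001)]
d = levy_distance(dilated_bernoulli_cdf(e1), dilated_bernoulli_cdf(e2), grid, s_grid)
# Lemma 6.3(2): d_L(D_{e1} mu, D_{e2} mu) <= L * |e1 - e2|
assert d <= L * abs(e1 - e2) + 1e-3
```

For this pair of atomic measures the bound is in fact attained: the approximate distance equals L|ε1 − ε2| = 0.08 up to the grid resolution.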

In the sequel we need the following estimates for Gω. A similar estimate for Gω was used in [19].

Lemma 6.4. For each δ ∈ (0, 1/10) define the set

    Kδ = { x + iy : x ∈ [−2 + δ, 2 − δ], |y| ≤ 2δ√δ }.

Then we have Gω(Kδ) ⊂ D_{θ,1.4} = { z ∈ C⁻ : arg z ∈ (−π + θ, −θ), |z| < 1.4 }, where the angle θ = θ(δ) is chosen in such a way that 2 sin θ = √(4δ(1 − 4δ)).

Proof. Figure 2 illustrates the sets Kδ and D_{θ,1.4}.

[Figure 2]

First we show that Gω(Kδ) ⊆ D_{θ,1.4}, where Gω is the analytic extension of the Cauchy transform of ω to Kδ. Fix a point z0 ∈ Kδ and write Gω(z0) = Re^{iψ}. In order to prove Gω(z0) ∈ D_{θ,1.4} we need to verify that |sin ψ| > sin θ and R < 1.4. From the functional equation (3.11) we have

    (R + 1/R) cos ψ + i (R − 1/R) sin ψ = z0.

From |ℜz0| ≤ 2 − δ and R + 1/R ≥ 2 we get |cos ψ| ≤ 1 − δ/2, hence

    sin²ψ ≥ 1 − (1 − δ/2)² = δ(1 − δ/4) > δ(1 − 4δ) = sin²θ.

Thus we obtain the desired result |sin ψ| > sin θ. In order to estimate R we consider the imaginary part of z0:

    2δ√δ ≥ |ℑz0| = |sin ψ| |R − 1/R| = |sin ψ| |R² − 1|/R > (√δ/2) |R² − 1|/R.

If R > 1, we get the inequality R² − 4δR − 1 < 0. Therefore R must be bounded from above by the intercept of the positive x-axis and the parabola y = R² − 4δR − 1. The roots of the equation R² − 4δR − 1 = 0 are

    R = 2δ ± √(4δ² + 1).

By the choice of δ we have 2δ + √(4δ² + 1) < 1.22. This implies R < 1.4. □

The following inequalities are due to Kargin [14].

Lemma 6.5. Let dL(µ1, µ2) ≤ p and z = x + iy with y > 0. Then
(1) |G_{µ1}(z) − G_{µ2}(z)| < c̃ p y⁻¹ max{1, y⁻¹}, where c̃ > 0 is a numerical constant;
(2) |(d^r/dz^r)(G_{µ2}(z) − G_{µ1}(z))| < c̃r p y^{−1−r} max{1, y⁻¹}, where the c̃r > 0 are numerical constants.

Consider a pair of measures (ν1, ν2) and introduce a function F(t) : C² → C² by the formula

    F(t) = ( (z − t1 − t2)⁻¹ + G_{ν1}(t1),
             (z − t1 − t2)⁻¹ + G_{ν2}(t2) ).

The equation F(t) = 0 has a unique solution, say Z = (Z1(z), Z2(z)), where Z1(z) and Z2(z) are the subordination functions. Let (µ1, µ2) be another pair of measures. Assume t0 = (t01, t02) = (t01(z), t02(z)) solves the system of equations

    (z − t01 − t02)⁻¹ + G_{µ1}(t01) = 0,
    (z − t01 − t02)⁻¹ + G_{µ2}(t02) = 0.

Then F(t0) has the form

    F(t0) = ( G_{ν1}(t01) − G_{µ1}(t01),
              G_{ν2}(t02) − G_{µ2}(t02) ).

The derivative of F with respect to t at t0 is

    F′(t0) = [ G′_{ν1}(t01) + G²_{µ1}(t01)    G²_{µ1}(t01)
               G²_{µ2}(t02)                   G′_{ν2}(t02) + G²_{µ2}(t02) ].

The inverse matrix of F′(t0) is

    [F′(t0)]⁻¹ = (1/det[F′(t0)]) [ G′_{ν2}(t02) + G²_{µ2}(t02)    −G²_{µ1}(t01)
                                   −G²_{µ2}(t02)                  G′_{ν1}(t01) + G²_{µ1}(t01) ],   (6.3)


where

    det[F′(t0)] = (G′_{ν2}(t02) + G²_{µ2}(t02))(G′_{ν1}(t01) + G²_{µ1}(t01)) − G²_{µ1}(t01) G²_{µ2}(t02).

After simple computations, we obtain

    [F′(t0)]⁻¹ F(t0) = (1/det[F′(t0)]) ( (G′_{ν2}(t02) + G²_{µ2}(t02)) S1(t01) − G²_{µ1}(t01) S2(t02),
                                         (G′_{ν1}(t01) + G²_{µ1}(t01)) S2(t02) − G²_{µ2}(t02) S1(t01) ),   (6.4)

where Sj(t0j) := G_{νj}(t0j) − G_{µj}(t0j). The second derivative of F with respect to t at t0 is

    F″(t0) = [ D1(t01)        2G³_{µ2}(t02)   2G³_{µ1}(t01)   2G³_{µ2}(t02)
               2G³_{µ1}(t01)  2G³_{µ2}(t02)   2G³_{µ1}(t01)   D2(t02) ],   (6.5)

where Dj (t0j ) := G00νj (t0j ) − 2G3µj (t0j ). Proposition 6.6. Let ν1 , ν2 ∈ M be measures neither of them being a point mass. Then for all δ ∈ (0, 1/10) there exists c such that if dL (ω, ν1  ν2 ) ≤ cδ 2 then the density pν1 ν2 (x) is positive and analytic on [−2 + δ, 2 − δ]. Proof. We would like to find an interval where the density is positive. To this end, define a subordination function Zω1/2 (z) which solves the equations z = 2Zω1/2 (z) − Fω1/2 (Zω1/2 (z)) It easy to solve this equations obtaining

and Fω (z) = Fω1/2 (Zω1/2 (z)). √

z2 − 4 , 4 and an analytic continuation of Zω1/2 to Kδ is given by √ 3z + i 4 − z 2 . Zω1/2 (z) = 4 It easy to see that the following inequality holds: √ =Zω1/2 (x) > δ/3, x ∈ [−2 + δ, 2 − δ]. Zω1/2 (z) =

3z +

On C2 we choose the norm: k(z1 , z2 )k =

p |z1 |2 + |z2 |2 .

Now we apply the Newton-Kantorovich Theorem (see √ Theorem A.2) to the equation F (t) = 0 for z ∈ M := {x + iy : x ∈ [−2 + δ, 2 − δ], 0 < y < δ δ}. In formulas (6.3), (6.4) and (6.5) we set µ1 = µ2 = ω1/2 and t01 = t02 = Zω1/2 . Since |Zω1/2 (z)| < 2, z ∈ M , we choose the √ branch of Gω1/2 such that Gω1/2 (z) = z − i 2 − z 2 , |z| < 2. 1. First, we estimate k[F 0 (t0 )]−1 k. We computed det[F 0 (t0 )] above. Moreover, due to Lemma 6.5 with p := cδ 2 we have G0νj (t0j ) = G0ω1/2 (t0j ) + fj (t0j ), where |fj (t0j )| ≤ c˜1 pδ −3/2 on M , j = 1, 2. Hence, det[F 0 (t0 )] = (G2ω1/2 (t02 ) + G0ω1/2 (t02 ) + f2 (t02 ))(G2ω1/2 (t01 ) + G0ω1/2 (t01 ) + f1 (t01 )) − G2ω1/2 (t01 )G2ω1/2 (t02 ) = g(z) + (f1 (t01 ) + f2 (t01 ))(G0ω1/2 (t01 ) + G2ω1/2 (t01 )) + f1 (t01 )f2 (t01 ),

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

where

17

 2 g(z) = G2ω (z) + G0ω1/2 (Zω1/2 (z)) − G4ω (z).

We find that

√ 3iz − 4 − z 2 p =1+ , =1+ q p(z) 2 − Zω1/2 (z) √ √ where p(z) := 36 − 10z 2 − 6iz 4 − z 2 . The function p(z) has zeros at ±3/ 2, hence |G0ω1/2 (Zω1/2 (z))| is uniformly bounded on M . Finally, we obtain ! ! √ √ 2 p 3iz − 4 − z 2 1  3iz − 4 − z 2 2 p p 1+ + z−i 4−z . g(z) = 1 + 2 p(z) p(z) G0ω1/2 (Zω1/2 (z))

iZω1/2 (z)

First of all we estimate |g(z)| on an interval [−2 + δ, 2 − δ]. Obviously, √ √ 3ix − 4 − x2 2ix 4 − x2 − 3 p = √ 9 − 2x2 36 − 10x2 − 6ix 4 − x2 and hence   p g(x) = h1 (x) h1 (x) + x2 − 2 − 2ix 4 − x2 , where

√ 6 − 2x2 + 2ix 4 − x2 h1 (x) := , 9 − 2x2

|h1 (x)| = √

2 2 ≥ 3 9 − 2x2

and

2√4 − x2 p √ 2 2 ≥ c1 δ h1 (x) + x − 2 − 2ix 4 − x = √ 9 − 2x2 √ for x ∈ [−2 + δ, 2 − δ]. We conclude that |g(x)| ≥ c2 δ, x ∈ [−2 + δ, 2 − δ]. In order to estimate |g(z)| on M we expand g(x + iy) with respect to y at zero: √ g(x + iy) = g(x) + R(x, y), x ∈ [−2 + δ, 2 − δ], 0 < y < δ δ, where R(x, y) is a remainder term such that |R(x, y)| ≤

max

√ |g 0 (x + iy)|δ δ.

x∈[−2+δ,2−δ] √ 0 0, j = 1, 2. It follows that the estimate for the second derivative holds for t∗ such that kt∗ − t0 k < 2η0 , z ∈ M. The Newton-Kantorovich Theorem (see Theorem A.2) yields us that if β0 , η0 and K0 satisfy the inequality h0 := β0 η0 K0 ≤ 1/2, then the equation F (t) = 0 has the unique solution (Z1 (z), Z2 (z)) in a ball √   1 − 1 − 2h0 2 0 η0 . B0 := t ∈ C : kt − t k ≤ h0 It means that



1 − 2h0 η0 = c4 pδ −3/2 , j = 1, 2, z ∈ M. h0 Finally, we derive the following bound for the Cauchy transform 1 1 2|G2ω (z)|c5 pδ −3/2 − , < z − 2Zω1/2 (z) z − Z1 (z) − Z2 (z) |1 − 2c5 pδ −3/2 |G2ω (z)|| |Zω1/2 (z) − Zj (z)| ≤

1−

|Gω (z) − Gν1 ν2 (z)| < c6 pδ −3/2 ,

z ∈ M.

(6.6)

Due to Theorem 3.2 the limits Zj (x) := limy↓0 Zj (x + iy), x ∈ [−2 + δ, 2 − δ], j = 1, 2 exist. Hence the limit Gν1 ν2 (x) := limy↓0 Gν1 ν2 (x + iy) also exists and from (6.6) the estimate follows: |Gω (x) − Gν1 ν2 (x)| ≤ c6 pδ −3/2 , x ∈ [−2 + δ, 2 − δ]. Hence we conclude |pω (x) − pν1 ν2 (x)| ≤ c7 pδ −3/2 , x ∈ [−2 + δ, 2 − δ]. √ It easy to see pω (x) > δ/π on [−2 + δ, 2 − δ]. If we choose p such that c7 pδ −2 < 1/2π, then pν1 ν2 (x) > 0 on [−2 + δ, 2 − δ]. Analyticity of pν1 ν2 follows from Theorem 3.2. 

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

19

Corollary 6.7. For all δ ∈ (0, 1/10) and m ≥ n ≥ c(µ, s)δ −4 the following measures have (ε ) a positive and analytic density: 1) ω  µs s , 2) µn , 3) µ ˜m+s . Moreover, Cauchy transforms Gωµ(εs ) , Gµn and Gµ˜m+s extend analytically to a neighbourhood of [−2 + δ, 2 − δ]. s

Proof. 1) Due to Lemma 6.3 the following bound holds: (εs )

dL (ω, ω  µs

)≤L

s X j=1

sL |εj | ≤ √ . n

By Proposition 6.6 the density pωµ(εs ) (x) is positive and analytic on [−2 + δ, 2 − δ] for s

n ≥ c(µ, s)δ −4 . 2) By the Berry-Esseen inequality (3.12) c(µ) dl (ω, µn ) ≤ √ . n By Proposition 6.6 the density pµn (x) is positive and analytic on [−2 + δ, 2 − δ], for n ≥ c(µ)δ −4 . 3) By the Berry-Esseen inequality (3.12) c1 (µ) c2 (µ, s) . dL (ω, µ ˜m+s ) ≤ √ + √ m n By Proposition 6.6 the density pµ˜m+s (x) is positive and analytic on [−2 + δ, 2 − δ] for m ≥ n ≥ c(µ, s)δ −4 . Analyticity of the Cauchy transforms follows from Theorem 3.2.  Analytic continuation for Gµ˜m+s . Below we prove Theorem 6.8 which shows that the Cauchy transform Gµ˜m+s has an analytic continuation on √ K := {x + iy : x ∈ [−2 + 2δ, 2 − 2δ]; |y| < δ δ}. The idea of the proof is due to Wang [19]. Theorem 6.8. Let µ be a compactly supported measure on R with supp(µ) ⊂ [−L, L], zero mean and unit variance. For every δ ∈ (0, 1/10) and m ≥ n ≥ N (:= c(µ, s)δ −4 ) the Cauchy transform Gµ˜m+s has an analytic continuation on K such that Gµ˜m+s (z) = Gω (z) + ˜l(z), where |˜l(z)| ≤



c1 (s) √ m

+

c2 (s) n



√1 δ

(6.7)

on K.

Proof. The inverse function of Gµ˜m+s can be expressed as (−1)

Gµ˜m+s (w) =

2s X j=1

RDεj µ (w) +

m−s X

RD1/√m µ (w) +

j=1

P2s

1 , w

Pm−s for w, such that the series j=1 RDεj µ (w) and j=1 RD1/√n µ (w) converge. Due to the rescaling property of the R-transform (3.6) we have   m−s ∞ X sw m − s X w l RD1/√m µ (w) = w − + √ κl+1 √ . m m m j=1

l=2

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

20

For w ∈ Dθ,1.4 (see Lemma 6.4) by inequalities (3.4) we obtain the estimate:   ∞ X w l κl+1 √ ≤ m

16L3 |w| √ . m − 4L|w| m

l=2

16L3 |w| √ m−4L|w| m

We can choose m (≥ N ) such that m−s X



1 m.

Hence

RD1/√m µ (w) = w + g1 (w),

j=1

√ where |g1 (w)| ≤ c1 (s)/ m on Dθ,1.4 , m ≥ N . In the same way we obtain the estimate: 2s 2s ∞ X X X l εj Rµ (εj w) ≤ εj κl+1 (εj w) j=1 j=1 l=1 ≤



2s X

|εj |2 |w| +

2s X

j=1

j=1

2s X

2s X

|εj |2 |w| +

j=1

|εj |

∞ X

|κl+1 |(|εj ||w|)l

l=2

|εj |

j=1

32L3 |εj |2 |w|2 . 1 − 4L|εj ||w|

We can choose n such that 32L3 |εj ||w| ≤ 1, 1 − 4L|εj ||w|

w ∈ Dθ,1.4 ,

which leads to the estimate X X 2s 2s c2 (s) εj Rµ (εj w) ≤ , 2|εj |2 |w| ≤ n j=1 j=1

w ∈ Dθ,1.4 .

Due to Lemma 6.4 we know Gω (Kδ ) ⊂ Dθ,1.4 . Thus replacing w by Gω the we get in view of the functional equation (3.11) (−1)

f (z) := Gµ˜m+s (Gω (z)) = z + g(z),

z ∈ Kδ ,

where g(z) considered as a power series in z g(z) =

2s X j=1

m−s εj Rµ (εj Gω (z)) + √ Rµ (m−1/2 Gω (z)) m

converges uniformly on Kδ to zero as n → ∞ and the estimate c1 (s) c2 (s) |g(z)| ≤ √ + n m

(6.8)

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

21

holds uniformly on Kδ for m ≥ n ≥ N . The uniform bound of |g(z)| and (6.8) imply that the rectangle K is contained in the set f (Kδ ). Rouch´e’s Theorem (see [16]) implies that each function f has an analytic inverse f (−1) defined on K. Due to (6.8) it follows that     z = f f (−1) (z) = f (−1) (z) + g f (−1) (z) f (−1) (z) = z − ge(z),

z ∈ K,  where ge(z) = −g f (−1) (z) , f (−1) (z) ∈ Kδ for z ∈ K, hence c1 (s) c2 (s) |e g (z)| ≤ √ + , z ∈ K, m ≥ n ≥ N. n m By Corollary 6.7 the function Gµ˜m+s has an analytic continuation to the interval [−2 + (−1) δ, 2 − δ] for m ≥ n ≥ N . The composition Gω ◦ Gµ˜m+s is defined and analytic in a neighbourhood of the interval [−2 + δ, 2 − δ] and hence, it coincides with the function f (−1) on [−2 + 2δ, 2 − 2δ]. We conclude G(−1) (Gµ˜m+s (z)) = f (−1) (z) = z + ge(z), ω

z ∈ K,

m ≥ n ≥ N.

Let us estimate |G0ω (z)| on K. It is easy to see √ iz 1 i(2 − iδ δ) 1 1 ≤ + √ |G0ω (z)| = + √ ≤ √ , 2 2 2 2 4−z 4 2δ 2 δ

(6.9)

z ∈ K.

Applying Gω on (6.9), we get Gµ˜m+s (z) = Gω (z + ge(z)) = Gω (z) + ˜l(z),

z ∈ K,

m ≥ n ≥ N,

where |˜l(z)| ≤ sup |G0ω (z)||e g (z)| ≤ z∈K



c1 (s) c2 (s) √ + n m



1 √ , z ∈ K, m ≥ n ≥ N. δ

Thus the theorem is proved.



Proof of Corollary 2.1. The statement follows from Theorem 6.8 with m = n and s = 0.  Proof of Corollary 5.1. In Theorem 6.8 we put g1 (z) = 0, thus the corollary is proved.



Proof of Corollary 5.3. Combining Corollary 5.1 and Theorem 6.8 we obtain the statement.  (−1)

Proof of Corollary 5.4. The function Gµ˜m+s (w), w ∈ Dθ,1.4 is symmetric and compatible. Hence by (6.8) and (6.9) we may conclude that Gµ˜m+s (z), z ∈ K is symmetric and compatible. 

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

22

Proofs of Theorem 5.5 and Theorem 5.6. The results obtained so far allow us to prove Theorem 5.5. Proof of Theorem 5.5. Let us define the set

√ U0 := {η 2s ∈ C2s : |ηj | ≤ 1/ n, j = 1, . . . , 2s}

and the function ∞

G(−1) (η 2s , w) = w +

1 sw m − s X − + √ κl+1 w m m



l=2

w √ m

l +

2s X

ηj

j=1

∞ X

κl+1 (ηj w)l ,

l=1

where w ∈ Dθ,1.4 , η 2s ∈ U0 , such that G(−1) (η 2s , w)

(−1)

η 2s =ε2s

= Gµ˜m+s (w).

The function G(−1) (η 2s , w) is analytic on U0 × Dθ,1.4 . Consider the function F (η 2s , z, w) = G(−1) (η 2s , w) − z, for w ∈ Dθ,1.4 , z ∈ G(−1) (η 2s , Dθ,1.4 ) and η 2s ∈ U0 . This function is analytic on U0 × G(−1) (η 2s , Dθ,1.4 ) × Dθ,1.4 . For fixed ε02s ∈ R2s ∩ U0 , w0 ∈ Dθ,1.4 and fixed z0 = G(−1) (ε02s , w0 ) ∈ G(−1) (ε02s , Dθ,1.4 ) we have F (ε02s , z0 , w0 ) = 0

and

∂ F (ε02s , z0 , w0 ) ∂w ∞

1 s m−sX = 1− 2 − + lκl+1 m m w0 l=2



w √0 m

l−1 +

2s X

(ε0j )2

j=1

∞ X

lκl+1 (ε0j w0 )l−1.

l=1

Using the estimates |w02 − 1| > sin2 θ > δ/16 on Dθ,1.4 and  l−1 X ∞ 2s ∞ X X m − s w0 c s 0 2 0 l−1 lκl+1 √ + (εj ) lκl+1 (εj w0 ) − ≤ √ , m m m n j=1 l=2 l=1 we conclude

∂ 0 ∂w F (ε2s , z0 , w0 ) > cδ > 0.

Due to the Implicit Function Theorem (see Theorem A.3) for each point (ε02s , z0 , w0 ) there ˜0 × Uz × Uw ⊂ U0 × G(−1) (ε0 , Dθ,1.4 ) × Dθ,1.4 and an is an open neighbourhood U = U 0 0 2s ˜0 × Uz → Uw such that G(η , z; ε0 , z0 ) = w0 . Moreover, analytic function G : U 0 0 2s 2s G(η 2s , z; ε02s , z0 ) = Gµ˜m+s (z), z0 ∈ K ⊂ G(−1) (ε02s , Dθ,1.4 ). η 2s =ε2s

0,2 Note, that for z01 6= z02 , z ∈ Uz01 ∩ Uz02 and ε0,1 2s 6= ε2s , η 2s ∈ Uε0,1 ∩ Uε0,2 the functions 2s

2s

0,2 2 1 G(η 2s , z; ε0,1 2s , z0 ) and G(η 2s , z; ε2s , z0 ) do not necessarily coincide, however 0,2 2 1 G(ε2s , z; ε0,1 ˜m+s (z), 2s , z0 ) = G(ε2s , z; ε2s , z0 ) = Gµ

z01 , z02 ∈ K,

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

23

since Gµ˜m+s (z) is uniquely defined for z ∈ K by Corollary 5.1. We conclude that Gµ˜m+s (z) is real analytic with respect to the 2s variables εj such that |εj | ≤ n−1/2 and complex analytic with respect to z ∈ K for m ≥ n ≥ N . Moreover, |G(η 2s , z, ε02s , z0 )| is uniformly bounded in a neighbourhood of (ε02s , z0 ), ε02s ∈ n × K.  2s R ∩ U0 , z0 ∈ K, n ≥ m ≥ N . Therefore, |Gµ˜m+s (z)| is uniformly bounded on Em,s Proof of Theorem 5.6. Consider the rescaled measures µ ˜m−s := Dm−1/2 µ  . . .  Dm−1/2 µ . | {z } m−s times

Let us calculate ∂ε∂ j Gµ˜m+s (z) at εj = 0, j = 1, . . . , 2s for z ∈ K, m ≥ n ≥ N . For this purpose, we differentiate the equation z = Rµ˜m+s (Gµ˜m+s (z)) +

1 Gµ˜m+s (z)

,

and arrive at 

∂ Gµ˜ (z) + Rµ (εj Gµ˜m+s (z)) ∂εj m+s ∂ Gµ˜ + εj Rµ0 (εj Gµ˜m+s (z))(Gµ˜m+s (z) + εj (z)) ∂εj m+s

0 =

+

Rµ0˜m−s (Gµ˜m+s (z))

2s X i=1

where

P2s



i=1

∂ ∗ 2 0 Gµ˜ (z) − εi Rµ (εi Gµ˜m+s (z)) ∂εj m+s

# ∂ G (z) µ ˜ m+s ∂εj , G2µ˜m+s (z) εj =0

(6.10)

means summation over all i 6= j. After simple computations we get ∂ Gµ˜m+s (z) ∂ ∂ε j 0 = Rµ0˜m−s (Gµ˜m+s (z)) Gµ˜ (z) − ∂εj m+s εj =0 G2µ˜m+s (z) εj =0

+

2s X

∗ 2 0 εi Rµ (εi Gµ˜m+s (z))

i=1

∂ Gµ˜m+s (z) , ∂εj εj =0

By the definition of the R-transform and taking into account that µ has zero mean and unit variance we obtain      ! ∞ X z m−s 0 s z l−1 0 Rµ˜m−s (z) = = 1+ . Rµ √ 1+ lκl+1 √ m m m m l=2

Finally,

∂ ˜m+s (z) ∂εj Gµ

"



satisfies the equation: ∞

X s 1+ 1+ lκl+1 m

+



Gµ˜m+s (z) √ m

l=2 2s ∞ X X 2 ∗ 2 Gµ˜m+s (z) εi lκl+1 i=1 l=2

l−1 !

G2µ˜m+s (z) − 1 #

l−1 εi Gµ˜m+s (z) )

∂ Gµ˜m+s (z) ∂εj

(6.11)

= 0. εj =0

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

24

Using the representation Gµ˜m+s (z) = Gω (z) + ˜l(z),

z ∈ K, m ≥ n ≥ N

where 

c1 (s) c2 (s) √ + n m



1 √ , z ∈ K, m ≥ n ≥ N δ we rewrite equation (6.11) in the following way ∂ = 0, (G2ω (z) − 1 + f (z)) Gµ˜m+s (z) ∂εj εj =0 |˜l(z)| ≤

where  2s  f (z) := 2Gω (z)˜l(z) + ˜l2 (z) + 1 + G2µ˜m+s (z) m 2s ∞ X X l−1 ∗ 2 + G2µ˜m+s (z) lκl+1 εi Gµ˜m+s (z) εi i=1

l=2

 ∞    Gµ˜m+s (z) l−1 2s X 2 √ lκl+1 . + Gµ˜m+s (z) 1 + m m l=2

Thus |f (z)| may be bounded as   1 1 c √ √ + , z ∈ K, m ≥ n ≥ N. |f (z)| ≤ m n δ Finally, we can find an N ∗ such that for all m ≥ n ≥ N |G2ω (z) − 1| > |f (z)|,

z ∈ ∂K,

see (6.14) below. By Rouch´e’s theorem we conclude that G2ω (z) − 1 + f (z) has no roots on K, m ≥ n ≥ N , thus ∂ε∂ j Gµ˜m+s (z) = 0 for z ∈ K, m ≥ n ≥ N . Thus the theorem is εj =0

proved.



Proofs of Theorem 2.2, Corollary 2.3 and Corollary 2.4. We start by computing the derivatives of Gωµ(εs ) . The extension Gωµ(εr ) is defined by (see (5.1)) s

r

z=

s X

RDεi µ (Gωµ(εs ) (z)) + Gωµ(εs ) (z) + s

i=1

s

1 Gωµ(εs ) (z)

.

s

In view of the rescaling property of the R-transform we arrive at s X 1 z= εi Rµ (εi Gωµ(εs ) (z)) + Gωµ(εs ) (z) + . s s Gωµ(εs ) (z) i=1

s

Below we will use the notation: h∞ (εs ; z) := Gωµ(εs ) (z). s

We set F (εs , z, h∞ (εs ; z)) :=

s X i=1

εi Rµ (εi h∞ (εs ; z)) + h∞ (εs ; z) +

1 − z. h∞ (εs ; z)

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

25

Using these representations we may determine the derivatives of h∞ (εs , z) as solutions of the equations = 0, |α| ≤ s. (6.12) Dα F (εs , z, h∞ (εs ; z)) εs =0

Let us compute the first derivative of h∞ (ε; z) at ε = 0, z ∈ K. Setting in (6.12) α = 1 we obtain ∂ F (ε, z, h∞ (ε; z)) = 0. ∂ε ε=0 The last equation can be rewritten in the following way    ∂ 0 Rµ (εh∞ (ε; z)) + εRµ (εh∞ (ε; z)) h∞ (ε; z) + ε h∞ (ε; z) (6.13) ∂ε # ∂ h (ε; z) ∂ ∞ ∂ε + h∞ (ε; z) − 2 = 0. ∂ε h∞ (ε; z) ε=0

After some computations, we arrive at the equation:   ∂ 1 h∞ (ε, z) = 0. 1− 2 Gω (z) ∂ε ε=0 q  Due to Lemma 6.4, Gω (K) ⊂ Dθ,1.4 , where 2 sin θ = 4δ 1 − 4δ . Hence |G2ω (z)| ≤ 1 − δ/16 and |G2ω (z) − 1| ≥ δ/16 > 0, z ∈ K. (6.14) ∂ Thus, we get ∂ε h∞ (ε; z) = 0. From (6.13) it follows that ε=0

Rµ (εh∞ (ε; z)) + εh∞ (ε; z)Rµ0 (εh∞ (ε; z)) ∂ . h∞ (ε; z) = 2 0 ∂ε h−2 ∞ (ε; z) − ε Rµ (εh∞ (ε; z)) − 1 Let us denote g(ε) : = Rµ (εh∞ (ε; z)) + εh∞ (ε; z)Rµ0 (εh∞ (ε; z)); 2 0 f (ε) : = h−2 ∞ (ε; z) − ε Rµ (εh∞ (ε; z)) − 1.

We have ∂3 h (ε; z) ∞ 3 ∂ε

 2g(ε)(f 0 (ε))2 2f 0 (ε)g 0 (ε) g(ε)f 00 (ε) g 00 (ε) = − − + . f 3 (ε) f 2 (ε) f 2 (ε) f (ε) ε=0 ε=0 It is easy to see that g(ε) = 0 and ε=0   ∂ 0 g (ε) = h∞ (ε; z) + ε h∞ (ε; z) ∂ε ε=0  × 2Rµ0 (εh∞ (ε; z)) + εh∞ (ε; z)Rµ00 (εh∞ (ε; z)) = 0. ε=0 g 00 (ε) ∂3 Finally, we see that ∂ε3 h∞ (ε; z) = f (ε) . 

ε=0

ε=0

In the next step we compute g 00 (ε) at zero that is, g 00 (ε) = 3G2ω (z)R00 (0), ε=0

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

26

where R00 (0) = 2κ3 . Summarizing all these relations we conclude that 6κ3 G4ω (z) ∂3 = h (ε; z) . ∞ ∂ε3 1 − G2ω (z) ε=0 Continuing this scheme we compute all necessary derivatives:    5 (z) 12 − 6G2 (z) + 6κ 1 − G2 (z) 2 4G 4 4 ω ω ω ∂ h∞ (ε; z) = ; 3 4 2 ∂ε ε=0 (1 − Gω (z))    6 (z) κ (120 − 72G2 (z)) + 48κ 1 − G2 (z) 2 5G 5 3 5 ω ω ω ∂ ; h∞ (ε; z) = 3 5 2 ∂ε ε=0 (1 − Gω (z))  8G5ω (z) 2 − G2ω (z) ∂4 = h∞ (ε2 ; z) ; ε2 =0 ∂ε21 ∂ε22 (1 − G2ω (z))3  12κ3 G6ω (z) 5 − 3G2ω (z) ∂5 ; h∞ (ε2 ; z) = ε2 =0 ∂ε21 ∂ε32 (1 − G2ω (z))3  72κ23 G7ω (z) 3 − 2G2ω (z) ∂6 = h∞ (ε2 ; z) ; ε2 =0 ∂ε31 ∂ε32 (1 − G2ω (z))3 ∂7 ; z) h (ε ∞ 2 ε2 =0 ∂ε31 ∂ε42 =

 144κ3 G8ω (z) (1 − G2ω (z))(κ4 (5G4ω (z) − 12G2ω (z) + 7) + 21) + 6G4ω (z) (1 − G2ω (z))5  144κ3 G8ω (z) 7 − 7G2ω (z) + 2G4ω (z)

;

∂7 h∞ (ε3 ; z) = ; ε3 =0 ∂ε31 ∂ε22 ∂ε23 (1 − G2ω (z))5  2 4 1296κ33 G10 ∂9 ω (z) 12 − 15Gω (z) + 5Gω (z) h∞ (ε3 ; z) . = ε3 =0 ∂ε31 ∂ε32 ∂ε33 (1 − G2ω (z))5 These calculations have been checked by computer algebra programs. Proof of Theorem 2.2. In order to compute the expansion for Gµn we apply Theorem 4.2. By Corollary 5.4 the extension Gµ˜m+s is symmetric and compatible, thus conditions (4.1), (4.2) hold. Due to Theorem 5.5 the extension Gµ˜m+s is infinitely differentiable with respect to ε2s , z ∈ K, m ≥ n ≥ N and conditions (4.7) and (4.8) hold. Theorem 5.6 shows that condition (4.3) holds. Therefore, we get an expansion together with estimates for the error term based on (4.9). In order to determine the expansion for Gµn (z), z ∈ K, n ≥ N we need to compute the derivatives of Gωµ(εs ) (z), z ∈ K at zero and plug the result into (4.10). s Using the derivatives of Gωµ(εs ) (z) equation (4.9) leads to s

κ3 G4ω (z) √ Gµn (z) = Gω (z) + (1 − G2ω (z)) n    1  Gω (z)5 Gω (z)7 Gω (z)5 2 2 + κ4 − κ3 + κ3 + 1 − G2ω (z) (1 − Gω (z)2 )2 (1 − Gω (z)2 )3 n

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

− −

4 2 κ33 G10 κ5 G6ω (z) ω (z) 5Gω (z) − 15Gω (z) + 12 + (G2ω (z) − 1) (G2ω (z) − 1)5 !    κ3 κ4 G8ω (z) 5G2ω (z) − 7 1 1 +O 3 3/2 2 n2 n (Gω (z) − 1)

27



(6.15)

for z ∈ K, n ≥ c(µ)δ −4 .



Proof of Corollary 2.3. In order to determine the expansion for densities we have to substitute the extension Gω (z) by formula (3.10) on the left-hand side of (6.15) and get the density using Stieltjes inversion formula (3.1) by taking the imaginary part.  Proof of Corollary 2.4. We integrate the expansion for densities and obtain the desired expansion for distributions.  Appendix A. Auxiliary results Theorem A.1 ([20]). Consider vector spaces X, Y over R and a sequence {fn }n of functions fn : A → Y , A ⊂ X. If all functions fn are differentiable on A and the sequence {fn0 }n converges uniformly on A, and if the sequence {fn }n converges at one point x0 ∈ A, then {fn }n converges to f uniformly on A. Moreover, f is differentiable and f 0 (x) = limn→∞ fn0 (x), x ∈ A. Theorem A.2 (Newton-Kantorovich, [12]). Consider vector spaces X, Y over C and a functional equation F (t) = 0, where F : X → Y . Assume that the conditions hold: (1) F is differentiable at t0 ∈ X, kF 0 (t0 )−1 kY ≤ β0 . (2) t0 solves approximately F (t) = 0 with estimate kF 0 (t0 )−1 F (t0 )kY ≤ η0 . (3) F 00 (t) is bounded in B0 (see below): kF 00 (t)kY ≤ K0 . (4) β0 , η0 , K0 satisfy the inequality h0 = β0 η0 K0 ≤ 12 . Then there is the unique root t∗ of F in B0 := {t ∈ X : kt − t0 kX ≤

√ 1− 1−2h0 η0 }. h0

Theorem A.3 (Implicit function theorem, [10]). Let B ⊂ Cr+1 × C be an open set, F : B → C an analytic mapping, and (z0 , w0 ) ∈ B a point with F (z0 , w0 ) = 0 and   ∂F det (z0 , w0 ) 6= 0. ∂zr+2 Then there is an open neighbourhood U = U 0 × U 00 ⊂ B and an analytic map g : U 0 → U 00 such that {(z, w) ∈ U 0 × U 00 : F (z, w) = 0} = {(z, g(z)) : z ∈ U 0 }. Appendix B. Proof of the general scheme for asymptotic expansions For the simplicity we will use the following short cut: hn (εn ) := hn (εn ; t). Proof of Proposition 4.1. As before, we denote εm := (ε1 , . . . , εm ) ∈ Rm , where if not specified otherwise ε1 = · · · = εm = m−1/2 . Let us denote σ 2 := (σ1 , σ2 ) ∈ R2 such that |σj | ≤ n−1/2 , j = 1, 2, m ≥ n > 3. We will identify (εm , σ 2 ), and (εm , 0, σ 2 ) ∈ Rm+3 . In particular, notice that hm+3 (εm , 0, σ 2 ) = hm+2 (εm , σ 2 ).

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

28

We will also use the following notation hm (εm−k ) := hm (εm−k , 0, . . . , 0), m ≥ k > 0. | {z } k

Now we expand the function hm+3 (εm+1 , σ 2 ) at the point (εm , σ 2 ) and get hm+3 (εm+1 , σ 2 ) = hm+3 (εm , σ 2 ) X α!−1 Dα hm+3 (εm , σ 2 )((εm+1 , σ 2 ) − (εm , σ 2 ))α + R3 (m), +

(B.1)

|α|≤2

where R3 (m) is a remainder in the Lagrange form:  3 ∂ ∂ 1 t1 + · · · + tm+1 R3 (m) = hm+1 (εm+1 − θtm+1 ), 3! ∂ε1 ∂εm+1

(B.2)

where tj = m−1/2 − (m + 1)−1/2 , j = 1, . . . , m, tm+1 = m−1/2 and 0 < θ < 1. We can deduce the estimate for R3 (m) from |m−1/2 − (m + 1)−1/2 | ≤ cm−3/2 and counting number of terms in (B.2): |R3 (m)| ≤ cd3 (h, n)m−3/2 ,

m ≥ n > s.

(B.3)

We rewrite (B.1) in the following way: hm+3 (εm , σ 2 ) − hm+3 (εm+1 , σ 2 ) X = − α!−1 Dα hm+3 (εm , σ 2 )((εm+1 , σ 2 ) − (εm , σ 2 ))α − R3 (m).

(B.4)

|α|≤2

The next step is expanding the derivatives on the right-hand side and making use of condition (4.3). We start with the second mixed derivatives in (B.4) ∂ ∂ ∂ ∂ hm+3 (εm , σ 2 ) = hm+3 (εm , σ 2 ) + O(d3 (h, n)m−1/2 ) ∂εj ∂εk ∂εj ∂εk εj =εk =0 = O(d3 (h, n)m−1/2 ),

j 6= k.

The other derivatives in (B.4) have the expansions ∂ ∂2 hm+3 (εm , σ 2 ) = h (ε , σ ) m−1/2 + O(d3 (h, n)m−1 ), m+3 m 2 ∂εj εj =0 ∂ε2j ∂2 hm+3 (εm , σ 2 ) = ∂ε2j

∂2 h (ε , σ ) + O(d3 (h, n)m−1/2 ). m+3 m 2 2 εj =0 ∂εj

Replacing the derivatives in (4.3) by their expansions we obtain hm+3 (εm , σ 2 ) − hm+3 (εm+1 , σ 2 )   m X ∂2 1 −1 −1 = hm+3 (εm , σ 2 ) (m − (m + 1) ) εj =0 2 ∂ε2j j=1   1 ∂2 − (m + 1)−1 2 hm+3 (εm , σ 2 ) + O d3 (h, n)m−3/2 . 2 εm+1 =0 ∂εm+1

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

29

Since the function hm+3 (·) is symmetric we arrive at hm+3 (εm , σ 2 ) − hm+3 (εm+1 , σ 2 ) (B.5)  2  2 ∂ 1 ∂ hm+3 (εm , σ 2 ) = − 2 hm+3 (εm , σ 2 ) 2 2(m + 1) ∂ε1 ε1 =0 εm+1 =0 ∂εm+1   + O d3 (h, n)m−3/2 . ∂2 , j = 1, . . . , m we In order to eliminate zero at the (m + 1)st place of ∂ε 2 hm+3 (εm , σ 2 ) εj =0

j

apply the Taylor series in the following way: ∂2 ∂2 = 2 hm+3 (εm , σ 2 ) hm+3 (εm , σ 2 ) εj =0 εj =0, ∂ε2j ∂εj



εm+1 =m−1/2

−1/2

+ O d3 (h, n)m



. (B.6)

Plugging (B.6) into (B.5) and using the symmetry condition we conclude   hm+3 (εm , σ 2 ) − hm+3 (εm+1 , σ 2 ) = O d3 (h, n)m−3/2 . It is easy to see that   hm+k+2 (εm+k , σ 2 ) − hm+k+3 (εm+k+1 , σ 2 ) = O d3 (h, n)(m + k)−3/2 . Summing up these differences for r ≥ m, we obtain r−1 X

(hm+k+2 (εm+k , σ 2 ) − hm+k+3 (εm+k+1 , σ 2 )) = O (d3 (h, n))

r−1 X

(m + k)−3/2 .

k=0

k=0

Hence, hm+2 (εm , σ 2 ) − hm+r+2 (εm+r , σ 2 ) = O (d3 (h, n))

r−1 X

(m + k)−3/2 .

(B.7)

k=0

Finally, (B.7) shows that hm+2 (εm , σ 2 ), m = n, n + 1, . . . is a Cauchy sequence in m with a limit which we denote by h∞ (σ 2 ), |σj | ≤ n−1/2 , j = 1, 2. Taking m = n and letting r → ∞ in (B.7) we obtain   hn+2 (n−1/2 , . . . , n−1/2 , σ 2 ) − h∞ (σ 2 ) = O d3 (h, n)n−1/2 , which proves the proposition.



The following lemma describes the procedure of eliminating zeros like the one that is used in (B.6). The lemma shows that additional variables can be introduced (according to the compatibility property of hm ). Then we can differentiate with respect to the additional variables at zero instead of differentiating with respect to εj , j = 1, ..., m + 1. Lemma B.1. Suppose that conditions (4.1) − (4.3) hold. Then k X ∂j h (ε, ε , . . . , ε ) j!−1 (η j − εj ) m+1 2 m+1 ∂εj ε=0

(B.8)

j=1

=

k X r=1

Per ((η . − ε. )κ. (D)) hm+1+k (λ1 , . . . , λk , ε, ε2 , . . . , εm+1 )

λk =0

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

30

+ O(m−(k+1)/2 ), where the differential operators Per and κp are defined in (B.9) below and (4.4), and (η . − ε. )κ. (D) := ((η p − εp )κp (D), p = 1, . . . , r). Proof. The differential operators Per (τ. κ. ) are polynomials in the cumulant operators κp (see (4.4)) multiplied by formal variables τp , p = 1, . . . , r. These polynomials are defined by the formal power series in τp   ∞ ∞ X X Pej (τ. κ. (D))µj = exp  j!−1 τj κj (D)µj  . (B.9) j=0

j=2

When τj = τ j , j ≥ 1, then due to (4.4) we have ∞ X

Pej (τ. κ. (D)) = 1 +

j=0

∞ X

j!−1 τj Dj .

j=2

Hence, Pe0 (τ. κ. (D)) = 1, Pe1 (τ. κ. (D)) = 0 and Pej (τ. κ. (D)) = j!−1 τj Dj , j ≥ 2, which means that the differential operators Per are nothing else than derivatives of order r multiplied by r!−1 and the corresponding power of the formal variable τr . It easy to see that Per gives the rth term in the Taylor expansion so that we can write hm (εm ) =

k X

Pej (ε.i κ. (D))hm (εm )

εi =0

j=0

+ O(m−(k+1)/2 ),

i = 1, . . . , m.

Notice that Per depends on the cumulant differential operators κ. (D). These operators consist of derivatives with respect to multi-variables, for instance κ4 (D) = D4 −3D2 D2 . Here D2 D2 denotes differentiation with respect to two different variables (we do not need to specify the variables because of the symmetry condition). Therefore, we introduce additional variables, say λk , and write hm (εm ) =

k X

Pej (ε.i κ. (D))hm+k (λk , εm )

j=0

λk =εi =0

+ O(m−(k+1)/2 ),

i = 1, . . . , m. The advantage of the operators Per is that they are defined by exponents which can be easily reordered by the properties of exponential functions. Due to (B.9) and the multiplication theorem for exponential functions we obtain X Pej (τ. κ. )Pel (τ.0 κ. ) = Per ((τ. + τ.0 )κ. ) (τ. = (τ1 , . . . , τr )). j+l=r

In order to prove the theorem we start from the right-hand side of (B.8): k X

. . e Pr ((η − ε )κ. (D))hm+1+k (λ1 , . . . λk , ε, ε2 , . . . , εm+1 )

r=1

=

k X r=1

Per ((η . − ε. )κ. (D))

k−r X l=0

Pel (ε. κ. (D))

λ1 =···=λk =0

ASYMPTOTIC EXPANSIONS IN FREE LIMIT THEOREMS

× hm+1+k (λ1 , . . . , λk , ε, ε2 , . . . , εm+1 )

λ1 =···=λk =ε=0

=

k X X j=1

+ O(m−(k+1)/2 )

Per ((η . − ε. )κ. (D))Pel (ε. κ. (D))

l+r=j r≥1

× hm+1+k (λ1 , . . . , λj , ε, ε2 , . . . , εm+1 ) =

31

k  X

λ1 =···=λk =ε=0

+ O(m−(k+1)/2 )

 Pej (η . κ. (D) − Pej (ε. κ. (D) hm+1+k (λk , ε, ε2 , . . . , εm+1 )

j=1

λk =0,ε=0

+ O(m−(k+1)/2 ) k X ∂j h (ε, ε , . . . , ε ) = j!−1 (η j − εj ) + O(m−(k+1)/2 ). m+1 2 m+1 j ∂ε ε=0 j=1

The last expression coincides with the left-hand side in (B.8), thus the theorem is proved.  Proof of Theorem 4.2. The theorem will be proved by induction on the length of the expansion, starting with s = 4. The case s = 3 was shown in Proposition 4.1. Assume that m ≥ n (n ≥ 1). We start with the expansion hm+1 (εm ) − hm+1 (εm+1 ) X = − α!−1 Dα hm+1 (εm )(εm+1 − εm )α + Rs (m),

(B.10)

0