Tail Dependence Coefficients of Multivariate Elliptical Distributions

Lixin Song, Dawei Lu, and Xiuli Wang
School of Mathematical Sciences, Dalian University of Technology, Dalian, 116023, China
Emails: [email protected], ludawei−[email protected]

Zexi Song
Yuming High School, Dalian, 116023, China

Abstract—In this article, the tail dependence coefficient of the multivariate elliptical distribution and its properties are obtained. Furthermore, some specific tail dependence coefficients (TDC) and regularly varying elliptical distributions are given. From these results, it is easy to see that the TDC is determined only by the tail index and the correlation coefficient of the multivariate elliptical distribution. Finally, to compare with the results in [4], some simulations are presented.

Index Terms—Tail dependence coefficient, Elliptical distribution, Regularly varying.

I. INTRODUCTION

In financial risk analysis, the tail properties of a random vector play an important role. The conditional probability

$$P(X > x \mid Y > y) \tag{1}$$

reflects the tail dependence of the random variables $X$ and $Y$ for sufficiently large $x$ and $y$. Traditional investigations of dependence are based on bivariate distributions; the lower and upper tail dependence coefficients (for the details see [9]) of a pair of random variables $X$ and $Y$ are defined by

$$\lambda_L = \lim_{u \to 0^+} P\{F_Y(Y) \le u \mid F_X(X) \le u\}, \qquad \lambda_U = \lim_{u \to 1^-} P\{F_Y(Y) > u \mid F_X(X) > u\},$$

where $F_X$ and $F_Y$ are the marginal cumulative distribution functions (c.d.f.s) of $X$ and $Y$, respectively. This has been done for bivariate tail dependence (see the details in [6]) and for bivariate extremal dependence (see the details in [7]). In [11], Schmidt extended the bivariate tail dependence coefficient to the multivariate one. It is well known that the bivariate normal distribution is asymptotically tail independent if its correlation $\rho < 1$. Schmidt in [11] proved that bivariate elliptical distributions possess the lower (or upper) tail dependence property if their generating random variables are heavy tailed. In [7], Frahm derived the formula for the extreme dependence coefficient of bivariate elliptical distributions, and Chan in [4] investigated the tail dependence of the multivariate t-copula. In this paper, we extend the formulas of tail dependence to multivariate elliptical distributions and express the tail dependence coefficient as a ratio of integrals of their underlying hypersphere distributions. Such an expression can be used to investigate the structural properties of tail dependence.

The rest of this paper is organized as follows. In Section 2, the multivariate tail dependence coefficient is given; we also give the specific forms of the tail dependence coefficient for elliptical distributions and regularly varying elliptical distributions. In Section 3, a simulation based on a uniform distribution example is presented to compare with the results in [4]. Finally, some future work is discussed in Section 4.
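These limiting conditional probabilities are easy to probe numerically. The sketch below is our illustration rather than anything from the paper; it assumes Python with NumPy and SciPy, and the helper name `empirical_upper_tdc` is ours. It replaces $F_X$ and $F_Y$ by empirical ranks and evaluates the conditional exceedance probability at a fixed threshold $u$ close to 1:

```python
import numpy as np
from scipy.stats import rankdata

def empirical_upper_tdc(x, y, u=0.99):
    """Finite-sample proxy for lambda_U = lim_{u->1-} P(F_Y(Y) > u | F_X(X) > u):
    replace F_X, F_Y by empirical ranks and evaluate at a fixed threshold u."""
    fx = rankdata(x) / (len(x) + 1.0)  # pseudo-observations of F_X(X)
    fy = rankdata(y) / (len(y) + 1.0)
    exceed = fx > u
    return np.nan if not exceed.any() else np.mean(fy[exceed] > u)

# Bivariate t samples (heavy tails) versus Gaussian samples (same correlation):
rng = np.random.default_rng(0)
rho, nu, n = 0.5, 3.0, 500_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
t = z / np.sqrt(rng.chisquare(nu, size=n) / nu)[:, None]  # t_nu via normal mixture
print(empirical_upper_tdc(t[:, 0], t[:, 1]))  # clearly positive (tail dependent)
print(empirical_upper_tdc(z[:, 0], z[:, 1]))  # near zero (tail independent)
```

For $\rho = 0.5$ and $\nu = 3$ the t sample gives a clearly positive value (the limiting TDC is about 0.31, see Section II), while the Gaussian estimate drifts toward zero as $u \to 1^-$.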

II. TDC

A. TDC of Elliptical Distributions

Before giving the main results of this paper, let us introduce some well-known conclusions. Schmidt in [11] extended upper tail dependence to the following general case:

Definition II.1. Let $X = (X_1, X_2, \cdots, X_n)$ be an $n$-dimensional random vector with joint c.d.f. $F$ and marginal distribution functions $F_1, F_2, \cdots, F_n$. For some subset $\emptyset \ne J \subset \{1, 2, \cdots, n\}$, if the following limit exists, $X$ is said to be upper tail dependent, and $\lambda_U^J$ is called the tail dependence coefficient (TDC):

$$\lambda_U^J = \lim_{u \to 1^-} P\{X_j > F_j^{-1}(u), \forall j \notin J \mid X_i > F_i^{-1}(u), \forall i \in J\}, \tag{2}$$

where $F_i^{-1}(u) = \inf\{t : F_i(t) \ge u\}$, $0 < u < 1$, is the generalized inverse of the distribution function $F_i$. Similarly, the lower tail dependence coefficient of $X$ can be defined by

$$\lambda_L^J = \lim_{u \to 0^+} P\{X_j < F_j^{-1}(u), \forall j \notin J \mid X_i < F_i^{-1}(u), \forall i \in J\}.$$

$X$ is said to be tail dependent if $\lambda_U^J$ or $\lambda_L^J$ is positive. If $\lambda_U^J = 0$ or $\lambda_L^J = 0$ for all $\emptyset \ne J \subset \{1, 2, \cdots, n\}$, then we say $X$ is tail independent.

If the characteristic function of a random vector $X$ has the form $\exp(it'\mu)\varphi(t'\Sigma t)$, we say that $X$ is generated from the multivariate elliptical distribution with parameters $\mu$, $\Sigma$ and $\varphi$, written $X \sim EC_n(\mu, \Sigma, \varphi)$, where $\mu$ is $n \times 1$, $\Sigma$ is $n \times n$ and $\Sigma \ge 0$. In particular, when $\mu = 0$ and $\Sigma = I_n$, $X$ is said to be spherically distributed, written $X \sim S_n(\varphi)$.


Any $n$-dimensional elliptically distributed random vector $X$ may be represented stochastically by $X \stackrel{d}{=} \mu + \mathcal{R}\Lambda U_n$ with $\Sigma = \Lambda\Lambda'$. The parameter vector $\mu$ is a location parameter, and the matrix $\Sigma$ determines the scale and the correlation of the random vector $X$. Here $U_n$ is an $n$-dimensional random vector uniformly distributed on the unit hypersphere $S = \{x \in R^n : \|x\|_E = 1\}$ (where $\|\cdot\|_E$ denotes the Euclidean norm), $\mathcal{R}$ is a nonnegative random variable (called the generating variate of $X$) stochastically independent of $U_n$, and $\Lambda \in R^{n \times n}$ has full rank (for the details see [3]). In [8], Hult and Lindskog calculated the covariance matrix of $X$, i.e. $\mathrm{cov}(X) = \Sigma \cdot E(\mathcal{R}^2)/n$, provided $E(\mathcal{R}^2)$ is finite. Let

$$\Sigma = \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{pmatrix}, \quad \sigma = \begin{pmatrix} \sigma_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_n \end{pmatrix}, \quad P = \begin{pmatrix} 1 & \cdots & \rho_{1n} \\ \vdots & \ddots & \vdots \\ \rho_{n1} & \cdots & 1 \end{pmatrix},$$

where $\sigma_i = \sqrt{\sigma_{ii}}$ and $\rho_{ij} = \sigma_{ij}/(\sigma_i\sigma_j)$, $i, j = 1, 2, \cdots, n$, so that $\Sigma = \sigma P \sigma$. It is well known that the t-distribution, as a very useful distribution, belongs to the elliptical family. Let $t_{n,\nu}(\mu, \sigma, P) \equiv t_{n,\nu}(\mu, \Sigma)$ denote the $n$-variate t-distribution with $\nu$ degrees of freedom.
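The stochastic representation translates directly into a sampler. The sketch below is our illustration (Python with NumPy assumed; the helper name `sample_elliptical` is ours): it draws $U_n$ by normalizing a standard Gaussian vector, takes $\Lambda$ as a Cholesky factor of $\Sigma$, and uses one convenient choice of generating variate for the $n$-variate $t_\nu$, namely $\mathcal{R}^2/n \sim F(n, \nu)$; it also checks the covariance formula $\mathrm{cov}(X) = \Sigma \cdot E(\mathcal{R}^2)/n$ quoted above.

```python
import numpy as np

def sample_elliptical(mu, Sigma, sample_R, size, rng):
    """Draw from EC_n(mu, Sigma, phi) via X = mu + R * Lambda U_n, where
    Lambda Lambda' = Sigma, U_n is uniform on the unit hypersphere, and the
    generating variate R = sample_R(size) is independent of U_n."""
    n = len(mu)
    Lam = np.linalg.cholesky(Sigma)                   # one valid choice of Lambda
    g = rng.standard_normal((size, n))
    U = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the sphere
    R = sample_R(size)
    return mu + R[:, None] * (U @ Lam.T)

rng = np.random.default_rng(1)
mu = np.zeros(3)
Sigma = np.array([[4.0, 0.0, 0.5], [0.0, 3.0, 1.5], [0.5, 1.5, 2.0]])
nu = 5.0
# Multivariate t_{3,nu}: R^2/3 ~ F(3, nu), so E(R^2)/3 = nu/(nu-2) for nu > 2.
X = sample_elliptical(mu, Sigma, lambda m: np.sqrt(3.0 * rng.f(3.0, nu, size=m)),
                      200_000, rng)
print(np.cov(X.T))            # ~ Sigma * E(R^2)/n
print(Sigma * nu / (nu - 2))  # = Sigma * 5/3
```

The printed sample covariance should match $\Sigma \cdot \nu/(\nu - 2)$, in agreement with the covariance formula of [8].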

Lemma II.1 ([2]). Let $X' = (X_1, X_2, \cdots, X_n) \sim t_{n,\nu}(0, I_n, P)$ with $0 < \nu \le \infty$ degrees of freedom and positive definite correlation matrix $P$. Let $\bar{X}_i$ be the $(n-1)$-dimensional subvector of $X$ after dropping the $i$-th component. Further, let $\bar{P}_i$ be the submatrix of $P$ after deleting the $i$-th row and the $i$-th column, whereas $\gamma_i$ corresponds to the $i$-th column of $P$ without the element $\rho_{ii} = 1$. Then,

$$\bar{X}_i \mid (X_i = x) \sim t_{n-1,\nu+1}\Bigl(\gamma_i x, \frac{\nu + x^2}{\nu + 1}\, P_i\Bigr), \quad i = 1, 2, \cdots, n,$$

where $P_i = \bar{P}_i - \gamma_i\gamma_i'$.

Using Lemma II.1, we obtain one of the main results of this paper as follows:

Theorem II.1. Let $X \sim t_{n,\nu}(\mu, \Sigma, P)$ with $0 < \nu \le \infty$ degrees of freedom and positive definite correlation matrix $P$, let $P^{(1)}$ be the submatrix of $P$ corresponding to the index set $J \subset \{1, 2, \cdots, n\}$, and let $\gamma_i^{(1)}$ correspond to the $i$-th column of $P^{(1)}$ without the element $\rho_{ii} = 1$. Then both the lower and the upper TDC of $X$ are

$$\lambda^J = \frac{\displaystyle\sum_{i=1}^{n} T_{n-1,\nu+1}\Bigl(-\sqrt{\nu+1}\,\sqrt{P_i}^{\,-1}(\mathbf{1} - \gamma_i)\Bigr)}{\displaystyle\sum_{i=1}^{m} T_{m-1,\nu+1}\Bigl(-\sqrt{\nu+1}\,\sqrt{P_i^{(1)}}^{\,-1}(\mathbf{1}^{(1)} - \gamma_i^{(1)})\Bigr)}, \tag{3}$$

where $m$ is the cardinality of $J$, $T_{n-1,\nu+1}$ denotes the c.d.f. of the $(n-1)$-variate t-distribution with $\nu + 1$ degrees of freedom, $\sqrt{P_i}$ is a nonsingular matrix such that $\sqrt{P_i}\sqrt{P_i}' = P_i$, $i = 1, 2, \cdots, n$, and the symbol $\mathbf{1}$ indicates a vector of ones.

Proof: Due to the symmetry of $X$, we have $\lambda_L^J = \lambda_U^J = \lambda^J$, so we take the lower TDC without loss of generality. Consider the standardized version of $X$, namely $X^* = (X_1^*, X_2^*, \cdots, X_n^*)$ with $X_i^* = (X_i - \mu_i)/\sigma_i$, $i = 1, 2, \cdots, n$. Then we obtain

$$\lambda^J = \lim_{u \to 0^+} \frac{P(F_1^*(X_1^*) < u, \cdots, F_n^*(X_n^*) < u)}{P(F_i^*(X_i^*) < u, \forall i \in J)} = \lim_{x \to -\infty} \frac{P(X_i^* < x, i \in \{1, 2, \cdots, n\})}{P(X_i^* < x, i \in J)},$$

where $F^*$ is the marginal c.d.f. of each component of $X^*$. Applying L'Hospital's rule, we find

$$\lambda^J = \lim_{x \to -\infty} \frac{dP(X_i^* < x, i \in \{1, 2, \cdots, n\})/dx}{dP(X_i^* < x, i \in J)/dx} = \lim_{x \to -\infty} \frac{\sum_{i=1}^{n} \partial P(X_i^* < x, i \in \{1, 2, \cdots, n\})/\partial x_i}{\sum_{i=1}^{m} \partial P(X_i^* < x, i \in J)/\partial x_i}.$$

Note that $\partial P\{X_i^* < x, i \in \{1, 2, \cdots, n\}\}/\partial x_i$ corresponds to $f_{X_i^*}(x) \cdot P(\bar{X}_i^* < \vec{x}_{(n-1)} \mid X_i^* = x)$, where $\bar{X}_i^*$ is the $(n-1)$-dimensional subvector of $X^*$ after dropping the $i$-th component, $\vec{x}_{(n-1)} = (x, x, \cdots, x)$ is an $(n-1)$-dimensional vector, and $f_{X_i^*}$ is the density function of $X_i^*$. From Lemma II.1 we know that

$$\bar{X}_i \mid (X_i = x) \sim t_{n-1,\nu+1}\Bigl(\gamma_i x, \frac{\nu + x^2}{\nu + 1}\, P_i\Bigr), \quad i = 1, 2, \cdots, n,$$

thus

$$\lambda^J = \lim_{x \to -\infty} \frac{\sum_{i=1}^{n} f_{X_i^*}(x)\, T_{n-1,\nu+1}\Bigl(x\sqrt{\tfrac{\nu+1}{\nu+x^2}}\,\sqrt{P_i}^{\,-1}(\mathbf{1} - \gamma_i)\Bigr)}{\sum_{i=1}^{m} f_{X_i^*}(x)\, T_{m-1,\nu+1}\Bigl(x\sqrt{\tfrac{\nu+1}{\nu+x^2}}\,\sqrt{P_i^{(1)}}^{\,-1}(\mathbf{1}^{(1)} - \gamma_i^{(1)})\Bigr)}.$$

In view of

$$x\sqrt{\frac{\nu+1}{\nu+x^2}} \to -\sqrt{\nu+1}, \quad x \to -\infty,$$

we obtain the formula in Theorem II.1. ∎

Hence, the tail dependence structure of a multivariate t-distributed random vector $X$ is essentially determined by its number $\nu$ of degrees of freedom and $P$. In particular, the TDC always vanishes if $\nu = \infty$, i.e. if $X \sim N(\mu, \Sigma)$.
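In the simplest case $n = 2$, $J = \{1\}$, the denominator of (3) is 1 and the two numerator terms coincide ($P_1 = 1 - \rho^2$, $\gamma_1 = \rho$), so (3) reduces to the familiar bivariate expression $\lambda = 2\,T_{\nu+1}\bigl(-\sqrt{(\nu+1)(1-\rho)/(1+\rho)}\bigr)$ (see, e.g., [6]). A small sanity check (our code, assuming Python with SciPy):

```python
import numpy as np
from scipy.stats import t

def bivariate_t_tdc(rho, nu):
    """TDC of the bivariate t: formula (3) with n = 2 and J = {1}."""
    return 2.0 * t.cdf(-np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho)), df=nu + 1.0)

print(bivariate_t_tdc(0.5, 3.0))    # ~0.31: positive for any finite nu
print(bivariate_t_tdc(0.5, 200.0))  # ~0: vanishes in the Gaussian limit nu -> infinity
```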


B. TDC of Regularly Varying Elliptical Distributions

A multivariate random vector $X$ (or its joint c.d.f.) is said to be regularly varying with tail index $\alpha > 0$ (for the details see [1]) if for every $t > 0$, as $x \to \infty$,

$$\frac{P(\{\|X\| > tx\} \cap \{X/\|X\| \in (\cdot)\})}{P(\|X\| > x)} \stackrel{v}{\to} t^{-\alpha} \cdot P(\Theta \in (\cdot)). \tag{4}$$

Here $\Theta$ denotes a random vector distributed on the unit hypersphere with respect to the norm $\|\cdot\|$, and the symbol $v$ stands for vague convergence. It is worth pointing out that whether a random vector is regularly varying or not does not depend on the choice of the norm in (4) (see the details in [8] and [5]).

Suppose that $X$ is elliptically distributed with $\mu = 0$. Let $\mathcal{R} = \|X\|_E$ and $\Lambda U_n = X/\|X\|_E$. Since $\mathcal{R}$ and $\Lambda U_n$ are independent, and $\Lambda U_n$ lives on the unit hypersphere with respect to the Euclidean norm, we obtain

$$\frac{P(\{\|X\|_E > tx\} \cap \{X/\|X\|_E \in (\cdot)\})}{P(\|X\|_E > x)} = \frac{P(\mathcal{R} > tx)}{P(\mathcal{R} > x)} \cdot P(\Lambda U_n \in (\cdot)).$$

So for every $t > 0$, we have

$$\lim_{x \to \infty} \frac{P(\mathcal{R} > tx)}{P(\mathcal{R} > x)} = t^{-\alpha},$$

i.e. $\mathcal{R}$ is regularly varying at $\infty$ with index $\alpha > 0$.
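As a quick numerical illustration (ours, not the paper's; Python with NumPy assumed): for the trivariate $t_\nu$, the generating variate can be taken as $\mathcal{R} = \sqrt{n F(n, \nu)}$, whose survival function decays like $x^{-\nu}$, so the ratio above should settle near $t^{-\nu}$.

```python
import numpy as np

# Empirical check that R = sqrt(n * F(n, nu)) (the t_nu generating variate)
# satisfies P(R > t*x) / P(R > x) -> t**(-nu) as x grows.
rng = np.random.default_rng(2)
n, nu = 3, 4.0
R = np.sqrt(n * rng.f(n, nu, size=5_000_000))
for tt, x in [(2.0, 4.0), (2.0, 8.0), (3.0, 8.0)]:
    ratio = np.mean(R > tt * x) / np.mean(R > x)
    print(f"t = {tt}, x = {x}: ratio = {ratio:.4f}  vs  t^(-nu) = {tt ** -nu:.4f}")
```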

Lemma II.2 ([2]). Let $X \sim EC_n(\mu, \Sigma, \varphi)$. Then $X^{(1)} \sim EC_m(\mu^{(1)}, \Sigma_{11}, \varphi)$ and $X^{(2)} \sim EC_{n-m}(\mu^{(2)}, \Sigma_{22}, \varphi)$, where

$$X = \begin{pmatrix} X^{(1)} \\ X^{(2)} \end{pmatrix}, \quad \mu = \begin{pmatrix} \mu^{(1)} \\ \mu^{(2)} \end{pmatrix}, \quad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

The next theorem follows from Lemma II.2.

Theorem II.2. Let $X$ be an $n$-dimensional regularly varying elliptically distributed random vector with positive definite correlation matrix $P$ and tail index $\alpha > 0$. Then both the lower and the upper TDC of $X$ correspond to

$$\lambda^J = \int_0^{(1/n)^{1/2}} y^\alpha \, dF_Y(y) \Bigm/ \int_0^{(1/m)^{1/2}} z^\alpha \, dF_Z(z), \tag{5}$$

where $F_Y$ and $F_Z$ are the c.d.f.s of the random variables $Y = \bigwedge_{i=1}^{n} (\sqrt{P}\,U_n)_i$ and $Z = \bigwedge_{i=1}^{m} (\sqrt{P^{(1)}}\,U_m)_i$, and $a \wedge b$ means the minimum of $a$ and $b$.

Proof: Let $X^* = (X - \mu)/\sigma = \mathcal{R}\sqrt{P}\,U_n$. Using a proof similar to that of Theorem II.1, we can get that the TDC is

$$\lambda^J = \lim_{x \to \infty} \frac{P(X_i^* > x, i \in \{1, 2, \cdots, n\})}{P(X_j^* > x, j \in J)}.$$

From Lemma II.2, we know that $X_J^*$, $J \subset \{1, 2, \cdots, n\}$, is also an elliptically distributed random vector, so

$$\lambda^J = \lim_{x \to \infty} \frac{P(\mathcal{R}\sqrt{P}\,U_n > \vec{x}_{(n)})}{P(\mathcal{R}\sqrt{P^{(1)}}\,U_m > \vec{x}_{(m)})} = \lim_{x \to \infty} \frac{P(\mathcal{R} \cdot Y > x)}{P(\mathcal{R} \cdot Z > x)}.$$

From the conditional expectation, we can obtain

$$\lambda^J = \lim_{x \to \infty} \frac{\int_0^{(1/n)^{1/2}} P(\mathcal{R} > x/y)\, dF_Y(y)}{\int_0^{(1/m)^{1/2}} P(\mathcal{R} > x/z)\, dF_Z(z)} = \frac{\int_0^{(1/n)^{1/2}} y^\alpha\, dF_Y(y)}{\int_0^{(1/m)^{1/2}} z^\alpha\, dF_Z(z)}. \qquad \blacksquare$$

From the proof above we can see that the distribution of $\sqrt{P}\,U_n$ is the key point for the tail dependence coefficient. Thus, in order to obtain it, we introduce the following lemma.

Lemma II.3 ([2]). Let $X$ be a spherically distributed random vector. Then $X \sim S_n(\varphi)$ if and only if $X \stackrel{d}{=} \mathcal{R}'z$, where $\stackrel{d}{=}$ means equality in distribution and $\mathcal{R}'$ is a nonnegative random variable independent of $z \sim N(0, I_n)$. The probability density function (p.d.f.) of $X$ is given by

$$\int_0^\infty \frac{1}{(2\pi r^2)^{n/2}} \exp\Bigl(-\frac{x'x}{2r^2}\Bigr)\, dF_\infty(r),$$

where $F_\infty(\cdot)$ is the c.d.f. of $\mathcal{R}'$.

Now let us calculate the distribution of $\sqrt{P}\,U_n$. Since $\Lambda U_n \sim S_n(\varphi)$, we have $\Lambda U_n \stackrel{d}{=} \mathcal{R}'z$. From the beginning of this subsection, $\Lambda = \sigma\sqrt{P}$, hence $\sigma\sqrt{P}\,U_n \stackrel{d}{=} \mathcal{R}'z$ if and only if $\sqrt{P}\,U_n \stackrel{d}{=} \mathcal{R}' \cdot \sigma^{-1}z$. Let $T = \sigma^{-1}z$ and $\sigma_1 = \sigma^{-1}$, so that $T \sim N(0, \sigma_1^2)$. Then $\sqrt{P}\,U_n \stackrel{d}{=} \mathcal{R}'T$, and its p.d.f. has the form

$$\int_0^\infty \frac{1}{(2\pi r^2\sigma_1^2)^{n/2}} \exp\Bigl(-\frac{x'x}{2r^2\sigma_1^2}\Bigr)\, dF_\infty(r),$$

where $\mathcal{R}'$, $z$ and $F_\infty(\cdot)$ all satisfy the conditions of Lemma II.3. The c.d.f. is then obtained from this form of the p.d.f.

Frahm in [7] showed that the tail index $\alpha$ is equal to the degrees of freedom of the t-distribution; this property also holds for the TDC. It is easy to see that the last equality of the proof is

$$\frac{\int_0^{\sqrt{1/n}} y^\alpha\, dF_Y(y)}{\int_0^{\sqrt{1/m}} z^\alpha\, dF_Z(z)} = \frac{E\bigl[y^\alpha \cdot \mathbb{I}\{y \in (0, \sqrt{1/n})\}\bigr]}{E\bigl[z^\alpha \cdot \mathbb{I}\{z \in (0, \sqrt{1/m})\}\bigr]},$$

where $\mathbb{I}\{\cdot\}$ denotes the indicator function. Since the TDC is determined only by the tail index and the correlation coefficient of the elliptical distribution, we can construct an equation to estimate it.


III. A SIMULATION STUDY

In view of Definition II.1, it is easy to see that the greater the cardinality of $J$, the larger the values of $\lambda_L^J$ and $\lambda_U^J$; thus we obtain $\lambda_L^m < \lambda_L^{m+1}$ and $\lambda_U^m < \lambda_U^{m+1}$, $m = 1, 2, \cdots, n-1$, where the superscript indicates the cardinality of $J$. Theorem II.2 shows that $\lambda$ is determined only by the tail index and the correlation coefficient of the random vector $X$, so we can estimate $\lambda$ via classical Monte Carlo simulation. Let $U^{(1)}, U^{(2)}, \cdots, U^{(N)}$ be a sample generated from the uniform distribution on the unit hypersphere; then the tail dependence coefficient $\lambda^J$ can be estimated by

$$\hat{\lambda}^J = \frac{\displaystyle\sum_{k=1}^{N} \Bigl(\bigwedge_{i=1}^{n} (\sqrt{P}\,U^{(k)})_i\Bigr)^{\alpha} \cdot \mathbb{I}\{U_i^{(k)} \ge 0\}}{\displaystyle\sum_{k=1}^{N} \Bigl(\bigwedge_{i \in J} (\sqrt{P^{(1)}}\,U^{(k)})_i\Bigr)^{\alpha} \cdot \mathbb{I}\{U_i^{(k)} \ge 0\}}, \quad \alpha \ge 2, \tag{6}$$

where $U^{(k)}$ is a vector and $U_i^{(k)}$ is its $i$-th element. In the following example, we choose the same $\Sigma$ as appeared in [4]. For each covariance matrix, we simulate 1,000,000 trivariate uniform vectors from the corresponding uniform distribution on the hypersphere, and also calculate the sample variances of $\hat{\lambda}^J$ for $\alpha = 10$ using 1000 independent runs. We use $\lambda^{ij}$ to indicate $\lambda^J$ with $J = \{i, j\}$. Consider

$$\Sigma_1 = \begin{pmatrix} 4 & 0 & 0.5 \\ 0 & 3 & 1.5 \\ 0.5 & 1.5 & 2 \end{pmatrix}, \quad \Sigma_2 = \begin{pmatrix} 4 & 1 & 0.5 \\ 1 & 3 & 1.5 \\ 0.5 & 1.5 & 2 \end{pmatrix}.$$
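The following sketch is our code, not the authors' (Python with NumPy assumed; helper names `sqrtm_pd`, `trunc_min_moment`, `tdc_estimate` are ours). It implements a Monte Carlo estimator in the spirit of (6), reading the numerator and denominator as truncated $\alpha$-moments of $Y = \bigwedge_{i=1}^{n}(\sqrt{P}\,U_n)_i$ and $Z = \bigwedge_{i=1}^{m}(\sqrt{P^{(1)}}\,U_m)_i$ from Theorem II.2, with independent uniform samples on the $n$- and $m$-dimensional spheres:

```python
import numpy as np

def sqrtm_pd(P):
    """Symmetric square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(w)) @ V.T

def trunc_min_moment(P, alpha, N, rng):
    """Estimate E[(min_i (sqrt(P) U)_i)^alpha ; min > 0], U uniform on the sphere."""
    k = P.shape[0]
    g = rng.standard_normal((N, k))
    U = g / np.linalg.norm(g, axis=1, keepdims=True)
    m = (U @ sqrtm_pd(P).T).min(axis=1)
    return np.mean(np.where(m > 0.0, m, 0.0) ** alpha)

def tdc_estimate(Sigma, J, alpha, N=1_000_000, rng=None):
    """Monte Carlo TDC estimate lambda^J for a regularly varying elliptical
    vector with dispersion Sigma and tail index alpha, via (5)/(6)."""
    rng = rng or np.random.default_rng()
    s = np.sqrt(np.diag(Sigma))
    P = Sigma / np.outer(s, s)    # correlation matrix: Sigma = sigma P sigma
    P1 = P[np.ix_(J, J)]          # submatrix P^(1) for the subset J
    return trunc_min_moment(P, alpha, N, rng) / trunc_min_moment(P1, alpha, N, rng)

Sigma1 = np.array([[4.0, 0.0, 0.5], [0.0, 3.0, 1.5], [0.5, 1.5, 2.0]])
Sigma2 = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 1.5], [0.5, 1.5, 2.0]])
rng = np.random.default_rng(3)
for name, S in (("Sigma1", Sigma1), ("Sigma2", Sigma2)):
    print(name, "lambda^{12} at alpha = 10:",
          tdc_estimate(S, J=[0, 1], alpha=10.0, rng=rng))
```

Since by the remark after Lemma II.3 the tail index $\alpha$ plays the role of the t degrees of freedom, such estimates can be compared against the curves in Figs. 1 and 2.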

Fig. 1: Comparison of TDC of $\Sigma_1$. [Figure: estimated $\lambda$ plotted against $\alpha$ (0 to 50); legend: lambda12 of sigma1, lambda12 of sigma2.]

Fig. 2: Comparison of TDC of $\Sigma_2$. [Figure: estimated $\lambda$ plotted against $\alpha$ (0 to 50); legend: lambda23 of sigma1, lambda23 of sigma2.]

Note that the only difference between $\Sigma_1$ and $\Sigma_2$ is that $\sigma_{12}$ is 0 in $\Sigma_1$ and 1 in $\Sigma_2$. As shown in Figs. 1 and 2, $\hat{\lambda}^{12}$ is locally decreasing in $\sigma_{12}$, whereas $\hat{\lambda}^{23}$ is increasing. The sample variance of $\hat{\lambda}^{12}$ for $\alpha = 10$ is 0.0002554, whereas the corresponding value for $\hat{\tau}^{12}$ in [4] is 0.00027624, where $\hat{\tau}^{12}$ and $\hat{\lambda}^{12}$ are different expressions of the TDC with $J = \{1, 2\}$. Our result is smaller than the one in [4].

IV. CONCLUDING REMARKS

Tail dependence of random variables is the limiting proportion of joint exceedances of these random variables over a large threshold, given that some of them have already exceeded the threshold. As we showed in this paper, formulas (3) and (5) express various tail dependence coefficients in terms of the tail index and the correlation matrix.

ACKNOWLEDGMENT

The authors would like to thank the referee for the careful reading of our manuscript and for helpful and valuable comments and suggestions. The research of the first author is supported by the Funds for Frontier Interdisciplines of DUT in 2010 (Grant No. DUT10JS06). The second author is supported by the Fundamental Research Funds for the Central Universities under number 3004-852005.

REFERENCES

[1] B. Basrak, R.A. Davis, T. Mikosch, A characterization of multivariate regular variation. Ann. Appl. Probab. 12 (2002) 908-920.
[2] M. Bilodeau, D. Brenner, Theory of Multivariate Statistics. Springer, Berlin (1999).
[3] S. Cambanis, S. Huang, G. Simons, On the theory of elliptically contoured distributions. J. Multivariate Anal. 11 (1981) 368-385.
[4] Y. Chan, Tail dependence for multivariate t-copulas and its monotonicity. Insurance: Mathematics and Economics 42 (2008) 763-770.
[5] D. Lu, W.V. Li, A note on multivariate Gaussian estimates. J. Math. Anal. Appl. 354 (2009) 704-707.
[6] P. Embrechts, F. Lindskog, A. McNeil, Modeling dependence with copulas and applications to risk management. In: Rachev, S. (Ed.), Handbook of Heavy Tailed Distributions in Finance. Elsevier (2003) 329-384 (Chapter 8).
[7] G. Frahm, On the extreme dependence coefficient of multivariate distributions. Statist. Probab. Lett. 76 (2006) 1470-1481.
[8] H. Hult, F. Lindskog, Multivariate extremes, aggregation and dependence in elliptical distributions. Adv. Appl. Probab. 34 (2002) 587-608.
[9] H. Joe, Multivariate Models and Dependence Concepts. Chapman & Hall (1997).
[10] R.B. Nelsen, An Introduction to Copulas. Springer, Berlin (1999).
[11] R. Schmidt, Tail dependence for elliptically contoured distributions. Math. Method. Oper. Res. 55 (2002) 301-327.
