DOI 10.1007/s10986-015-9290-z

Lithuanian Mathematical Journal, Vol. 55, No. 3, July, 2015, pp. 433–450

Random sums of random variables and vectors: Including infinite means and unequal length sums

Edward Omey^a and Rein Vesilo^b

^a Research Centre for Quantitative Business Functions, KU Leuven, Campus Brussels, Warmoesberg 26, 1000 Brussels, Belgium
^b Department of Engineering, Macquarie University, NSW 2109, Australia
(e-mail: [email protected]; [email protected])

Received May 19, 2014; revised October 28, 2014

Abstract. Let {X, X_i, i = 1, 2, ...} be independent nonnegative random variables with common distribution function F(x), and let N be an integer-valued random variable independent of X. Using S_0 = 0 and S_n = S_{n-1} + X_n, the random sum S_N has the distribution function $G(x) = \sum_{i=0}^{\infty} P(N = i)P(S_i \le x)$ and tail distribution $\overline{G}(x) = 1 - G(x)$. Under suitable conditions, it can be proved that $\overline{G}(x) \sim E(N)\overline{F}(x)$ as x → ∞. In this paper, we extend previous results to obtain general bounds, asymptotic bounds, and equalities for random sums where the components can be independent with infinite mean, regularly varying with index 1, or O-regularly varying. In the multivariate case, we obtain asymptotic equalities for multivariate sums with unequal numbers of terms in each dimension.

MSC: 60G50, 62E20, 60E99

Keywords: random sum, regular variation, subexponential distribution, O-regular variation, infinite mean, dependence

1 Introduction

Random sums of random variables and vectors with heavy-tailed distribution functions have applications in many areas, including risk theory, finance, teletraffic modeling, and queueing system analysis. In this paper, we consider a number of open issues for such sums. Let {X_i, i = 1, 2, ...} be independent nonnegative random variables (r.v.s) with distribution functions (d.f.s) F_i. We do not necessarily assume that the components of the sum are identically distributed nor that the mean E(X_i) is finite. Let N be an integer-valued r.v., independent of {X_i}, with p.d.f. (probability d.f.) $p_n = P(N = n)$. To avoid the trivial case, we assume that $p_0 < 1$. The partial sums are given by $S_0 = 0$ and $S_n = X_1 + X_2 + \cdots + X_n$, n ≥ 1. Replacing the index n by the random index N, we obtain the random sum $S_N = \sum_{i=1}^{N} X_i$. The d.f. of $S_N$ is given by $G(x) = P(S_N \le x)$. The tail distribution of G is given by $\overline{G}(x) = 1 - G(x)$. In the case of i.i.d. (independent identically distributed) random variables where $F = F_i$, we have $P(S_n \le x) = F^{*n}(x)$, where $F^{*n}$ denotes the n-fold convolution of F with itself. The d.f. of $S_N$ is given by $G(x) = \sum_{n=0}^{\infty} p_n F^{n*}(x)$, and we say that the distribution function G is subordinated to F with subordinator N. We assume throughout that E(N) is finite.


Later, we also study random sums $(S_N^{(1)}, S_M^{(2)})$ of i.i.d. random vectors {(X_i, Y_i), X_i ≥ 0, Y_i ≥ 0} with d.f. $F(x,y) = P(X_1 \le x, Y_1 \le y)$ and random indices (N, M). Random sums of bivariate vectors can be used to model insurance claims where each claim event can involve two claim components that may be dependent, for example, automobile accidents with personal injury and property damage components (see Léveillé [16]).

1.1 Classes of functions

To formulate our results, we recall some basic definitions. Let g denote a positive and measurable real function. The function g is regularly varying with real index α ($g \in RV(\alpha)$) if it satisfies $\lim_{x\to\infty} g(tx)/g(x) = t^{\alpha}$, t > 0. The function g is in the class L if it satisfies $\lim_{x\to\infty} g(t + x)/g(x) = 1$, t > 0. The function g is O-regularly varying ($g \in ORV$) if

$$\limsup_{x\to\infty} \frac{g(tx)}{g(x)} = \lambda_g(t) < \infty, \quad t > 0. \tag{1.1}$$

The upper and lower Matuszewska indices α(g) and β(g) of a function g are defined as follows:

$$\alpha(g) = \lim_{y\to\infty} \frac{\log \limsup_{x\to\infty} g(xy)/g(x)}{\log y} \quad\text{and}\quad \beta(g) = \lim_{y\to\infty} \frac{\log \liminf_{x\to\infty} g(xy)/g(x)}{\log y}.$$

It is well known that $g \in ORV$ if and only if α(g) and β(g) are finite. The classes RV and ORV have been useful for limit theory in probability theory, with applications in extreme-value theory, domains of attraction for sums of i.i.d. random variables, and renewal theory. For a survey of definitions, properties, and further applications of RV and ORV, we refer to Bingham et al. [3], Geluk and de Haan [14], Seneta [28], and Resnick [26]. A distribution function F is in the class S of subexponential distributions ($F \in S$) if $\lim_{x\to\infty} \overline{F^{*2}}(x)/\overline{F}(x) = 2$. It is well known that if $\overline{F} \in L \cap ORV$, then $F \in S$. The class S was introduced by Chistyakov [4] and Teugels [32] and studied by Chover et al. [5] and Cline [6]. For additional references, refer to the book by Embrechts et al. [11]. For a more recent reference, we refer to Foss et al. [13]. If $F \in S$, then it is well known that $\lim_{x\to\infty} \overline{F^{*n}}(x)/\overline{F}(x) = n$.
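The defining ratio of the class S can be estimated by simulation. The following is a minimal Monte Carlo sketch (our own added illustration; the Pareto tail $\overline{F}(x) = x^{-\alpha}$ and all numerical choices are assumptions, not taken from the paper):

```python
import numpy as np

# Estimate F*2-bar(x) / F-bar(x), which tends to 2 for subexponential F.
rng = np.random.default_rng(seed=1)
alpha = 1.5                      # assumed Pareto tail index
n = 10**6

# Pareto(alpha) on [1, inf): F-bar(x) = x**(-alpha)
x1 = rng.pareto(alpha, n) + 1.0
x2 = rng.pareto(alpha, n) + 1.0

for x in (10.0, 50.0, 200.0):
    # empirical tail of X1 + X2 divided by the exact tail of X
    print(x, np.mean(x1 + x2 > x) / x**(-alpha))
```

The printed ratios drift toward 2 as x grows, reflecting the single-big-jump behavior characteristic of S.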

1.2 Motivation, contributions, and related work

In the case of E(N) < ∞, the results collected in Proposition 1 are well known; see, for example, Embrechts et al. [10], Chover et al. [5], Stam [31], Shneer [29], Faÿ et al. [12], and Daley et al. [9]. If the X_i are i.i.d. and regularly varying with E(N) < ∞, then the best result to date for the tail of a random sum is given by (b).

Proposition 1.
(a) Suppose that $E((1+\varepsilon)^N) < \infty$ for some ε > 0. If $F \in S$, then $G \in S$ and $\overline{G}(x) \sim E(N)\overline{F}(x)$.
(b) If $\overline{F} \in RV(-\alpha)$, α > 1, and if $P(N > x) = o(P(X > x))$, then $\overline{G} \in RV(-\alpha)$ and $\overline{G}(x) \sim E(N)\overline{F}(x)$.
(c) If X ≥ 0 is an α-stable (0 < α < 1) r.v., then $\overline{F^{*n}}(x) \le n\overline{F}(x)$ and $\overline{G}(x) \le E(N)\overline{F}(x)$.

We address a number of gaps in the study of univariate and multivariate random sums. In the univariate case (Section 2), the main contributions are the derivation of bounds and asymptotic expressions for the tails of random sums with infinite-mean components in both the RV and ORV cases, when the components are independent but not necessarily i.i.d., and in the case where the components are regularly varying with index α = 1 and the integrated tail is in the class Π (see Section 2.4). This last case represents the boundary between the finite-mean and infinite-mean cases. In the second part of the paper (Section 3), we consider multivariate sums. We generalize a number of the univariate results of Section 2 and extend existing results on sums with subexponential marginal distributions from sums with equal numbers of component terms to sums with unequal numbers of component terms.


In particular, we extend Proposition 11 of Baltrūnas et al. [2] and apply it to sums with jointly regularly varying components. This is described in more detail in Section 3. In this paper, all limits are taken as x → ∞, t → ∞, or min(x, y) → ∞. The notation $a(x) \approx b(x)$ means that $u\,a(x) \le b(x) \le v\,a(x)$ for some u, v > 0 and all $x \ge x_0$. The notation $a(x) \sim b(x)$ means that $a(x)/b(x) \to 1$. We use similar notation for bivariate functions. We assume throughout that E(N) < ∞.
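Before turning to the univariate results, a hedged numerical sketch of Proposition 1 may be useful (an added illustration; the geometric subordinator and Pareto components are our assumptions, not the paper's):

```python
import numpy as np

# Check G-bar(x) ~ E(N) F-bar(x) for i.i.d. Pareto(alpha), alpha > 1,
# and a light-tailed N (Proposition 1(a)-(b)).
rng = np.random.default_rng(seed=0)
alpha, p = 2.5, 0.4              # N ~ Geometric(p) on {1, 2, ...}, E(N) = 1/p
n_runs = 10**6

N = rng.geometric(p, n_runs)
SN = np.array([(rng.pareto(alpha, k) + 1.0).sum() for k in N])  # random sums

EN = 1.0 / p
for x in (10.0, 30.0, 60.0):
    # empirical G-bar(x) / (E(N) F-bar(x)); slowly approaches 1 as x grows
    print(x, np.mean(SN > x) / (EN * x**(-alpha)))
```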

2 Univariate results

In this section, we consider univariate sums. We relax the regularly varying and i.i.d. assumptions to obtain universal inequalities as well as asymptotic inequalities and equalities that are applied to the RV, ORV, and independent cases. The focus is on the case where the X_i have infinite mean.

2.1 Integrated tail function

The integrated tail of F, defined by

$$m_F(x) = E\bigl(\min(X, x)\bigr) = \int_0^x \overline{F}(t)\,dt,$$

provides a means of deriving useful bounds. If $X_i$ has d.f. $F_i(x)$, then we denote the integrated tail by $m_i(x)$. For the sum $S_n$, n ≥ 1, we set

$$m_{S_n}(x) = \int_0^x P(S_n > z)\,dz.$$

Clearly, we have $x\overline{F}(x) \le m_F(x)$. The following result extends Lemma 5(i) of Daley et al. [9].

Lemma 1. We have:
(i) $m_{S_n}(x) \le \sum_{i=1}^n m_i(x)$ and
(ii) $P(S_n > x) \le \sum_{i=1}^n m_i(x)/x$.

Proof. Result (i) follows from the following inequality, obtained by applying $\min(a+b, c) \le \min(a, c) + \min(b, c)$:

$$m_{S_n}(x) = E\bigl(\min(S_n, x)\bigr) \le \sum_{i=1}^n E\bigl(\min(X_i, x)\bigr) = \sum_{i=1}^n m_i(x).$$

Using the inequality $xP(S_n > x) \le m_{S_n}(x)$, we obtain (ii). □

2.2 Infinite means – i.i.d. regularly varying components (0 ≤ α < 1)

The first result presented is for {X_i} i.i.d. regularly varying with index 0 ≤ α < 1, that is, E(X) = ∞. We obtain the results given by Faÿ et al. [12] in Propositions 4.1 and 4.8 using a different approach.

Theorem 1. For 0 ≤ α < 1, we have $\overline{F} \in RV(-\alpha)$ if and only if $\overline{G} \in RV(-\alpha)$. Both statements imply that $\overline{G}(x) \sim E(N)\overline{F}(x)$.

To prove this theorem, we will use the following results of Daley et al. [9].

Lemma 2. (See [9, Lemma 5].)
(i) We have $x\overline{G}(x) \le m_G(x) \le E(N)m_F(x)$.
(ii) If E(X) = ∞, then $m_G(x) \sim E(N)m_F(x)$.


Lemma 3. (See [9, Lemma 6].) For 0 ≤ α < 1, we have $\overline{F}(x) \in RV(-\alpha)$ if and only if $m_F(x) \in RV(1-\alpha)$. Both statements imply that $m_F(x) \sim x\overline{F}(x)/(1-\alpha)$.

Proof of Theorem 1. Since 0 ≤ α < 1, E(X) = ∞. Applying Lemma 2(ii) gives $m_G(x) \sim E(N)m_F(x)$. From Lemma 3, $\overline{F}(x) \in RV(-\alpha)$ iff $m_F(x) \in RV(1-\alpha)$. Hence, $m_G(x) \in RV(1-\alpha)$. The result $\overline{G} \in RV(-\alpha)$ follows from the monotone density theorem. In the reverse direction, if $\overline{G} \in RV(-\alpha)$, then $m_G(x) \in RV(1-\alpha)$ follows from Karamata's theorem. Since E(X) = ∞, we can apply Lemma 2(ii) to get $m_F(x) \sim m_G(x)/E(N) \in RV(1-\alpha)$. □
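As an added concrete check of Lemma 3 (our illustration; the Pareto tail is an assumed example, not from the paper), take $\overline{F}(x) = \min(1, x^{-1/2})$, so that α = 1/2 and E(X) = ∞. For x ≥ 1,

$$m_F(x) = \int_0^1 dt + \int_1^x t^{-1/2}\,dt = 2\sqrt{x} - 1 \sim 2\sqrt{x} = \frac{x\overline{F}(x)}{1-\alpha} \in RV(1/2),$$

and Theorem 1 then gives $\overline{G}(x) \sim E(N)\overline{F}(x)$ for any subordinator N with E(N) < ∞.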

2.3 Infinite means – independent components

We assume here that the components of the sum have infinite mean with independent but not necessarily identical distributions. We add extra assumptions about the asymptotic behavior of the tails $\overline{F}_i$. We use a similar approach to that used by Skučaitė [30] and Maejima [17] and introduce a locally integrable function μ(x) ≥ 0 to control tail behavior. The condition required for μ(x) appears under assumption (ii.a) in Theorem 2 and implies that E(X_i) = ∞.

Theorem 2. Assume that μ(x) ≥ 0.
(i) Suppose that $\liminf_{x\to\infty} \overline{F}_i(x)/\mu(x) \ge d(i)$, i = 1, 2, .... Then $\liminf_{x\to\infty} \overline{G}(x)/\mu(x) \ge E(D(N))$, where $D(n) = \sum_{i=1}^n d(i)$.
(ii) Suppose that $\overline{F}_i(x)/\mu(x) \to d(i)$, i = 1, 2, .... Assume further that:
(a) $\nu(x) = \int_0^x \mu(z)\,dz \uparrow \infty$ as x ↑ ∞;
(b) there exist a sequence {a(n), n = 1, 2, ...} and a constant c ≥ 0 such that E(a(N)) < ∞ and $\sum_{i=1}^n m_i(x) \le \nu(x)a(n)$, x ≥ c.

Then $m_G(x) \sim \nu(x)E(D(N))$.

The proof requires the following lim inf lower bounds, which are valid for all distributions with unbounded support on x > 0. The first bound is given by Theorem 2.11 of Foss et al. [13].

Lemma 4. For n = 1, 2, ..., we have

$$\liminf_{x\to\infty} \frac{P(S_n > x)}{\sum_{i=1}^n \overline{F}_i(x)} \ge 1. \tag{2.1}$$

The second bound is obtained by adapting the proof used for the first bound.

Lemma 5. For n = 1, 2, ..., we have

$$\liminf_{x\to\infty} \frac{m_G(x)}{\sum_{i=1}^n m_i(x)} \ge 1.$$

Proof. Using the same argument as in Theorem 2.11 of Foss et al. [13], we have

$$m_G(x) \ge \int_0^x \sum_{i=1}^n \overline{F}_i(z) \prod_{j \ne i} F_j(z)\,dz \equiv S(x).$$

The following upper bound is immediate:

$$S(x) \le \int_0^x \sum_{i=1}^n \overline{F}_i(z)\,dz. \tag{2.2}$$


For a lower bound, given ε > 0 small enough, we can find a fixed $x_0$ such that

$$S(x) = \int_0^{x_0} \sum_{i=1}^n \overline{F}_i(z) \prod_{j \ne i} F_j(z)\,dz + \int_{x_0}^x \sum_{i=1}^n \overline{F}_i(z) \prod_{j \ne i} F_j(z)\,dz \ge \int_0^{x_0} \sum_{i=1}^n \overline{F}_i(z) \prod_{j \ne i} F_j(z)\,dz + (1-\varepsilon)\int_{x_0}^x \sum_{i=1}^n \overline{F}_i(z)\,dz.$$

Since the first term is bounded and E(X_i) = ∞, we conclude

$$\lim_{x\to\infty} \frac{S(x)}{\int_0^x \sum_{i=1}^n \overline{F}_i(z)\,dz} = 1,$$

and the proof of the lemma follows immediately. □

Proof of Theorem 2. (i) First, note that

$$\frac{\overline{G}(x)}{\mu(x)} = \sum_{n=1}^{\infty} p_n \frac{P(S_n > x)}{\mu(x)}.$$

Using Fatou's lemma gives

$$\liminf_{x\to\infty} \frac{\overline{G}(x)}{\mu(x)} \ge \sum_{n=1}^{\infty} p_n \liminf_{x\to\infty} \frac{P(S_n > x)}{\sum_{i=1}^n \overline{F}_i(x)} \cdot \frac{\sum_{i=1}^n \overline{F}_i(x)}{\mu(x)}.$$

Using assumption (i), we have that

$$\liminf_{x\to\infty} \frac{\sum_{i=1}^n \overline{F}_i(x)}{\mu(x)} \ge \sum_{i=1}^n d(i) = D(n).$$

Applying $\liminf fg \ge \liminf f \cdot \liminf g$ and Lemma 4 completes the proof of (i).

(ii) By assumption, $\overline{F}_i(x)/\mu(x) \to d(i)$ for each fixed i = 1, 2, ... and ν(x) → ∞ as x → ∞. Thus, given ε > 0, there exists $x_0$ such that for $x > x_0$, we have $\overline{F}_i(x)/\mu(x) > d(i) - \varepsilon$. Hence,

$$\frac{m_i(x)}{\nu(x)} = \frac{\int_0^x \overline{F}_i(z)\,dz}{\nu(x)} \ge \frac{(d(i)-\varepsilon)\int_{x_0}^x \mu(z)\,dz + \int_0^{x_0} \overline{F}_i(z)\,dz}{\nu(x)} = \frac{(d(i)-\varepsilon)\nu(x) + \int_0^{x_0} \bigl(\overline{F}_i(z) - (d(i)-\varepsilon)\mu(z)\bigr)\,dz}{\nu(x)}.$$

Letting x → ∞, since ν(x) ↑ ∞, gives $\liminf_{x\to\infty} m_i(x)/\nu(x) \ge d(i) - \varepsilon$. Since ε is arbitrary, we have $\liminf_{x\to\infty} m_i(x)/\nu(x) \ge d(i)$. Similarly, $\limsup_{x\to\infty} m_i(x)/\nu(x) \le d(i)$, and so it follows that $m_i(x)/\nu(x) \to d(i)$ for each fixed i. Thus, we obtain that

$$\frac{\sum_{i=1}^n m_i(x)}{\nu(x)} \to \sum_{i=1}^n d(i) = D(n)$$


for an arbitrary fixed n ≥ 1. Using Lemma 1, we have

$$\frac{m_G(x)}{\nu(x)} = \sum_{n=1}^{\infty} p_n \frac{m_{S_n}(x)}{\nu(x)} \le \sum_{n=1}^{\infty} p_n \frac{\sum_{i=1}^n m_i(x)}{\nu(x)}.$$

The right-hand side of this is bounded above by $\sum_{n=1}^{\infty} p_n a(n)$ if x ≥ c. Hence, by assumption (b) we can apply Lebesgue's theorem on dominated convergence to conclude that

$$\lim_{x\to\infty} \sum_{n=1}^{\infty} p_n \frac{\sum_{i=1}^n m_i(x)}{\nu(x)} = \sum_{n=1}^{\infty} p_n D(n) = E\bigl(D(N)\bigr).$$

Hence, it follows that $\limsup_{x\to\infty} m_G(x)/\nu(x) \le E(D(N))$. On the other hand, from (i) we have $\liminf_{x\to\infty} \overline{G}(x)/\mu(x) \ge E(D(N))$. Applying Lemma 5 along similar lines as before, we have $\liminf_{x\to\infty} m_G(x)/\nu(x) \ge E(D(N))$, and we conclude that $m_G(x) \sim E(D(N))\nu(x)$. □

Example 1. Suppose $\overline{F}_i(x) = \overline{F}(x)$ for all i ≥ 1, where $\overline{F} \in RV(-\alpha)$, 0 < α < 1, and E(N) < ∞. We take $\mu(x) = \overline{F}(x)$ and d(i) = 1, giving D(n) = n and

$$\nu(x) = \int_0^x \bigl(1 - F(z)\bigr)\,dz \uparrow \infty$$

since E(X) = ∞. Trivially, $\sum_{i=1}^n m_i(x) = n\nu(x)$, and we can use a(n) = n.

Example 2. Suppose that the r.v.s $Y_i$ are i.i.d. with d.f. F, where $\overline{F} \in RV(-\alpha)$, 0 < α < 1. Let c(i) > 0, h(i) > 0. We consider subordination for random variables of the type

$$X_i \overset{d}{=} \frac{1}{c(i)}\max(Y_1, Y_2, \ldots, Y_{h(i)}),$$

which have d.f.s $F_i(x) = F^{h(i)}(c(i)x)$. As before, we use the integrated tails and, for i ≥ 1, set $m_i(x) = m_{F_i}(x)$, where

$$m_i(x) = \int_0^x \bigl(1 - F^{h(i)}(c(i)z)\bigr)\,dz.$$

Using $1 - y^b \sim b(1-y)$ as y → 1 (b > 0) and the regular variation of $\overline{F}$, we obtain $\overline{F}_i(x) = 1 - F^{h(i)}(c(i)x) \sim h(i)(c(i))^{-\alpha}(1 - F(x))$. Here we have μ(x) = 1 − F(x),

$$\nu(x) = \int_0^x \bigl(1 - F(z)\bigr)\,dz \uparrow \infty,$$

and $d(i) = h(i)(c(i))^{-\alpha}$. Using $1 - F^{h(i)}(z) \le h(i)(1 - F(z))$, we have

$$m_i(x) = \frac{1}{c(i)}\int_0^{c(i)x} \bigl(1 - F^{h(i)}(z)\bigr)\,dz \le \frac{h(i)}{c(i)}\int_0^{c(i)x} \bigl(1 - F(z)\bigr)\,dz,$$

from which we obtain

$$\sum_{i=1}^n m_i(x) \le \sum_{i=1}^n \frac{h(i)}{c(i)}\,m_F\bigl(c(i)x\bigr).$$

Since $m_F(x) \in RV(1-\alpha)$, Potter's bounds show that

$$\sup_{x \ge x_0} \frac{\sum_{i=1}^n m_i(x)}{m_F(x)} \le \sup_{x \ge x_0} \sum_{i=1}^n \frac{h(i)}{c(i)}\,\frac{m_F(c(i)x)}{m_F(x)} \le a(n),$$

where, with C > 0 a constant,

$$a(n) = C\sum_{i=1}^n \frac{h(i)}{c(i)}\max\bigl((c(i))^{1-\alpha+\varepsilon}, (c(i))^{1-\alpha-\varepsilon}\bigr).$$

Suppose that E(a(N)) < ∞. We can apply Theorem 2 to find that $m_G(x)/m_F(x) \to E(D(N))$. In this example, we also have by Karamata's theorem that $m_F(x) \sim x\overline{F}(x)/(1-\alpha)$. Using the monotone density theorem, we find that $\overline{G}(x) \sim E(D(N))\overline{F}(x)$.
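A hedged simulation sketch of Example 2 (our illustration; the concrete choices h(i) = 2, c(i) = 1, α = 0.5, and a geometric N are assumptions, not from the paper):

```python
import numpy as np

# Example 2 with h(i) = 2, c(i) = 1: X_i = max of two Pareto(alpha) variables,
# so d(i) = 2, D(n) = 2n, and G-bar(x) ~ E(D(N)) F-bar(x) = 2 E(N) F-bar(x).
rng = np.random.default_rng(seed=2)
alpha, p, n_runs = 0.5, 0.5, 2 * 10**5   # infinite-mean case; E(N) = 1/p = 2

N = rng.geometric(p, n_runs)
SN = np.array([np.maximum(rng.pareto(alpha, k) + 1.0,
                          rng.pareto(alpha, k) + 1.0).sum() for k in N])

EDN = 2.0 / p                            # E(D(N)) = 2 E(N)
for x in (1e2, 1e4, 1e6):
    print(x, np.mean(SN > x) / (EDN * x**(-alpha)))   # slowly approaches 1
```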

2.4 Subordination with i.i.d. regularly varying components (α = 1)

In the examples in the previous section, we considered distribution functions in the class RV(−α) for 0 < α < 1. In this section, we consider the case α = 1. Since $m_F \in RV(0)$, we utilize the following subclass of slowly varying functions to make this case more manageable. Suppose that $L \in RV(0)$ and that c ≥ 0. We say that $f \in \Pi_c(L)$ if f satisfies

$$\lim_{x\to\infty} \frac{f(tx) - f(x)}{L(x)} = c\log t, \quad t > 0. \tag{2.3}$$

The class $\Pi_c(L)$ appears in extreme-value theory and has been studied, among others, by Geluk and de Haan [14] and Bingham et al. [3]. Recall that the Laplace–Stieltjes transform (LST) of a function K(x) on [0, ∞) that has locally bounded variation and is right continuous is given by

$$\widehat{K}(s) = \int_0^{\infty} e^{-sx}\,dK(x).$$

Clearly, the LST of $F(x) = P(X \le x)$ is given by $\widehat{F}(s) = E(e^{-sX})$. Moreover, we have $\widehat{m}_F(s) = (1 - \widehat{F}(s))/s$, $\widehat{G}(s) = \sum_{n=0}^{\infty} p_n (\widehat{F}(s))^n$, and $\widehat{m}_G(s) = (1 - \widehat{G}(s))/s$. Define the functions

$$A(s) = \sum_{n=0}^{\infty} p_n s^n, \qquad B(s) = \frac{1 - A(s)}{1 - s}.$$

We clearly have A(1) = 1. Now suppose that $E(N^2) < \infty$. Applying l'Hôpital's rule gives

$$\lim_{s\to 1} B(s) = A'(1) = E(N), \qquad \lim_{s\to 1} \frac{E(N) - B(s)}{1 - s} = \frac{A''(1)}{2} = \frac{E(N(N-1))}{2}.$$


We can express $\widehat{G}(s) = \sum_{n=0}^{\infty} p_n (\widehat{F}(s))^n = A(\widehat{F}(s))$ and $\widehat{m}_G(s) = \frac{1 - \widehat{G}(s)}{1 - \widehat{F}(s)} \cdot \frac{1 - \widehat{F}(s)}{s} = B(\widehat{F}(s))\,\widehat{m}_F(s)$. We find that, as s → 1, $\widehat{m}_G(s) \sim B(1)\widehat{m}_F(s) = E(N)\widehat{m}_F(s)$ and

$$E(N)\widehat{m}_F(s) - \widehat{m}_G(s) = \widehat{m}_F(s)\bigl(E(N) - B(\widehat{F}(s))\bigr) \sim \tfrac{1}{2}E\bigl(N(N-1)\bigr)\bigl(1 - \widehat{F}(s)\bigr)\widehat{m}_F(s) = \tfrac{1}{2}E\bigl(N(N-1)\bigr)\,s\,\widehat{m}_F^{\,2}(s). \tag{2.4}$$

In the case α = 1, we have the following analogue of Lemma 3.

Lemma 6. (See [3, Thm. 3.9.1].) Suppose that $L \in RV(0)$ and c ≥ 0. We have

$$m_F(x) \in \Pi_c(L) \iff \widehat{m}_F\Bigl(\frac{1}{x}\Bigr) \in \Pi_c(L) \iff \frac{x\overline{F}(x)}{L(x)} \to c.$$

Moreover, each of the statements implies that $m_F(x) \sim \widehat{m}_F(1/x) \in RV(0)$ and that

$$\frac{m_F(x) - \widehat{m}_F(1/x)}{L(x)} \to c\gamma,$$

where γ denotes Euler's constant.

Using these results, we have the following theorem.

Theorem 3. Suppose that $L \in RV(0)$ and c ≥ 0, and assume that $E(N^2) < \infty$.
(i) If $x\overline{F}(x)/L(x) \to c$, then $x\overline{G}(x)/L(x) \to d$, where d = cE(N).
(ii) If $x\overline{G}(x)/L(x) \to d$, then $x\overline{F}(x)/L(x) \to c$, where c = d/E(N).
In both cases, we have $(E(N)m_F(x) - m_G(x))/L(x) \to 0$.

Proof. (i) If $x\overline{F}(x)/L(x) \to c$, then from Lemma 6 we have that $\widehat{m}_F(1/x) \in \Pi_c(L)$, $\widehat{m}_F(1/x) \sim m_F(x) \in RV(0)$, and

$$\frac{m_F(x) - \widehat{m}_F(1/x)}{L(x)} \to c\gamma.$$

Using (2.4), we have $E(N)\widehat{m}_F(1/x) - \widehat{m}_G(1/x) \sim \tfrac{1}{2}E(N(N-1))\,\widehat{m}_F^{\,2}(1/x)\,(1/x)$, which gives, noting that $\widehat{m}_F(1/x) \sim m_F(x)$ and $m_F, L \in RV(0)$,

$$\frac{E(N)\widehat{m}_F(1/x) - \widehat{m}_G(1/x)}{L(x)} \sim \tfrac{1}{2}E\bigl(N(N-1)\bigr)\frac{m_F^2(x)}{xL(x)} \to 0.$$

Since $\widehat{m}_F(1/x) \in \Pi_c(L)$, from (2.3) and the slow variation of L it follows that $\widehat{m}_G(1/x) \in \Pi_d(L)$ with d = cE(N). From this it follows that $x\overline{G}(x)/L(x) \to d$ and

$$\frac{E(N)m_F(x) - m_G(x)}{L(x)} \to 0.$$

(ii) Similar. □
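An added boundary-case illustration of Theorem 3 (our example; the choice of F is an assumption, not from the paper): let $\overline{F}(x) = \min(1, 1/x)$, so that $x\overline{F}(x) = 1$ for x ≥ 1, and we may take L(x) ≡ 1 and c = 1. Then

$$m_F(x) = \int_0^1 dt + \int_1^x \frac{dt}{t} = 1 + \log x, \qquad m_F(tx) - m_F(x) = \log t,$$

so $m_F \in \Pi_1(L)$ with L ≡ 1, and Theorem 3(i) yields $x\overline{G}(x) \to E(N)$ whenever $E(N^2) < \infty$.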

2.5 Random sums with O-regularly varying increments

In this section, we extend the regularly varying results of earlier sections to the O-regularly varying case. For the ORV classes considered here ($\beta(\overline{F}) > -1$), we have E(X) = ∞. We obtain the ORV version of Theorem 1, which is new.

Theorem 4. $\overline{F}(x) \in ORV$ with $\beta(\overline{F}) > -1$ if and only if $\overline{G} \in ORV$ with $\beta(\overline{G}) > -1$, and both statements imply that $\overline{G}(x) \approx \overline{F}(x)$.

For the proof, we will use the following lemma.

Lemma 7. (See [9, Lemma 6].) If $\overline{F}(x) \in ORV$ with $\beta(\overline{F}) > -1$, then $m_F(x) \approx x\overline{F}(x)$.

Proof of Theorem 4. First, assume that $\overline{F}(x) \in ORV$ with $\beta(\overline{F}) > -1$. From Lemma 7 we have $m_F(x) \approx x\overline{F}(x)$. Using Lemma 2(ii), we have $m_G(x) \sim E(N)m_F(x)$, and so $m_G(x) \approx x\overline{F}(x)$. By Lemma 2(i), $x\overline{G}(x) \le m_G(x)$, giving, for some constant A > 0 and sufficiently large x,

$$\frac{\overline{G}(x)}{\overline{F}(x)} = \frac{x\overline{G}(x)}{x\overline{F}(x)} \le \frac{m_G(x)}{x\overline{F}(x)} \le A.$$

For a lower bound, application of Fatou's lemma together with Lemma 4 gives $\liminf_{x\to\infty} \overline{G}(x)/\overline{F}(x) \ge E(N)$. By the preceding we have that $\overline{G}(x) \approx \overline{F}(x)$. But then it follows that $\overline{G}(x) \in ORV$ with $\beta(\overline{G}) > -1$.

Now assume that $\overline{G}(x) \in ORV$ with $\beta(\overline{G}) > -1$. Using $m_F(x) \approx m_G(x) \approx x\overline{G}(x)$ (cf. Lemma 7 and Lemma 2(ii)), we can find constants 0 < a < b and $x_0$ such that $ax\overline{G}(x) \le m_F(x) \le bx\overline{G}(x)$, $x \ge x_0$. For this part of the proof, take t > 1 and observe that, on the one hand,

$$m_F(xt) - m_F(x) = \int_x^{xt} \overline{F}(z)\,dz \le \overline{F}(x)\,x(t-1).$$

On the other hand, we have $m_F(xt) - m_F(x) \ge axt\overline{G}(xt) - bx\overline{G}(x)$, and we obtain $\overline{F}(x)(t-1) \ge at\overline{G}(xt) - b\overline{G}(x)$, so that

$$\frac{\overline{F}(x)}{\overline{G}(x)}(t-1) \ge at\frac{\overline{G}(xt)}{\overline{G}(x)} - b.$$

Since $\overline{G}(x) \in ORV$, we have that $\beta(\overline{G})$ is the supremum of those β for which, for some D > 0 and all T > 1, $\overline{G}(tx)/\overline{G}(x) \ge D(1 + o(1))t^{\beta}$ (x → ∞) uniformly in t ∈ [1, T] (see [3, Sect. 2.1.2]). Hence, with $\beta(\overline{G}) > -1$, for t sufficiently large, we find that

$$\liminf_{x\to\infty} \frac{\overline{F}(x)}{\overline{G}(x)}(t-1) > 0.$$

Since we always have the lim sup result, the theorem follows. □
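To see that Theorem 4 genuinely extends Theorem 1, the following added example (ours, not from the paper) exhibits a tail that is ORV but not regularly varying:

$$\overline{F}(x) = \min\biggl(1,\; x^{-1/2}\,\frac{3 + \sin(\log x)}{4}\biggr).$$

Since $3 + \sin(\log x) \in [2, 4]$, for large x we have $t^{-1/2}/2 \le \overline{F}(tx)/\overline{F}(x) \le 2t^{-1/2}$, so $\overline{F} \in ORV$ with $\alpha(\overline{F}) = \beta(\overline{F}) = -1/2 > -1$, while the oscillating factor prevents $\overline{F}(tx)/\overline{F}(x)$ from converging, so $\overline{F} \notin RV$. Theorem 4 nevertheless yields $\overline{G}(x) \approx \overline{F}(x)$.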

3 Multivariate case

In this section, we first generalize the univariate results for random sums given in Section 2 to the multivariate case. In Section 3.1, we derive lower bounds based on the integrated tail distribution. In Section 3.2 (see Lemma 9), we generalize the asymptotic lower bound of Lemma 4. In Section 3.3, we derive bounds for sums with infinite-mean components in the RV and ORV cases. In Section 3.4, we derive asymptotic expressions for the tail distribution of multivariate sums with subexponential marginals when the sums have unequal numbers of component terms and apply this to random sums with jointly regularly varying components. For convenience and without loss of generality, we only discuss the bivariate case.

Let {(X_i, Y_i), i = 1, 2, ...} and (X, Y) be i.i.d. componentwise nonnegative random vectors with bivariate d.f. $F(x,y) = P(X \le x, Y \le y)$ and marginals $F_1(x) = F_X(x)$ and $F_2(x) = F_Y(x)$. Partial sums will be denoted by $(S_n^{(1)}, S_m^{(2)})$ (n, m ≥ 1), where, as before, we set $S_0^{(1)} = S_0^{(2)} = 0$, $S_n^{(1)} = X_1 + X_2 + \cdots + X_n$, n ≥ 1, and $S_m^{(2)} = Y_1 + Y_2 + \cdots + Y_m$, m ≥ 1. For convenience, we also define $F_{n,m}(x,y) = P(S_n^{(1)} \le x, S_m^{(2)} \le y)$, n, m ≥ 1. For n = m, we use the notation $\overline{F^{*n}}(x,y) = 1 - F^{n*}(x,y)$, where $F^{n*}(x,y)$ is the d.f. of $(S_n^{(1)}, S_n^{(2)})$, for which the convolution of two bivariate distribution functions $G_1$ and $G_2$ is given by

$$G_1 * G_2(x,y) = \int_0^x \int_0^y G_1(x-u, y-v)\,dG_2(u,v).$$

We consider random indices (N, M) independent of {X_i, Y_j}. The d.f. of the random vector of random sums $(S_N^{(1)}, S_M^{(2)})$ is given by

$$H(x,y) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} p_{n,m} F_{n,m}(x,y),$$

where $p_{n,m} = P(N = n, M = m)$. To avoid the trivial case, we assume that $p_{0,0} < 1$. The tail is given by $\overline{H}(x,y) = 1 - H(x,y)$. If N = M, we have the vector $(S_N^{(1)}, S_N^{(2)})$ whose distribution function is denoted by $G(x,y) = \sum_{n=0}^{\infty} p_n F^{n*}(x,y)$, where $p_n = P(N = n)$. The tail is given by $\overline{G}(x,y) = 1 - G(x,y)$. We denote by $H_i$, i = 1, 2, the distribution functions of the random sums $S_N^{(1)}$ and $S_M^{(2)}$, respectively, so that $H_1(x) = P(S_N^{(1)} \le x)$ and $H_2(y) = P(S_M^{(2)} \le y)$.

To obtain exact asymptotic results for $\overline{G}(x,y)$ and $\overline{H}(x,y)$, we need to restrict the classes of multivariate distribution functions considered. We require the following classes of multivariate distribution functions. We write F ∈ L (long-tailed along radial directions) if and only if

$$\lim_{t\to\infty} \frac{\overline{F}(tx - a, ty - b)}{\overline{F}(tx, ty)} = 1$$

for all x, y > 0 with min(x, y) < ∞ and all a, b ≥ 0. We write F ∈ S (subexponential along radial directions) if and only if

$$\lim_{t\to\infty} \frac{\overline{F^{*2}}(tx, ty)}{\overline{F}(tx, ty)} = 2$$

for all x, y > 0 with min(x, y) < ∞. We write F ∈ RV(h) (regularly varying(h)) if and only if there exist functions h(t) and λ(x, y) such that, as t → ∞, we have h(t) ↑ ∞ and $\lim_{t\to\infty} h(t)\overline{F}(tx, ty) = \lambda(x,y)$. We write F ∈ RV(a, b) (regularly varying(a, b)) if and only if there exist functions a(t) and b(t) such that, as t → ∞, we have a(t) ↑ ∞ and b(t) ↑ ∞ and $\lim_{t\to\infty} t\overline{F}(a(t)x, b(t)y) = \lambda(x,y)$.

3.1 General inequalities

As in the univariate case, we use integrated tails. For F and $F_i$, i = 1, 2, we define $m_F$ and $m_i$, respectively, as

$$m_F(x,y) = \int_0^x \int_0^y \overline{F}(u,v)\,du\,dv \quad\text{and}\quad m_i(x) = \int_0^x \overline{F}_i(t)\,dt, \quad i = 1, 2.$$


Using $\overline{F}(x,y) \le \overline{F}_1(x) + \overline{F}_2(y)$ and $\overline{F}_i(x_i) \le \overline{F}(x_1, x_2)$, we see that $m_F(x,y)$ satisfies $m_F(x,y) \le y\,m_1(x) + x\,m_2(y)$ and that $y\,m_1(x) \le m_F(x,y)$, $x\,m_2(y) \le m_F(x,y)$. We always have $xy\overline{F}(x,y) \le m_F(x,y)$. To obtain some general inequalities, we first observe that

$$\max\bigl(1 - P(S_n^{(1)} \le x),\; 1 - P(S_m^{(2)} \le y)\bigr) \le 1 - F_{n,m}(x,y)$$

and that

$$1 - F_{n,m}(x,y) \le \bigl(1 - P(S_n^{(1)} \le x)\bigr) + \bigl(1 - P(S_m^{(2)} \le y)\bigr).$$

Among other results, it follows that $\overline{H}(x,y) \le \overline{H}_1(x) + \overline{H}_2(y)$. Using Lemmas 1 and 2, we obtain the following result.

Lemma 8.
(i) $xy\overline{H}(x,y) \le m_H(x,y) \le (E(N) + E(M))m_F(x,y)$;
(ii) $xy\overline{G}(x,y) \le m_G(x,y) \le 2E(N)m_F(x,y)$.

Proof. (i) Starting with

$$1 - F_{n,m}(x,y) \le \bigl(1 - P(S_n^{(1)} \le x)\bigr) + \bigl(1 - P(S_m^{(2)} \le y)\bigr) \le \frac{n\,m_1(x)}{x} + \frac{m\,m_2(y)}{y},$$

it follows that

$$m_H(x,y) \le y\,m_{H_1}(x) + x\,m_{H_2}(y) \le E(N)\,y\,m_1(x) + E(M)\,x\,m_2(y) \le \bigl(E(N) + E(M)\bigr)m_F(x,y).$$

(ii) Take N = M. □

3.2 Asymptotic lower bound

An asymptotic lower bound that generalizes Lemma 4 is the following.

Lemma 9. For all n, m ≥ 1, we have

$$\liminf_{\min(x,y)\to\infty} \frac{1 - F_{n,m}(x,y)}{1 - F(x,y)} \ge \min(n,m).$$

Proof. First take n = m. We will prove the result by induction on n. First, note that for n = 1 the result holds. Assume that the result holds for n = 1, 2, ..., k and consider the case where n = k + 1. We have

$$1 - F_{k+1,k+1}(x,y) = 1 - \int_{u=0}^{x}\int_{v=0}^{y} F_{k,k}(x-u, y-v)\,dF(u,v) = \int_{u=0}^{x}\int_{v=0}^{y} \bigl(1 - F_{k,k}(x-u, y-v)\bigr)\,dF(u,v) + 1 - F(x,y) \ge \bigl(1 - F_{k,k}(x,y)\bigr)F(x,y) + \bigl(1 - F(x,y)\bigr).$$


By the induction step we find that

$$\liminf_{\min(x,y)\to\infty} \frac{1 - F_{k+1,k+1}(x,y)}{1 - F(x,y)} \ge k + 1.$$

Hence, the result for n = m follows.

For the case m > n, let m = n + k, k ≥ 1, and write $S_m^{(2)} = S_n^{(2)} + R_k$. Define $\tilde{F}_k(z) = P(R_k \le z)$. We have

$$1 - F_{n,m}(x,y) = 1 - P\bigl(S_n^{(1)} \le x,\, S_n^{(2)} + R_k \le y\bigr) = 1 - \int_0^y F_{n,n}(x, y-z)\,d\tilde{F}_k(z).$$

We find that

$$1 - F_{n,m}(x,y) = \int_0^y \bigl(1 - F_{n,n}(x, y-z)\bigr)\,d\tilde{F}_k(z) + 1 - \tilde{F}_k(y) \ge \bigl(1 - F_{n,n}(x,y)\bigr)\tilde{F}_k(y) + 1 - \tilde{F}_k(y) \ge \bigl(1 - F_{n,n}(x,y)\bigr)\tilde{F}_k(y).$$

Using the result for n = m, we obtain that

$$\liminf_{\min(x,y)\to\infty} \frac{1 - F_{n,m}(x,y)}{1 - F(x,y)} \ge n.$$

This proves the result for m > n. The case n > m is similar. □

Going to random sums, we have the following corollary.

Corollary 1. $\liminf_{\min(x,y)\to\infty} \overline{H}(x,y)/\overline{F}(x,y) \ge E(\min(N,M))$.

3.3 Infinite mean case

Here we consider bivariate random sums with infinite-mean components. The next result is the bivariate analogue of Lemma 2.

Lemma 10. If E(X) = E(Y) = ∞, then, as min(x,y) → ∞, we have $m_G(x,y) \approx m_F(x,y)$ and $m_H(x,y) \approx m_F(x,y)$.

Proof. In Lemma 8, we proved that $m_G(x,y) = O(1)m_F(x,y)$ and $m_H(x,y) = O(1)m_F(x,y)$. To prove the lemma, choose c and $x_0$ such that $\overline{H}(x,y) \ge c\overline{F}(x,y)$, x, y ≥ $x_0$. Taking integrals, we obtain that

$$m_H(x,y) \ge c\int_{x_0}^x \int_{x_0}^y \overline{F}(u,v)\,dv\,du = c\bigl(m_F(x,y) - R\bigr),$$

where

$$R = \int_0^{x_0}\int_{x_0}^y \overline{F}(u,v)\,dv\,du + \int_{x_0}^x\int_0^{x_0} \overline{F}(u,v)\,dv\,du + \int_0^{x_0}\int_0^{x_0} \overline{F}(u,v)\,dv\,du = R(1) + R(2) + R(3).$$

For the first term, we have

$$R(1) \le \int_0^{x_0}\int_{x_0}^y \bigl(\overline{F}_1(u) + \overline{F}_2(v)\bigr)\,dv\,du \le m_1(x_0)\,y + x_0\,m_2(y),$$

and it follows that

$$\frac{R(1)}{m_F(x,y)} \le m_1(x_0)\frac{y}{m_F(x,y)} + x_0\frac{m_2(y)}{m_F(x,y)} \le m_1(x_0)\frac{1}{m_1(x)} + x_0\frac{1}{x} \to 0$$

as min(x,y) → ∞. In a similar way, we have $R(2)/m_F(x,y) \to 0$. Since R(3) is bounded, we finally have $R(3)/m_F(x,y) \to 0$. We conclude that

$$\liminf \frac{m_H(x,y)}{m_F(x,y)} > 0.$$

This proves the result. □

Using Lemmas 1 and 2, we obtain the following result.

Theorem 5. Suppose that $\overline{F}_i \in ORV$ with $\beta(\overline{F}_i) > -1$ or that $\overline{F}_i \in RV(-\alpha_i)$ with $0 \le \alpha_i < 1$. Then, as min(x,y) → ∞:
(i) $\overline{H}(x,y) = O(1)\overline{F}(x,y)$ and $\overline{G}(x,y) = O(1)\overline{F}(x,y)$;
(ii) $\overline{H}(x,y) \approx \overline{F}(x,y)$ and $\overline{G}(x,y) \approx \overline{F}(x,y)$.

Proof. (i) For H, we have $xy\overline{H}(x,y) \le (E(N) + E(M))m_F(x,y)$. If $\overline{F}_i \in ORV$ with $\beta(\overline{F}_i) > -1$ or if $\overline{F}_i \in RV(-\alpha_i)$ with $0 \le \alpha_i < 1$, then we have $m_i(x) \approx x\overline{F}_i(x)$, and it follows that

$$xy\overline{F}(x,y) \le m_F(x,y) \le y\,m_1(x) + x\,m_2(y) \le C_1\,xy\overline{F}_1(x) + C_2\,xy\overline{F}_2(y) \le (C_1 + C_2)\,xy\overline{F}(x,y).$$

We find that $xy\overline{F}(x,y) \approx m_F(x,y)$ as min(x,y) → ∞. But then, as min(x,y) → ∞, we have $\overline{H}(x,y) = O(1)\overline{F}(x,y)$. The result for G follows as a special case.
(ii) This follows from Lemma 9 and Corollary 1. □

3.4 Asymptotic equalities for sums with subexponential marginals

The random sums S(N, N) and S(N, M) with F ∈ RV(h) were considered by Omey [22]. Subsequent work by the same author and others was directed at relaxing the assumptions about the component distributions. Omey [23] considered the case where F ∈ L and the marginals were subexponential, so that F ∈ S, to derive asymptotic expressions for $\overline{H}$ and $\overline{G}$. Mallor et al. [19] initially made no assumptions about the marginals, except that the joint distribution F ∈ L, but then introduced their Condition (C6) on the marginals, $\overline{F}_i(tx)/\overline{F}(tx, ty) \to D_i(x,y)$ for (x,y) > (0,0) with min(x,y) < ∞, in order to obtain an asymptotic expression for the tail distribution of S(N, M) [19, Cor. 3]. Condition (C6) follows directly if F ∈ RV(h) because the marginals of F are regularly varying with the same index. However, if F ∈ RV(a, b), then the marginals may have different indices of regular variation, making Condition (C6) more difficult to establish.


Proposition 11 of Baltrūnas et al. [2] considered fixed sums of the form $S_{n,n}$, assuming nothing more than that the marginals were subexponential, to prove that the joint distribution was subexponential. The extension of this proposition to sums with unequal numbers of component terms is given in Theorem 6. The application of this to random sums with components in the class RV(a, b) is given in Theorem 8.

Theorem 6. Suppose that $F_1 \in S$, $F_2 \in S$. Then for all n, m ≥ 1 and as min(x,y) → ∞, we have

$$1 - F_{n,m}(x,y) = (n-m)^+\overline{F}_1(x) + (m-n)^+\overline{F}_2(y) + \min(n,m)\overline{F}(x,y) + o(1)\overline{F}(x,y) \tag{3.1}$$

and

$$P\bigl(S_n^{(1)} > x, S_m^{(2)} > y\bigr) = \min(n,m)P(X > x, Y > y) + o(1)\overline{F}(x,y). \tag{3.2}$$

Proof. We consider $F_{n,m}(x,y)$ and first assume that m = n + k, where n, k ≥ 1. Now consider the partial maxima

$$M_n^{(1)} = \max(X_1, X_2, \ldots, X_n), \qquad M_m^{(2)} = \max(Y_1, Y_2, \ldots, Y_m).$$

We write $1 - F_{n,m}(x,y) = A_{n,m}(x,y) + C_{n,m}(x,y)$, where

$$A_{n,m}(x,y) = P\bigl(M_n^{(1)} \le x, M_m^{(2)} \le y\bigr) - P\bigl(S_n^{(1)} \le x, S_m^{(2)} \le y\bigr), \qquad C_{n,m}(x,y) = 1 - P\bigl(M_n^{(1)} \le x, M_m^{(2)} \le y\bigr).$$

First, consider $A_{n,m}(x,y)$. Writing $A_{n,m}(x,\infty) = A_{1,n}(x)$ and $A_{n,m}(\infty,y) = A_{2,m}(y)$, we have

$$0 \le A_{n,m}(x,y) \le A_{1,n}(x) + A_{2,m}(y).$$

For $A_{1,n}(x)$, we have

$$A_{1,n}(x) = F_1^n(x) - F_1^{*n}(x) = \bigl(F_1^n(x) - 1\bigr) + \bigl(1 - F_1^{*n}(x)\bigr) = -\overline{F}_1(x)\sum_{i=0}^{n-1} F_1^i(x) + \overline{F_1^{*n}}(x).$$

Since $F_1 \in S$, we have $\lim_{x\to\infty} A_{1,n}(x)/\overline{F}_1(x) = 0$. That is, after using $\overline{F}_1(x) \le \overline{F}(x,y)$, $A_{1,n}(x) = o(1)\overline{F}_1(x) = o(1)\overline{F}(x,y)$. Similarly, $A_{2,m}(y) = o(1)\overline{F}(x,y)$. Now, for m = n + k, we have

$$C_{n,m}(x,y) = 1 - F^n(x,y) + F^n(x,y)\bigl(1 - F_2^k(y)\bigr) \sim n\overline{F}(x,y) + k\overline{F}_2(y) = \min(m,n)\overline{F}(x,y) + (m-n)^+\overline{F}_2(y) + (n-m)^+\overline{F}_1(x).$$

Combining the expressions for $A_{1,n}(x)$, $A_{2,m}(y)$, and $C_{n,m}(x,y)$, we obtain (3.1). To prove (3.2), we use the identity

$$P(X > x, Y > y) = P(X > x) + P(Y > y) - \bigl(1 - P(X \le x, Y \le y)\bigr). \tag{3.3}$$

□

Without assuming more, it is not clear which of the terms in (3.3) is dominant. It should be noted that this expression holds as min(x,y) → ∞. In the next section, we will assume that min(x,y) → ∞ in a more precise way.
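A hedged Monte Carlo sketch of (3.2) (an added illustration with comonotone pairs $Y_i = X_i$ and Pareto marginals; these choices are our assumptions, not the paper's):

```python
import numpy as np

# For comonotone pairs (X_i, X_i) with Pareto(alpha) components and n = 2,
# m = 3, (3.2) gives P(S_2 > u, S_3 > u) ~ min(2, 3) P(X > u, Y > u).
rng = np.random.default_rng(seed=4)
alpha, n_runs = 1.5, 10**6

X = rng.pareto(alpha, (n_runs, 3)) + 1.0
S2 = X[:, :2].sum(axis=1)        # S_2^(1) = X_1 + X_2
S3 = X.sum(axis=1)               # S_3^(2) = Y_1 + Y_2 + Y_3 with Y_i = X_i

for u in (20.0, 100.0, 500.0):
    num = np.mean((S2 > u) & (S3 > u))
    den = u**(-alpha)            # comonotone: P(X > u, Y > u) = F-bar(u)
    print(u, num / den)          # approaches min(2, 3) = 2
```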

3.5 Subexponential marginals with joint regular variation

Now assume that F ∈ RV(a, b), that is,

$$t\bigl(1 - F\bigl(a(t)x, b(t)y\bigr)\bigr) \to \lambda(x,y) < \infty \tag{3.4}$$

for all x, y > 0 and min(x,y) < ∞. Taking 0 < x < ∞ and y = ∞, we have $t(1 - F_1(a(t)x)) \to \lambda(x,\infty)$. Replacing t by $a^{\leftarrow}(t)$, the inverse function of a(t), we have $a^{\leftarrow}(t)(1 - F_1(tx)) \to \lambda(x,\infty)$. If λ(x, ∞) > 0, then we find that λ(x, ∞) is of the form $\lambda(x,\infty) = cx^{-\alpha}$, where c > 0, and then it follows that $\overline{F}_1(x) \in RV(-\alpha)$. Moreover, $a^{\leftarrow}(t)$ and consequently a(t) are regularly varying functions. Relations of the type (3.4) were studied, among others, by de Haan et al. [15] and Omey [21, 22]. Using (3.4) and Theorem 6, we have the following result.

Theorem 7. Suppose that $F_1, F_2 \in S$ and suppose that (3.4) holds. Then for all x, y ≥ 0 with x + y > 0, we have

$$t\bigl(1 - F_{n,m}\bigl(a(t)x, b(t)y\bigr)\bigr) \to (n-m)^+\lambda(x,\infty) + (m-n)^+\lambda(\infty,y) + \min(n,m)\lambda(x,y)$$

and $tP(S_n^{(1)} > a(t)x, S_m^{(2)} > b(t)y) \to \min(n,m)\lambda(x,y)$.

Remark. If $\overline{F}_1 \in RV$, then automatically $F_1 \in S$.

Now consider the subordinated process and $H(x,y) = P(S_N^{(1)} \le x, S_M^{(2)} \le y)$. For the marginals, there are many situations (cf. Proposition 1) under which we have

$$P\bigl(S_N^{(1)} > x\bigr) \sim E(N)\overline{F}_1(x) \quad\text{as } x \to \infty, \tag{3.5}$$

$$P\bigl(S_M^{(2)} > y\bigr) \sim E(M)\overline{F}_2(y) \quad\text{as } y \to \infty. \tag{3.6}$$

We prove the following result.

Theorem 8. Suppose that $F_1(x), F_2(x) \in S$ and suppose that (3.4), (3.5), and (3.6) hold. Then for all x, y > 0, we have:
(i) $tP(S_N^{(1)} > a(t)x, S_M^{(2)} > b(t)y) \to E(\min(N,M))\lambda(x,y)$;
(ii) $t(1 - H(a(t)x, b(t)y)) \to E((N-M)^+)\lambda(x,\infty) + E((M-N)^+)\lambda(\infty,y) + E(\min(N,M))\lambda(x,y)$.

Proof. We have

$$P\bigl(S_N^{(1)} > x, S_M^{(2)} > y\bigr) = \sum_{n,m} p_{n,m} P\bigl(S_n^{(1)} > x, S_m^{(2)} > y\bigr).$$

Now observe the following facts:

1. $p_{n,m}\,tP(S_n^{(1)} > a(t)x, S_m^{(2)} > b(t)y) \to p_{n,m}\min(n,m)\lambda(x,y)$;
2. $p_{n,m}\,tP(S_n^{(1)} > a(t)x, S_m^{(2)} > b(t)y) \le p_{n,m}\,tP(S_n^{(1)} > a(t)x)$;
3. $p_{n,m}\,tP(S_n^{(1)} > a(t)x) \to p_{n,m}\,n\lambda(x,\infty)$;
4. $\sum_{n,m} p_{n,m}\,tP(S_n^{(1)} > a(t)x) = tP(S_N^{(1)} > a(t)x) \to E(N)\lambda(x,\infty)$;
5. $\sum_{n,m} p_{n,m}\,n\lambda(x,\infty) = E(N)\lambda(x,\infty)$.

Using Pratt's extension of Lebesgue's theorem [25], we obtain that

$$tP\bigl(S_N^{(1)} > a(t)x, S_M^{(2)} > b(t)y\bigr) \to E\bigl(\min(N,M)\bigr)\lambda(x,y).$$

The second result follows from the first result and from (3.3), (3.5), (3.6). □
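A hedged Monte Carlo sketch of Theorem 8(i) (an added illustration; the comonotone components $Y_i = X_i$, Pareto marginals, and independent geometric indices are our assumptions, not the paper's; here $a(t) = b(t) = t^{1/\alpha}$ and the joint exceedance limit is $\min(x^{-\alpha}, y^{-\alpha})$):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
alpha, p, q, n_runs = 1.5, 0.5, 0.4, 10**6

N = rng.geometric(p, n_runs)     # independent geometric indices on {1, 2, ...}
M = rng.geometric(q, n_runs)
SN = np.empty(n_runs)
SM = np.empty(n_runs)
for i in range(n_runs):
    x = rng.pareto(alpha, max(N[i], M[i])) + 1.0   # shared (comonotone) components
    SN[i] = x[:N[i]].sum()
    SM[i] = x[:M[i]].sum()

E_min = 1.0 / (1.0 - (1.0 - p) * (1.0 - q))   # E(min(N, M)) for indep. geometrics
t, x0, y0 = 1e3, 1.0, 2.0
a_t = t**(1.0 / alpha)                         # a(t) = b(t) = t**(1/alpha)
lhs = t * np.mean((SN > a_t * x0) & (SM > a_t * y0))
rhs = E_min * min(x0**(-alpha), y0**(-alpha))  # E(min(N, M)) * lambda(x, y)
print(lhs, rhs)                                # roughly equal for large t
```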


Remark 1. If X and Y are independent and if (3.4) holds, we have

$$tP\bigl(X > a(t)x, Y > b(t)y\bigr) = tP\bigl(X > a(t)x\bigr)P\bigl(Y > b(t)y\bigr) \to \lambda(x,\infty) \times 0 = 0.$$

In this case, using (3.3), we obtain that $\lambda(x,y) = \lambda(x,\infty) + \lambda(\infty,y)$.

Remark 2. Suppose that the d.f. F is defined by $\overline{F}(x,y) = \min(\overline{F}_1(x), \overline{F}_2(y))$. If (3.4) holds, then it follows that $\lambda(x,y) = \min(\lambda(x,\infty), \lambda(\infty,y))$.

3.6 Examples

We assume throughout this section that 0 ≤ θ ≤ 1. We also assume that the marginals satisfy $t\overline{F}_1(a(t)x) \to x^{-\alpha}$ and $t\overline{F}_2(b(t)y) \to y^{-\beta}$ for some auxiliary functions a(t) and b(t).

First, we consider the mixture of Fréchet bounds:

$$\overline{F}(x,y) = \theta\max\bigl(\overline{F}_1(x), \overline{F}_2(y)\bigr) + (1-\theta)\bigl(\overline{F}_1(x) + \overline{F}_2(y)\bigr).$$

It is easy to see that

$$t\overline{F}\bigl(a(t)x, b(t)y\bigr) \to \theta\max\bigl(x^{-\alpha}, y^{-\beta}\bigr) + (1-\theta)\bigl(x^{-\alpha} + y^{-\beta}\bigr) = x^{-\alpha} + y^{-\beta} - \theta\min\bigl(x^{-\alpha}, y^{-\beta}\bigr).$$

In this case, we also find that $tP(X > a(t)x, Y > b(t)y) \to \theta\min(x^{-\alpha}, y^{-\beta})$. The same result is valid for the d.f. considered by Cuadras and Augé [8]: $F(x,y) = \bigl(\min(F_1(x), F_2(y))\bigr)^{\theta}\bigl(F_1(x)F_2(y)\bigr)^{1-\theta}$.

Next, we consider the Farlie–Gumbel–Morgenstern distribution as in Schucany et al. [27] and Conway [7]: $F(x,y) = F_1(x)F_2(y)\bigl(1 + \theta\overline{F}_1(x)\overline{F}_2(y)\bigr)$. Now we obtain that $t\overline{F}(a(t)x, b(t)y) \to x^{-\alpha} + y^{-\beta}$ and that $tP(X > a(t)x, Y > b(t)y) \to 0$. The same results hold for the d.f. considered by Ali et al. [1]: $F(x,y) = F_1(x)F_2(y)/\bigl(1 + \theta\overline{F}_1(x)\overline{F}_2(y)\bigr)$.

The following bivariate Pareto density was considered by Mardia [20]: $f(x,y) = C(x+y-1)^{-(\lambda+2)}$, where x > 1, y > 1, and λ > 0. For the marginals, we find that

$$f_1(x) = Cx^{-(\lambda+2)}, \quad \overline{F}_1(x) \sim C(1)x^{-\lambda-1}, \qquad f_2(y) = Cy^{-(\lambda+2)}, \quad \overline{F}_2(y) \sim C(2)y^{-\lambda-1}.$$

Note that for x, y > 0, we have $t^{\lambda+2}f(tx, ty) \to g(x,y) \equiv (x+y)^{-(\lambda+2)}$ and that

$$t^{\lambda}P(X > tx, Y > ty) \to \int_x^{\infty}\int_y^{\infty} g(u,v)\,du\,dv.$$

Using $a(t) = t^{1/(1+\lambda)}$, we get $tP(X > a(t)x, Y > a(t)y) \to 0$. For the marginal distributions, we have $t\overline{F}_i(a(t)x) \sim C(i)x^{-\lambda-1}$.
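A small deterministic check of the Farlie–Gumbel–Morgenstern example (our added sketch; the Pareto marginals and parameter values are assumptions, not from the paper). The FGM joint survival factorizes as $\overline{F}_1\overline{F}_2(1 + \theta F_1 F_2)$, so both limits can be evaluated directly:

```python
alpha, beta, theta = 1.0, 2.0, 0.5
x, y = 2.0, 3.0

for t in (1e2, 1e4, 1e6):
    fu = x**(-alpha) / t   # F1-bar(a(t)x), a(t) = t**(1/alpha), exact for Pareto
    fv = y**(-beta) / t    # F2-bar(b(t)y), b(t) = t**(1/beta)
    surv = fu * fv * (1.0 + theta * (1.0 - fu) * (1.0 - fv))  # P(X > ., Y > .)
    tail = fu + fv - surv  # 1 - F, via identity (3.3)
    print(t, t * tail, t * surv)  # -> x**-alpha + y**-beta = 0.611..., -> 0
```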

4 Concluding remarks

1. In our future work, we will consider the remainder terms in Theorems 6 and 7.
2. In the case where E(N) = ∞, our results no longer hold. In this case, it turns out that the asymptotic behavior of the function $W(x) = \sum_{n=0}^{\infty} w(n)\overline{F^{*n}}(x)$, where $w(n) = P(N \ge n)$, plays an important role. Weighted renewal functions of this type have been studied, among others, by Omey and Teugels [24] and Mallor and Omey [18].

Acknowledgments. Part of this work was conducted while the second author was on sabbatical at KU Leuven. With pleasure, the author expresses his gratitude for this opportunity. We also express our appreciation to the reviewers for their careful reading and thoughtful suggestions and thank them for their help in simplifying the proof of Theorem 6.


References

1. M.M. Ali, N.N. Mikhail, and M.S. Haq, A class of bivariate distributions including the bivariate logistic, J. Multivariate Anal., 8:405–412, 1978.
2. A. Baltrūnas, E. Omey, and S. Van Gulck, Hazard rates and subexponential distributions, Publ. Inst. Math., Nouv. Sér., 80(94):29–46, 2006.
3. N.H. Bingham, C.M. Goldie, and J.L. Teugels, Regular Variation, Encyclopedia of Mathematics and Its Applications, Cambridge Univ. Press, Cambridge, 1987.
4. V.P. Chistyakov, A theorem on sums of independent positive random variables and its application to branching random processes, Theory Probab. Appl., 9:640–648, 1964.
5. J. Chover, P. Ney, and S. Wainger, Functions of probability measures, J. Anal. Math., 26:255–302, 1973.
6. D.B.H. Cline, Convolutions of distributions with exponential and subexponential tails, J. Aust. Math. Soc., Ser. A, 43:347–365, 1987.
7. D.A. Conway, Farlie–Gumbel–Morgenstern distributions, in S. Kotz and N.L. Johnson (Eds.), Encyclopedia of Statistical Sciences, Vol. 3, John Wiley & Sons, New York, 1979, pp. 28–31.
8. C.M. Cuadras and J. Augé, A continuous general multivariate distribution and its properties, Commun. Stat., Theory Methods, 10(4):339–353, 1981.
9. D.J. Daley, E. Omey, and R. Vesilo, The tail behaviour of a random sum of subexponential random variables and vectors, Extremes, 10:21–39, 2007.
10. P. Embrechts, C.M. Goldie, and N. Veraverbeke, Subexponentiality and infinite divisibility, Z. Wahrscheinlichkeitstheor. Verw. Geb., 49:335–347, 1979.
11. P. Embrechts, C. Klüppelberg, and T. Mikosch, Modelling Extremal Events for Insurance and Finance, Springer, Berlin, 1997.
12. G. Faÿ, B. González-Arévalo, T. Mikosch, and G. Samorodnitsky, Modeling teletraffic arrivals by a Poisson cluster process, Queueing Syst., 54:121–140, 2006.
13. S. Foss, D. Korshunov, and S. Zachary, An Introduction to Heavy-Tailed and Subexponential Distributions, Vol. 38, Springer, New York, 2011.
14. J. Geluk and L. de Haan, Regular Variation, Extensions and Tauberian Theorems, CWI Tracts, Vol. 40, Centre for Mathematics and Computer Science, Amsterdam, 1987.
15. L. de Haan and E. Omey, Integrals and derivatives of regularly varying functions in R^d and domains of attraction of stable distributions II, Stochastic Processes Appl., 16:157–170, 1983.
16. G. Léveillé, Bivariate compound renewal sums with discounted claims, Eur. Actuar. J., 2:273–288, 2012.
17. M. Maejima, A generalization of Blackwell's theorem for renewal processes to the case of non-identically distributed random variables, Rep. Stat. Appl. Res., Un. Jap. Sci. Eng., 19:1–9, 1972.
18. F. Mallor and E. Omey, Univariate and Multivariate Weighted Renewal Theory, Collection of Monographs from the Department of Statistics and Operations Research, No. 2, Public University of Navarre, Spain, 2006.
19. F. Mallor, E. Omey, and J. Santos, Multivariate subexponential distributions and random sums of random vectors, Adv. Appl. Probab., 38(4):1028–1046, 2006.
20. K.V. Mardia, Families of Bivariate Distributions, Griffin, London, 1965.
21. E. Omey, Multivariate Reguliere Variatie en Toepassingen in Kanstheorie [Multivariate Regular Variation and Applications in Probability Theory], PhD thesis, KU Leuven, 1982 (in Dutch).
22. E. Omey, Random sums of random vectors, Publ. Inst. Math., Nouv. Sér., 48(62):191–198, 1990.


23. E. Omey, Subexponential distributions in R^d, in Proceedings of the Seminar on Stability Problems for Stochastic Models, Pamplona, Spain, 2003, Part III, J. Math. Sci., 138(1):5434–5449, 2006.
24. E. Omey and J.L. Teugels, Weighted renewal functions: A hierarchical approach, Adv. Appl. Probab., 34:394–415, 2002.
25. J.W. Pratt, On interchanging limits and integrals, Ann. Math. Stat., 31:74–77, 1960.
26. S.I. Resnick, Extreme Values, Regular Variation and Point Processes, Springer, New York, 1987.
27. W. Schucany, W. Parr, and J. Boyer, Correlation structure in Farlie–Gumbel–Morgenstern distributions, Biometrika, 65:650–653, 1978.
28. E. Seneta, Regularly Varying Functions, Lect. Notes Math., Vol. 508, Springer, New York, 1976.
29. V.V. Shneer, Estimates for the distributions of the sums of subexponential random variables, Sib. Math. J., 45(6):1143–1158, 2004.
30. A. Skučaitė, Large deviations for sums of independent heavy-tailed random variables, Lith. Math. J., 44(2):198–208, 2004.
31. A. Stam, Regular variation of the tail of a subordinated probability distribution, Adv. Appl. Probab., 5:308–327, 1973.
32. J.L. Teugels, The class of subexponential distributions, Ann. Probab., 3:1000–1011, 1975.