J. Appl. Prob. 38, 787–792 (2001)
Printed in Israel
© Applied Probability Trust 2001

SOME PECULIARITIES OF EXPONENTIAL RANDOM VARIABLES

NELLY LITVAK,∗ EURANDOM

Abstract

In this paper we use a particular transformation of i.i.d. exponential random variables to derive two distributional identities. Along the way we uncover some peculiar properties of exponentials. We also discuss possible generalizations and applications of the results.

Keywords: Partial sums; memoryless property

AMS 2000 Subject Classification: Primary 60E99; Secondary 90B99

1. Introduction

Exponential random variables are frequently used in stochastic models, since the memoryless property considerably simplifies the analysis. However, the properties of exponentials deserve special attention in their own right. There has been a lot of work on characterizations of the exponential distribution; see, for example, [2], [4], [5], [9] and [10]. This note is likewise devoted to some specific properties of exponential random variables.

Let X_1, X_2, ... be independent exponential random variables with mean 1. Denote

\[
S_0 = 0; \qquad S_n = \sum_{j=1}^{n} X_j, \quad n \ge 1.
\]

Recently, Litvak and Adan [6] proved that

\[
\frac{1}{S_n} \sum_{j=2}^{n} \min\{X_j, S_{j-1}\}
\overset{D}{=} \frac{1}{S_n} \sum_{j=1}^{n-1} \left( 1 - \frac{1}{2^j} \right) X_j,
\tag{1}
\]

where we use the common notation X \overset{D}{=} Y to indicate that the random variables X and Y have the same distribution. The expression on the left-hand side of (1) arises in the performance analysis of warehousing systems. Formula (1) was proved by successively expanding min{X_i, S_{i-1}} for i = 2, 3, ..., n. To do so, the authors exploit probabilistic arguments first mentioned by Litvak et al. [7]. However, it was not clear whether the method would work in other cases.

In Section 2, we consider the sum and the maximum of the terms (X_j − bS_{j-1})_+, j = 1, 2, ..., n, where x_+ = x if x > 0 and x_+ = 0 if x ≤ 0. The sum is only interesting when b > 0; the maximum is nontrivial if b > −1. For such values of b we show that both the

Received 5 February 2001; revision received 11 June 2001. ∗ Postal address: EURANDOM, PO Box 513, 5600 MB, Eindhoven, The Netherlands. Email address: [email protected]


sum and the maximum are distributed as a linear combination of exponentials. In the proof we use the transformation of exponentials introduced in [6] and [7] and generalized in the present work. The distributional identity for the sum is in fact a generalization of (1). The identity for the maximum can be seen as a generalization of the following well-known, even classical, result of Rényi:

\[
\max_{1 \le j \le n} X_j \overset{D}{=} \sum_{j=1}^{n} \frac{1}{j}\, X_j, \quad n \ge 1.
\tag{2}
\]
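Identity (2) is easy to probe numerically. The following Monte Carlo sketch (an illustration added here, not part of the original argument; the sample size and tolerance are arbitrary choices) compares the empirical means of both sides with their common exact value, the harmonic number H_n.

```python
import random

def renyi_check(n=5, trials=200_000, seed=0):
    """Empirical means of max_j X_j and sum_j X_j / j for i.i.d. mean-1 exponentials.

    By (2) the two random variables have the same distribution, so in
    particular their means agree (both equal the harmonic number H_n).
    """
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(n)]
        lhs += max(x)
        rhs += sum(xj / j for j, xj in enumerate(x, start=1))
    return lhs / trials, rhs / trials

lhs, rhs = renyi_check()
print(lhs, rhs, sum(1.0 / j for j in range(1, 6)))  # all three close to H_5
```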

In Section 3, we discuss the results and consider possible applications and generalizations. In particular, we show that our results hold not only for i.i.d. exponentials, but also for uniform spacings.

2. Main result

In this section we prove the following result.

Theorem 1. For any n = 1, 2, ...,

\[
\sum_{j=1}^{n} (X_j - bS_{j-1})_+ \overset{D}{=} \sum_{j=1}^{n} \frac{1}{(b+1)^{j-1}}\, X_j, \quad b \ge 0;
\tag{3}
\]
\[
\max_{1 \le j \le n} \{X_j - bS_{j-1}\} \overset{D}{=} \sum_{j=1}^{n} \frac{b}{(b+1)^j - 1}\, X_j, \quad b > -1.
\tag{4}
\]
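Both identities can be checked by simulation. The sketch below (added here for illustration; the choices n = 6, b = 0.5 and the sample size are arbitrary) compares the empirical means of the left-hand sides of (3) and (4) with the exact means of the right-hand sides, which follow from E X_j = 1.

```python
import random

def theorem1_lhs_means(n=6, b=0.5, trials=200_000, seed=1):
    """Empirical means of the left-hand sides of (3) and (4)."""
    rng = random.Random(seed)
    mean_sum = mean_max = 0.0
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(n)]
        s = 0.0                       # running partial sum S_{j-1}
        terms = []
        for xj in x:
            terms.append(xj - b * s)  # X_j - b * S_{j-1}
            s += xj
        mean_sum += sum(max(t, 0.0) for t in terms)  # positive parts, as in (3)
        mean_max += max(terms)                       # plain maximum, as in (4)
    return mean_sum / trials, mean_max / trials

def theorem1_rhs_means(n=6, b=0.5):
    """Exact means of the right-hand sides (each X_j has mean 1)."""
    rhs3 = sum((b + 1.0) ** (1 - j) for j in range(1, n + 1))
    rhs4 = sum(b / ((b + 1.0) ** j - 1.0) for j in range(1, n + 1))
    return rhs3, rhs4

print(theorem1_lhs_means(), theorem1_rhs_means())
```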

Remark 1. In Section 3 we will show that (1) arises from (3) when b = 1. It will also be shown that in (4) we could, without loss of generality, assume b ≥ 0.

Remark 2. Note that letting b → 0 in (4), we obtain (2).

We give the proof of Theorem 1 at the end of this section, after deriving some auxiliary results. Our arguments are based on a transformation of exponentials defined as follows. For c = (c_1, c_2, ...) with c_i ≥ 0 for i ≥ 1, denote

\[
S_0(c) = 0; \qquad S_{i-1}(c) = \sum_{j=1}^{i-1} c_{i-j} X_j, \quad i \ge 2.
\tag{5}
\]

The reversed order of the indices in (5) will simplify the presentation below. Note that S_{i-1}(c) becomes cS_{i-1} if c = (c, c, ...). For fixed i, the partial sums of the c_{i-j}X_j's partition the time axis into the i intervals [0, c_{i-1}X_1), [c_{i-1}X_1, c_{i-1}X_1 + c_{i-2}X_2), ..., [S_{i-1}(c), ∞). Let the random variable K_i ∈ {1, 2, ..., i} be the label of the interval in which X_i falls, i.e.

\[
[K_i = 1] = [X_i < c_{i-1}X_1];
\]
\[
[K_i = k] = [c_{i-1}X_1 + c_{i-2}X_2 + \cdots + c_{i-k+1}X_{k-1} \le X_i < c_{i-1}X_1 + c_{i-2}X_2 + \cdots + c_{i-k}X_k], \quad 2 \le k \le i-1;
\]
\[
[K_i = i] = [c_{i-1}X_1 + c_{i-2}X_2 + \cdots + c_1 X_{i-1} \le X_i].
\]


Using the label K_i we transform X_1, X_2, ... as

\[
X_j = \begin{cases}
\dfrac{1}{c_{i-j}+1}\, Y_j + 1_{[K_i = j]}\, Y_{K_i+1}, & 1 \le j \le \min\{K_i, i-1\};\\[2mm]
Y_{j+1}, & K_i < j < i;\\[2mm]
\displaystyle \sum_{l=1}^{\min\{K_i, i-1\}} \frac{c_{i-l}}{c_{i-l}+1}\, Y_l + 1_{[K_i = i]}\, Y_i, & j = i;\\[2mm]
Y_j, & j > i,
\end{cases}
\tag{6}
\]

where Y_1, Y_2, ... are independent exponentials with mean 1. This follows by observing that min{X_i, c_{i-j}X_j} is an exponential with mean c_{i-j}/(c_{i-j}+1) and that, by the memoryless property, the overshoot of the larger term is again exponential with the mean of that term. For more detail see [6].

Let us now prove that for any i = 1, 2, ...,

\[
\sum_{j=1}^{i-1} C_{i-j-1} X_j + (X_i - S_{i-1}(c))_+ \overset{D}{=} \sum_{j=1}^{i} C_{i-j} X_j,
\tag{7}
\]

where C_0 = 1 and C_m = \prod_{j=1}^{m} (1 + c_j)^{-1}, m \ge 1. To see (7), we apply the transformation (6), yielding

\[
\sum_{j=1}^{i-1} C_{i-j-1} X_j + (X_i - S_{i-1}(c))_+
= \sum_{j=1}^{\min\{K_i, i-1\}} C_{i-j-1} (c_{i-j}+1)^{-1} Y_j + \sum_{j=K_i}^{i-1} C_{i-j-1} Y_{j+1} + 1_{[K_i = i]}\, Y_i
= \sum_{j=1}^{i} C_{i-j} Y_j.
\tag{8}
\]
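Identity (7) can likewise be probed numerically for a non-constant weight sequence. In the sketch below (an added illustration; the weights c, the sample size and the tolerance are arbitrary choices) both sides are simulated for i = len(c) + 1 and their means compared.

```python
import random

def identity7_means(c=(0.3, 1.0, 2.0, 0.7), trials=300_000, seed=2):
    """Empirical means of both sides of (7), with i = len(c) + 1."""
    i = len(c) + 1
    # C_m = prod_{j=1}^m (1 + c_j)^{-1}, with C_0 = 1.
    C = [1.0]
    for cj in c:
        C.append(C[-1] / (1.0 + cj))
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(i)]
        # S_{i-1}(c) = sum_{j=1}^{i-1} c_{i-j} X_j (note the reversed weights)
        s_c = sum(c[i - 1 - j] * x[j - 1] for j in range(1, i))
        lhs += sum(C[i - j - 1] * x[j - 1] for j in range(1, i)) + max(x[-1] - s_c, 0.0)
        rhs += sum(C[i - j] * x[j - 1] for j in range(1, i + 1))
    return lhs / trials, rhs / trials

print(identity7_means())
```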

Here the crucial and surprising feature is that the last expression in (8) does not depend on K_i, although it was obtained by a transformation that was defined in terms of K_i. Now (7) follows immediately from (8), the law of total probability, and the fact that the X_j's and the Y_j's have the same joint distribution.

Also, note that for a_j ≥ 0, j ≥ 0, and a = (a_1, a_2, ...), the sum a_{i-1}X_1 + a_{i-2}X_2 + \cdots + a_1 X_{i-1} + a_0 X_i = S_{i-1}(a) + a_0 X_i under the transformation (6) becomes

\[
S_{i-1}(a) + a_0 X_i = \sum_{j=1}^{K_i} \frac{a_{i-j} + a_0 c_{i-j}}{c_{i-j}+1}\, Y_j + \sum_{j=K_i}^{i-1} a_{i-j}\, Y_{j+1},
\]

where we put c_0 = 0. For this paper, the key point is that if (and only if) a_0 = a_1 = \cdots = a, then under the transformation (6) the sum S_{i-1}(a) + a_0 X_i = aS_i remains ‘of the same form’ independently of K_i:

\[
aS_i = a \left( \sum_{j=1}^{K_i} \frac{1 + c_{i-j}}{c_{i-j}+1}\, Y_j + \sum_{j=K_i}^{i-1} Y_{j+1} \right) = a\,(Y_1 + Y_2 + \cdots + Y_i).
\tag{9}
\]


Now we can prove Theorem 1.

Proof of Theorem 1. We prove (3) by expanding the terms on the left-hand side one by one, and then using induction as in [6]. Specifically, we show that if for some i = 2, 3, ..., n,

\[
\sum_{j=1}^{n} (X_j - bS_{j-1})_+
\overset{D}{=} \sum_{j=1}^{i-1} \frac{1}{(b+1)^{i-j-1}}\, X_j + \sum_{j=i}^{n} (X_j - bS_{j-1})_+
= \sum_{j=1}^{i-1} \frac{1}{(b+1)^{i-j-1}}\, X_j + (X_i - bS_{i-1})_+ + \sum_{j=i+1}^{n} (X_j - bS_{j-1})_+,
\tag{10}
\]

then it is also valid for i + 1. Note that (10) holds trivially for i = 2. Now we put c_1 = c_2 = \cdots = b and apply the transformation (6) to the last expression of (10). By (8), the first two terms give \sum_{j=1}^{i} (b+1)^{-(i-j)} Y_j, since here C_j = (b+1)^{-j}, j \ge 1. Furthermore, it follows from (9) that the form of the third term does not change. Hence, the whole expression becomes

\[
\sum_{j=1}^{i} \frac{1}{(b+1)^{i-j}}\, Y_j + \sum_{j=i+1}^{n} (Y_j - b(Y_1 + Y_2 + \cdots + Y_{j-1}))_+
\overset{D}{=} \sum_{j=1}^{i} \frac{1}{(b+1)^{i-j}}\, X_j + \sum_{j=i+1}^{n} (X_j - bS_{j-1})_+,
\]

as required.

The proof of (4) is very similar. We assume that for some i = 2, 3, ..., n,

\[
\max_{1 \le j \le n} \{X_j - bS_{j-1}\}
\overset{D}{=} \max\left\{ \sum_{j=1}^{i-1} \frac{b}{(b+1)^{i-j} - 1}\, X_j,\; \max_{i \le j \le n} \{X_j - bS_{j-1}\} \right\}
\]
\[
= \max\left\{ \sum_{j=1}^{i-1} \frac{b}{(b+1)^{i-j} - 1}\, X_j,\; X_i - bS_{i-1},\; \max_{i+1 \le j \le n} \{X_j - bS_{j-1}\} \right\}
\]
\[
= \max\left\{ \sum_{j=1}^{i-1} \frac{b}{(b+1)^{i-j} - 1}\, X_j + (X_i - S_{i-1}(c))_+,\; \max_{i+1 \le j \le n} \{X_j - bS_{j-1}\} \right\},
\tag{11}
\]

where

\[
c_j = b + \frac{b}{(b+1)^j - 1} = \frac{b(b+1)^j}{(b+1)^j - 1}, \quad j \ge 1.
\]


Equation (11) is trivial for i = 2. Now it suffices to show that if (11) holds for i, then it also holds for i + 1. To see this, we apply the transformation (6) to the last expression of (11). Then we use (8) and (9), yielding

\[
\max\left\{ \sum_{j=1}^{i} \frac{b}{(b+1)^{i-j+1} - 1}\, Y_j,\; \max_{i+1 \le j \le n} \{Y_j - b(Y_1 + Y_2 + \cdots + Y_{j-1})\} \right\}
\overset{D}{=} \max\left\{ \sum_{j=1}^{i} \frac{b}{(b+1)^{i-j+1} - 1}\, X_j,\; \max_{i+1 \le j \le n} \{X_j - bS_{j-1}\} \right\}.
\]

This proves the statement of the induction.

3. Discussion

This section contains several remarks on the results obtained above. First of all, note that the proof of (3), as well as the proof of (4), establishes n distributional identities, not just two. That is, the random variables

\[
\sum_{j=1}^{i} \frac{1}{(b+1)^{i-j}}\, X_j + \sum_{j=i+1}^{n} (X_j - bS_{j-1})_+, \quad 1 \le i \le n,
\]

are identically distributed. The same holds for the random variables

\[
\max\left\{ \sum_{j=1}^{i} \frac{b}{(b+1)^{i-j+1} - 1}\, X_j,\; \max_{i+1 \le j \le n} \{X_j - bS_{j-1}\} \right\}, \quad 1 \le i \le n.
\]

Due to (9), we can generalize (3) and (4) in the following way. Let f(·,·) be a function defined on the positive octant of R². Assume that the first argument of f(·,·) is the left-hand side of (3), and that the second argument is S_n. Then f(·,·) of these two random arguments has the same distribution as f(·,·) with the first argument replaced by the right-hand side of (3). The same holds for (4). The proof is the same as the proof of Theorem 1, with the single additional remark that, according to (9), the term S_n under the transformation (6) always becomes Y_1 + Y_2 + \cdots + Y_n. In particular, observe that

\[
S_n - \max_{1 \le j \le n} \{X_j - bS_{j-1}\} = S_n - \max_{1 \le j \le n} \{S_j - (b+1)S_{j-1}\}
\overset{D}{=} (b+1) \left( S_n - \max_{1 \le j \le n} \{S_j - (b+1)^{-1} S_{j-1}\} \right),
\]

where the last equality in distribution is obtained by reversing the order of the X_j, 1 ≤ j ≤ n. Thus, in (4) we could assume b ≥ 0 without loss of generality. Furthermore, putting f(x, y) = 1 − x/y, we derive from (3) that

\[
\sum_{j=2}^{n} \min\left\{ \frac{X_j}{S_n}, \frac{bS_{j-1}}{S_n} \right\}
\overset{D}{=} \sum_{j=1}^{n-1} \left( 1 - \frac{1}{(b+1)^j} \right) \frac{X_j}{S_n},
\tag{12}
\]

and from (4) that

\[
\min_{1 \le j \le n} \left\{ \sum_{l=1}^{j-1} \frac{(b+1)X_l}{S_n} + \sum_{l=j+1}^{n} \frac{X_l}{S_n} \right\}
\overset{D}{=} \sum_{j=1}^{n-1} \left( 1 - \frac{b}{(b+1)^{j+1} - 1} \right) \frac{X_j}{S_n}.
\tag{13}
\]
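Identity (12) can also be verified by simulation. The following sketch (an added illustration with arbitrary n, b and sample size) compares the empirical means of its two sides; with b = 1 it checks (1) as well.

```python
import random

def identity12_means(n=5, b=1.0, trials=200_000, seed=3):
    """Empirical means of both sides of (12)."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(n)]
        sn = sum(x)
        s, acc = x[0], 0.0            # s plays the role of S_{j-1}, j >= 2
        for xj in x[1:]:
            acc += min(xj, b * s)
            s += xj
        lhs += acc / sn
        rhs += sum((1.0 - (b + 1.0) ** (-j)) * x[j - 1] for j in range(1, n)) / sn
    return lhs / trials, rhs / trials

print(identity12_means())
```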


Now let U_1, U_2, ..., U_{n-1} be independent random variables uniformly distributed on [0, 1), and let U_{(1)}, U_{(2)}, ..., U_{(n-1)} denote their order statistics. Put U_{(0)} = 0 and U_{(n)} = 1. We define the uniform spacings D_1, D_2, ..., D_n as

\[
D_i = U_{(i)} - U_{(i-1)}, \quad 1 \le i \le n.
\tag{14}
\]

A well-known property of these spacings (cf. [8]) is that they are distributed as i.i.d. exponentials divided by their sum:

\[
(D_1, D_2, \ldots, D_n) \overset{D}{=} (X_1/S_n, X_2/S_n, \ldots, X_n/S_n).
\]
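This spacings property is easy to see in simulation. The sketch below (an added illustration; n, the sample size and the choice of the maximum as test statistic are arbitrary) compares the empirical mean of the largest spacing with that of max_j X_j / S_n.

```python
import random

def max_spacing_vs_exponentials(n=5, trials=200_000, seed=4):
    """Empirical means of max_i D_i and of max_j X_j / S_n; they should agree."""
    rng = random.Random(seed)
    mean_spacing = mean_expo = 0.0
    for _ in range(trials):
        # n spacings generated from n - 1 i.i.d. uniforms on [0, 1)
        pts = [0.0] + sorted(rng.random() for _ in range(n - 1)) + [1.0]
        mean_spacing += max(b - a for a, b in zip(pts, pts[1:]))
        # the same statistic for i.i.d. exponentials divided by their sum
        x = [rng.expovariate(1.0) for _ in range(n)]
        mean_expo += max(x) / sum(x)
    return mean_spacing / trials, mean_expo / trials

print(max_spacing_vs_exponentials())
```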

Therefore, (12) and (13) in fact establish results like (3) and (4) for uniform spacings. Note that (1) is just (12) with b = 1. The distributions of the right-hand sides of (12) and (13) can easily be found by using Theorem 2 of [1].

Uniform spacings play an important role in applications. One example is order picking in carousel systems (see [3], [6], [7]). The carousel can be seen as a circle of length 1. The picker starts at point 0 and has to collect n − 1 items uniformly distributed on the circle. In this model the travel time can be written in terms of the uniform spacings defined by (14). In [6], formula (1) was used to obtain the distribution of the travel time under the nearest-item heuristic, where the next item to be picked is always the nearest one. Formula (13) with b = 1 gives the distribution of the so-called ‘one-sided optimal’ route. This is the shortest of the n − 1 routes in which the picker must perform the last step in a given direction (say, clockwise). Although this algorithm is not very wise, its analysis is extremely helpful for studying the optimal order-picking strategy.

Acknowledgements

The author is grateful to Ivo Adan for fruitful collaboration. The author thanks a referee for extensive comments and helpful suggestions which have considerably improved the presentation of the paper.

References

[1] Ali, M. M. and Obaidullah, M. (1982). Distribution of linear combination of exponential variates. Commun. Statist. Theory Meth. 11, 1453–1463.
[2] Azlarov, T. A. and Volodin, N. A. (1986). Characterization Problems Associated with the Exponential Distribution. Springer, Berlin.
[3] Bartholdi, J. J., III and Platzman, L. K. (1986). Retrieval strategies for a carousel conveyor. IIE Trans. 18, 166–173.
[4] Dimitrov, B. and Khalil, Z. (1990). On a new characterization of the exponential distribution related to a queueing system with an unreliable server. J. Appl. Prob. 27, 221–226.
[5] Gilat, D. (1994). On a curious property of the exponential distribution. In Proc. 12th Prague Conf. Information Theory, Statistical Decision Functions and Random Processes, eds P. Lachout and J. Á. Víšek, pp. 77–80.
[6] Litvak, N. and Adan, I. (2001). The travel time in carousel systems under the nearest item heuristic. J. Appl. Prob. 38, 45–54.
[7] Litvak, N., Adan, I., Wessels, J. and Zijm, W. H. M. (2001). Order picking in carousel systems under the nearest item heuristic. Prob. Eng. Inf. Sci. 15, 135–164.
[8] Pyke, R. (1965). Spacings. J. R. Statist. Soc. B 27, 395–436.
[9] Van Harn, K. and Steutel, F. W. (1991). On a characterization of the exponential distribution. J. Appl. Prob. 28, 947–949.
[10] Wilf, H. S. (1987). The exponential distribution. Amer. Math. Monthly 94, 515–518.