Joint Latency and Cost Optimization for Erasure-coded Data Center Storage

Yu Xiang, Tian Lan
Department of ECE, George Washington University, DC 20052
Email: {xy336699, tlan}@gwu.edu

Vaneet Aggarwal, Yih-Farn R Chen
AT&T Labs-Research, Bedminster, NJ 07921
Email: {vaneet, chen}@research.att.com

arXiv:1404.4975v1 [cs.DC] 19 Apr 2014

Abstract Modern distributed storage systems offer large capacity to satisfy the exponentially increasing need of storage space. They often use erasure codes to protect against disk and node failures to increase reliability, while trying to meet the latency requirements of the applications and clients. This paper provides an insightful upper bound on the average service delay of such erasure-coded storage with arbitrary service time distribution and consisting of multiple files. Not only does the result supersede known delay bounds that only work for a single file, it also enables a novel problem of joint latency and storage cost minimization over three dimensions: selecting the erasure code, placement of encoded chunks, and optimizing scheduling policy. The problem is efficiently solved via the computation of a sequence of convex approximations with provable convergence. We further prototype our solution in an open-source, cloud storage deployment over three geographically distributed data centers. Experimental results validate our theoretical delay analysis and show significant latency reduction, providing valuable insights into the proposed latency-cost tradeoff in erasure-coded storage.

I. INTRODUCTION

A. Motivation

Consumers are engaged in more social networking and E-commerce activities these days and are increasingly storing their documents and media in online storage. Businesses are relying on Big Data analytics for business intelligence and are migrating their traditional IT infrastructure to the cloud. These trends cause the online data storage demand to rise faster than Moore's Law [1]. The increased storage demands have led companies to launch cloud storage services like Amazon's S3 [2] and personal cloud storage services like Amazon's Cloud Drive, Apple's iCloud, DropBox, Google Drive, Microsoft's SkyDrive, and AT&T Locker. Storing redundant information on distributed servers can increase reliability for storage systems, since users can retrieve duplicated pieces in case of disk, node, or site failures. Erasure coding has been widely studied for distributed storage systems [6, and references therein] and used by companies like Facebook [3] and Google [4], since it provides space-optimal data redundancy to protect against data loss. There is, however, a critical factor that affects the service quality the user experiences: the delay in accessing the stored file. In distributed storage, the bandwidth between different nodes is frequently limited, and so is the bandwidth from a user to different storage nodes, which can cause significant delay in data access that is perceived as poor quality of service. In this paper, we consider the problem of jointly minimizing both service delay and storage cost for the end users.

While a latency-cost tradeoff has been demonstrated for the special case of a single file [33], [38], [43], [42], much less is known about the latency performance of multiple files that are coded with different parameters and share common storage servers. The main goal of this paper can be illustrated by the abstracted example shown in Fig. 1. We consider two files, each partitioned into k = 2 blocks of equal size and encoded using maximum distance separable (MDS) codes. Under an (n, k) MDS code, a file is encoded and stored in n storage nodes such that the chunks stored in any k of these n nodes suffice to recover the entire file. There is a centralized scheduler that buffers and schedules all incoming requests. For instance, a request to retrieve file A can be completed after it is successfully processed by 2 distinct nodes chosen from {1, 2, 3, 4}, where the desired chunks of A are available. Due to shared storage nodes and joint request scheduling, the delay performances of the files are highly correlated and are collectively

determined by control variables of both files over three dimensions: (i) the scheduling policy that decides which request in the buffer to process when a node becomes available, (ii) the placement of file chunks over distributed storage nodes, and (iii) erasure coding parameters that decide how many chunks are created. A joint optimization over these three dimensions is very challenging because the latency performances of different files are tightly entangled. While increasing the erasure code length of file B allows it to be placed on more storage nodes, potentially leading to smaller latency (because of improved load-balancing) at the price of higher storage cost, it inevitably affects the service latency of file A due to the resulting contention and interference on more shared nodes. In this paper, we present a quantification of service latency for erasure-coded storage with multiple files and propose an efficient solution to the joint optimization of both latency and storage cost.

Fig. 1. An erasure-coded storage of 2 files, each partitioned into 2 blocks and encoded using (4,2) and (3,2) MDS codes, respectively: file A's chunks are a1, a2, a1+a2, a1+2a2, and file B's chunks are b1, b2, b1+b2. The resulting file chunks are spread over 7 storage nodes, and a centralized scheduler dispatches incoming requests. Any file request must be processed by 2 distinct nodes that have the desired chunks. Nodes 3, 4 are shared and can process requests for both files.
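To make the MDS property in Fig. 1 concrete, the sketch below (an illustration we add, not the system's actual codec) reconstructs file A from any 2 of its 4 chunks. It uses the example code a1, a2, a1+a2, a1+2a2 over the rationals for readability; production codecs such as zfec perform the same linear algebra over a finite field.

```python
import itertools
import numpy as np

# Generator rows of the (4,2) code from Fig. 1: chunk_i = G[i] . (a1, a2)
G = np.array([[1, 0],   # chunk 1: a1
              [0, 1],   # chunk 2: a2
              [1, 1],   # chunk 3: a1 + a2
              [1, 2]])  # chunk 4: a1 + 2*a2

a = np.array([7.0, 3.0])   # original blocks (a1, a2)
chunks = G @ a             # encoded chunks stored on 4 nodes

# MDS property: any k = 2 chunks suffice to recover (a1, a2),
# because every 2x2 submatrix of G is invertible.
for rows in itertools.combinations(range(4), 2):
    decoded = np.linalg.solve(G[list(rows)], chunks[list(rows)])
    assert np.allclose(decoded, a), rows
print("file A recovered from every 2-subset of its 4 chunks")
```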

B. Related Work

Quantifying the exact service delay in erasure-coded storage is an open problem. Prior works focusing on asymptotic queuing delay behaviours [35], [36] are not applicable because the redundancy factor in practical data centers typically remains small due to storage cost concerns. Due to the lack of analytical delay models for erasure-coded storage, most of the literature focuses on reliable distributed storage system design, and latency is only presented as a performance metric when evaluating the proposed erasure coding scheme, e.g., [14], [16], [19], [24], [22], which demonstrate latency improvement due to erasure coding in different system implementations. Related designs can also be found in data access scheduling [7], [9], [12], [13], access collision avoidance [10], [11], and encoding/decoding time optimization [25], [26]. Restricting to the special case of a single file, service delay bounds of erasure-coded storage have recently been studied in [33], [38], [43], [42].

Queueing-theoretic analysis. For a single file and under an assumption of exponential service time distribution, the authors in [33], [38] proposed a block-one-scheduling policy that only allows the request at the head of the buffer to move forward. An upper bound on the average latency of the storage system is provided through queueing-theoretic analysis for MDS codes with k = 2. Later, the approach was extended in [42] to general (n, k) erasure codes, yet still for a single file. A family of MDS-Reservation(t) scheduling policies that block all except the first t file requests is proposed and leads to numerical upper bounds on the average latency. It is shown that as t increases, the bound becomes tighter, while the number of states involved in the queueing-theoretic analysis grows exponentially.

Fork-join queue analysis. A queuing model closely related to erasure-coded storage is the fork-join queue [8], which has been extensively studied in the literature. Recently, the authors in [43] proposed an (n, k) fork-join queue where a file request is forked to all n storage nodes that host the file chunks,

and it exits the system when any k chunks are processed. Using this (n, k) fork-join queue to model the latency performance of erasure-coded storage, a closed-form upper bound on service latency is derived for a single file and exponentially-distributed service time. However, the approach cannot be applied to multiple-file storage, where each file has a separate fork-join queue and the queues of different files are highly dependent due to shared storage nodes and joint request scheduling. Further, under a fork-join queue, each file request must be served by all n nodes or a set of pre-specified nodes, so the model falls short of addressing dynamic load-balancing of multiple files.

C. Our Contributions

This paper aims to propose a systematic framework that (i) quantifies an outer bound on the service latency of arbitrary erasure codes and any number of files in distributed data center storage with general service time distributions, and (ii) enables a novel solution to a joint minimization of latency and storage cost by optimizing the system over three dimensions: erasure coding, chunk placement, and scheduling policy.

The outer bound on the service latency is found using four steps: (i) We present a novel probabilistic scheduling policy, which dispatches each file request to $k_i$ distinct storage nodes, which then manage their own local queues independently. A file request exits the system when all $k_i$ chunk requests are processed. We show that probabilistic scheduling provides an upper bound on the average latency of erasure-coded storage for arbitrary erasure codes, any number of files, and general service time distributions. (ii) Since the latency of probabilistic scheduling is hard to evaluate over all probabilities on the $\binom{n_i}{k_i}$ subsets, we show that probabilistic scheduling is equivalent to accessing each of the $n_i$ storage nodes with a certain probability: if there is a strategy that accesses each storage node with a given probability, there exists a probabilistic scheduling strategy over all $\binom{n_i}{k_i}$ subsets that realizes it. (iii) The policy that selects each storage node with a certain probability generates memoryless requests at each node, so the delay at each storage node can be characterized by the latency of an M/G/1 queue. (iv) Knowing the exact delay from each storage node, we find a tight bound on the delay of the file by extending the order statistics analysis in [37]. Not only does our result supersede previous latency analysis [33], [38], [43], [42] by incorporating multiple files and arbitrary service time distributions, it is also shown to be tighter for a wide range of workloads even in the single-file case.

The main application of our latency analysis is a joint optimization of latency and storage cost for multiple-file storage over three dimensions: erasure coding, chunk placement, and scheduling policy. To the best of our knowledge, this is the first paper to explore all three design degrees of freedom and to optimize an aggregate latency-plus-cost objective for all end users in erasure-coded storage. Solving such a joint optimization is known to be hard due to the integer nature of storage cost, as well as the coupling of control variables. The length of the erasure code determines not only storage cost but also the number of file chunks to be created and placed, and the placement of file chunks over storage nodes further dictates the possible options for scheduling future file requests.
To deal with these challenges, we propose an algorithm that constructs and solves a sequence of local, convex approximations of the latency-plus-cost minimization, which is a mixed-integer optimization. The sequence of approximations, parametrized by β > 0, can be efficiently computed using a standard projected gradient method and is shown to converge to the original problem as β → ∞.

To validate our theoretical analysis and joint latency-plus-cost optimization, we prototype our solution in Tahoe [41], an open-source, distributed filesystem based on the zfec erasure coding library for fault tolerance. A Tahoe storage system consisting of 12 storage nodes is deployed as virtual machines in an OpenStack-based data center environment distributed across New Jersey (NJ), Texas (TX), and California (CA). Each site has four storage servers. One additional storage client was deployed in the NJ data center to issue storage requests. First, we validate our latency analysis via experiments with multiple files and different request arrival rates on the testbed. Our measurement of the real service time distribution falsifies the exponential assumption in [33], [38], [42]. Our analysis outperforms

the upper bound in [43] even in the single-file case. Second, we implement our algorithm for joint latency-plus-cost minimization and demonstrate significant improvement of both latency and cost over oblivious design approaches. Our entire design is validated in various scenarios on our testbed, including different file sizes and arrival rates. The percentage improvement increases as file size increases, because our algorithm reduces queuing delay, which is more effective when file sizes are larger. Finally, we quantify the tradeoff between latency and storage cost: the latency improvement exhibits diminishing returns as storage cost/redundancy increases, which underlines the importance of identifying a suitable tradeoff point.

II. SYSTEM MODEL

We consider a data center consisting of m heterogeneous servers, denoted by M = {1, 2, . . . , m}, called storage nodes. To distributively store a set of r files, indexed by i = 1, . . . , r, we partition each file i into $k_i$ fixed-size chunks¹ and then encode it using an $(n_i, k_i)$ MDS erasure code to generate $n_i$ distinct chunks of the same size for file i. The encoded chunks are assigned to and stored on $n_i$ distinct storage nodes, which leads to a chunk placement subproblem, i.e., to find a set $S_i$ of storage nodes, satisfying $S_i \subseteq M$ and $n_i = |S_i|$, to store file i. Each chunk is therefore placed on a different node to provide high reliability in the event of node or network failures. While data locality and network delay have been key issues studied in data center scheduling algorithms [12], [13], [15], the prior work does not apply to erasure-coded systems.

The use of an $(n_i, k_i)$ MDS erasure code allows the file to be reconstructed from any subset of $k_i$-out-of-$n_i$ chunks, whereas it also introduces a redundancy factor of $n_i/k_i$. To model storage cost, we assume that each storage node $j \in M$ charges a constant cost $V_j$ per chunk. Since $k_i$ is determined by file size and the choice of chunk size, we need to choose an appropriate $n_i$ which not only introduces sufficient redundancy for improving chunk availability, but also achieves a cost-effective solution. We refer to the problem of choosing $n_i$ to form a proper $(n_i, k_i)$ erasure code as an erasure coding subproblem.

For known erasure coding and chunk placement, we now describe a queueing model of the distributed storage system. We assume that the arrivals of client requests for each file i form an independent Poisson process with a known rate $\lambda_i$. We consider the chunk service time $X_j$ of node j with arbitrary distribution, whose statistics can be inferred from existing work on network delay [27], [26] and file-size distribution [28], [29]. Under MDS codes, each file i can be retrieved from any $k_i$ distinct nodes that store its chunks. We model this by treating each file request as a batch of $k_i$ chunk requests, so that a file request is served when all $k_i$ chunk requests in the batch are processed by distinct storage nodes. All requests are buffered in a common queue of infinite capacity. Consider the 2-file storage example in Section I, where files A and B are encoded using (4, 2) and (3, 2) MDS codes, respectively: file A has chunks $A_1, A_2, A_3, A_4$, and file B has chunks $B_1, B_2, B_3$.
As depicted in Fig. 2(a), each file request arrives as a batch of $k_i = 2$ chunk requests, e.g., $(R_1^{A,1}, R_1^{A,2})$, $(R_2^{A,1}, R_2^{A,2})$, and $(R_1^{B,1}, R_1^{B,2})$, where $R_i^{A,j}$ denotes the $i$th request of file A, and $j = 1, 2$ indexes the first or second chunk request within this file request. Denote the five nodes (from left to right) as servers 1, 2, 3, 4, and 5, and suppose 4 file requests for file A and 3 file requests for file B are initiated, i.e., requests for the different files have different arrival rates. The two chunk requests of one file request can target any two different chunks among $A_1, A_2, A_3, A_4$ for file A and $B_1, B_2, B_3$ for file B. Due to the chunk placement in this example, any 2 chunk requests in file A's batch must be processed by 2 distinct nodes from {1, 2, 3, 4}, while the 2 chunk requests in file B's batch must be served by 2 distinct nodes from {3, 4, 5}. Suppose that the system is now in the state depicted by Fig. 2(a), wherein the chunk requests $R_1^{A,1}$, $R_2^{A,1}$, $R_1^{A,2}$, $R_1^{B,1}$, and $R_2^{B,2}$ are being served by the 5 storage nodes, and 9 more chunk requests are buffered in the queue. Suppose that node 2 completes serving chunk request $R_2^{A,1}$ and is now free to serve another request waiting in the queue. Since node 2 has already served a chunk request of batch

¹ While we make the assumption of fixed chunk size here to simplify the problem formulation, all results in this paper can easily be extended to variable chunk sizes. Nevertheless, fixed chunk sizes are indeed used by many existing storage systems [14], [16], [18].

$(R_2^{A,1}, R_2^{A,2})$, and node 2 does not host any chunk for file B, it is not allowed to serve $R_2^{A,2}$ or any of $R_2^{B,j}$, $R_3^{B,j}$ with $j = 1, 2$ in the queue. One of the valid requests, $R_3^{A,j}$ or $R_4^{A,j}$, will be selected by a scheduling algorithm and assigned to node 2. We denote a scheduling policy that minimizes average expected latency in such a queuing model as optimal scheduling.

Definition 1: (Optimal scheduling) An optimal scheduling policy (i) buffers all requests in a queue of infinite capacity; (ii) assigns at most 1 chunk request from a batch to each appropriate node; and (iii) schedules requests to minimize average latency if multiple choices are available.

Fig. 2. Functioning of (a) an optimal scheduling policy ("MDS scheduling") and (b) a probabilistic scheduling policy.
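The feasibility rule of Definition 1 (a node may serve at most one chunk request per batch, and only for files whose chunks it hosts) can be captured by a small predicate. The sketch below is our own illustration of the rule on the running example; names such as `hosts` and `served_by` are hypothetical, not from any real scheduler.

```python
# Chunk placement from the running example: node -> files it hosts a chunk of.
hosts = {1: {"A"}, 2: {"A"}, 3: {"A", "B"}, 4: {"A", "B"}, 5: {"B"}}

def can_serve(node, batch):
    """True if `node` may take one more chunk request of `batch` (rule (ii))."""
    if batch["file"] not in hosts[node]:      # node stores no chunk of this file
        return False
    return node not in batch["served_by"]     # at most 1 request per batch/node

batch_2A = {"file": "A", "served_by": {2}}    # node 2 already served R_2^{A,1}
assert not can_serve(2, batch_2A)             # so it may not take R_2^{A,2}
assert can_serve(1, batch_2A)                 # but node 1 may
```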

An exact analysis of optimal scheduling is extremely difficult. Even for given erasure codes and chunk placement, it is unclear which scheduling policy leads to minimum average latency for multiple files. For example, when a shared storage node becomes free, one could schedule either the earliest valid request in the queue or the request with scarcest availability, with different implications for average latency. A scheduling policy similar to [33], [38] that blocks all but the first t batches does not apply to multiple files, because a Markov-chain representation of the resulting queue would need each state to encapsulate not only the status of each batch in the queue, but also the exact assignment of chunk requests to storage nodes, since nodes are shared by multiple files and are no longer homogeneous. This leads to a Markov chain with a huge state space that is hard to quantify analytically even for small t. On the other hand, the approach relying on the (n, k) fork-join queue in [43] also falls short, because each file request must be forked to $n_i$ servers, inevitably causing conflicts at shared servers.

III. UPPER BOUND: PROBABILISTIC SCHEDULING

This section presents a class of scheduling policies (and the resulting latency analysis), which we call probabilistic scheduling, whose average latency upper bounds that of optimal scheduling.

A. Probabilistic scheduling

Under $(n_i, k_i)$ MDS codes, each file i can be retrieved by processing a batch of $k_i$ chunk requests at distinct nodes that store the file chunks. Recall that each encoded file i is spread over $n_i$ nodes, denoted by a set $S_i$. Upon the arrival of a file i request, in probabilistic scheduling we randomly dispatch the batch of $k_i$ chunk requests to $k_i$-out-of-$n_i$ storage nodes in $S_i$, denoted by a subset $A_i \subseteq S_i$ (satisfying $|A_i| = k_i$), with predetermined probabilities. Then, each storage node manages its local queue independently and continues processing requests in order. A file request is completed when all its chunk requests exit the system. An example of probabilistic scheduling is depicted in Fig. 2(b), wherein 5 chunk requests are currently served by the 5 storage nodes, and 9 more chunk requests have been randomly dispatched to and buffered in the 5 local queues according to chunk placement, e.g., file B's chunk requests are only distributed to nodes {3, 4, 5}. Suppose that node 2 completes serving chunk request $A_2$; the next request in the node's local queue then moves forward.

Definition 2: (Probabilistic scheduling) A probabilistic scheduling policy (i) dispatches each batch of chunk requests to appropriate nodes with predetermined probabilities; (ii) each node buffers requests in a local queue and processes them in order.

It is easy to verify that such probabilistic scheduling assigns at most 1 chunk request from a batch to each appropriate node. It provides an upper bound on the average service latency of optimal scheduling, since rebalancing and scheduling across local queues are not permitted. Let $\mathbb{P}(A_i)$ for all $A_i \subseteq S_i$ be the probability of selecting a set of nodes $A_i$ to process the $|A_i| = k_i$ distinct chunk requests².

Lemma 1: For given erasure codes and chunk placement, the average service latency of probabilistic scheduling with feasible probabilities $\{\mathbb{P}(A_i) : \forall i, A_i\}$ upper bounds the latency of optimal scheduling.

Clearly, the tightest upper bound is obtained by minimizing the average latency of probabilistic scheduling over all feasible probabilities $\mathbb{P}(A_i)$, $\forall A_i \subseteq S_i$ and $\forall i$, which involves $\sum_i \binom{n_i}{k_i}$ decision variables. We refer to this optimization as a scheduling subproblem. While it appears computationally prohibitive, we will demonstrate next that the optimization can be transformed into an equivalent form that only requires $\sum_i n_i$ variables. The key idea is to show that it is sufficient to consider the conditional probability (denoted by $\pi_{i,j}$) of selecting a node $j$, given that a batch of $k_i$ chunk requests of file $i$ is dispatched. It is easy to see that for given $\mathbb{P}(A_i)$, we can derive $\pi_{i,j}$ by

$$\pi_{i,j} = \sum_{A_i : A_i \subseteq S_i} \mathbb{P}(A_i) \cdot \mathbf{1}_{\{j \in A_i\}}, \ \forall i, \quad (1)$$

where $\mathbf{1}_{\{j \in A_i\}}$ is an indicator function that equals 1 if node $j$ is selected by $A_i$ and 0 otherwise.

Theorem 1: A probabilistic scheduling policy with feasible probabilities $\{\mathbb{P}(A_i) : \forall i, A_i\}$ exists if and only if there exist conditional probabilities $\{\pi_{i,j} \in [0,1], \forall i,j\}$ satisfying

$$\sum_{j=1}^{m} \pi_{i,j} = k_i \ \forall i \quad \text{and} \quad \pi_{i,j} = 0 \text{ if } j \notin S_i. \quad (2)$$

The proof of Theorem 1, relying on the Farkas-Minkowski Theorem [45], is detailed in Appendix A. Intuitively, $\sum_{j=1}^{m} \pi_{i,j} = k_i$ holds because each batch of requests is dispatched to exactly $k_i$ distinct nodes. Moreover, a node that does not host chunks of file $i$ should not be selected, meaning that $\pi_{i,j} = 0$ if $j \notin S_i$. Using this result, it is sufficient to study probabilistic scheduling via the conditional probabilities $\pi_{i,j}$, which greatly simplifies our analysis. In particular, it is easy to verify that under our model, the arrival of chunk requests at node $j$ forms a Poisson process with rate $\Lambda_j = \sum_i \lambda_i \pi_{i,j}$, the superposition of $r$ Poisson processes each with rate $\lambda_i \pi_{i,j}$. The resulting queueing system under probabilistic scheduling is stable if all local queues are stable.

Corollary 1: The queuing system can be stabilized by a probabilistic scheduling policy under request arrival rates $\lambda_1, \lambda_2, \ldots, \lambda_r$ if there exist $\{\pi_{i,j} \in [0,1], \forall i,j\}$ satisfying (2) and

$$\Lambda_j = \sum_i \lambda_i \pi_{i,j} < \mu_j, \ \forall j. \quad (3)$$
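As a quick numerical illustration of (1)-(3), the snippet below (our own sketch, with toy parameters) derives the conditional probabilities $\pi_{i,j}$ from subset probabilities $\mathbb{P}(A_i)$ for file A of the running example and checks the per-node stability condition; the request rate and service rate values are assumptions for the example only.

```python
from itertools import combinations

# File A: (4,2) code on nodes S = {1,2,3,4}; pick each 2-subset uniformly.
S, k = (1, 2, 3, 4), 2
subsets = list(combinations(S, k))
P = {A: 1.0 / len(subsets) for A in subsets}   # P(A_i), uniform here

# Equation (1): pi[j] = sum of P(A_i) over subsets A_i containing node j.
pi = {j: sum(p for A, p in P.items() if j in A) for j in S}
assert abs(sum(pi.values()) - k) < 1e-9        # equation (2): sums to k_i

# Equation (3): per-node arrival rate must stay below service rate mu_j.
lam = 0.118                                    # file A request rate (toy value)
mu = {j: 1 / 13.9 for j in S}                  # service rate for 13.9 s mean
Lambda = {j: lam * pi[j] for j in S}
print({j: Lambda[j] < mu[j] for j in S})       # stability check per node
```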

B. Latency analysis and upper bound

An exact analysis of the queuing latency of probabilistic scheduling is still hard because local queues at different storage nodes are dependent on each other, as each batch of chunk requests is dispatched jointly. Let $Q_j$ be the (random) waiting time a chunk request spends in the queue of node $j$. The expected latency of a file $i$ request is determined by the maximum latency that its $k_i$ chunk requests experience on distinct servers $A_i \subseteq S_i$, which are randomly scheduled with predetermined probabilities, i.e.,

$$\bar{T}_i = \mathbb{E}\left[\mathbb{E}_{A_i}\left[\max_{j \in A_i} \{Q_j\}\right]\right], \quad (4)$$

² It is easy to see that $\mathbb{P}(A_i) = 0$ for all $A_i$ with $A_i \not\subseteq S_i$ or $|A_i| \ne k_i$, because such node selections do not recover $k_i$ distinct chunks and are thus inadequate for successful decoding.

where the first expectation is taken over system queuing dynamics and the second is taken over random dispatch decisions $A_i$. If the server scheduling decision $A_i$ were deterministic, a tight upper bound on the expected value of the highest order statistic could be computed from the marginal mean and variance of these random variables [37], namely $\mathbb{E}[Q_j]$ and $\mathrm{Var}[Q_j]$. Relying on Theorem 1, we first extend this bound to the case of randomly selected servers with respect to the conditional probabilities $\{\pi_{i,j} \in [0,1], \forall i,j\}$ to quantify the latency of probabilistic scheduling.

Lemma 2: The expected latency $\bar{T}_i$ of file $i$ under probabilistic scheduling is upper bounded by

$$\bar{T}_i \le \min_{z \in \mathbb{R}} \left\{ z + \sum_{j \in S_i} \frac{\pi_{i,j}}{2}\left(\mathbb{E}[Q_j] - z\right) + \sum_{j \in S_i} \frac{\pi_{i,j}}{2}\sqrt{\left(\mathbb{E}[Q_j] - z\right)^2 + \mathrm{Var}[Q_j]} \right\}. \quad (5)$$

The bound is tight in the sense that there exists a distribution of $Q_j$ such that (5) holds with exact equality.

Next, we use the fact that the arrival of chunk requests at node $j$ forms a Poisson process with superposed rate $\Lambda_j = \sum_i \lambda_i \pi_{i,j}$. The marginal mean and variance of the waiting time $Q_j$ can then be derived by analyzing each node as a separate M/G/1 queue. We denote by $X_j$ the service time per chunk at node $j$, which has an arbitrary distribution with finite mean $\mathbb{E}[X_j] = 1/\mu_j$, variance $\mathbb{E}[X_j^2] - \mathbb{E}[X_j]^2 = \sigma_j^2$, second moment $\mathbb{E}[X_j^2] = \Gamma_j^2$, and third moment $\mathbb{E}[X_j^3] = \hat{\Gamma}_j^3$. These statistics can be readily inferred from existing work on network delay [27], [26] and file-size distribution [28], [29].

Lemma 3: Using the Pollaczek-Khinchin transform [38], the expected value and variance of the total queueing and network delay are given by

$$\mathbb{E}[Q_j] = \frac{1}{\mu_j} + \frac{\Lambda_j \Gamma_j^2}{2(1 - \rho_j)}, \quad (6)$$

$$\mathrm{Var}[Q_j] = \sigma_j^2 + \frac{\Lambda_j \hat{\Gamma}_j^3}{3(1 - \rho_j)} + \frac{\Lambda_j^2 \Gamma_j^4}{4(1 - \rho_j)^2}, \quad (7)$$

where $\rho_j = \Lambda_j/\mu_j$ is the request intensity at node $j$. Combining Lemma 2 and Lemma 3, a tight upper bound on the expected latency of file $i$ under probabilistic scheduling can be obtained by solving a single-variable minimization problem over $z \in \mathbb{R}$ for given erasure codes $n_i$, chunk placement $S_i$, and scheduling probabilities $\pi_{i,j}$.
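The single-variable minimization in Lemma 2, with the M/G/1 moments of Lemma 3 plugged in, is cheap to evaluate numerically. Below is a minimal sketch we add for illustration; the function names are ours, and the moment values are the service time statistics measured on our testbed in Section V.

```python
from math import sqrt
from scipy.optimize import minimize_scalar

def mg1_moments(Lam, mu, sigma2, Gamma2, Gamma3hat):
    """Mean and variance of queueing + service delay, eqs. (6)-(7)."""
    rho = Lam / mu
    assert rho < 1, "node overloaded"
    EQ = 1 / mu + Lam * Gamma2 / (2 * (1 - rho))
    VarQ = sigma2 + Lam * Gamma3hat / (3 * (1 - rho)) \
           + Lam**2 * Gamma2**2 / (4 * (1 - rho)**2)
    return EQ, VarQ

def latency_bound(pi, moments):
    """Upper bound (5) on expected file latency, minimized over z."""
    def obj(z):
        return z + sum(p / 2 * ((EQ - z) + sqrt((EQ - z)**2 + V))
                       for p, (EQ, V) in zip(pi, moments))
    return minimize_scalar(obj).fun   # obj is convex in z

# Example: a batch of k_i = 2 chunk requests spread uniformly over 4 nodes,
# using moments measured in Section V (mean 13.9 s, sigma 4.3 s, ...).
pi = [0.5] * 4
moments = [mg1_moments(Lam=0.059, mu=1/13.9, sigma2=4.3**2,
                       Gamma2=211.8, Gamma3hat=3476.8)] * 4
print("latency upper bound: %.1f s" % latency_bound(pi, moments))
```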

service and thus storage prices. Therefore, total storage cost is determined by both the level of redundancy (i.e., erasure P code length ni ) and chunk placement Si . Under this model, the cost of storing file i is given by Ci = j∈Si Vj . In this paper, we only consider the storage cost of chunks while network cost would be an interesting future direction. P ˆ ˆ is the fraction of file i requests, and average latency of Let λ = i λi be P the total arrival rate, so λi /λ ˆ T¯i . Our objective is to minimize an aggregate latency-cost objective, i.e., all files is given by i (λi /λ) min

r X λi i=1

ˆ λ

T¯i + θ

r X X

Vj

(8)

i=1 j∈S

s.t. (1), (2), (3), (5), (6), (7). var. ni , πi,j , Si ∈ M, ∀i, j. Here θ ∈ [0, ∞) is a tradeoff factor that determines the relative importance of latency and cost in the minimization problem. Varying from θ = 0 to θ → ∞, the optimization solution to (8) ranges from those minimizing latency to ones that achieve lowest cost. The joint latency-cost optimization is carried out over three sets of variables: erasure code ni , scheduling probabilities πi,j , and chunk placement Si , subject to the constraints derived in Section III. Varying θ, the optimization problem allows service providers to exploit a latency-cost tradeoff and to determine the optimal operating point for different application demands. We plug into (8) the results in Section III and obtain a Joint Latency-Cost Minimization (JLCM) with respect to probabilistic scheduling3 : Problem JLCM: m r X q i X X Λj h 2 Vj (9) min z + X j + Xj + Yj + θ ˆ 2λ j=1

i=1 j∈Si

Λj Γ2j

1 − z, ∀j + µj 2(1 − ρj ) ˆ3 Λj Γ Λ2j Γ4j j Yj = σj2 + + , ∀j 3(1 − ρj ) 4(1 − ρj )2 r X ρj = Λj /µj < 1; Λj = πi,j λi ∀j

s.t. Xj =

(10) (11) (12)

i=1 m X

πi,j = ki ; πi,j ∈ [0, 1]; πi,j = 0 ∀j ∈ / Si

(13)

j=1

|Si | = ni and Si ⊆ M, ∀i var. z, ni , Si , πi,j , ∀i, j.

(14)
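To make the structure of (9)-(14) concrete, the following sketch (our own illustration, not the paper's solver; `jlcm_objective` is a hypothetical name) evaluates the JLCM objective for a candidate point, with constraints (10)-(12) substituted inline and the placement $S_i$ read off the support of $\pi$, as formalized later in Lemma 4.

```python
from math import sqrt

def jlcm_objective(z, pi, lam, mu, sigma2, Gamma2, Gamma3hat, V, theta):
    """Evaluate objective (9) for given z and pi[i][j]; eqs. (10)-(12) inline."""
    r, m = len(pi), len(mu)
    lam_hat = sum(lam)
    total = z
    for j in range(m):
        Lam = sum(lam[i] * pi[i][j] for i in range(r))      # eq. (12)
        rho = Lam / mu[j]
        if rho >= 1:
            return float("inf")                             # unstable: infeasible
        X = 1 / mu[j] + Lam * Gamma2[j] / (2 * (1 - rho)) - z         # eq. (10)
        Y = sigma2[j] + Lam * Gamma3hat[j] / (3 * (1 - rho)) \
            + Lam**2 * Gamma2[j]**2 / (4 * (1 - rho)**2)              # eq. (11)
        total += Lam / (2 * lam_hat) * (X + sqrt(X**2 + Y))
    # Storage cost: nodes with pi > 0 constitute the placement S_i.
    cost = sum(V[j] for i in range(r) for j in range(m) if pi[i][j] > 0)
    return total + theta * cost
```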

Problem JLCM is challenging for two reasons. First, all optimization variables are highly coupled, making it hard to apply any greedy algorithm that iteratively optimizes over different sets of variables: the number of nodes selected for chunk placement (i.e., $S_i$) is determined by the erasure code length $n_i$ in (14), while changing chunk placement $S_i$ affects the feasibility of probabilities $\pi_{i,j}$ due to (13). Second, Problem JLCM is a mixed-integer optimization over $S_i$ and $n_i$, and the storage cost $C_i = \sum_{j \in S_i} V_j$ depends on the integer variables. Such a mixed-integer optimization is known to be difficult in general.

B. Constructing convex approximations

Next, we develop an algorithmic solution to Problem JLCM by iteratively constructing and solving a sequence of convex approximations. This section shows the derivation of such approximations for any³

³ The optimization is relaxed by applying the same auxiliary variable $z$ to all $\bar{T}_i$, which still satisfies inequality (5).

Algorithm JLCM:
    Choose sufficiently large β > 0
    Initialize t = 0 and feasible (π_{i,j}^{(0)} ∀i,j)
    Compute current objective value B^{(0)}
    while B^{(t)} − B^{(t+1)} > ε:
        Approximate the cost function using (17) and (π_{i,j}^{(t)} ∀i,j)
        Call projected_gradient() to solve optimization (19)
        (π_{i,j}^{(t+1)} ∀i,j) = arg min (19)
        z = arg min (19)
        Compute new objective value B^{(t+1)}
        Update t = t + 1
    end while
    Find chunk placement S_i and erasure code n_i by (15)
    Output: (n_i, S_i, π_{i,j}^{(t)}) ∀i,j

Fig. 3. Algorithm JLCM: Our proposed algorithm for solving Problem JLCM.

Routine projected_gradient():
    Choose proper stepsizes δ_1, δ_2, δ_3, ...
    Initialize s = 0 and π_{i,j}^{(s)} = π_{i,j}^{(t)}
    while Σ_{i,j} |π_{i,j}^{(s+1)} − π_{i,j}^{(s)}| > ε:
        Calculate the gradient ∇(17) with respect to π_{i,j}^{(s)}
        π_{i,j}^{(s+1)} = π_{i,j}^{(s)} + δ_s · ∇(17)
        Project π_{i,j}^{(s+1)} onto the feasibility set
            {π_{i,j}^{(s+1)} : Σ_j π_{i,j}^{(s+1)} = k_i, π_{i,j}^{(s+1)} ∈ [0,1], ∀i,j}
        Update s = s + 1
    end while
    Output: (π_{i,j}^{(s)}, ∀i,j)

Fig. 4. Projected Gradient Descent Routine, used in each iteration of Algorithm JLCM.
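The only non-trivial step in the routine of Fig. 4 is the Euclidean projection onto the set $\{\sum_j \pi_{i,j} = k_i, \ \pi_{i,j} \in [0,1]\}$. A standard way to compute it, sketched below as our own illustration (not the paper's code), is bisection on the dual variable of the sum constraint; it assumes the row length $n$ satisfies $n \ge k$.

```python
import numpy as np

def project_capped_simplex(v, k, iters=60):
    """Euclidean projection of v onto {x : sum(x) = k, 0 <= x <= 1}.

    Bisection on the Lagrange multiplier t of the sum constraint:
    x(t) = clip(v - t, 0, 1) has a sum that is non-increasing in t.
    """
    lo, hi = v.min() - 1.0, v.max()   # sum(x(lo)) = len(v) >= k >= 0 = sum(x(hi))
    for _ in range(iters):
        t = (lo + hi) / 2
        if np.clip(v - t, 0.0, 1.0).sum() > k:
            lo = t
        else:
            hi = t
    return np.clip(v - (lo + hi) / 2, 0.0, 1.0)

# Project an infeasible gradient-updated row back to sum = k_i = 2.
pi_row = np.array([0.9, 0.8, 0.7, 0.1])
print(project_capped_simplex(pi_row, k=2))   # sums to 2, entries in [0, 1]
```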

given reference point, while the algorithm and its convergence will be presented later.

Our first step is to replace chunk placement $S_i$ and erasure coding $n_i$ by indicator functions of $\pi_{i,j}$. It is easy to see that any node receiving zero probability $\pi_{i,j} = 0$ should be removed from $S_i$, since chunks placed on it do not help reduce latency.

Lemma 4: The optimal chunk placement of Problem JLCM must satisfy $S_i = \{j : \pi_{i,j} > 0\} \ \forall i$, which implies

$$\sum_{j \in S_i} V_j = \sum_{j=1}^{m} V_j \mathbf{1}_{(\pi_{i,j} > 0)}, \quad n_i = \sum_{j=1}^{m} \mathbf{1}_{(\pi_{i,j} > 0)}. \quad (15)$$

Thus, Problem JLCM becomes an optimization over only $(\pi_{i,j} \ \forall i,j)$, constrained by $\sum_{j=1}^{m} \pi_{i,j} = k_i$ and $\pi_{i,j} \in [0,1]$ in (13), with respect to the following objective function:

$$z + \sum_{j=1}^{m} \frac{\Lambda_j}{2\hat{\lambda}}\left[X_j + \sqrt{X_j^2 + Y_j}\right] + \theta \sum_{i=1}^{r} \sum_{j=1}^{m} V_j \mathbf{1}_{(\pi_{i,j} > 0)}. \quad (16)$$

However, the indicator functions above are neither continuous nor convex. To deal with them, we select a fixed reference point $(\pi_{i,j}^{(t)} \ \forall i,j)$ and leverage a linear approximation of (16) within a small neighbourhood of the reference point. For all $i,j$, we have

$$V_j \mathbf{1}_{(\pi_{i,j} > 0)} \approx V_j \mathbf{1}_{(\pi_{i,j}^{(t)} > 0)} + \frac{V_j \left(\pi_{i,j} - \pi_{i,j}^{(t)}\right)}{\left(\pi_{i,j}^{(t)} + 1/\beta\right) \log \beta}, \quad (17)$$

where $\beta > 0$ is a sufficiently large constant controlling the approximation ratio. It is easy to see that the approximation approaches the real cost function within a small neighbourhood of $(\pi_{i,j}^{(t)} \ \forall i,j)$ as $\beta$ increases. More precisely, when $\pi_{i,j}^{(t)} = 0$ the approximation reduces to $\pi_{i,j}(V_j \beta / \log \beta)$, whose gradient approaches infinity as $\beta \to \infty$, whereas the approximation converges to the constant $V_j$ for any $\pi_{i,j}^{(t)} > 0$ as $\beta \to \infty$. The approximation is linear and differentiable. Therefore, we can iteratively construct and solve a sequence of approximated versions of Problem JLCM. Next, we show that the rest of the optimization objective in (9) is convex in $\pi_{i,j}$ when all other variables are fixed.

Lemma 5: The following function, in which $X_j$ and $Y_j$ are functions of $\Lambda_j$ defined by (10) and (11), is convex in $\Lambda_j$:

$$F(\Lambda_j) = \frac{\Lambda_j}{2\hat{\lambda}}\left[X_j + \sqrt{X_j^2 + Y_j}\right]. \quad (18)$$
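The behavior of the linear approximation (17) is easy to see numerically. The snippet below is our own illustration with $V_j = 1$ and an assumed $\beta$: around an unused node the surrogate has a steep slope that discourages opening a new node unless it pays off in latency, while around a used node it is nearly flat at the true cost $V_j$.

```python
import math

def approx_cost(pi, pi_ref, beta, V=1.0):
    """Linear approximation (17) of the indicator cost V * 1(pi > 0)."""
    ind_ref = V if pi_ref > 0 else 0.0
    return ind_ref + V * (pi - pi_ref) / ((pi_ref + 1 / beta) * math.log(beta))

beta = 1e6
# Around an unused node (pi_ref = 0): slope ~ beta / log(beta), very steep.
print(approx_cost(0.001, 0.0, beta))   # ~ 72.4 >> V
# Around a used node (pi_ref = 0.5): nearly flat, cost stays ~ V.
print(approx_cost(0.6, 0.5, beta))     # ~ 1.01
```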

C. Algorithm JLCM and convergence analysis

Leveraging the linear local approximation in (17), our idea for solving Problem JLCM is to start with an initial $(\pi_{i,j}^{(0)} \ \forall i,j)$, solve the resulting approximation to optimality, and iteratively improve the approximation by replacing the reference point with the optimal solution computed in the previous step. Lemma 5 shows that such approximations of Problem JLCM are convex and can be solved by off-the-shelf optimization tools, e.g., the Gradient Descent Method and the Interior Point Method [39].

The proposed algorithm is shown in Figure 3. In each iteration $t$, we solve an approximated version of Problem JLCM over $(\pi_{i,j} \ \forall i,j)$ with respect to a given reference point and a fixed parameter $z$. More precisely, for $t = 1, 2, \ldots$ we solve

$$\min \ \theta \sum_{i=1}^{r} \sum_{j=1}^{m} \left[ V_j \mathbf{1}_{(\pi_{i,j}^{(t)} > 0)} + \frac{V_j \left(\pi_{i,j} - \pi_{i,j}^{(t)}\right)}{\left(\pi_{i,j}^{(t)} + 1/\beta\right)\log\beta} \right] + z + \sum_{j=1}^{m} \frac{\Lambda_j}{2\hat{\lambda}}\left[X_j + \sqrt{X_j^2 + Y_j}\right] \quad (19)$$
$$\text{s.t.} \quad \text{Constraints} \ (10), (11), (12),$$
$$\sum_{j=1}^{m} \pi_{i,j} = k_i \ \text{and} \ \pi_{i,j} \in [0,1],$$
$$\text{var.} \quad \pi_{i,j} \ \forall i,j.$$

Due to Lemma 5, the above minimization problem with respect to a given reference point has a convex objective function and linear constraints. It is solved by the projected gradient descent routine in Figure 4. Notice that the updated probabilities $(\pi_{i,j}^{(t)} \ \forall i,j)$ in each step are projected onto the feasibility set $\{\sum_j \pi_{i,j} = k_i, \ \pi_{i,j} \in [0,1], \ \forall i,j\}$, as required by Problem JLCM, using a standard Euclidean projection. Such a projected gradient descent method solves Problem (19) to optimality. Next, for fixed probabilities $(\pi_{i,j}^{(t)} \ \forall i,j)$, we improve our analytical latency bound by minimizing it over $z \in \mathbb{R}$. The convergence of our proposed algorithm is proven in the following theorem.

Theorem 2: Algorithm JLCM generates a descent sequence of feasible points, $\pi_{i,j}^{(t)}$ for $t = 0, 1, \ldots$, which converges to a local optimal solution of Problem JLCM as $\beta$ grows sufficiently large.

Remark: To prove Theorem 2, we show that Algorithm JLCM generates a series of decreasing objective values $z + \sum_j F(\Lambda_j) + \theta \hat{C}$ of Problem JLCM with a modified cost function:

$$\hat{C} = \sum_{i=1}^{r} \sum_{j=1}^{m} V_j \frac{\log(\beta \pi_{i,j} + 1)}{\log \beta}. \quad (20)$$

The key idea in our proof is that the linear approximation of the storage cost function in (17) can be seen as a sub-gradient of $V_j \log(\beta \pi_{i,j} + 1)/\log\beta$, which converges to the real storage cost function as $\beta \to \infty$, i.e.,

$$\lim_{\beta \to \infty} V_j \frac{\log(\beta \pi_{i,j} + 1)}{\log \beta} = V_j \mathbf{1}_{(\pi_{i,j} > 0)}. \quad (21)$$

Therefore, a converging sequence for the modified objective $z + \sum_j F(\Lambda_j) + \theta \hat{C}$ also minimizes Problem JLCM, and the optimization gap becomes zero as $\beta \to \infty$. Further, it is shown that $\hat{C}$ is a concave function. Thus, minimizing $z + \sum_j F(\Lambda_j) + \theta \hat{C}$ can be viewed as optimizing the difference between two convex objectives, namely $z + \sum_j F(\Lambda_j)$ and $-\theta \hat{C}$, which can also be solved via Difference-of-Convex Programming (DCP). In this context, our linear approximation of the cost function in (17) can be viewed as an approximated super-gradient in DCP. Please refer to [40] for a comprehensive study of regularization techniques in DCP to speed up the convergence of Algorithm JLCM.

V. IMPLEMENTATION AND EVALUATION

A. Tahoe Testbed

To validate our proposed algorithm for joint latency and cost optimization (i.e., Algorithm JLCM) and evaluate its performance, we implemented the algorithm in Tahoe [41], an open-source, distributed filesystem based on the zfec erasure coding library. It provides three special instances of a generic node: (a) Tahoe Introducer: it keeps track of a collection of storage servers and clients and introduces them to each other. (b) Tahoe Storage Server: it exposes attached storage to external clients and stores erasure-coded shares. (c) Tahoe Client: it processes upload/download requests and connects to storage servers through a Web-based REST API and the Tahoe-LAFS (Least-Authority File System) storage protocol over SSL.

Fig. 5. Our Tahoe testbed with average ping (RTT) and bandwidth measurements among three data centers in New Jersey, Texas, and California

Our algorithm requires customized erasure code, chunk placement, and server selection algorithms. While Tahoe uses a default (10, 3) erasure code, it supports arbitrary erasure code specification statically through a configuration file. In Tahoe, each file is encrypted and then broken into a set of segments, where each segment consists of k blocks. Each segment is then erasure-coded to produce n blocks (using an (n, k) encoding scheme), which are then distributed to (ideally) n distinct storage servers. The set of blocks on each storage server constitutes a chunk. Thus, the file equivalently consists of k chunks which are encoded into n chunks, and each chunk consists of multiple blocks⁴. For chunk placement, the Tahoe client randomly selects a set of available storage servers with enough storage space to store the n chunks. For server selection during file retrievals, the client first asks all known servers for the storage chunks they might have. Once it knows where to find the needed k chunks (from the k servers that respond the fastest), it downloads at least the first segment from those servers. This means that it tends to download chunks from the "fastest" servers purely based on round-trip times (RTT). In our proposed JLCM algorithm, we instead consider RTT plus expected queuing delay and transfer delay as the measure of latency. In our experiments, we modified the upload and download modules in the Tahoe storage server and client to allow for customized and explicit server selection, which is specified in the configuration file read by the client when it starts. In addition, Tahoe performance suffers from its single-threaded design on the client side, for which we had to use multiple clients with separate ports to improve parallelism and bandwidth usage during our experiments.

⁴ If there are not enough servers, Tahoe will store multiple chunks on one server. Also, the term "chunk" used in this paper is equivalent to the term "share" in Tahoe terminology. The number of blocks in each chunk equals the number of segments in the file.
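For intuition, the arithmetic below sketches the segment/block/chunk layout just described. It is our own illustration: the 128 KiB maximum segment size is an assumption about Tahoe's configuration, and the helper name is hypothetical.

```python
import math

def tahoe_layout(file_size, n, k, seg_size=128 * 1024):
    """Sketch of the layout: file -> segments -> n blocks -> n chunks.

    seg_size is an assumed maximum segment size (128 KiB here).
    """
    num_segments = math.ceil(file_size / seg_size)
    block_size = math.ceil(seg_size / k)   # each segment yields k blocks,
    blocks_per_chunk = num_segments        # erasure-coded into n blocks
    chunk_size = blocks_per_chunk * block_size
    return {"segments": num_segments, "chunks": n,
            "blocks_per_chunk": blocks_per_chunk, "chunk_size": chunk_size}

# A 50 MB file under the (7,4) code used in our experiments:
print(tahoe_layout(50 * 1024 * 1024, n=7, k=4))
# -> 400 segments; 7 chunks of 400 blocks each (~12.5 MiB per chunk)
```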

Fig. 6. Comparison of the actual service time distribution and an exponential distribution with the same mean. It verifies that actual service time does not follow an exponential distribution, falsifying the assumption in previous work [33], [38].

Fig. 7. Comparison of our latency upper bound with previous work [43]. Our bound significantly improves on the previous result under medium to high traffic and comes very close to that of [43] under low traffic (with less than 4% gap).

Fig. 8. Convergence of Algorithm JLCM for a problem size of r = 1000 files on our 12-node testbed. The algorithm efficiently computes a solution in less than 250 iterations.

Fig. 9. Comparison of Algorithm JLCM with oblivious approaches. Algorithm JLCM minimizes latency-plus-cost over 3 dimensions: load-balancing (LB), chunk placement (CP), and erasure code (EC), while any optimization over a subset of the dimensions is non-optimal.

We deployed 12 Tahoe storage servers as virtual machines in an OpenStack-based data center environment distributed across New Jersey (NJ), Texas (TX), and California (CA). Each site has four storage servers. One additional storage client was deployed in the NJ data center to issue storage requests. The deployment is shown in Figure 5, with average ping (round-trip time) and bandwidth measurements listed among the three data centers. We note that while the distance between CA and NJ is greater than that between TX and NJ, the maximum bandwidth is higher in the former case; the RTT measured by ping does not necessarily correlate with the bandwidth. Further, the current implementation of Tahoe does not use up the maximum available bandwidth, even with our multi-port revision.

B. Experiments and Evaluation

Validate our latency analysis. While our service delay bound applies to arbitrary service time distributions and to systems hosting any number of files, we first run an experiment to understand the actual service

time distribution on our testbed. We upload a 50MB file using a (7, 4) erasure code and measure the chunk service time. Figure 6 depicts the Cumulative Distribution Function (CDF) of the chunk service time. From the measured results, we obtain a mean service time of 13.9 seconds with a standard deviation of 4.3 seconds, a second moment of 211.8 s², and a third moment of 3476.8 s³. We compare the distribution to exponential distributions (with the same mean and the same variance, respectively) and note that they do not match. This verifies that the actual service time does not follow an exponential distribution, and therefore the assumption of exponential service time in [33], [38] is falsified by empirical data. The observation is also evident because the measured distribution has essentially no probability mass at very small service times, which no exponential distribution can reproduce. Further, the mean and the standard deviation are very different from each other and cannot both be matched by any exponential distribution.
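A quick back-of-the-envelope check of the measured moments confirms the mismatch (a sketch we add; recall that an exponential with mean 1/µ has standard deviation 1/µ and second moment 2/µ²):

```python
mean, std, second_moment = 13.9, 4.3, 211.8   # measured on our testbed

# An exponential distribution with the same mean 13.9 s would have:
exp_std = mean                    # 13.9 s, vs. the measured 4.3 s
exp_second_moment = 2 * mean**2   # 386.4 s^2, vs. the measured 211.8 s^2
print(exp_std, exp_second_moment)
```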

Fig. 10. Actual service latency distribution of an optimal solution from Algorithm JLCM for 1000 files of size 150MB, using erasure codes (11,6), (10,7), (10,6) and (8,4) for each quarter, with aggregate request arrival rate set to λ = 0.118 /sec.

Fig. 11. Evaluation of different chunk sizes. Latency increases super-linearly as file size grows due to queuing delay. Our analytical latency bound, taking both network and queuing delay into account, tightly follows actual service latency.

Fig. 12. Evaluation of different request arrival rates. As arrival rates increase, latency increases and becomes more dominant in the latency-plus-cost objective than storage cost. The optimal solution from Algorithm JLCM allows higher storage cost, resulting in a nearly-linear growth of average latency.

Fig. 13. Visualization of the latency and cost tradeoff for θ varying from 0.5 second/dollar to 200 second/dollar. As θ increases, higher weight is placed on the storage cost component of the latency-plus-cost objective, leading to fewer file chunks and higher latency.

Using the service time distribution obtained above, we compare the latency upper bound proposed in this paper with the bound in [43]. Even though our upper bound holds for multiple files and includes connection delay, we restrict the comparison to the case of a single file without any connection delay for fairness (since the upper bound in [43] only works for a single file). We plot our latency upper bound and the upper bound of [Theorem 3, [43]] in Figure

7. In our probabilistic scheduling, access requests are dispatched uniformly to all storage nodes. We find that our bound significantly outperforms the upper bound in [43] for a wide range 1/λ < 32, which represents the medium to high traffic regime. In particular, our bound remains useful in the high traffic regime with 1/λ < 18, whereas the upper bound in [43] goes to infinity and thus fails to offer any useful information. Under low traffic, the two bounds get very close to each other, with less than a 4% gap.

Validate Algorithm JLCM and joint optimization. We implemented Algorithm JLCM and used MOSEK [44], a commercial optimization solver, to realize the projected gradient routine. For the 12 distributed storage nodes in our testbed, Figure 8 demonstrates the convergence of Algorithm JLCM, which optimizes latency-plus-cost over three dimensions: erasure code length $n_i$, chunk placement $S_i$, and load balancing $\pi_{i,j}$. Convergence of Algorithm JLCM is guaranteed by Theorem 2. To speed up its computation, in this experiment we merge the different updates, including the linear approximation, the latency bound minimization, and the projected gradient update, into one single loop. By performing these updates on the same time-scale, Algorithm JLCM efficiently solves the joint optimization for a problem size of r = 1000 files. It is observed that the normalized objective (i.e., latency-plus-cost normalized by the minimum) converges within 250 iterations for a tolerance ε = 0.01. To achieve dynamic file management, our optimization algorithm can be executed repeatedly upon file arrivals and departures.

To demonstrate the joint latency-plus-cost optimization of Algorithm JLCM, we compare its solution with three oblivious schemes, each of which minimizes latency-plus-cost over only a subset of the 3 dimensions: load-balancing (LB), chunk placement (CP), and erasure code (EC). We run Algorithm JLCM for r = 1000 files of size 150MB on our testbed, with $V_j$ = $1 for every 25MB of storage and a tradeoff factor of θ = 200 sec/dollar. The result is shown in Figure 9. First, even with the optimal erasure code and chunk placement (and hence the same storage cost as the optimal solution from Algorithm JLCM), higher latency is observed for Oblivious LB, which schedules chunk requests according to a load-balancing heuristic that selects storage nodes with probabilities proportional to their service rates. Second, we keep the optimal erasure codes and employ a random chunk placement algorithm, referred to as Random CP, which adopts the best outcome of 100 random runs. The large latency increase resulting from Random CP highlights the importance of joint chunk placement and load balancing in reducing service latency. Finally, Maximum EC uses the maximum possible erasure code length n = m and selects all nodes for chunk placement. Although its latency is comparable to the optimal solution from Algorithm JLCM, higher storage cost is observed. We verify that minimum latency-plus-cost can only be achieved by jointly optimizing over all 3 dimensions.

Evaluate the performance of our solution. First, we choose r = 1000 files of size 150MB and the same storage cost and tradeoff factor as in the previous experiment. Request arrival rates are set to λ_i = 1.25/(10000 sec) for i = 1, 4, 7, ..., 997, λ_i = 1.25/(10000 sec) for i = 2, 5, 8, ..., 998, and λ_i = 1.25/(12000 sec) for i = 3, 6, 9, ..., 999, 1000, respectively, which leads to an aggregate file request arrival rate of λ = 0.118 /sec.
We obtain the service time statistics (including mean, variance, second and third moments) at all storage nodes and run Algorithm JLCM to generate an optimal latency-plus-cost solution, which results in four different sets of optimal erasure codes, (11,6), (10,7), (10,6) and (9,4), for each quarter of the 1000 files respectively, as well as the associated chunk placement and load-balancing probabilities. Implementing this solution on our testbed, we retrieve the 1000 files at the designated request arrival rates and plot the CDF of download latency for each file in Figure 10. We note that 95% of download requests for files with erasure code (10,7) complete within 100 seconds, while the same percentage of requests for files using the (11,6) erasure code complete within 32 seconds due to the higher level of redundancy. In this experiment, erasure code (10,6) outperforms (8,4) in latency even though they have the same level of redundancy, because the latter has a larger chunk size when file sizes are the same.

To demonstrate the effectiveness of our joint optimization, we vary the file size in the experiment from 50MB to 200MB and plot the average download latency of the 1000 individual files, of which each quarter uses a distinct erasure code ((11,6), (10,7), (10,6) and (9,4)), together with our analytical latency upper bound, in Figure 11. We see that latency increases super-linearly as file size grows, since larger files generate higher load on the storage system, causing larger queuing latency (which is super-linear according to our analysis). Further, smaller files always have lower latency because it is less costly to achieve higher

redundancy for these files. We also observe that our analytical latency bound tightly follows the actual average service latency.

Next, we varied the aggregate file request arrival rate from λ = 0.125 /sec to λ = 0.1 /sec (with individual arrival rates varying accordingly), while keeping the tradeoff factor at θ = 2 sec/dollar and the file size at 200MB. Actual service delay and our analytical bound for each scenario are shown by a bar plot in Figure 12, and the associated storage cost by a curve plot. Our analytical bound provides a close estimate of service latency. As arrival rates increase, latency increases and becomes more dominant in the latency-plus-cost objective than storage cost. Thus, the marginal benefit of adding more chunks (i.e., redundancy) eventually outweighs the higher storage cost introduced at the same time. Figure 12 shows that to minimize the latency-plus-cost objective, the optimal solution from Algorithm JLCM allows higher storage cost for larger arrival rates, resulting in a nearly-linear growth of average latency as the request arrival rates increase. For instance, Algorithm JLCM chooses (12,6), (12,7), (11,6) and (11,4) erasure codes at the largest arrival rate, while (10,6), (10,7), (8,6) and (8,4) codes are selected at the smallest arrival rate in this experiment. We believe that this ability to autonomously manage latency and storage cost for latency-plus-cost minimization under different workloads is crucial for practical distributed storage systems relying on erasure coding.

Visualize latency and cost tradeoff. Finally, we demonstrate the tradeoff between latency and storage cost in our joint optimization framework. Varying the tradeoff factor in Algorithm JLCM from θ = 0.5 sec/dollar to θ = 200 sec/dollar for a fixed file size of 200MB and aggregate arrival rate λ = 0.125 /sec, we obtain a sequence of solutions minimizing different latency-plus-cost objectives. As θ increases, higher weight is placed on the storage cost component of the latency-plus-cost objective, leading to fewer file chunks in the storage system and higher latency. This tradeoff is visualized in Figure 13. When θ = 0.5, the optimal solution from Algorithm JLCM chooses erasure codes (12,6), (12,7), and (12,4), the maximum erasure code length in our framework, which leads to the highest storage cost (i.e., 12 dollars for each user) yet the lowest latency (i.e., 110 sec). On the other hand, θ = 200 results in the choice of (6,6), (8,7), and (6,4) erasure codes, which is almost the minimum possible cost for storing the files, with the highest latency of 128 seconds. Further, the theoretical tradeoff calculated by our analytical bound and Algorithm JLCM is very close to the actual measurement on our testbed. To the best of our knowledge, this is the first work proposing a joint optimization algorithm to exploit this tradeoff in an erasure-coded, distributed storage system.

VI. CONCLUSION

Relying on a novel probabilistic scheduling policy, this paper develops an analytical upper bound on the average service delay of erasure-coded storage with an arbitrary number of files and any service time distribution. A joint latency and cost minimization is formulated by collectively optimizing over erasure code, chunk placement, and scheduling policy. The minimization is solved using an efficient algorithm with proven convergence.
Even though only local optimality can be guaranteed due to the non-convex nature of the mixed-integer optimization problem, the proposed algorithm significantly reduces the latency-plus-cost objective. Both our theoretical analysis and algorithm design are validated via a prototype in Tahoe, an open-source, distributed file system. Several practical design issues in erasure-coded, distributed storage, such as incorporating network latency and dynamic data management, have been left out of this paper and open up avenues for future work.

REFERENCES

[1] A.D. Luca and M. Bhide, "Storage virtualization for dummies, Hitachi Data Systems Edition," John and Wiley Publishing, 2009.
[2] Amazon S3, "Amazon Simple Storage Service," available online at http://aws.amazon.com/s3/.
[3] M. Sathiamoorthy, et al., "XORing elephants: Novel erasure codes for big data," Proceedings of the 39th International Conference on Very Large Data Bases, VLDB Endowment, 2013.
[4] A. Fikes, "Storage architecture and challenges," talk at the Google Faculty Summit, available online at http://bit.ly/nUylRW, 2010.
[5] M. K. Aguilera, R. Janakiraman and L. Xu, "Using erasure codes efficiently for storage in a distributed system," Proceedings of the International Conference on Dependable Systems and Networks (DSN 2005), 2005.

[6] A. G. Dimakis, K. Ramchandran, Y. Wu, C. Suh, "A Survey on Network Codes for Distributed Storage," arXiv:1004.4438, Apr. 2010.
[7] A. Fallahi and E. Hossain, "Distributed and energy-aware MAC for differentiated services wireless packet networks: a general queuing analytical framework," IEEE CS, CASS, ComSoc, IES, SPS, 2007.
[8] F. Baccelli, A. Makowski, and A. Shwartz, "The fork-join queue and related systems with synchronization constraints: stochastic ordering and computable bounds," Advances in Applied Probability, pp. 629-660, 1989.
[9] A.S. Alfa, "Matrix-geometric solution of discrete time MAP/PH/1 priority queue," Naval Research Logistics, vol. 45, pp. 23-50, 1998.
[10] J.H. Kim and J.K. Lee, "Performance of carrier sense multiple access with collision avoidance in wireless LANs," Proc. IEEE IPDS, 1998.
[11] E. Ziouva and T. Antonakopoulos, "CSMA/CA performance under high traffic conditions: throughput and delay analysis," Computer Communications, vol. 25, pp. 313-321, 2002.
[12] N.E. Taylor and Z.G. Ives, "Reliable storage and querying for collaborative data sharing systems," IEEE ICDE Conference, 2010.
[13] R. Rosemark and W.C. Lee, "Decentralizing query processing in sensor networks," Proceedings of the Second MobiQuitous: Networking and Services, 2005.
[14] A. G. Dimakis, "Distributed data storage in sensor networks using decentralized erasure codes," Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004.
[15] R. Rojas-Cessa, L. Cai and T. Kijkanjanarat, "Scheduling memory access on a distributed cloud storage network," IEEE 21st Annual WOCC, 2012.
[16] M.K. Aguilera, R. Janakiraman, L. Xu, "Using Erasure Codes Efficiently for Storage in a Distributed System," Proceedings of the 2005 International Conference on DSN, pp. 336-345, 2005.
[17] S. Chen, K.R. Joshi and M.A. Hiltunen, "Link Gradients: Predicting the Impact of Network Latency on Multi-Tier Applications," Proc. IEEE INFOCOM, 2009.
[18] Q. Lv, P. Cao, E. Cohen, K. Li and S. Shenker, "Search and replication in unstructured peer-to-peer networks," Proceedings of the 16th ICS, 2002.
[19] H. Kameyama and Y. Sato, "Erasure Codes with Small Overhead Factor and Their Distributed Storage Applications," CISS '07, 41st Annual Conference, 2007.
[20] H.Y. Lin and W.G. Tzeng, "A Secure Decentralized Erasure Code for Distributed Networked Storage," IEEE Transactions on Parallel and Distributed Systems, 2010.
[21] W. Luo, Y. Wang and Z. Shen, "On the impact of erasure coding parameters to the reliability of distributed brick storage systems," International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, 2009.
[22] J. Li, "Adaptive Erasure Resilient Coding in Distributed Storage," 2006 IEEE International Conference on Multimedia and Expo, 2006.
[23] K. V. Rashmi, N. Shah, and V. Kumar, "Enabling node repair in any erasure code for distributed storage," Proceedings of IEEE ISIT, 2011.
[24] X. Wang, Z. Xiao, J. Han and C. Han, "Reliable Multicast Based on Erasure Resilient Codes over InfiniBand," First International Conference on Communications and Networking in China, 2006.
[25] S. Mochan and L. Xu, "Quantifying Benefit and Cost of Erasure Code based File Systems," technical report available at http://nisl.wayne.edu/Papers/Tech/cbefs.pdf, 2010.
[26] H. Weatherspoon and J. D. Kubiatowicz, "Erasure Coding vs. Replication: A Quantitative Comparison," Proceedings of the First IPTPS, 2002.
[27] A. Abdelkefi and J.
Yuming, “A Structural Analysis of Network Delay,” Ninth Annual CNSR, 2011. [28] A.B. Downey, “The structural cause of file size distributions,” Proceedings of Ninth International Symposium on MASCOTS, 2011. [29] F. Paganini, A. Tang, A. Ferragut and L.L.H. Andrew, “Network Stability Under Alpha Fair Bandwidth Allocation With General File Size Distribution,” IEEE Transactions on Automatic Control, 2012. [30] P. Corbett, B. English, A. Goel, T. Grcanac, S. Kleiman, J. Leong and S. Sankar, “Row-diagonal parity for double disk failure correction,” In Proceedings of the 3rd USENIX FAST’, pp. 1-14, 2004. [31] B. Calder, J. Wang, A. Ogus, N. Nilakantan, A. Skjolsvold, S. McKelvie, Y. Xu, S. Srivastav, J. Wu, H. Simitci, et al., “ Windows azure storage: A highly available cloud storage service with strong consistency,” In Proceedings of the Twenty-Third ACM SOSP, pages 143–157, 2011. [32] O. Khan, R. Burns, J. Plank, W. Pierce, and C. Huang, “Rethinking erasure codes for cloud file systems: Minimizing I/O for recovery and degraded reads,” In Proceedings of FAST, 2012. [33] L. Huang, S. Pawar, H. Zhang, and K. Ramchandran, “Codes can reduce queueing delay in data centers,” in Proc. IEEE ISIT, 2012. [34] G. Ananthanarayanan, S. Agarwal, S. Kandula, A Greenberg, and I. Stoica, “Scarlett: Coping with skewed content popularity in MapReduce,” Proceedings of ACM EuroSys, 2011. [35] M. Bramson, Y. Lu, and B. Prabhakar, “Randomized load balancing with general service time distributions,” Proceedings of ACM Sigmetrics, 2010. [36] Y. Lu, Q. Xie, G. Kliot, A. Geller, J. Larus, and A. Greenberg, “Joinidle-queue: A novel load balancing algorithm for dynamically scalable web services,” 29th IFIPPERFORMANCE, 2010. [37] D. Bertsimas and K. Natarajan, “Tight bounds on Expected Order Statistics,” Probability in the Engineering and Informational Sciences, 2006. [38] L. Huang, S. Pawar, H. Zhang and K. Ramchandran, “Codes Can Reduce Queueing Delay in Data Centers,” Journals CORR, vol. 1202.1359, 2012. [39] S. Boyd and L. Vandenberghe, “Convex Optimization,” Cambridge University Press, 2005. [40] L.T. Hoai An and P.D. Tao,“The DC (Difference of Convex Functions) Programming and DCA Revisited with DC Models of Real World Non-convex Optimization Problems,” Annals of Operations Research, vol. 133, Issue 1-4, pp. 23-46, Jan 2005. [41] B. Warner, Z. Wilcox-O’Hearn and R. Kinninmont, “Tahoe-LAFS docs,” available online at https://tahoe-lafs.org/trac/tahoe-lafs.

[42] N. Shah, K. Lee, and K. Ramchandran, "The MDS queue: analyzing latency performance of codes and redundant requests," arXiv:1211.5405, Nov. 2012.
[43] G. Joshi, Y. Liu, and E. Soljanin, "On the Delay-Storage Trade-off in Content Download from Coded Distributed Storage Systems," arXiv:1305.3945v1, May 2013.
[44] MOSEK, "MOSEK: High performance software for large-scale LP, QP, SOCP, SDP and MIP," available online at http://www.mosek.com/.
[45] T. Angell, "The Farkas-Minkowski Theorem," lecture notes available online at www.math.udel.edu/~angell/Opt/farkas.pdf, 2002.

APPENDIX

A. Proof of Lemma 1

Any probabilistic scheduling with feasible probabilities $\{\mathbb{P}(A_i) : \forall i, A_i\}$ can be viewed as an equivalent, non-optimal scheduling policy in which chunk requests are buffered in a common queue and randomly assigned to appropriate servers. It is easy to see that this policy satisfies Properties (i) and (ii) of optimal scheduling. Since optimal scheduling assigns chunk requests so as to minimize average latency, probabilistic scheduling provides an upper bound.

B. Proof of Theorem 1

We first prove that the conditions $\sum_{j=1}^{m} \pi_{i,j} = k_i$ $\forall i$ and $\pi_{i,j} \in [0,1]$ are necessary. $\pi_{i,j} \in [0,1]$ for all $i,j$ is obvious due to its definition. Then, it is easy to show that

$$\sum_{j=1}^{m} \pi_{i,j} = \sum_{j=1}^{m} \sum_{A_i \subseteq S_i:\, j \in A_i} \mathbb{P}(A_i) = \sum_{A_i \subseteq S_i} \sum_{j \in A_i} \mathbb{P}(A_i) = \sum_{A_i \subseteq S_i} k_i \,\mathbb{P}(A_i) = k_i, \qquad (22)$$

where the first step is due to (1), the second step changes the order of summation, and the last step uses the fact that each set $A_i$ contains exactly $k_i$ nodes and that $\sum_{A_i \subseteq S_i} \mathbb{P}(A_i) = 1$.

Next, we prove that for any set of $\pi_{i,1}, \ldots, \pi_{i,m}$ (i.e., node selection probabilities of file $i$) satisfying $\sum_{j=1}^{m} \pi_{i,j} = k_i$ and $\pi_{i,j} \in [0,1]$, there exists a probabilistic scheduling scheme with feasible load-balancing probabilities $\mathbb{P}(A_i)$ $\forall A_i \subseteq S_i$ that achieves the same node selection probabilities. We start by constructing $S_i = \{j : \pi_{i,j} > 0\}$, which is a set containing at least $k_i$ nodes, because there must be at least $k_i$ positive probabilities $\pi_{i,j}$ to satisfy $\sum_{j=1}^{m} \pi_{i,j} = k_i$. Then, we choose erasure code length $n_i = |S_i|$ and place chunks on the nodes in $S_i$. From (1), we only need to show that when $\sum_{j \in S_i} \pi_{i,j} = k_i$ and $\pi_{i,j} \in [0,1]$, the following system of $n_i$ linear equations has a feasible solution $\mathbb{P}(A_i)$ $\forall A_i \subseteq S_i$:

$$\sum_{A_i \subseteq S_i} \mathbf{1}_{\{j \in A_i\}} \cdot \mathbb{P}(A_i) = \pi_{i,j}, \quad \forall j \in S_i, \qquad (23)$$

where $\mathbf{1}_{\{j \in A_i\}}$ is an indicator function, which is 1 if $j \in A_i$ and 0 otherwise. We will make use of the following lemma.

Lemma 6 (Farkas-Minkowski Theorem [45]): Let $A$ be an $m \times n$ matrix with real entries, and let $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ be two vectors. A necessary and sufficient condition that $A \cdot x = b$, $x \ge 0$ has a solution is that, for all $y \in \mathbb{R}^m$ with the property that $A^{T} \cdot y \ge 0$, we have $\langle y, b \rangle \ge 0$.

We prove the desired result using mathematical induction. It is easy to show that the statement holds for $n_i = k_i$. In this case, the system of linear equations (23) has the unique solution $A_i = S_i$ and $\mathbb{P}(A_i) = \pi_{i,j} = 1$, because all chunks must be selected to recover file $i$. Now assume that the system of linear equations (23) has a feasible solution for some $n_i \ge k_i$. Consider the case with arbitrary $|S_i + \{h\}| = n_i + 1$ and $\pi_{i,h} + \sum_{j \in S_i} \pi_{i,j} = k_i$. We have a system of linear equations:

$$\sum_{A_i \subseteq S_i + \{h\}} \mathbf{1}_{\{j \in A_i\}} \cdot \mathbb{P}(A_i) = \pi_{i,j}, \quad \forall j \in S_i + \{h\}. \qquad (24)$$

Using the Farkas-Minkowski Theorem [45], a sufficient and necessary condition that (24) has a non-negative solution is that, for any $y_1, \ldots, y_m$ with $\sum_j y_j \pi_{i,j} < 0$, we have

$$\sum_{j \in S_i + \{h\}} y_j \mathbf{1}_{\{j \in A_i\}} < 0 \ \text{ for some } A_i \subseteq S_i + \{h\}. \qquad (25)$$

Toward this end, we construct $\hat{\pi}_{i,j} = \pi_{i,j} + [u - \pi_{i,j}]^+$ for all $j \in S_i$. Here $[x]^+ = \max(x, 0)$ is a truncating function and $u$ is a proper water-filling level satisfying

$$\sum_{j \in S_i} [u - \pi_{i,j}]^+ = \pi_{i,h}. \qquad (26)$$
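For illustration, the water-filling level $u$ in (26) can be found by bisection, since the left-hand side is continuous and non-decreasing in $u$. Below is a minimal sketch with hypothetical values (NumPy assumed; not part of the paper's algorithm):

    import numpy as np

    def water_level(pi, pi_h, iters=60):
        """Bisection for u in [0, 1] with sum_j max(u - pi_j, 0) = pi_h."""
        lo, hi = 0.0, 1.0                   # the proof below shows u < 1
        for _ in range(iters):
            u = 0.5 * (lo + hi)
            if np.maximum(u - pi, 0.0).sum() < pi_h:
                lo = u
            else:
                hi = u
        return u

    pi   = np.array([0.9, 0.6, 0.3])        # hypothetical pi_{i,j}, sums to k_i - pi_h
    pi_h = 0.2                              # probability mass of the new node h
    u    = water_level(pi, pi_h)
    pi_hat = np.maximum(pi, u)              # pi_hat = pi + [u - pi]^+ = max(u, pi)
    print(u, pi_hat, pi_hat.sum())

For these numbers, $u \approx 0.5$ and $\hat{\pi}_{i,j} = (0.9, 0.6, 0.5)$, which sums to $k_i = 2$ as required.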

It is easy to show that $\sum_{j \in S_i} \hat{\pi}_{i,j} = \pi_{i,h} + \sum_{j \in S_i} \pi_{i,j} = k_i$ and $\hat{\pi}_{i,j} \in [0, 1]$, because $\hat{\pi}_{i,j} = \max(u, \pi_{i,j}) \in [0, 1]$. Here we used the fact that $u < 1$, since $k_i = \sum_{j \in S_i} \hat{\pi}_{i,j} \ge \sum_{j \in S_i} u \ge k_i u$. Therefore, the system of linear equations in (23) with $\hat{\pi}_{i,j}$ on the right-hand side must have a non-negative solution due to our induction assumption for $n_i = |S_i|$. Furthermore, without loss of generality, we assume that $y_h \ge y_j$ for all $j \in S_i$ (otherwise a different $h$ can be chosen). It implies that

$$\sum_{j \in S_i} y_j \hat{\pi}_{i,j} = \sum_{j \in S_i} y_j \left( \pi_{i,j} + [u - \pi_{i,j}]^+ \right) \le \sum_{j \in S_i} y_j \pi_{i,j} + \sum_{j \in S_i} y_h [u - \pi_{i,j}]^+ = \sum_{j \in S_i} y_j \pi_{i,j} + y_h \pi_{i,h} < 0, \qquad (27)$$

where the inequality uses $y_h \ge y_j$, the second equality follows from (26), and the last step uses the assumption $\sum_j y_j \pi_{i,j} < 0$. Applying the Farkas-Minkowski Theorem to the system of linear equations in (23) with $\hat{\pi}_{i,j}$ on the right-hand side, the existence of a non-negative solution (due to our induction assumption for $n_i$) implies that $\sum_{j \in S_i} y_j \mathbf{1}_{\{j \in \hat{A}_i\}} < 0$ for some $\hat{A}_i \subseteq S_i$. It means that

$$\sum_{j \in S_i + \{h\}} y_j \mathbf{1}_{\{j \in \hat{A}_i\}} = y_h \mathbf{1}_{\{h \in \hat{A}_i\}} + \sum_{j \in S_i} y_j \mathbf{1}_{\{j \in \hat{A}_i\}} < 0, \qquad (28)$$

where the first step uses $\mathbf{1}_{\{h \in \hat{A}_i\}} = 0$, since $h \notin S_i$ and $\hat{A}_i \subseteq S_i$. This is exactly the desired inequality in (25). Thus, (24) has a non-negative solution due to the Farkas-Minkowski Theorem, and the induction statement holds for $n_i + 1$. Finally, the solution indeed gives a probability distribution, since $\sum_{A_i \subseteq S_i + \{h\}} \mathbb{P}(A_i) = \sum_j \pi_{i,j} / k_i = 1$ due to (22). This completes the proof. □
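For a concrete feasibility check of the system (23), the following sketch (hypothetical probabilities; NumPy and SciPy assumed, not part of the paper's prototype) solves for $\mathbb{P}(A_i)$ over all size-$k_i$ subsets with a linear program:

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    k_i = 2
    pi  = np.array([0.9, 0.6, 0.5])         # node selection probabilities, sum = k_i
    subsets = list(combinations(range(len(pi)), k_i))   # candidate sets A_i

    # Row j of the equality constraints encodes sum_{A_i : j in A_i} P(A_i) = pi_j.
    A_eq = np.array([[1.0 if j in A else 0.0 for A in subsets]
                     for j in range(len(pi))])
    res = linprog(np.zeros(len(subsets)), A_eq=A_eq, b_eq=pi,
                  bounds=[(0, 1)] * len(subsets), method="highs")

    assert res.success                      # Theorem 1 guarantees feasibility
    for A, p in zip(subsets, res.x):
        print(A, round(p, 4))               # P(A_i); these sum to 1 by (22)

For the numbers above, the three equality constraints pin down the unique solution $\mathbb{P}(\{1,2\}) = 0.5$, $\mathbb{P}(\{1,3\}) = 0.4$, $\mathbb{P}(\{2,3\}) = 0.1$ (nodes labeled 1-3).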

C. Proof of Corollary 1

The queuing system governed by a probabilistic scheduling policy is stable if all the local queues are stable. If $\{\pi_{i,j} \in [0,1], \ \forall i,j\}$ satisfies (23), then, due to Theorem 1, there exists a feasible scheduling policy with appropriate probabilities $\{\mathbb{P}(A_i)\}$ such that $\pi_{i,j} = \sum_{A_i: A_i \subseteq S_i} \mathbb{P}(A_i) \cdot \mathbf{1}_{\{j \in A_i\}}$ are the conditional probabilities of selecting node $j$ for a file-$i$ request. Therefore, each local queue sees a superposition of $r$ Poisson arrival processes, each with rate $\lambda_i \pi_{i,j}$, which is again a Poisson process, with rate $\Lambda_j = \sum_i \lambda_i \pi_{i,j}$. The queue is stable under traffic $\Lambda_j = \sum_i \lambda_i \pi_{i,j} < \mu_j$, $\forall j$. □
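The stability condition of Corollary 1 amounts to a per-node rate comparison. A minimal sketch, with assumed arrival and service rates (not taken from the paper's experiments):

    import numpy as np

    lam = np.array([0.4, 0.7])              # arrival rate of requests for each file i
    pi  = np.array([[0.9, 0.6, 0.5],        # pi[i, j]: node selection probabilities
                    [0.5, 0.8, 0.7]])       # (each row sums to k_i = 2)
    mu  = np.array([1.5, 1.2, 1.0])         # service rate of each storage node j

    Lambda = lam @ pi                       # Lambda_j = sum_i lambda_i * pi_{i,j}
    print(Lambda)                           # -> [0.71 0.8  0.69]
    print(bool(np.all(Lambda < mu)))        # queues stable iff Lambda_j < mu_j for all j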

D. Proof of Lemma 2

Proof: Let $Q_{\max}$ be the maximum of the waiting times $\{Q_j, \ j \in A_i\}$. We first show that $Q_{\max}$ is upper bounded by the following inequality for arbitrary $z \in \mathbb{R}$:

$$Q_{\max} \le z + [Q_{\max} - z]^+ \le z + \sum_{j \in A_i} [Q_j - z]^+, \qquad (29)$$

where $[a]^+ = \max\{a, 0\}$ is a truncating function. Now, taking the expectation on both sides of (29), we have

$$\mathbb{E}[Q_{\max}] \le z + \mathbb{E}\Big[ \sum_{j \in A_i} [Q_j - z]^+ \Big] = z + \mathbb{E}\Big[ \sum_{j \in A_i} \tfrac{1}{2}\big(Q_j - z + |Q_j - z|\big) \Big] = z + \mathbb{E}_{A_i}\Big[ \sum_{j \in A_i} \tfrac{1}{2}\big(\mathbb{E}[Q_j] - z + \mathbb{E}|Q_j - z|\big) \Big] = z + \sum_{j \in S_i} \frac{\pi_{i,j}}{2}\big(\mathbb{E}[Q_j] - z + \mathbb{E}|Q_j - z|\big), \qquad (30)$$

where $\mathbb{E}_{A_i}$ denotes the expectation over the randomly selected $k_i$ storage nodes in $A_i \subseteq S$ according to probabilities $\pi_{i,1}, \ldots, \pi_{i,m}$. From the Cauchy-Schwarz inequality, we have

$$\mathbb{E}|Q_j - z| \le \sqrt{(\mathbb{E}[Q_j] - z)^2 + \mathrm{Var}[Q_j]}. \qquad (31)$$

Combining (30) and (31), we obtain the desired result by taking a minimization over $z \in \mathbb{R}$. Finally, it is easy to verify that the bound is tight for the same binary distribution constructed in [37], i.e., $Q_j = z \pm \sqrt{(\mathbb{E}[Q_j] - z)^2 + \mathrm{Var}[Q_j]}$ with probabilities

$$P_+ = \frac{1}{2} + \frac{1}{2} \cdot \frac{\mathbb{E}[Q_j] - z}{\sqrt{(\mathbb{E}[Q_j] - z)^2 + \mathrm{Var}[Q_j]}} \qquad (32)$$

and $P_- = 1 - P_+$, which satisfy the mean and variance conditions. Therefore, the upper bound in (5) is tight for this binary distribution. □
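Evaluating the resulting bound reduces to a one-dimensional convex minimization over $z$. A hedged sketch with hypothetical moments $\mathbb{E}[Q_j]$ and $\mathrm{Var}[Q_j]$ (SciPy assumed):

    import numpy as np
    from scipy.optimize import minimize_scalar

    pi = np.array([0.9, 0.6, 0.5])          # pi_{i,j} over the nodes in S_i
    EQ = np.array([1.0, 2.0, 4.0])          # E[Q_j]: mean waiting time at node j
    VQ = np.array([0.5, 1.0, 3.0])          # Var[Q_j]

    def upper_bound(z):
        # z + sum_j pi_j/2 * [(E[Q_j] - z) + sqrt((E[Q_j] - z)^2 + Var[Q_j])]
        d = EQ - z
        return z + np.sum(pi / 2 * (d + np.sqrt(d ** 2 + VQ)))

    res = minimize_scalar(upper_bound)      # the objective is convex in z
    print(res.x, res.fun)                   # minimizing z and the latency bound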

E. Proof of Lemma 3

Proof: The Poisson property of the arrival process has been proven in the proof of Corollary 1. We directly apply the Pollaczek-Khinchine transform in [38] to derive the expected delay and variance stated in the lemma. □

F. Derivation of Problem JLCM

Proof: Plugging the results from Lemma 2 and Lemma 3 into (8) and applying the same $z$ to all $\bar{T}_i$ (which relaxes the problem and maintains inequality (5)), we obtain the desired objective function of Problem JLCM. In the derivation, we used the fact that $\Lambda_j = \sum_i \lambda_i \pi_{i,j}$ $\forall j$ from (3). Notice that the first summation is changed from $j \in S_i$ in Lemma 2 to $j \in \{1, \ldots, m\}$ because we should always assign $\pi_{i,j} = 0$ to any storage node $j$ that does not host chunks of file $i$, i.e., for all $j \notin S_i$. □

G. Proof of Lemma 5

Proof: Plugging $\rho_j = \Lambda_j / \mu_j$ into (10) and (11), it is easy to verify that $X_j$ and $Y_j$ are both convex in $\Lambda_j \in [0, \mu_j)$, i.e.,

$$\frac{\partial^2 X_j}{\partial \Lambda_j^2} = \frac{\mu_j^2 \Gamma_j^2}{(\mu_j - \Lambda_j)^3} > 0, \qquad \frac{\partial^2 Y_j}{\partial \Lambda_j^2} = \frac{\mu_j^4 \Gamma_j^4 (2\mu_j + 4\Lambda_j)}{(\mu_j - \Lambda_j)^4} + \frac{2 \mu_j^2 \hat{\Gamma}_j^3}{3 (\mu_j - \Lambda_j)^3} > 0.$$

Similarly, we can verify that $G = X_j + \sqrt{X_j^2 + Y_j}$ is convex in $X_j$ and $Y_j$ by showing that its Hessian matrix is positive semi-definite, i.e.,

$$\nabla^2 G = \frac{1}{2 \left( X_j^2 + Y_j \right)^{3/2}} \cdot \begin{bmatrix} X_j^2 & X_j \\ X_j & 1 \end{bmatrix} \succeq 0,$$

where the matrix is positive semi-definite because it is the rank-one outer product $[X_j, 1]^{T} [X_j, 1]$. Next, since $G$ is increasing in $X_j, Y_j$, their composition is convex and increasing in $\Lambda_j$ [39]. It further implies that $F = \Lambda_j G / 2$ is also convex. This completes the proof. □

H. Proof of Lemma 4

Proof: The result is straightforward, because a coded chunk should never be constructed if it is never utilized by load balancing, i.e., if $\pi_{i,j} = 0$ in the optimal solution: removing any node $j \in S_i$ with $\pi_{i,j} = 0$ strictly reduces the storage cost by $V_j$, while latency remains the same. □

I. Proof of Theorem 2

Proof: To simplify notation, we first introduce two auxiliary functions:

$$g = \sum_{j=1}^{m} \frac{\Lambda_j}{2} \left[ X_j + \sqrt{X_j^2 + Y_j} \right], \qquad (33)$$

$$h = \theta \sum_{i=1}^{r} \sum_{j=1}^{m} \left[ V_j \mathbf{1}_{\{\pi_{i,j}^{(t)} > 0\}} + \frac{V_j \left( \pi_{i,j} - \pi_{i,j}^{(t)} \right)}{\left( \pi_{i,j}^{(t)} + 1/\beta \right) \log \beta} \right]. \qquad (34)$$

Therefore, Problem (19) is equivalent to $\min_{\pi} (g + h)$ over $\pi = (\pi_{i,j} \ \forall i,j)$. For any $\beta > 0$, due to the concavity of logarithmic functions, we have $\log(\beta y + 1) - \log(\beta x + 1) \le \beta (y - x) / (\beta x + 1)$ for any non-negative $x, y$. Choosing $x = \pi_{i,j}^{(t)}$ and $y = \pi_{i,j}^{(t+1)}$ and multiplying both sides of the inequality by the constant $V_j / \log \beta$, we have

$$\frac{V_j \left( \pi_{i,j}^{(t+1)} - \pi_{i,j}^{(t)} \right)}{\left( \pi_{i,j}^{(t)} + 1/\beta \right) \log \beta} \ge V_j \frac{\log\left(\beta \pi_{i,j}^{(t+1)} + 1\right)}{\log \beta} - V_j \frac{\log\left(\beta \pi_{i,j}^{(t)} + 1\right)}{\log \beta}. \qquad (35)$$

Therefore, we construct a new auxiliary function

$$\hat{h} = \theta \sum_{i=1}^{r} \sum_{j=1}^{m} V_j \frac{\log(\beta \pi_{i,j} + 1)}{\log \beta}. \qquad (36)$$

Since $\pi_{i,j}^{(t+1)}$ minimizes Problem (19), we have

$$g(\pi^{(t+1)}) + h(\pi^{(t+1)}) \le g(\pi^{(t)}) + h(\pi^{(t)}). \qquad (37)$$
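The two ingredients above are easy to check numerically. A small sketch (assumed values of $\beta$, $x$, $y$, not from the paper) illustrating that $\log(\beta \pi + 1)/\log \beta$ approaches the indicator $\mathbf{1}_{\{\pi > 0\}}$ and that the concavity bound behind (35) holds:

    import numpy as np

    beta = 1e4
    for p in (0.0, 0.01, 0.3, 1.0):         # log(beta*p + 1)/log(beta) -> 1{p > 0}
        print(p, np.log(beta * p + 1) / np.log(beta))

    # Concavity bound behind (35): for non-negative x, y,
    # log(beta*y + 1) - log(beta*x + 1) <= beta*(y - x)/(beta*x + 1).
    x, y = 0.2, 0.5
    lhs = np.log(beta * y + 1) - np.log(beta * x + 1)
    rhs = beta * (y - x) / (beta * x + 1)
    print(lhs, rhs, bool(lhs <= rhs))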

Next, we consider the new objective function $[g + \hat{h}]$ and show that Algorithm JLCM generates a descent sequence for it, i.e.,

$$[g + \hat{h}](\pi^{(t+1)}) - [g + \hat{h}](\pi^{(t)}) \le h(\pi^{(t)}) - h(\pi^{(t+1)}) + \hat{h}(\pi^{(t+1)}) - \hat{h}(\pi^{(t)}) = \theta \sum_{i=1}^{r} \sum_{j=1}^{m} \frac{V_j \left( \pi_{i,j}^{(t)} - \pi_{i,j}^{(t+1)} \right)}{\left( \pi_{i,j}^{(t)} + 1/\beta \right) \log \beta} + \hat{h}(\pi^{(t+1)}) - \hat{h}(\pi^{(t)}) \le 0, \qquad (38)$$

where the first step uses (37) and the last step follows from (35). Therefore, Algorithm JLCM generates a descent sequence $\pi_{i,j}^{(t)}$, $t = 0, 1, \ldots$, for the objective function $[g + \hat{h}]$. Notice that for any $\pi_{i,j} \in [0, 1]$,

we have

$$\lim_{\beta \to \infty} \hat{h}(\pi) = \theta \sum_{i=1}^{r} \sum_{j=1}^{m} V_j \mathbf{1}_{\{\pi_{i,j} > 0\}}, \qquad (39)$$

which is exactly the storage cost function in Problem JLCM. Hence, the converging point of the descent sequence is also a local optimum of Problem JLCM as $\beta \to \infty$. □