European Journal of Operational Research 252 (2016) 737–749


Discrete Optimization

Rescheduling on identical parallel machines with machine disruptions to minimize total completion time

Yunqiang Yin (a), T. C. E. Cheng (b), Du-Juan Wang (c,*)

(a) Faculty of Science, Kunming University of Science and Technology, Kunming 650093, China
(b) Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
(c) School of Management Science and Engineering, Dalian University of Technology, Dalian 116023, China

Article info

Article history: Received 29 March 2015; Accepted 24 January 2016; Available online 29 January 2016

Keywords: Combinatorial optimization; Production; Rescheduling; Bicriterion analysis; Two-dimensional fully polynomial-time approximation scheme

Abstract

We consider a scheduling problem where a set of jobs has already been assigned to identical parallel machines that are subject to disruptions, with the objective of minimizing the total completion time. When machine disruptions occur, the affected jobs need to be rescheduled with a view to not causing excessive schedule disruption with respect to the original schedule. Schedule disruption is measured by the maximum time deviation or the total virtual tardiness, given that the completion time of any job in the original schedule can be regarded as an implied due date for the job concerned. We focus on the trade-off between the total completion time of the adjusted schedule and schedule disruption by finding the set of Pareto-optimal solutions. We show that both variants of the problem are NP-hard in the strong sense when the number of machines is considered to be part of the input, and NP-hard when the number of machines is fixed. In addition, we develop pseudo-polynomial-time solution algorithms for the two variants of the problem with a fixed number of machines, establishing that they are NP-hard in the ordinary sense. For the variant where schedule disruption is modeled as the total virtual tardiness, we also show that the case where machine disruptions occur only on one of the machines admits a two-dimensional fully polynomial-time approximation scheme. We conduct extensive numerical studies to evaluate the performance of the proposed algorithms. © 2016 Elsevier B.V. All rights reserved.

1. Introduction Most modern production and service systems operate in a dynamic environment in which unexpected disruptions may occur, necessitating changes in the planned schedule, which may render the originally feasible schedule infeasible. Examples of such disruption events include the arrival of new orders, machine breakdowns, order cancellations, changes in order priority, processing delays, and unavailability of raw materials, personnel, tools, etc. Rescheduling, which involves adjusting the original schedule to account for a disruption, is necessary in order to minimize the effects of the disruption on the performance of the system. This involves a trade-off between finding a cost-effective new schedule and avoiding excessive changes to the original schedule. The degree of disruption to the original schedule is often modeled as a constraint or part of the original scheduling objective



Corresponding author. Tel.: +86 13840846586. E-mail address: [email protected] (D.-J. Wang).

http://dx.doi.org/10.1016/j.ejor.2016.01.045 0377-2217/© 2016 Elsevier B.V. All rights reserved.

(Hall, Liu, & Potts, 2007; Hall & Potts, 2004, 2010; Hoogeveen, Lenté, & T'kindt, 2012; Jain & Foley, 2016; Liu & Ro, 2014; Qi, Bard, & Yu, 2006; Unal, Uzsoy, & Kiran, 1997; Wang, Liu, Wang, & Wang, 2015; Yan, Che, Cai, & Tang, 2014; Yang, 2007; Yuan & Mu, 2007). Variants of the rescheduling problem can be found in many real-world applications such as automotive manufacturing (Bean, Birge, Mittenthal, & Noon, 1991), space shuttle missions (Zweben, Davis, Daun, & Deale, 1993), shipbuilding (Clausen, Hansen, Larsen, & Larsen, 2001), short-range airline planning (Yu, Argüello, Song, McCowan, & White, 2003), deregulated power markets (Dahal, Al-Arfaj, & Paudyal, 2015), etc. The literature on rescheduling abounds. For recent reviews, the reader may refer to Aytug, Lawley, McKay, Mohan, and Uzsoy (2005), Billaut, Sanlaville, and Moukrim (2002), Ouelhadj and Petrovic (2009), and Vieira, Herrmann, and Lin (2003). In this paper we review only studies on machine rescheduling with unexpected disruptions arising from machine breakdowns that are directly related to our work. Leon, Wu, and Storer (1994) developed robustness measures and robust scheduling to deal with machine breakdowns and processing time variability when a right-shift repair strategy is used. Robustness is defined as the


minimization of the bicriterion objective function comprising the expected makespan and expected delay, where the expected delay is the deviation between the deterministic makespan in the original and adjusted schedules. Their experimental results showed that robust schedules significantly outperform schedules based on the makespan alone. Özlen and Azizoğlu (2011) considered a rescheduling problem on unrelated parallel machines, where a disruption occurs on one of the machines. The scheduling measures are the total completion time and the deviation cost, which is the total disruption caused by the differences between the original and adjusted schedules. They developed polynomial-time algorithms to solve the following hierarchical optimization problems: minimizing the total disruption cost among the minimum total completion time schedules and minimizing the total completion time among the minimum total disruption cost schedules. Qi et al. (2006) considered a rescheduling problem in the presence of a machine breakdown in both the single-machine and two-parallel-machine settings with the objective of minimizing the total completion time plus different measures of time disruption. They provided polynomial-time and pseudo-polynomial-time algorithms for the problems under consideration. Zhao and Tang (2010) extended some of their results to the case with linear deteriorating jobs. Liu and Ro (2014) considered a rescheduling problem with machine unavailability on a single machine, where disruption is measured as the maximum time deviation between the original and adjusted schedules. Studying a general model where the maximum time disruption appears both as a constraint and as part of the scheduling objective, they provided a pseudo-polynomial-time algorithm, a constant-factor approximation algorithm, and a fully polynomial-time approximation scheme when the scheduling objective is to minimize the makespan or maximum lateness.
In this paper we address the issue of how to reschedule jobs in the presence of machine breakdowns. We assume that a set of jobs has been optimally scheduled according to the shortest processing time (SPT) rule to minimize the total completion time on m identical parallel machines (Baker, 1974), but that the processing of most of the jobs has not yet begun. This situation arises when schedules are planned in advance of their start dates, typically several weeks earlier in practice. Based on the SPT schedule, a lot of preparation work has been done, such as ordering raw materials, tooling the equipment, organizing the workforce, fixing customer delivery dates, etc. Unlike the setting in Özlen and Azizoğlu (2011), we consider the situation where machine breakdowns may occur on more than one machine, and the disruption start time and duration may differ across machines. Such unforeseen disruptions necessitate rescheduling the remaining jobs in the original SPT schedule. However, doing so will disrupt the SPT schedule, wreaking havoc on the preparative work already undertaken. Thus, on rescheduling, it is important to adhere to the original scheduling objective, namely the total completion time, while minimizing the disruption cost with respect to the SPT schedule. In this paper we use the maximum time deviation or the total virtual tardiness, where the completion time of a job in the SPT schedule is regarded as an implied due date for the job concerned, to model the disruption cost with respect to the SPT schedule. Instead of modeling the degree of disruption over the original schedule as a constraint or as part of the scheduling objective, we focus on the trade-off between the total completion time of the adjusted schedule and schedule disruption by finding the set of Pareto-optimal solutions for this bicriterion scheduling problem. The purpose of this paper is twofold.
One is to study this innovative and more realistic scheduling model. The other is to ascertain the computational complexity status and provide solution procedures, if viable, for the problems under consideration.

To motivate our scheduling problem, consider a practical example related to the manufacturing of containers. In this context, the manufacturing process is labor intensive and several machines are deployed to manufacture a host of various container types. The preparation for the manufacturing of each type of container incurs a high operational cost. The factory takes customer orders during the current scheduling cycle (usually a month) and generates an optimal production schedule for the next period that minimizes the scheduling cost measured as the total flowtime for manufacturing all the ordered containers. In other words, the factory uses the SPT dispatching rule to schedule the manufacturing of the containers. The delivery times and related transport arrangements of the finished containers are then determined accordingly. However, unexpected disruptions such as machine breakdowns may occur that render the machines unavailable for certain periods of time, which affects the utilization of the machines and order delivery to customers. This renders the original SPT schedule no longer optimal, wreaks havoc on the preparative work already undertaken, and affects the subsequent transport arrangements. Hence, it is important to react quickly to such disruptions, whereby the affected jobs need to be rescheduled with a view to reducing the scheduling cost, while not causing excessive schedule disruption with respect to the original SPT schedule (excessive schedule disruption will increase the operational cost or result in the loss of customer goodwill). From the viewpoint of the production planner, a trade-off between the scheduling cost and deviation from the original SPT schedule is desired. This situation can be modeled as our problem of rescheduling on identical parallel machines with machine disruptions to minimize the total completion time.
The rest of the paper is organized as follows: In Section 2 we formally formulate the two variants of our problem. In Section 3 we analyze the computational complexity and derive structural properties that are useful for tackling the two variants of the problem under study. In Section 4 we develop a pseudo-polynomial-time dynamic programming solution algorithm for the variant with the maximum time deviation as the schedule disruption cost and a fixed number of machines, establishing that it is NP-hard in the ordinary sense. In Section 5 we develop a pseudo-polynomial-time dynamic programming solution algorithm for the variant with the total virtual tardiness as the schedule disruption cost and a fixed number of machines, establishing that it is NP-hard in the ordinary sense, and convert the algorithm into a two-dimensional fully polynomial-time approximation scheme for the case where machine disruptions occur only on one of the machines. In the last section we conclude the paper and suggest topics for future research.

2. Problem statement and definitions

There are n jobs in the job set J = {1, 2, . . . , n} to be processed without interruption on m identical parallel machines {M1, . . . , Mm}, each of which can handle only one job at a time. All the jobs are available for processing at time zero. Each job j has a processing requirement of length pj. We assume that the jobs have been sequenced in an optimal schedule that minimizes the total completion time. It is well known that for this purpose the jobs should be scheduled in the SPT order with no idle time between them, whereby the jobs are assigned successively to the m machines, i.e., jobs i, m + i, . . . , m⌊(n − i)/m⌋ + i are successively scheduled on machine i without any idle time, i = 1, . . . , m, where ⌊x⌋ denotes the largest integer less than or equal to x. Let π* denote the sequence in which the jobs are scheduled in this SPT order. Hereafter, we assume that all the pj are known non-negative integers and the jobs are sequenced in the SPT order such that
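The successive SPT assignment described above can be sketched in a few lines of code; `spt_schedule` is a hypothetical helper (not from the paper) that assumes the jobs are already indexed in SPT order, i.e., p1 ≤ ⋯ ≤ pn:

```python
def spt_schedule(p, m):
    """Completion times C_j(pi*) of the SPT schedule on m identical
    parallel machines, with job j (0-based) assigned to machine j % m
    so that jobs i, m+i, 2m+i, ... run back-to-back on machine i."""
    load = [0] * m                    # current finish time of each machine
    completion = []
    for j, pj in enumerate(p):
        i = j % m                     # successive (round-robin) assignment
        load[i] += pj                 # no idle time between consecutive jobs
        completion.append(load[i])
    return completion
```

For p = (2, 3, 4) and m = 2 this yields completion times (2, 3, 6), matching the data used later in Example 3.5 before any disruption occurs.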


p1 ≤ ⋯ ≤ pn, and denote by C_S the makespan of the SPT schedule. Let pmin = min_{j=1,...,n} pj, pmax = max_{j=1,...,n} pj, and P = Σ_{j=1}^{n} pj.

Each machine Mi, i = 1, . . . , m, may have an unavailable time interval [Bi, Fi] resulting from a machine breakdown with 0 ≤ Fi − Bi ≤ D and Bi ≥ 0, and at least one of the inequalities B1 ≤ F1, B2 ≤ F2, . . . , Bm ≤ Fm is strict, where D denotes an upper limit on the duration of a machine disruption on each machine, which can be estimated based on past experience. We assume, without loss of generality, that Bi < Fi for i = 1, . . . , m1, and Bi = +∞ for i = m1 + 1, . . . , m for some positive integer m1, which models the case where machine disruptions occur only on the first m1 machines. Let Bmax = max_{i=1,...,m1} Bi and Fmax = max_{i=1,...,m1} Fi. It is clear that Fmax ≤ Bmax + D.

During job processing, machine disruptions occur so that the original SPT schedule is no longer optimal or, worse, no longer feasible. As a consequence, we wish to reschedule all the uncompleted jobs in response to the disruptions. We assume that Bi and Fi, i = 1, . . . , m, are known at time zero, prior to processing but after scheduling the jobs of J, and focus on the non-resumable case with the following assumption: if Bi and Fi, i = 1, . . . , m, only become known after time zero, then the jobs of J that have already been processed are removed from the problem, while the partially processed jobs, at most one on each machine, can either be processed to completion and removed, or be halted immediately and restarted from the beginning at a later time, with J and n updated accordingly. In this case, we reset the time index such that machine Mi, i = 1, . . . , m, is available from τi onwards, with at least one of the τi's equal to 0. Here, τi can be regarded as the release time of machine Mi, i = 1, . . . , m.

For any feasible schedule ρ of the jobs of J, we define the following variables for j ∈ J:

Cj(ρ): the completion time of job j;
Tj(ρ) = max{Cj(ρ) − Cj(π*), 0}: the virtual tardiness of job j, where Cj(π*) can be viewed as a special type of due date, called the virtual due date of job j; and
Δj(ρ) = |Cj(ρ) − Cj(π*)|: the time disruption of job j.

Without ambiguity, we drop ρ in our notation and write Cj, Tj, and Δj for short. Furthermore, let Δmax = max_{j∈J} Δj be the maximum time disruption of the jobs. The quality of a schedule ρ is measured by two criteria. The first is the original scheduling objective, i.e., the total completion time, and the second is a measure of the schedule disruption cost in terms of the maximum time deviation or the total virtual tardiness with respect to the original SPT schedule.
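The two disruption measures can be sketched as follows; `disruption_measures` is an illustrative helper (our naming, not the paper's) taking the completion times of the adjusted schedule ρ and of π* in the same job order:

```python
def disruption_measures(C_rho, C_pi_star):
    """Compute Delta_max = max_j |C_j(rho) - C_j(pi*)| and the total
    virtual tardiness sum_j max{C_j(rho) - C_j(pi*), 0}."""
    deltas = [abs(c - c_star) for c, c_star in zip(C_rho, C_pi_star)]
    tardiness = [max(c - c_star, 0) for c, c_star in zip(C_rho, C_pi_star)]
    return max(deltas), sum(tardiness)
```

For instance, moving one job from completion time 3 to 5 and another from 6 to 4 gives Δmax = 2 and total virtual tardiness 2: only the late job contributes to the tardiness, while both contribute to the deviation.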
As with all bicriterion optimization problems, we need to establish a relationship or trade-off between the two criteria in the objective function. In this paper we focus on the Pareto-optimization problem: given the two criteria v1 = ΣCj and v2 = Δmax or v2 = ΣTj, determine the set of all Pareto-optimal solutions (v1, v2), where a schedule ρ with v1 = v1(ρ) and v2 = v2(ρ) is called Pareto-optimal (or efficient) if there does not exist another schedule ρ′ such that v1(ρ′) ≤ v1(ρ) and v2(ρ′) ≤ v2(ρ) with at least one of these inequalities being strict.

Graham, Lawler, Lenstra, and Rinnooy Kan (1979) introduced the three-field notation α|β|γ for describing scheduling problems. For the first field α, we use α = Px, h_{m1,1} to denote that there are m identical parallel machines, where x is empty when m is considered to be part of the input and x = m when m is fixed, and that there is a single unavailable time interval on each of the first m1 machines. For the second field β, we use τ = (τ1, . . . , τm) to denote the release time vector of the m machines, and β = [Bi, Fi]_{1≤i≤m1} to represent that machine disruptions occur only on the first m1 machines and the unavailable time interval on machine Mi is [Bi, Fi], i = 1, . . . , m1. In the γ field, we use (v1, v2) to indicate a Pareto-optimization problem with two criteria v1 and v2. In our models, v1 = ΣCj and v2 ∈ {Δmax, ΣTj}. Specifically,


we address the problems Px, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, Δmax) and Px, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, ΣTj), where x is empty or x = m.

3. Complexity analysis and optimal properties

In this section we address the computational complexity of the considered problems and derive several structural properties of optimal schedules that will be used later in the design of solution algorithms for the problems.

3.1. Complexity analysis

We first show that both problems P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, Δmax) and P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, ΣTj) are NP-hard in the strong sense. The reduction is from the following strongly NP-hard problem.

3-PARTITION: Given an integer E and a finite set A of 3r positive integers aj, j = 1, . . . , 3r, such that E/4 < aj < E/2 for all j = 1, . . . , 3r, and Σ_{j=1}^{3r} aj = rE, can the set I = {1, . . . , 3r} be partitioned into r disjoint subsets I1, . . . , Ir such that Σ_{j∈Ih} aj = E for h = 1, . . . , r?

Theorem 3.1. The problem P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, Δmax) is NP-hard in the strong sense.

Proof. We establish the proof by a reduction from 3-PARTITION to the problem P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|ΣCj ≤ QA, Δmax ≤ QB. Given an instance of 3-PARTITION, construct an instance of the scheduling problem as follows:

• The number of jobs: n = 3r;
• The number of machines: m = r;
• The number of disrupted machines: m1 = r;
• Job processing times: pj = aj, j = 1, . . . , 3r;
• The release times of the machines: τi = 0, i = 1, . . . , r;
• The disruption start times: Bi = E, i = 1, . . . , r;
• The disruption finish times: Fi = (3r + 2)E, i = 1, . . . , r;
• The threshold value for ΣCj: QA = 3rE;
• The threshold value for Δmax: QB = (6r + 2)E.

Analogous to the proof of Theorem 2 in Levin, Mosheiov, and Sarig (2009), it is easy to see that there is a solution to the 3-PARTITION instance if and only if there is a feasible schedule for the constructed instance of the problem P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|ΣCj ≤ QA, Δmax ≤ QB. □

In a similar way, we can establish the following result.

Theorem 3.2. The problem P, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, ΣTj) is NP-hard in the strong sense.

As a result of Theorems 3.1 and 3.2, we focus only on the case with a fixed number of machines m in the sequel. Note that even the single-machine case with a known future machine unavailability, denoted as 1, h1||Σ_{j=1}^{n} Cj, is NP-hard (Adiri, Bruno, Frostig, & Rinnooy Kan, 1989). Hence both of our problems Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax) and Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Σ_{j=1}^{n} Tj) are NP-hard, too; we further show in the subsequent sections that they are NP-hard in the ordinary sense by developing pseudo-polynomial-time algorithms for them.

3.2. Optimal properties

The following lemma provides an easy-to-prove property of the original SPT schedule for the problem Pm||ΣCj.

Lemma 3.3. For any two jobs j and k in the original SPT schedule π* for the problem Pm||Σ_{j=1}^{n} Cj, j < k implies Cj(π*) − pj ≤ Ck(π*) − pk.


After rescheduling, we refer to the partial schedule of jobs finished no later than Bi on machine Mi, i = 1, . . . , m1, as the earlier schedule on machine Mi; the partial schedule of jobs in the earlier schedules on all the machines Mi, i = 1, . . . , m1, as the earlier schedule; and the partial schedule of jobs that begin their processing at time Fi or later on machine Mi, i = 1, . . . , m1, together with the jobs processed on the machines Mi, i = m1 + 1, . . . , m, as the later schedule. The next result establishes the order of the jobs in the earlier schedule on each machine Mi, i = 1, . . . , m1, and the order of the jobs in the later schedule.

Lemma 3.4. For each of the problems Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax) and Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Σ_{j=1}^{n} Tj), there exists an optimal schedule ρ* in which (1) the jobs in the earlier schedule on each machine Mi, i = 1, . . . , m1, follow the SPT order; and (2) the jobs in the later schedule follow the SPT order.

Proof. (1) Consider first the earlier schedule in ρ* on any machine Mi, i = 1, 2, . . . , m1. If property (1) does not hold, let k and j be the first pair of jobs for which j precedes k in π*, implying that pj ≤ pk, but k immediately precedes j in ρ* on machine Mi. Construct a new schedule ρ′ from ρ* by swapping jobs k and j while leaving the other jobs unchanged, and let t denote the processing start time of job k in schedule ρ*. Then we have Ck(ρ*) = t + pk ≥ t + pj = Cj(ρ′) and Cj(ρ*) = Ck(ρ′) = t + pj + pk. Hence, Cj(ρ′) + Ck(ρ′) = 2t + 2pj + pk ≤ 2t + pj + 2pk = Ck(ρ*) + Cj(ρ*), so that Σ_{j=1}^{n} Cj(ρ′) ≤ Σ_{j=1}^{n} Cj(ρ*). To show that ρ′ is no worse than ρ*, it suffices to show that Δmax(ρ*) ≥ Δmax(ρ′) for the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, Δmax) and ΣTi(ρ*) ≥ ΣTi(ρ′) for the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(ΣCj, ΣTj).
Indeed, by Lemma 3.3, pj ≤ pk implies that Cj(π*) ≤ Ck(π*). Thus, it follows from the proof of Lemma 1 in Liu and Ro (2014) that Δmax(ρ*) ≥ Δmax(ρ′) and from Emmons (1969) that ΣTi(ρ*) ≥ ΣTi(ρ′). Repeating this argument a finite number of times establishes property (1).

(2) Denote by I the set of jobs in the later schedule with |I| = n′. Re-index the jobs in I as 1′, 2′, . . . , n′ such that p1′ ≤ p2′ ≤ ⋯ ≤ pn′. In what follows, we show that it is optimal to sequence the jobs in I in SPT order. To achieve this, it suffices to show that there exists an optimal schedule in which job n′ has the latest processing start time among the jobs in I. If this is the case, then the problem can be decomposed into a problem of scheduling the first n′ − 1 jobs on machine Mi after Fi, i = 1, . . . , m1, and on machine Mi, i = m1 + 1, . . . , m, and then inserting job n′ at the earliest possible time at which any one of the machines becomes available. Similarly, the problem with the first n′ − 1 jobs can be further decomposed into a problem with the first n′ − 2 jobs and scheduling job (n − 1)′, and so on. Thus property (2) follows.

Since job n′ has the largest processing time among the jobs in I, analogous to the proof of property (1), we can show that it must be the last job on the machine Mi1 to which it is assigned. Now suppose that there exists an optimal schedule ρ* in which there is a job j′, other than job n′, assigned to a different machine Mi2 but with a later processing start time than that of job n′. Let t1 be the processing start time of job n′ on machine Mi1 and t2 be the processing start time of job j′ on machine Mi2 with t1 < t2. We construct a new schedule ρ′ by letting job j′ start its processing at t1 on machine Mi1 and job n′ start its processing at t2 on machine Mi2, while leaving the other jobs unchanged. Such an exchange makes job j′ start earlier by t2 − t1 and job n′ start later by t2 − t1; the total completion time, however, does not change. Now we show that ρ′ is no worse than ρ*. There are two cases to consider.

Case 1: Job j′ is early in ρ′, i.e., t1 + pj′ ≤ Cj′(π*). It follows from t1 ≤ Cj′(π*) − pj′ ≤ Cn′(π*) − pn′ that job n′ is early in ρ*. Hence,

Δj′(ρ′) = Cj′(π*) − pj′ − t1,  Δn′(ρ*) = Cn′(π*) − pn′ − t1,  Tj′(ρ′) = Tn′(ρ*) = 0.

Furthermore, if job n′ is tardy in ρ′, i.e., t2 + pn′ ≥ Cn′(π*), then it follows from t2 ≥ Cn′(π*) − pn′ ≥ Cj′(π*) − pj′ that job j′ is also tardy in ρ*. Hence,

Δn′(ρ′) = Tn′(ρ′) = t2 + pn′ − Cn′(π*),  Δj′(ρ*) = Tj′(ρ*) = t2 + pj′ − Cj′(π*).

Thus, it follows from Cj′(π*) − pj′ ≤ Cn′(π*) − pn′ that Δj′(ρ′) ≤ Δn′(ρ*), Δn′(ρ′) ≤ Δj′(ρ*), and Tn′(ρ′) + Tj′(ρ′) = Tn′(ρ′) ≤ Tj′(ρ*) = Tj′(ρ*) + Tn′(ρ*). If job n′ is early in ρ′, i.e., t2 + pn′ ≤ Cn′(π*), we have

Δn′(ρ′) = Cn′(π*) − pn′ − t2,  Tn′(ρ′) = 0,  Δj′(ρ*) = |t2 + pj′ − Cj′(π*)|,  Tj′(ρ*) = max{t2 + pj′ − Cj′(π*), 0}.

Thus, it follows from Cj′(π*) − pj′ ≤ Cn′(π*) − pn′ and t1 < t2 that Δj′(ρ′) ≤ Δn′(ρ*), Δn′(ρ′) ≤ Δn′(ρ*), and Tn′(ρ′) + Tj′(ρ′) = 0 ≤ Tj′(ρ*) + Tn′(ρ*).

Case 2: Job j′ is tardy in ρ′, i.e., t1 + pj′ ≥ Cj′(π*). It follows from t1 < t2 that job j′ is also tardy in ρ*. Hence,

Δj′(ρ′) = Tj′(ρ′) = t1 + pj′ − Cj′(π*),  Δj′(ρ*) = Tj′(ρ*) = t2 + pj′ − Cj′(π*).

Furthermore, if job n′ is tardy in ρ′, i.e., t2 + pn′ ≥ Cn′(π*), we have

Δn′(ρ′) = Tn′(ρ′) = t2 + pn′ − Cn′(π*),  Δn′(ρ*) = |t1 + pn′ − Cn′(π*)|,  Tn′(ρ*) = max{t1 + pn′ − Cn′(π*), 0}.

Thus, it follows from Cj′(π*) − pj′ ≤ Cn′(π*) − pn′ and t1 < t2 that Δj′(ρ′) ≤ Δj′(ρ*), Δn′(ρ′) ≤ Δj′(ρ*), and Tn′(ρ′) + Tj′(ρ′) ≤ Tj′(ρ*) + Tn′(ρ*). If job n′ is early in ρ′, i.e., t2 + pn′ ≤ Cn′(π*), it follows from t1 < t2 that n′ is also early in ρ*. Hence,

Δn′(ρ′) = Cn′(π*) − pn′ − t2,  Δn′(ρ*) = Cn′(π*) − pn′ − t1,  Tn′(ρ′) = Tn′(ρ*) = 0.

Thus, it follows from Cj′(π*) − pj′ ≤ Cn′(π*) − pn′ that Δn′(ρ′) ≤ Δn′(ρ*) and Tn′(ρ′) + Tj′(ρ′) = Δj′(ρ′) ≤ Δj′(ρ*) = Tj′(ρ*) + Tn′(ρ*).

Thus, in any case, schedule ρ′ is no worse than ρ* for the two problems under consideration. The result follows. □

It is worth noting that scheduling the jobs in the earlier schedule in SPT order, each as early as possible, is not necessarily optimal even for the case without the schedule disruption cost. This is because such rescheduling may waste a long period of machine idle time immediately preceding the start time of each machine disruption, which we illustrate in the following example.

Example 3.5. Let n = 3, m = 2; p1 = 2, p2 = 3, p3 = 4; τ1 = τ2 = 0; B1 = B2 = 5; and F1 = F2 = 6. If the jobs are scheduled in the same sequence as π*, each as early as possible, then jobs 1 and 3 are scheduled in the intervals [0, 2] and [6, 10], respectively, on machine M1, while job 2 is scheduled in the interval [0, 3] on machine M2, yielding an objective value of 15. In an optimal schedule, however, jobs 1 and 2 are scheduled in the intervals [0, 2] and [2, 5], respectively, on machine M1, while job 3 is scheduled in the interval [0, 4] on machine M2, yielding an objective value of 11.
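Example 3.5 can be verified numerically. The encoding below (each machine maps to a list of (start time, processing time) pairs) is an illustrative convention of ours, not the paper's:

```python
def total_completion_time(schedule):
    """Sum of job completion times over all machines."""
    return sum(start + p for jobs in schedule.values() for start, p in jobs)

# Both machines are unavailable during [5, 6]; p = (2, 3, 4).
naive = {                     # same sequence as pi*, each job as early as possible
    "M1": [(0, 2), (6, 4)],   # job 3 cannot finish by B1 = 5, so it waits until F1 = 6
    "M2": [(0, 3)],
}
optimal = {
    "M1": [(0, 2), (2, 3)],   # jobs 1 and 2 both complete by B1 = 5
    "M2": [(0, 4)],           # job 3 completes by B2 = 5
}
```

Here `total_completion_time(naive)` returns 15 while `total_completion_time(optimal)` returns 11, matching the objective values stated in the example.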


4. Problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax)

In this section we consider the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax). We first develop a pseudo-polynomial-time dynamic programming (DP) algorithm to solve it, followed by an experimental study to evaluate the effectiveness of the DP-based solution algorithm.

4.1. A pseudo-polynomial-time solution algorithm for the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax)

We start by providing an auxiliary result.

Lemma 4.1. For the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax), the maximum time disruption Δmax is upper-bounded by max{min{Bmax, P} + D, C_S − pmin}.

Proof. It suffices to show that Δmax(ρ*) ≤ max{min{Bmax, P} + D, C_S − pmin}, where ρ* is an optimal schedule for the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|Σ_{j=1}^{n} Cj that satisfies Lemma 3.4. We first consider the jobs with Cj(ρ*) > Cj(π*). If Cj(ρ*) ≤ min{Bmax, P} + D, then Δj ≤ min{Bmax, P} + D; otherwise, Lemma 3.4 indicates that all the jobs completed after time min{Bmax, P} + D and before job j in ρ* are also processed before job j in π*, so Δj ≤ min{Bmax, P} + D. Alternatively, when Cj(ρ*) ≤ Cj(π*), we have Δj ≤ C_S − pmin since C_S is the maximum completion time of the jobs in π*. Thus, Δmax(ρ*) ≤ max{min{Bmax, P} + D, C_S − pmin}. □

Our DP-based solution algorithm SMDP for the problem Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}|(Σ_{j=1}^{n} Cj, Δmax) relies strongly on Lemmas 3.4 and 4.1. Note that after rescheduling, a job in the earlier schedule might be completed earlier than in π*; in this case, as Δmax is defined as max_{j∈J} |Cj − Cj(π*)|, the job might be immediately preceded by an idle time period. Thus, we design algorithm SMDP on the basis of solving a series of constrained optimization problems Pm, h_{m1,1}|τ, [Bi, Fi]_{1≤i≤m1}, Δmax ≤ Q|Σ_{j=1}^{n} Cj with 0 ≤ Q ≤ max{min{Bmax, P} + D, C_S − pmin}.
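The Lemma 4.1 bound, which caps the range of Q values the algorithm needs to sweep, can be sketched as follows (a hypothetical helper with our naming; `C_S` is the SPT makespan):

```python
def delta_max_bound(p, B_disrupted, D, C_S):
    """Upper bound max{min{Bmax, P} + D, C_S - pmin} on the maximum
    time disruption (Lemma 4.1)."""
    P = sum(p)                    # total processing time
    B_max = max(B_disrupted)      # latest disruption start among disrupted machines
    return max(min(B_max, P) + D, C_S - min(p))
```

On the data of Example 3.5 (p = (2, 3, 4), B1 = B2 = 5, D = 1, C_S = 6) the bound evaluates to max{min{5, 9} + 1, 6 − 2} = 6.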
The constraint Δmax ≤ Q implies that, in a feasible schedule ρ, Cj(ρ) ≥ Cj(π*) − Q and Cj(ρ) ≤ Cj(π*) + Q for j = 1, . . . , n. Accordingly, each job j has an implied release time r_j^Q = Cj(π*) − pj − Q and an implied deadline d̄_j^Q = Cj(π*) + Q. Let (j, B, F, T, v)_Q be a state corresponding to a partial schedule for the jobs {1, . . . , j} such that the maximum time deviation is at most Q, where

• B = (t1, t2, . . . , tm1): tk, k = 1, . . . , m1, denotes the completion time of the last job scheduled before Bk on machine Mk;
• F = (t′1, t′2, . . . , t′m1): t′k, k = 1, . . . , m1, stands for the total processing time of the jobs scheduled after Fk on machine Mk;
• T = (tm1+1, tm1+2, . . . , tm): tm1+k, k = 1, . . . , m − m1, measures the completion time of the last job scheduled on machine Mm1+k; and
• v: the total completion time of the partial schedule.
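The implied time windows can be computed directly; negative release times simply mean the job is unconstrained from below and may start as soon as its machine is available. A sketch with our naming:

```python
def implied_windows(C_pi_star, p, Q):
    """Implied release time r_j^Q = C_j(pi*) - p_j - Q and implied
    deadline d_j^Q = C_j(pi*) + Q for each job j, induced by the
    constraint Delta_max <= Q."""
    return [(c - pj - Q, c + Q) for c, pj in zip(C_pi_star, p)]
```

For the SPT completion times (2, 3, 6) of Example 3.5 and Q = 1, the windows are (−1, 3), (−1, 4), and (1, 7).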

Algorithm SMDP follows the framework of forward recursive state generation and starts with the empty state in which no job has been scheduled yet, i.e., j = 0. The set S_j^Q contains the states for all the generated sub-schedules for jobs {1, . . . , j}. The algorithm recursively generates the states for the partial schedules by adding a job to a previous state. Naturally, the construction of S_j^Q may generate states that will not lead to a complete optimal schedule. The following result shows how to reduce the state set S_j^Q.

Lemma 4.2. For any two states (j, B, F, T, v)_Q and (j, B′, F′, T′, v′)_Q in S_j^Q, if each component of (B, F, T) is no larger than the corresponding component of (B′, F′, T′) and v ≤ v′, we can eliminate the latter state.
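The dominance rule of Lemma 4.2 amounts to a componentwise comparison. In the sketch below (our flattened encoding, not the paper's), each state is a pair (components, v), where `components` collects the entries of B, F, and T in a fixed order:

```python
def dominates(s, t):
    """State s dominates state t if every availability component of s is
    no larger than t's and s has no larger total completion time v."""
    (comp_s, v_s), (comp_t, v_t) = s, t
    return all(a <= b for a, b in zip(comp_s, comp_t)) and v_s <= v_t

def eliminate_dominated(states):
    """Prune a state set, keeping only non-dominated states (Lemma 4.2)."""
    kept = []
    for s in states:
        if any(dominates(o, s) and o != s for o in states):
            continue              # s is dominated by a distinct state: drop it
        if s not in kept:         # also collapse exact duplicates
            kept.append(s)
    return kept
```

For example, with states ((2, 0, 3), 5), ((2, 0, 4), 6), and ((1, 1, 3), 5), the second is eliminated because the first has componentwise smaller availability times and a smaller total completion time, while the first and third are incomparable and both survive.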


Proof. Let S1 and S2 be the two sub-schedules corresponding to the states (j, B, F, T, v)_Q and (j, B′, F′, T′, v′)_Q, respectively, and let S̄2 be a sub-schedule of the jobs {j + 1, . . . , n} that is appended to the sub-schedule S2 so as to create a feasible schedule S̃2. In the resulting feasible schedule S̃2, the total completion time is given as follows:

Σ Cj(S̃2) = v′ + Σ_{k=j+1}^{n} Ck(S̄2).

Since each component of (B, F, T) is no larger than the corresponding component of (B′, F′, T′), the jobs {j + 1, . . . , n} can also be appended to the sub-schedule S1 in a way analogous to S̄2, denoting the resulting sub-schedule as S̄1, to form a feasible schedule S̃1, and we have Ck(S̄1) ≤ Ck(S̄2) for k = j + 1, . . . , n. In the resulting feasible schedule S̃1, the total completion time is given as follows:

Σ Cj(S̃1) = v + Σ_{k=j+1}^{n} Ck(S̄1).
It follows from v ≤ v' that Σ_{k=1}^{n} C_k(S̃_1) ≤ Σ_{k=1}^{n} C_k(S̃_2). Therefore, sub-schedule S_1 dominates S_2, so the result follows. □

We provide a formal description of algorithm SMDP as follows:

Sum-Max-DP Algorithm SMDP
Step 1. [Preprocessing] Re-index the jobs in SPT order.
Step 2. [Initialization] For each j = 1, ..., n, set i = j − m⌊(j−1)/m⌋ and C_j(π*) = Σ_{k=i, m+i, ..., m⌊(j−1)/m⌋+i} p_k; for each Q ∈ [0, max{min{B_max, P} + D, C_S − p_min}], set S_0^Q = {(0, B, F, T, 0)^Q}, where B = (τ_1, ..., τ_{m1}), F = (0, ..., 0) and T = (τ_{m1+1}, ..., τ_m); and set r_j^Q = C_j(π*) − p_j − Q and d̄_j^Q = C_j(π*) + Q.
Step 3. [Generation] Generate S_j^Q from S_{j−1}^Q.
For each Q ∈ [0, max{min{B_max, P} + D, C_S − p_min}] do
  For j = 1 to n do
    Set S_j^Q = ∅;
    For each (j − 1, B, F, T, v)^Q ∈ S_{j−1}^Q do
      Set t = min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k} and
          k* = arg min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k};
      For i = 1 to m1 do /* Alternative 1: schedule job j before B_i */
        If max{t_i, r_j^Q} + p_j ≤ min{d̄_j^Q, B_i}, then set
          S_j^Q ← S_j^Q ∪ {(j, B', F, T, v + max{t_i, r_j^Q} + p_j)^Q},
          where B' = (t_1, ..., t_{i−1}, max{t_i, r_j^Q} + p_j, t_{i+1}, ..., t_{m1});
        Endif
      Endfor
      /* Alternative 2: assign job j to the later schedule */
      If t + p_j ≤ d̄_j^Q, then set
        S_j^Q ← S_j^Q ∪ {(j, B, F', T', v + t + p_j)^Q},
        where F' = (t̄_1, ..., t̄_{k*−1}, t + p_j − F_{k*}, t̄_{k*+1}, ..., t̄_{m1}) and T' = T if 1 ≤ k* ≤ m1; otherwise F' = F and T' = (t_{m1+1}, ..., t_{k*−1}, t + p_j, t_{k*+1}, ..., t_m);
      Endif
    Endfor
    [Elimination] /* Update set S_j^Q */


Y. Yin et al. / European Journal of Operational Research 252 (2016) 737–749

    1. For any two states (j, B, F, T, v)^Q and (j, B, F, T, v')^Q with v ≤ v', eliminate the latter state from set S_j^Q;
    2. For any two states (j, B = (t_1, ..., t_{i−1}, t_i, t_{i+1}, ..., t_{m1}), F, T, v)^Q and (j, B' = (t_1, ..., t_{i−1}, t'_i, t_{i+1}, ..., t_{m1}), F, T, v)^Q with t_i ≤ t'_i, eliminate the latter state from set S_j^Q;
    3. For any two states (j, B, F = (t̄_1, ..., t̄_{i−1}, t̄_i, t̄_{i+1}, ..., t̄_{m1}), T, v)^Q and (j, B, F' = (t̄_1, ..., t̄_{i−1}, t̄'_i, t̄_{i+1}, ..., t̄_{m1}), T, v)^Q with t̄_i ≤ t̄'_i, eliminate the latter state from set S_j^Q;
    4. For any two states (j, B, F, T = (t_{m1+1}, ..., t_{k−1}, t_k, t_{k+1}, ..., t_m), v)^Q and (j, B, F, T' = (t_{m1+1}, ..., t_{k−1}, t'_k, t_{k+1}, ..., t_m), v)^Q with t_k ≤ t'_k, eliminate the latter state from set S_j^Q;
  Endfor
Endfor

Step 4. [Return all the Pareto-optimal points] Set Q' = max{min{B_max, P} + D, C_S − p_min} and i = 1;
While S_n^{Q'} ≠ ∅ do
  Select (V_i, Q_i) = (v, Q) that corresponds to the state (n, B, F, T, v)^Q with the minimum v value among all the states in S_n^Q such that Q ≤ Q' as a Pareto-optimal point;
  Set Q' = Q_i − 1 and i = i + 1;
End while
Return (V_1, Q_1), ..., (V_{i−1}, Q_{i−1}); the corresponding Pareto-optimal schedules can be found by backtracking.

Justification for algorithm SMDP. After the [Preprocessing] procedure, for each Q ∈ [0, max{min{B_max, P} + D, C_S − p_min}], the algorithm goes through n phases. The jth phase, j = 1, ..., n, takes care of job j and produces a set S_j^Q. Suppose that the state set S_{j−1}^Q has been constructed and we now try to assign job j. By Lemma 3.4, it is optimal to assign job j either to the last position of any machine M_i, i = 1, ..., m1, that finishes no later than B_i, or to the later schedule on some machine M_i, i = 1, ..., m, at the earliest possible time when the machine becomes available. Note that after rescheduling, a job might be completed earlier than in π*. In this case, due to the maximum time deviation constraint, the job might be immediately preceded by an idle time period. Thus, if job j is assigned to the last position of machine M_i, i = 1, ..., m1, that finishes no later than B_i, its processing start time is max{t_i, r_j^Q}: t_i > r_j^Q, i.e., t_i + p_j − C_j(π*) > −Q, implies that job j has a time deviation strictly less than Q, while t_i ≤ r_j^Q implies that there may be an idle time immediately preceding job j and that job j has a time deviation of Q time units. The condition max{t_i, r_j^Q} + p_j ≤ min{d̄_j^Q, B_i} guarantees that job j has a time deviation of no more than Q and can be completed before B_i. In this case, we update t_i to max{t_i, r_j^Q} + p_j and the contribution of job j to the scheduling objective is max{t_i, r_j^Q} + p_j, so v' = v + max{t_i, r_j^Q} + p_j. If job j is assigned to the later schedule on some machine M_i, we have i = k* by Lemma 3.4 and its processing start time is t, where the condition t + p_j ≤ d̄_j^Q guarantees that job j has a time deviation of at most Q. In this case, the contribution of job j to the scheduling objective is t + p_j, so v' = v + t + p_j. Naturally, the construction process of S_j^Q may generate more than a single state (j, B, F, T, v)^Q with the same B, F, and T. Among all these states, by elimination rule 1 in algorithm SMDP, we keep in S_j^Q only the state with the minimum v value to ensure finding the minimum objective value. By Lemma 4.2, elimination rules 2–4 in algorithm SMDP can also be used to eliminate the dominated states. It is easy to see that for each Q' ∈ [0, max{min{B_max, P} + D, C_S − p_min}], an efficient solution is given by the pair (V, Q) = (v, Q), which corresponds to the state (n, B, F, T, v)^Q with the minimum v value among all the states in S_n^Q such that Q ≤ Q'.

Theorem 4.3. Algorithm SMDP solves the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Δmax) in O(n max{min{B_max, P} + D, C_S − p_min} P^m Π_{i=1}^{m1} B_i) time.

Proof. The optimality of algorithm SMDP is guaranteed by Lemmas 3.4, 4.1 and 4.2, and the above analysis. We now work out the time complexity of the algorithm. Step 1 implements a sorting procedure that needs O(n log n) time. In Step 3, for each Q, before each iteration j, the total number of possible states (j − 1, B, F, T, v)^Q ∈ S_{j−1}^Q can be bounded as follows: there are at most B_i possible values for t_i, i = 1, ..., m1, and at most P possible values for each of t̄_i, i = 1, ..., m1, and t_i, i = m1 + 1, ..., m. Because of the elimination rules, the total number of different states at the beginning of each iteration is at most O(P^m Π_{i=1}^{m1} B_i). In each iteration j, at most m1 + 1 new states are generated from each state in S_{j−1}^Q. Thus, the number of new states generated is at most (m1 + 1) O(P^m Π_{i=1}^{m1} B_i). However, due to the elimination rules, the number of states in S_j^Q is upper-bounded by O(P^m Π_{i=1}^{m1} B_i) after the elimination step. Thus, after n max{min{B_max, P} + D, C_S − p_min} iterations, Step 3 can be executed in O(n max{min{B_max, P} + D, C_S − p_min} P^m Π_{i=1}^{m1} B_i) time, as required. Step 4 takes O(max{min{B_max, P} + D, C_S − p_min} P^m Π_{i=1}^{m1} B_i) time. Therefore, the overall time complexity of the algorithm is O(n max{min{B_max, P} + D, C_S − p_min} P^m Π_{i=1}^{m1} B_i). □

The proof of Theorem 4.3 also implies that the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1}, Δmax ≤ Q | Σ_{j=1}^n C_j can be solved in O(n P^m Π_{i=1}^{m1} B_i) time. In particular, when m = m1 = 1, the corresponding problem, denoted as 1, h1 | Δmax ≤ Q | ΣC_j, can be solved in O(nPB_1) time. In what follows, we develop an alternative algorithm for the problem 1, h1 | Δmax ≤ Q | ΣC_j by exploiting the following stronger results on the optimal schedule, the proofs of which are analogous to those of Lemmas 2 and 3 in Liu and Ro (2014), respectively.

Lemma 4.4.
For the problem 1, h1 | Δmax ≤ Q | ΣC_j, there exists an optimal schedule ρ* in which: (1) the jobs in the earlier schedule are processed with at most one inserted idle time period; (2) each job processed in the earlier schedule after an inserted idle time period starts processing exactly at its induced release time; (3) the jobs processed in the earlier schedule after an inserted idle time period are processed consecutively in π*.

Lemma 4.5. For the problem 1, h1 | Δmax ≤ Q | ΣC_j, there exists an optimal schedule ρ* in which, if a job is immediately preceded by an idle time period in the earlier schedule, then in π* the job has a start time later than F_1.

Combining Lemmas 3.4, 4.4, and 4.5, we can apply algorithm ML in Liu and Ro (2014) with a slight modification to solve the problem 1, h1 | Δmax ≤ Q | ΣC_j. As a result, this problem can also be solved in O(n²B_1Q) time (see Theorem 2 in Liu & Ro, 2014).

4.2. The performance of algorithm SMDP

We performed numerical studies by varying the problem size to assess the performance of algorithm SMDP. We coded the algorithm in Java and conducted the experiments on a Dell OptiPlex

Table 1
The performance of Algorithm SMDP.

n    B1    D      Avg. #Pareto-optimal pts   Max. #Pareto-optimal pts   Avg. time (s)
10   P/8   P/80    57.30                      74                           9.30
10   P/8   P/50    54.77                      70                           8.50
10   P/8   P/30    56.27                      75                           8.30
10   P/6   P/80    58.90                      74                           9.06
10   P/6   P/50    50.07                      69                           7.45
10   P/6   P/30    56.93                      72                           8.11
10   P/4   P/80    54.93                      75                           7.14
10   P/4   P/50    51.37                      69                           6.54
10   P/4   P/30    55.27                      72                           7.19
15   P/8   P/80    81.63                     101                          82.17
15   P/8   P/50    79.73                     101                          75.79
15   P/8   P/30    87.00                     107                          89.94
15   P/6   P/80    81.27                     110                          65.79
15   P/6   P/50    85.23                     108                          62.90
15   P/6   P/30    84.87                     115                          57.31
15   P/4   P/80    84.47                     106                          36.87
15   P/4   P/50    85.73                     108                          38.02
15   P/4   P/30    84.10                     104                          31.88
20   P/8   P/80   110.17                     132                         685.17
20   P/8   P/50   107.63                     126                         567.76
20   P/8   P/30   105.63                     133                         489.43
20   P/6   P/80   108.40                     138                         465.83
20   P/6   P/50   110.20                     129                         402.12
20   P/6   P/30   108.10                     135                         329.70
20   P/4   P/80   106.53                     126                         161.83
20   P/4   P/50   104.83                     132                         156.22
20   P/4   P/30   108.53                     135                         166.08
25   P/8   P/80   131.00                     142                        6145.25
25   P/8   P/50   133.25                     162                        5602.20
25   P/8   P/30   132.25                     160                        4820.60
25   P/6   P/80   116.75                     131                        2287.93
25   P/6   P/50   134.50                     142                        3508.21
25   P/6   P/30   133.75                     141                        3426.92
25   P/4   P/80   135.50                     155                        1651.32
25   P/4   P/50   138.00                     143                        1889.25
25   P/4   P/30   131.75                     141                        1544.08

7010 with a 3.40 gigahertz, 4 gigabyte memory Intel core i5-3570 CPU. For simplicity, we only studied the two-machine case with m1 = 1. The number of jobs considered was n ∈ {10, 15, 20, 25}. For each value of n, we randomly generated 30 instances for each combination of the specifications for the two parameters B1 and D = F1 − B1 . Randomly generating the job processing times pj from the uniform distribution (1, 20), we considered the case where B1 ∈ {P/8, P/6, P/4}, and D ∈ {P/80, P/50, P/30}. For each value of n, we recorded the average number of Pareto-optimal points, the maximum number of Pareto-optimal points, and the average time required to construct the entire Pareto set for a single instance. For each of the 4 ∗ 3 ∗ 3 = 36 parameter combinations, we generated 30 instances, i.e., we tested a total of 1080 instances. Table 1 summarizes the results. The main observations from Table 1 are as follows: •







As expected beforehand, the number of Pareto-optimal points increases as the number of machines increases; The number of Pareto-optimal points is insensitive to the duration of the machine disruption and the disruption start time; The average time required to construct the entire Pareto-set decreases as the disruption start time increases; In most cases the algorithm fails to solve instances with up to 25 jobs due to space capacity limitation.
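For the single-machine special case studied in these experiments (m = m1 = 1), the state-generation scheme of algorithm SMDP reduces to a compact dynamic program over pairs (t_1, t̄_1). The following Python sketch (not the authors' Java implementation) illustrates it under the simplifying assumptions that τ_1 = 0 and that jobs assigned to the later schedule are processed consecutively from F_1 with no inserted idle time; it returns the minimum total completion time for a fixed deviation bound Q, or None if no feasible schedule exists:

```python
from itertools import accumulate

def reschedule_min_total_completion(p, B1, F1, Q):
    """Sketch of SMDP for m = m1 = 1 (problem 1, h1 | Dmax <= Q | sum Cj).

    Simplifying assumptions (ours): tau_1 = 0, and jobs sent to the later
    schedule run back to back from F1. Returns the minimum total completion
    time, or None if the instance is infeasible for this Q.
    """
    p = sorted(p)                         # Step 1: SPT order
    C_star = list(accumulate(p))          # completion times in pi*
    states = {(0, 0): 0}                  # (t1, tbar1) -> minimal v
    for pj, Cj in zip(p, C_star):
        rQ = Cj - pj - Q                  # induced release time r_j^Q
        dQ = Cj + Q                       # induced deadline d_j^Q
        nxt = {}
        for (t1, tbar1), v in states.items():
            # Alternative 1: before B1, starting at max{t1, r_j^Q}
            start = max(t1, rQ)
            if start + pj <= min(dQ, B1):
                key, val = (start + pj, tbar1), v + start + pj
                if val < nxt.get(key, float("inf")):
                    nxt[key] = val
            # Alternative 2: append to the later schedule at F1 + tbar1
            t = F1 + tbar1
            if t + pj <= dQ:
                key, val = (t1, tbar1 + pj), v + t + pj
                if val < nxt.get(key, float("inf")):
                    nxt[key] = val
        states = nxt
    return min(states.values()) if states else None
```

Iterating Q over [0, max{min{B_max, P} + D, C_S − p_min}] and collecting the minima, as in Step 4, would yield the Pareto-optimal points of the bicriterion problem.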

5. Problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j)

In this section we consider the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j). We first develop a pseudo-polynomial-time DP algorithm to solve the problem, and then show that the special case with m1 = 1 admits a two-dimensional fully polynomial-time approximation scheme (FPTAS), which is the strongest approximation result for a bicriterion NP-hard problem. Recall that an algorithm A_ε for a bicriterion problem is a (1 + ε)-approximation algorithm if it always delivers an approximate solution pair (Z, T) with Z ≤ (1 + ε)Z* and T ≤ (1 + ε)T* for all the instances, where (Z*, T*) is a Pareto-optimal solution. A family of approximation algorithms {A_ε} defines a two-dimensional FPTAS for the considered problem if, for any ε > 0, A_ε is a (1 + ε)-approximation algorithm whose running time is polynomial in n, L, and 1/ε, where L = log max{n, τ_max, B_max, D, p_max} is the number of bits in the binary encoding of the largest numerical parameter in the input.

5.1. A pseudo-polynomial-time solution algorithm for the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (ΣC_j, ΣT_j)

We can easily derive the following result from Lemma 4.1.

Lemma 5.1. For the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j), the total virtual tardiness Σ_{j=1}^n T_j is upper-bounded by n max{min{B_max, P} + D, C_S − p_min}.

Our DP-based solution algorithm STDP for the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j) follows the framework of forward recursive state generation and relies strongly on Lemma 3.4. Algorithm STDP contains n phases. In each phase j, j = 0, 1, ..., n, a state space S_j is generated. Any state in S_j is a vector (j, B, F, T, v_1, v_2) corresponding to a partial schedule for the jobs {1, ..., j}, where B, F, and T are defined as in Section 4.1, v_1 denotes the total completion time of the partial schedule, and v_2 represents the total virtual tardiness of the partial schedule. The state spaces S_j, j = 0, 1, ..., n, are constructed iteratively.
The initial space S_0 contains (0, B, F, T, 0, 0) as its only element, where B = (τ_1, ..., τ_{m1}), F = (0, ..., 0), and T = (τ_{m1+1}, ..., τ_m). In the jth phase, j = 1, ..., n, we build a state by adding a single job J_j to a previous state, whenever this is possible for the given state. That is, for any state (j − 1, B, F, T, v_1, v_2) ∈ S_{j−1}, by Lemma 3.4, we include the following m1 + 1 possibly generated states in S_j:

(1) Schedule job J_j before B_i on machine M_i, i = 1, ..., m1. This is possible only when t_i + p_j ≤ B_i. In this case, the contributions of job J_j to the total completion time objective and the total virtual tardiness objective are t_i + p_j and max{t_i + p_j − C_j(π*), 0}, respectively. Thus, if t_i + p_j ≤ B_i, we include (j, B', F, T, v_1 + t_i + p_j, v_2 + max{t_i + p_j − C_j(π*), 0}) in S_j.

(2) Assign job J_j to the later schedule. In this case, set t = min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k} and k* = arg min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k}, which denote the earliest possible time at which a machine becomes available after disruption and the corresponding machine with the earliest available time, respectively. Then the contributions of job J_j to the total completion time objective and the total virtual tardiness objective are t + p_j and max{t + p_j − C_j(π*), 0}, respectively. Thus, we include (j, B, F', T', v_1 + t + p_j, v_2 + max{t + p_j − C_j(π*), 0}) in S_j, where F' = (t̄_1, ..., t̄_{k*−1}, t + p_j − F_{k*}, t̄_{k*+1}, ..., t̄_{m1}) and T' = T if 1 ≤ k* ≤ m1; otherwise, F' = F and T' = (t_{m1+1}, ..., t_{k*−1}, t + p_j, t_{k*+1}, ..., t_m).

Before presenting algorithm STDP in detail, we introduce the following elimination property to reduce the state set S_j.

Lemma 5.2. For any two states (j, B, F, T, v_1, v_2) and (j, B' = (t'_1, ..., t'_{m1}), F' = (t̄'_1, ..., t̄'_{m1}), T' = (t'_{m1+1}, ..., t'_m), v'_1, v'_2) in S_j, if t̄_k ≤ t̄'_k for k = 1, ..., m1, t_k ≤ t'_k for k = 1, ..., m, v_1 ≤ v'_1 and v_2 ≤ v'_2, we can eliminate the latter state.

Proof. The proof is analogous to that of Lemma 4.2.


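The componentwise elimination rules justified by Lemmas 4.2 and 5.2 amount to discarding dominated state vectors. A generic quadratic-time dominance filter over equal-length state tuples can be sketched in Python as follows (illustrative only; the paper's algorithms apply the equivalent rules coordinate by coordinate inside each phase):

```python
def eliminate_dominated(states):
    """Keep only non-dominated states: state a dominates state b when every
    coordinate of a is <= the corresponding coordinate of b (smaller machine
    loads and smaller objective values are always at least as good)."""
    kept = []
    for s in states:
        # Skip s if some already-kept state dominates it.
        if any(all(x <= y for x, y in zip(k, s)) for k in kept):
            continue
        # Drop previously kept states that s dominates, then keep s.
        kept = [k for k in kept if not all(x <= y for x, y in zip(s, k))]
        kept.append(s)
    return kept
```

For example, among the tuples (3, 5, 10), (2, 5, 9), and (4, 1, 12), the second dominates the first, while the third is incomparable to both, so two states survive.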

We give a formal description of algorithm STDP as follows:

Sum-Tardiness-DP Algorithm STDP
Step 1. [Preprocessing] Re-index the jobs in SPT order.
Step 2. [Initialization] Set S_0 = {(0, B, F, T, 0, 0)}, where B = (τ_1, ..., τ_{m1}), F = (0, ..., 0) and T = (τ_{m1+1}, ..., τ_m); for each j = 1, ..., n, set i = j − m⌊(j−1)/m⌋ and C_j(π*) = Σ_{k=i, m+i, ..., m⌊(j−1)/m⌋+i} p_k.
Step 3. [Generation] Generate S_j from S_{j−1}.
For j = 1 to n do
  Set S_j = ∅;
  For each (j − 1, B, F, T, v_1, v_2) ∈ S_{j−1} do
    Set t = min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k} and
        k* = arg min{min_{k=1,...,m1} {F_k + t̄_k}, min_{k=m1+1,...,m} t_k};
    For i = 1 to m1 do /* Alternative 1: schedule job j before B_i */
      If t_i + p_j ≤ B_i, then set
        S_j ← S_j ∪ {(j, B', F, T, v_1 + t_i + p_j, v_2 + max{t_i + p_j − C_j(π*), 0})},
        where B' = (t_1, ..., t_{i−1}, t_i + p_j, t_{i+1}, ..., t_{m1});
      Endif
    Endfor
    /* Alternative 2: assign job j to the later schedule */
    Set S_j ← S_j ∪ {(j, B, F', T', v_1 + t + p_j, v_2 + max{t + p_j − C_j(π*), 0})},
      where F' = (t̄_1, ..., t̄_{k*−1}, t + p_j − F_{k*}, t̄_{k*+1}, ..., t̄_{m1}) and T' = T if 1 ≤ k* ≤ m1; otherwise F' = F and T' = (t_{m1+1}, ..., t_{k*−1}, t + p_j, t_{k*+1}, ..., t_m);
  Endfor
  [Elimination] /* Update set S_j */
  1. For any two states (j, B, F, T, v_1, v_2) and (j, B, F, T, v_1, v'_2) with v_2 ≤ v'_2, eliminate the latter state from set S_j;
  2. For any two states (j, B, F, T, v_1, v_2) and (j, B, F, T, v'_1, v_2) with v_1 ≤ v'_1, eliminate the latter state from set S_j;
  3. For any two states (j, B = (t_1, ..., t_{i−1}, t_i, t_{i+1}, ..., t_{m1}), F, T, v_1, v_2) and (j, B' = (t_1, ..., t_{i−1}, t'_i, t_{i+1}, ..., t_{m1}), F, T, v_1, v_2) with t_i ≤ t'_i, eliminate the latter state from set S_j;
  4. For any two states (j, B, F = (t̄_1, ..., t̄_{i−1}, t̄_i, t̄_{i+1}, ..., t̄_{m1}), T, v_1, v_2) and (j, B, F' = (t̄_1, ..., t̄_{i−1}, t̄'_i, t̄_{i+1}, ..., t̄_{m1}), T, v_1, v_2) with t̄_i ≤ t̄'_i, eliminate the latter state from set S_j;
  5. For any two states (j, B, F, T = (t_{m1+1}, ..., t_{k−1}, t_k, t_{k+1}, ..., t_m), v_1, v_2) and (j, B, F, T' = (t_{m1+1}, ..., t_{k−1}, t'_k, t_{k+1}, ..., t_m), v_1, v_2) with t_k ≤ t'_k, eliminate the latter state from set S_j;
Endfor
Step 4. [Return all the Pareto-optimal points] Set Q' = n max{min{B_max, P} + D, C_S − p_min} and i = 1;
While Q' ≥ 0 do
  Select (V_i, V'_i) = (v_1, v_2) that corresponds to the state (n, B, F, T, v_1, v_2) with the minimum v_1 value among all the states in S_n such that v_2 ≤ Q' as a Pareto-optimal point;
  Set Q' = V'_i − 1 and i = i + 1;
End while
Return (V_1, V'_1), ..., (V_{i−1}, V'_{i−1}); the corresponding Pareto-optimal schedules can be found by backtracking.

Theorem 5.3. Algorithm STDP solves the problem Pm, h_{m1}^1 | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j) in O(n² max{min{B_max, P} + D, C_S − p_min} P^{m−1} Π_{i=1}^{m1} B_i) time.

Proof. Given that algorithm STDP implicitly enumerates all the schedules satisfying the properties given in Lemma 3.4, the DP-based algorithm finds an efficient solution for each possible Q through state transition. Verification of the time complexity of the algorithm is analogous to that of algorithm SMDP, the difference being that before each iteration j, the total number of different possible combinations of (B, F, T) is upper-bounded by P^{m−1} Π_{i=1}^{m1} B_i due to the fact that Σ_{k=1}^{m1} (t_k + t̄_k) + Σ_{k=m1+1}^{m} t_k = Σ_{k=1}^{j} p_k, while the total number of different possible combinations of v_1 and v_2 is upper-bounded by n max{min{B_max, P} + D, C_S − p_min} due to the elimination rules and the fact that v_2 is upper-bounded by n max{min{B_max, P} + D, C_S − p_min}. □

5.2. A two-dimensional FPTAS for finding a Pareto-optimal solution

In this section we show how to obtain a good approximation of an efficient point on the trade-off curve by using the trimming-the-solution-space approach for the special case with m1 = 1. It is based on an approximate DP algorithm STAA obtained from algorithm STDP by a slight modification. We present the idea as follows. For any δ > 1 and ε > 0, partition the interval I_1 = [0, P] into L_1 = ⌈log_δ P⌉ subintervals

    I_1^{(1)} = [0, 0], I_k^{(1)} = [δ^{k−1}, δ^k), k = 1, ..., L_1 − 1, I_{L_1}^{(1)} = [δ^{L_1−1}, P];

and partition the intervals I_2 = [0, n(F_1 + P)(1 + ε)] and I_3 = [0, n max{F_1, C_S − p_min}(1 + ε)] into L_2 = ⌈log_δ n(F_1 + P)(1 + ε)⌉ subintervals

    I_1^{(2)} = [0, 0], I_k^{(2)} = [δ^{k−1}, δ^k), k = 1, ..., L_2 − 1, I_{L_2}^{(2)} = [δ^{L_2−1}, n(F_1 + P)(1 + ε)],

and L_3 = ⌈log_δ n max{F_1, C_S − p_min}(1 + ε)⌉ subintervals

    I_1^{(3)} = [0, 0], I_k^{(3)} = [δ^{k−1}, δ^k), k = 1, ..., L_3 − 1, I_{L_3}^{(3)} = [δ^{L_3−1}, n max{F_1, C_S − p_min}(1 + ε)],

respectively. This divides I_1 × ··· × I_1 (m copies) × I_2 × I_3 into a set of L_1^m L_2 L_3 (m + 2)-dimensional subintervals. We develop an alternative DP algorithm STAA that differs from algorithm STDP in that, out of all the states such that (t̄_1, t_2, ..., t_m, v_1, v_2) falls within the same (m + 2)-dimensional subinterval, only the state with the smallest t_1 value is kept, while all the other states are eliminated. We formally describe the resulting procedure as follows:

Sum-Tardiness-AA Algorithm STAA
Step 1. [Preprocessing] The same as in algorithm STDP.
Step 2. [Partitioning] Partition the intervals I_1, I_2, and I_3 into the L_1, L_2, and L_3 subintervals defined above.
Step 3. [Initialization] The same as in algorithm STDP.
Step 4. [Generation] Generate S_j from S_{j−1}.
For j = 1 to n do
  Set S_j = ∅;
  /* the exact dynamic program */
  For each (j − 1, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) ∈ S_{j−1} do
    /* Alternative 1: schedule job j on machine M_1 before B_1 */
    If t_1 + p_j ≤ B_1, then set
      S_j ← S_j ∪ {(j, t_1 + p_j, t̄_1, T, v_1 + t_1 + p_j, v_2 + max{t_1 + p_j − C_j(π*), 0})};
    Endif
    /* Alternative 2: assign job j to machine M_1 after F_1 */
    Set S_j ← S_j ∪ {(j, t_1, t̄_1 + p_j, T, v_1 + F_1 + t̄_1 + p_j, v_2 + max{F_1 + t̄_1 + p_j − C_j(π*), 0})};
    For i = 2 to m do
      /* Alternative 3: schedule job j on machine M_i */
      Set S_j ← S_j ∪ {(j, t_1, t̄_1, T', v_1 + t_i + p_j, v_2 + max{t_i + p_j − C_j(π*), 0})}, where T' = (t_2, ..., t_{i−1}, t_i + p_j, t_{i+1}, ..., t_m);
    Endfor
  Endfor
  [Elimination] /* Update set S_j */
  1. For any two states (j, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) and (j, t'_1, t̄'_1, T' = (t'_2, ..., t'_m), v'_1, v'_2) in S_j, where (t̄_1, t_2, ..., t_m, v_1, v_2) and (t̄'_1, t'_2, ..., t'_m, v'_1, v'_2) fall within the same (m + 2)-dimensional subinterval, with t_1 ≤ t'_1, eliminate the latter state from set S_j;
  2. For any two states (j, t_1, t̄_1, T, v_1, v_2) and (j, t_1, t̄_1, T, v_1, v'_2) with v_2 ≤ v'_2, eliminate the latter state from set S_j;
  3. For any two states (j, t_1, t̄_1, T, v_1, v_2) and (j, t_1, t̄_1, T, v'_1, v_2) with v_1 ≤ v'_1, eliminate the latter state from set S_j;
  4. For any two states (j, t_1, t̄_1, T, v_1, v_2) and (j, t'_1, t̄_1, T, v_1, v_2) with t_1 ≤ t'_1, eliminate the latter state from set S_j;
  5. For any two states (j, t_1, t̄_1, T = (t_2, ..., t_{k−1}, t_k, t_{k+1}, ..., t_m), v_1, v_2) and (j, t_1, t̄_1, T' = (t_2, ..., t_{k−1}, t'_k, t_{k+1}, ..., t_m), v_1, v_2) with t_k ≤ t'_k, eliminate the latter state from set S_j;
Endfor
Step 5. [Result] The same as in STDP.

Lemma 5.4. For any eliminated state (j, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) ∈ S_j, there exists a non-eliminated state (j, t'_1, t̄'_1, T' = (t'_2, ..., t'_m), v'_1, v'_2) such that t'_1 ≤ t_1, t̄'_1 ≤ δ^j t̄_1, t'_k ≤ δ^j t_k for k = 2, ..., m, v'_1 ≤ δ^j v_1, and v'_2 ≤ δ^j v_2.

Proof. The proof is by induction on j. It is clear that the lemma holds for j = 1. As the induction hypothesis, we assume that the lemma holds for j = l − 1, i.e., for any eliminated state (l − 1, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) ∈ S_{l−1}, there exists a non-eliminated state (l − 1, t'_1, t̄'_1, T' = (t'_2, ..., t'_m), v'_1, v'_2) such that t'_1 ≤ t_1, t̄'_1 ≤ δ^{l−1} t̄_1, t'_k ≤ δ^{l−1} t_k for k = 2, ..., m, v'_1 ≤ δ^{l−1} v_1, and v'_2 ≤ δ^{l−1} v_2. We show that the lemma holds for j = l.

Consider an arbitrary state (l, t_1, t̄_1, (t_2, ..., t_m), v_1, v_2) ∈ S_l. First, we assume that job l appears in the earlier schedule obtained under the exact dynamic program, where t_1 ≤ B_1. While implementing algorithm STAA, the state (l, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) is constructed from (l − 1, t_1 − p_l, t̄_1, (t_2, ..., t_m), v_1 − t_1, v_2 − max{t_1 − C_l(π*), 0}) ∈ S_{l−1}. According to the induction assumption, there exists a state (l − 1, t'_1, t̄'_1, T' = (t'_2, ..., t'_m), v'_1, v'_2) such that t'_1 ≤ t_1 − p_l, t̄'_1 ≤ δ^{l−1} t̄_1, t'_k ≤ δ^{l−1} t_k for k = 2, ..., m, v'_1 ≤ δ^{l−1}(v_1 − t_1), and v'_2 ≤ δ^{l−1}(v_2 − max{t_1 − C_l(π*), 0}). On the other hand, since t'_1 + p_l ≤ t_1 ≤ B_1, the state (l, t'_1 + p_l, t̄'_1, T', v'_1 + t'_1 + p_l, v'_2 + max{t'_1 + p_l − C_l(π*), 0}) is generated under the exact dynamic program. It follows directly from the elimination procedure that there exists a state (l, t''_1, t̄''_1, T'' = (t''_2, ..., t''_m), v''_1, v''_2) such that
(a) t''_1 ≤ t'_1 + p_l ≤ t_1,
(b) t̄''_1 ≤ δ t̄'_1 ≤ δ^l t̄_1 and t''_k ≤ δ t'_k ≤ δ^l t_k, k = 2, ..., m,
(c) v''_1 ≤ δ(v'_1 + t'_1 + p_l) ≤ δ(v'_1 + t_1) ≤ δ(v'_1 + δ^{l−1} t_1) ≤ δ δ^{l−1} v_1 = δ^l v_1, and
(d) v''_2 ≤ δ(v'_2 + max{t'_1 + p_l − C_l(π*), 0}) ≤ δ(v'_2 + max{t_1 − C_l(π*), 0}) ≤ δ(v'_2 + δ^{l−1} max{t_1 − C_l(π*), 0}) ≤ δ^l v_2.
It follows that the induction hypothesis holds for j = l when job l appears in the earlier schedule obtained under the exact dynamic program.

We now assume, without loss of generality, that job l appears in the later schedule on machine M_1 obtained under the exact dynamic program. While implementing algorithm STAA, the state (l, t_1, t̄_1, T = (t_2, ..., t_m), v_1, v_2) is constructed from (l − 1, t_1, t̄_1 − p_l, T, v_1 − t̄_1 − F_1, v_2 − max{F_1 + t̄_1 − C_l(π*), 0}) ∈ S_{l−1}. According to the induction assumption, there exists a state (l − 1, t'_1, t̄'_1, T' = (t'_2, ..., t'_m), v'_1, v'_2) such that t'_1 ≤ t_1, t̄'_1 ≤ δ^{l−1}(t̄_1 − p_l), t'_k ≤ δ^{l−1} t_k for k = 2, ..., m, v'_1 ≤ δ^{l−1}(v_1 − t̄_1 − F_1), and v'_2 ≤ δ^{l−1}(v_2 − max{F_1 + t̄_1 − C_l(π*), 0}). On the other hand, the state (l, t'_1, t̄'_1 + p_l, T', v'_1 + F_1 + t̄'_1 + p_l, v'_2 + max{F_1 + t̄'_1 + p_l − C_l(π*), 0}) is generated under the exact dynamic program. It follows directly from the elimination procedure that there exists a state (l, t''_1, t̄''_1, T'' = (t''_2, ..., t''_m), v''_1, v''_2) such that
(a) t''_1 ≤ t'_1 ≤ t_1,
(b) t̄''_1 ≤ δ(t̄'_1 + p_l) ≤ δ(δ^{l−1}(t̄_1 − p_l) + p_l) ≤ δ δ^{l−1} t̄_1 = δ^l t̄_1,
(c) t''_k ≤ δ t'_k ≤ δ^l t_k, k = 2, ..., m,
(d) v''_1 ≤ δ(v'_1 + F_1 + t̄'_1 + p_l) ≤ δ(v'_1 + δ^{l−1} F_1 + δ^{l−1} t̄_1) ≤ δ δ^{l−1} v_1 = δ^l v_1, and
(e) v''_2 ≤ δ(v'_2 + max{F_1 + t̄'_1 + p_l − C_l(π*), 0}) ≤ δ(v'_2 + δ^{l−1} max{F_1 + t̄_1 − C_l(π*), 0}) ≤ δ^l v_2.
This establishes that the induction hypothesis holds for j = l when job l appears in the later schedule on machine M_1 obtained under the exact dynamic program. The case where job l appears in the later schedule on any of the other machines obtained under the exact dynamic program can be proved similarly. Thus, the result follows. □

Theorem 5.5. For any ε > 0 and any Pareto-optimal point (V*, V'*), algorithm STAA finds in O(n^{m+3} L^{m+2} / ε^{m+2}) time a solution pair (V, V') such that V ≤ (1 + ε)V* and V' ≤ (1 + ε)V'*.

Proof. Let δ = 1 + ε/(2(1 + ε)n) and let (n, t_1, t̄_1, (t_2, ..., t_m), V*, V'*) be a state corresponding to the Pareto-optimal point (V*, V'*). By the proof of Lemma 5.4, there exists a non-eliminated state (n, t''_1, t̄''_1, (t''_2, ..., t''_m), V, V') such that V ≤ δ^n V* and V' ≤ δ^n V'*. It follows from (1 + x/n)^n ≤ 1 + 2x, for any 0 ≤ x ≤ 1, that (1 + ε/(2(1 + ε)n))^n ≤ 1 + ε/(1 + ε) ≤ 1 + ε, so V ≤ δ^n V* ≤ (1 + ε)V* and V' ≤ δ^n V'* ≤ (1 + ε)V'*.

For the time complexity of algorithm STAA, Step 1 requires O(n log n) time. For each iteration j, note that we partition the interval I_1 into L_1 = ⌈log_δ P⌉ = ⌈ln P / ln δ⌉ ≤ ⌈(1 + 2n(1 + ε)/ε) ln P⌉
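The geometric interval partition that drives STAA's trimming can be sketched as follows (illustrative Python, not from the paper; `bucket` maps a value to the index of its subinterval I_k = [δ^{k−1}, δ^k), and `trim` keeps, per subinterval vector, only the state with the smallest t_1, mirroring elimination rule 1 for states written as (t_1, t̄_1, v_1, v_2) tuples):

```python
import math

def bucket(x, delta):
    """Index of the geometric subinterval containing x, with I_0 = [0, 0]
    and I_k = [delta^(k-1), delta^k) for k >= 1, as in Step 2 of STAA."""
    if x == 0:
        return 0
    return math.floor(math.log(x, delta)) + 1

def trim(states, delta):
    """Among all states whose coordinates other than t1 fall in the same
    subintervals, keep only the state with the smallest t1 value.
    Each state is a tuple (t1, tbar1, v1, v2)."""
    kept = {}
    for t1, tbar1, v1, v2 in states:
        key = (bucket(tbar1, delta), bucket(v1, delta), bucket(v2, delta))
        if key not in kept or t1 < kept[key][0]:
            kept[key] = (t1, tbar1, v1, v2)
    return list(kept.values())
```

Any two positive values in the same subinterval differ by a factor of less than δ, which is exactly the per-phase loss that the induction in Lemma 5.4 accumulates into the overall factor δ^n ≤ 1 + ε.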


Table 2
A comparison between Algorithm STDP and Algorithm STAA for m = 2 and n = 50.

                Algorithm STDP                       Algorithm STAA                       v1                    v2                    No. of
B1   D     ε    Avg. triples  Max. triples  Time(s)  Avg. triples  Max. triples  Time(s)  Avg.      Max.       Avg.      Max.       infeasible
P/8  P/80  0.3  14,671.60     20,297        25.172   13,653.80     18,862        23.07    4.45E−05  2.23E−04   8.84E−03  3.45E−02   0
P/8  P/80  0.5  15,717.20     19,747        23.76    14,323.00     16,600        21.13    3.64E−05  1.82E−04   9.49E−03  1.72E−02   1
P/8  P/80  0.8  13,724.40     18,732        21.29    12,082.60     15,119        18.09    0         0          1.06E−02  1.28E−02   0
P/8  P/50  0.3  12,849.60     15,676        16.62    12,275.60     15,476        16.52    0         0          3.25E−03  8.33E−03   0
P/8  P/50  0.5  13,638.40     14,609        18.82    12,662.40     13,903        17.69    0         0          4.69E−03  8.55E−03   1
P/8  P/50  0.8  13,829.80     16,843        19.71    12,671.00     14,882        17.96    4.11E−05  2.05E−04   1.06E−02  2.31E−02   0
P/8  P/30  0.3  14,715.00     17,258        21.54    13,590.60     14,917        19.74    3.97E−05  1.99E−04   2.56E−03  4.59E−03   0
P/8  P/30  0.5  14,890.00     17,455        21.60    13,874.00     16,103        20.25    0         0          4.68E−03  5.78E−03   1
P/8  P/30  0.8  13,715.80     15,767        19.03    12,462.40     14,338        17.31    3.49E−05  1.74E−04   5.98E−03  1.61E−02   1
P/6  P/80  0.3  17,812.60     23,076        37.59    16,537.00     20,647        33.63    4.68E−05  2.34E−04   2.16E−02  6.06E−02   1
P/6  P/80  0.5  18,386.60     26,200        41.70    16,557.60     21,866        35.04    0         0          1.97E−02  5.63E−02   0
P/6  P/80  0.8  18,824.60     22,800        41.99    16,744.80     20,301        34.74    8.91E−05  2.32E−04   1.77E−02  5.26E−02   0
P/6  P/50  0.3  16,375.60     20,513        34.53    14,988.00     18,926        30.20    0         0          5.17E−03  9.09E−03   0
P/6  P/50  0.5  18,669.20     23,497        40.76    16,622.80     20,288        33.94    8.65E−05  4.32E−04   1.44E−02  5.51E−02   1
P/6  P/50  0.8  20,775.00     24,285        48.84    17,892.80     19,172        37.80    0         0          6.93E−03  1.11E−02   0
P/6  P/30  0.3  18,806.40     20,936        42.39    17,423.80     19,841        37.62    0         0          4.50E−03  6.49E−03   0
P/6  P/30  0.5  14,754.80     19,210        27.67    13,893.60     17,006        25.52    3.89E−05  1.94E−04   7.61E−03  2.14E−02   0
P/6  P/30  0.8  16,879.40     22,685        37.98    15,453.80     21,153        33.25    3.68E−05  1.84E−04   6.27E−03  1.52E−02   0
P/4  P/80  0.3  24,345.20     29,529        92.09    22,554.60     27,692        79.26    3.70E−05  1.85E−04   1.49E−02  3.75E−02   1
P/4  P/80  0.5  23,275.80     31,540        87.14    21,276.20     27,945        72.674   3.77E−05  1.89E−04   2.54E−02  3.80E−02   0
P/4  P/80  0.8  22,523.40     27,464        80.54    20,678.40     24,233        68.75    0         0          1.39E−02  2.86E−02   0
P/4  P/50  0.3  14,605.60     24,622        48.26    13,870.80     23,246        44.26    4.40E−05  2.20E−04   1.67E−02  4.76E−02   0
P/4  P/50  0.5  20,046.40     30,280        63.84    19,005.00     28,960        59.10    0         0          9.26E−03  1.85E−02   1
P/4  P/50  0.8  25,112.80     29,892        101.68   22,649.80     25,949        82.73    1.95E−04  7.64E−04   5.96E−02  2.19E−01   0
P/4  P/30  0.3  23,668.80     25,967        89.25    21,962.60     24,246        77.09    4.23E−05  2.11E−04   7.52E−03  2.27E−02   0
P/4  P/30  0.5  25,729.80     30,160        104.09   24,229.80     28,520        92.92    3.27E−05  1.63E−04   5.61E−03  1.60E−02   0
P/4  P/30  0.8  22,790.00     31,202        83.28    20,658.00     27,495        69.64    0         0          6.70E−03  9.62E−03   1
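The v1 and v2 columns in Tables 2–4 report the average and maximum deviations of the STAA objective values from the exact STDP values; our reading is that these are relative gaps, which would be computed with a helper like the following (hypothetical; the paper does not spell out the formula):

```python
def relative_gap(approx, exact):
    """Relative deviation of an approximate objective value from the exact
    one; returns 0.0 when the exact value is 0."""
    if exact == 0:
        return 0.0
    return (approx - exact) / exact
```

The observed gaps (on the order of 1E−05 to 1E−01) are far below the guarantee of Theorem 5.5, which only bounds them by ε ∈ {0.3, 0.5, 0.8}.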

Table 3
A comparison between Algorithm STDP and Algorithm STAA for m = 2 and n = 80.

                Algorithm STDP                        Algorithm STAA                        v1                    v2                    No. of
B1   D     ε    Avg. triples  Max. triples  Time(s)   Avg. triples  Max. triples  Time(s)   Avg.      Max.       Avg.      Max.       infeasible
P/8  P/80  0.3  48,853.20     57,393        166.06    43,531.80     49,634        134.17    3.41E−05  8.64E−05   5.52E−03  1.29E−02   0
P/8  P/80  0.5  56,176.80     71,089        225.20    48,104.60     57,299        165.42    0         0          4.89E−03  5.41E−03   0
P/8  P/80  0.8  57,833.60     69,299        225.96    50,089.20     61,360        175.05    0         0          4.55E−03  5.24E−03   0
P/8  P/50  0.3  56,251.60     65,603        224.44    49,379.40     54,662        173.33    0         0          1.84E−03  3.39E−03   0
P/8  P/50  0.5  53,262.20     68,947        203.91    44,873.80     50,467        144.58    0         0          1.95E−03  3.53E−03   0
P/8  P/50  0.8  55,434.60     64,209        212.57    45,714.60     51,411        149.85    3.15E−05  8.06E−05   5.92E−03  1.17E−02   0
P/8  P/30  0.3  56,969.80     65,982        227.09    50,915.00     56,388        182.00    6.22E−05  2.34E−04   2.76E−03  5.21E−03   0
P/8  P/30  0.5  44,785.40     51,804        143.58    39,352.00     45,530        115.03    0         0          1.88E−03  2.03E−03   0
P/8  P/30  0.8  52,984.20     60,350        201.41    45,333.80     50,582        148.92    1.56E−05  7.78E−05   3.13E−03  6.84E−03   0
P/6  P/80  0.3  69,871.80     80,951        415.71    58,774.40     62,439        288.90    0         0          6.22E−03  7.14E−03   0
P/6  P/80  0.5  72,365.00     86,473        443.95    61,678.40     69,756        319.30    0         0          5.49E−03  6.71E−03   0
P/6  P/80  0.8  75,316.20     90,732        475.43    60,162.80     69,494        296.69    0         0          4.51E−03  6.02E−03   0
P/6  P/50  0.3  76,450.60     90,437        491.30    66,127.60     80,463        362.72    0         0          2.78E−03  3.75E−03   0
P/6  P/50  0.5  66,111.80     90,064        444.85    55,291.20     79,949        311.38    0         0          1.99E−03  3.70E−03   1
P/6  P/50  0.8  76,312.60     83,662        490.08    65,324.80     72,065        354.24    0         0          2.61E−03  3.52E−03   0
P/6  P/30  0.3  69,035.20     81,815        411.65    60,201.80     67,929        309.39    0         0          2.03E−03  2.42E−03   0
P/6  P/30  0.5  74,708.00     91,732        485.20    62,780.00     76,907        338.08    3.06E−05  8.06E−05   1.24E−03  2.35E−03   0
P/6  P/30  0.8  62,540.20     77,919        371.73    50,996.40     61,362        237.89    0         0          1.64E−03  2.16E−03   0
P/4  P/80  0.3  92,613.60     99,770        942.91    79,931.80     82,783        686.51    1.73E−05  8.67E−05   1.46E−02  4.19E−02   0
P/4  P/80  0.5  72,072.40     98,556        686.42    61,905.80     83,686        492.64    3.42E−05  9.59E−05   1.34E−02  3.82E−02   0
P/4  P/80  0.8  89,633.00     138,368       1035.73   72,876.00     111,634       660.78    3.38E−05  1.05E−04   1.35E−02  3.78E−02   0
P/4  P/50  0.3  100,733.80    111,183       1115.05   86,193.40     92,564        793.27    0         0          1.76E−03  4.50E−03   0
P/4  P/50  0.5  64,352.00     103,409       622.11    53,385.80     85,747        411.01    0         0          5.11E−03  5.35E−03   0
P/4  P/50  0.8  92,696.80     141,811       1016.62   77,378.00     108,900       675.21    0         0          4.63E−03  4.93E−03   0
P/4  P/30  0.3  92,520.20     103,670       960.48    79,750.20     89,232        701.34    1.57E−05  7.83E−05   3.98E−03  1.17E−02   0
P/4  P/30  0.5  87,923.60     104,571       879.50    76,995.60     93,838        661.79    0         0          2.83E−03  3.23E−03   0
P/4  P/30  0.8  79,774.60     95,863        730.30    66,392.40     81,873        494.18    0         0          3.21E−03  3.57E−03   0

Y. Yin et al. / European Journal of Operational Research 252 (2016) 737–749

Table 4. A comparison between Algorithm STDP and Algorithm STAA for m = 2 and n = 100. [Table body not reproduced: for each combination of B1 ∈ {P/8, P/6, P/4}, D ∈ {P/80, P/50, P/30}, and ε ∈ {0.3, 0.5, 0.8}, the table reports the average and maximum numbers of triples and the average running time (in seconds) for each of algorithms STDP and STAA, the average and maximum relative deviations v_i^1 and v_i^2, and the number of infeasible solutions.]

Table 5. A comparison between Algorithm STDP and Algorithm STAA for m ≥ 2 and m1 = 1. [Table body not reproduced: for each m = 3, 4, 5, 6, 7, and 8 and a range of n, the table reports the average and maximum numbers of triples and the average running time (in seconds) for each of algorithms STDP and STAA, the average and maximum relative deviations v_i^1 and v_i^2, the number of infeasible solutions, and a dash for instance sizes that could not be solved within 3600 seconds.]


subintervals, where the last inequality is obtained from the well-known inequality ln x ≥ (x − 1)/x for all x ≥ 1, and we partition the intervals I_2 and I_3 into L_2 = ⌈log_δ (n(F1 + P)(1 + ε))⌉ ≤ (1 + 2n(1 + ε)/ε) ln(n(F1 + P)(1 + ε)) subintervals and L_3 = ⌈log_δ (n max{F1, C_S − p_min}(1 + ε))⌉ ≤ (1 + 2n(1 + ε)/ε) ln(n max{F1, P − p_min}(1 + ε)) subintervals, respectively. So after the elimination process we have |S_j| ≤ ((1 + 2n(1 + ε)/ε) ln P)^m · (1 + 2n(1 + ε)/ε) ln(n(F1 + P)(1 + ε)) · (1 + 2n(1 + ε)/ε) ln(n max{F1, P − p_min}(1 + ε)) = O(n^{m+2} L^{m+2} / ε^{m+2}), while

Table 6. The performance of Algorithm STDP for m = 3 and m1 = 1, 2, 3.

m1   n    Avg. number of triples   Max. number of triples   Avg. running time (second)
1    10   167.80                   368                      0.76
1    15   838.60                   1987                     1.34
1    20   2661.60                  3907                     2.57
1    25   5164.40                  8641                     5.39
1    30   11,994.60                19,195                   19.63
1    40   42,503.80                65,917                   249.49
1    50   86,342.40                104,628                  703.19
1    60   113,994.60               122,645                  1373.39
1    70   186,330.00               208,293                  3557.40
1    ≥    –                        –                        –
2    10   1540.20                  2143                     1.36
2    15   11,621.80                21,681                   44.08
2    20   48,947.80                62,809                   480.62
2    25   81,454.80                93,116                   3343.68
2    30   –                        –                        –
3    10   8976.20                  18,967                   32.23
3    15   60,762.80                91,876                   3503.86
3    20   –                        –                        –

Step 2 takes O(n^{m+3} L^{m+2} / ε^{m+2}) time. Thus, the overall time complexity of algorithm STAA is indeed O(n^{m+3} L^{m+2} / ε^{m+2}). □
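The elimination step analyzed above partitions each coordinate range into geometric subintervals of ratio δ and keeps at most one state per 4-dimensional cell. The sketch below is our own illustrative code, not the paper's implementation; in particular, we take δ = 1 + ε/(2n(1 + ε)), which is one choice consistent with the 1/ln δ ≤ 1 + 2n(1 + ε)/ε bounds used in the analysis, and the function names are hypothetical.

```python
import math

def delta(n, eps):
    """Subinterval ratio; chosen so that 1/ln(delta) <= 1 + 2n(1+eps)/eps."""
    return 1.0 + eps / (2 * n * (1 + eps))

def num_cells(upper, n, eps):
    """Number of geometric subintervals [delta^k, delta^(k+1)) covering [1, upper]."""
    return math.ceil(math.log(upper, delta(n, eps)))

def cell_index(value, n, eps):
    """Index k of the subinterval [delta^k, delta^(k+1)) containing value >= 1."""
    return math.floor(math.log(value, delta(n, eps)))

def eliminate(states, n, eps):
    """Keep at most one state per 4-dimensional cell.

    Each state is a quadruple (t1, t2, v1, v2); states whose coordinates all
    fall in the same subintervals are treated as interchangeable, and only the
    first representative encountered is retained.
    """
    kept = {}
    for s in states:
        key = tuple(cell_index(max(x, 1.0), n, eps) for x in s)
        kept.setdefault(key, s)
    return list(kept.values())
```

With this choice of δ, the number of cells per coordinate is O((n/ε) ln P), matching the per-coordinate subinterval counts L_1, L_2, L_3 in the analysis; when δ is close to 1 (large instances), distinct quadruples rarely share a cell, which is exactly why the elimination rule removes few states in practice.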

5.3. The performance of algorithms STDP and STAA

We performed numerical studies by varying the problem size to evaluate the performance of algorithms STDP and STAA. We carried out an experimental study analogous to the one conducted for assessing the performance of algorithm SMDP in Section 4.2. In this experiment we focused on finding a Pareto-optimal solution by applying algorithm STDP and an approximate Pareto solution by applying algorithm STAA for a given Q under various parameter settings.

We tested our algorithms on three classes of randomly generated instances: (i) m = 2 and m1 = 1, where the job processing times p_j were randomly drawn from the uniform distribution on [1, 20], B1 = P/8, P/6, and P/4, D = P/80, P/50, and P/30, and ε = 0.3, 0.5, and 0.8; (ii) m > 2 and m1 = 1, where the job processing times p_j were randomly drawn from the uniform distribution on [1, 20], and B1, D, and ε were chosen randomly from the sets {P/8, P/6, P/4}, {P/80, P/50, P/30}, and {0.3, 0.5, 0.8}, respectively; and (iii) m = 3 and m1 = 1, 2, 3, where the job processing times p_j were randomly drawn from the uniform distribution on [1, 20], and B1 and D were chosen randomly from the sets {P/8, P/6, P/4} and {P/80, P/50, P/30}, respectively.

Instance classes (i) and (ii) are used to analyze the impacts of the disruption start time, the disruption duration, and ε, and the impact of the number of machines, respectively, on the performance of algorithms STDP and STAA, while instance class (iii) is used to analyze the impact of the number of disrupted machines on the performance of algorithm STDP. For instance class (i), we tested our algorithms on instances with n = 50, 80, and 100 jobs; for instance classes (ii) and (iii), we went no further than instances with 70 jobs, since larger instances took too much time to solve. For each combination of n and instance class (i), we generated 30 instances; for each combination of n and instance classes (ii) and (iii), we generated 20 instances.
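The class-(i) instance generation described above, together with the relative-deviation metrics v_i^1 and v_i^2 used to compare the two algorithms, can be sketched as follows (a minimal illustration; the function and variable names are ours, not the paper's):

```python
import random

def generate_instance(n, b1_frac, d_frac, seed=None):
    """Generate one class-(i) instance: m = 2 machines, m1 = 1 disrupted machine.

    Job processing times are drawn uniformly from {1, ..., 20}; the disruption
    start time B1 and duration D are fixed fractions of the total load P.
    """
    rng = random.Random(seed)
    p = [rng.randint(1, 20) for _ in range(n)]  # job processing times
    P = sum(p)                                  # total processing time
    B1 = P / b1_frac                            # disruption start, e.g. P/8
    D = P / d_frac                              # disruption length, e.g. P/80
    return p, B1, D

def relative_deviations(V, V_approx, W, W_approx):
    """Relative deviations v1 and v2 of an approximate Pareto point
    (V_approx, W_approx) from a Pareto-optimal point (V, W), with respect to
    the total completion time and the total virtual tardiness, respectively."""
    v1 = (V_approx - V) / V
    v2 = (W_approx - W) / W if W > 0 else 0.0  # guard against zero tardiness
    return v1, v2
```

For an ε-approximate Pareto point, both v1 and v2 should be at most ε, which is what the deviation columns of Tables 2–4 verify empirically.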
Let (V_i, V_i') be a Pareto-optimal solution and (Ṽ_i, Ṽ_i') be an approximate Pareto solution for instance i, where i = 1, ..., 20 or 30. For instance i, we define v_i^1 = (Ṽ_i − V_i)/V_i ≤ ε and v_i^2 = (Ṽ_i' − V_i')/V_i' ≤ ε as the relative deviations of the approximate solution from the efficient solution with respect to the total completion time and the total virtual tardiness, respectively. For each combination of n, B1, D, and ε, we recorded the average and maximum numbers of states generated by algorithms STDP and STAA, the average times required to obtain the Pareto-optimal and approximate Pareto solutions, and the average and maximum relative deviations.

Tables 2–4 summarize the results of the experimental study for instance class (i) with n = 50, n = 80, and n = 100, respectively. We make the following observations from Tables 2–4.

• As the disruption start time increases, the number of states increases, so the time required to solve an instance also increases for both algorithms.
• Algorithm STAA does not perform better than algorithm STDP as expected. The reasons are twofold. First, in each iteration j, algorithm STDP generates two new states from each state in S_j, whereas algorithm STAA generates three. Second, δ becomes small as the problem size increases, so it becomes more difficult for the quadruples (t1, t2, v1, v2) to fall within the same 4-dimensional subinterval, implying that elimination rule 1 in algorithm STAA cannot efficiently eliminate the dominated states.
• The benefits of algorithm STAA are particularly evident when the number of jobs is large.

Tables 5 and 6 summarize the results of the experimental study for instance classes (ii) and (iii), respectively. Unlike Tables 1–4, Tables 5 and 6 also report (with a dash) the number of jobs at which the corresponding algorithm can no longer solve the problem within 3600 seconds. We make the following observations from Tables 5 and 6.

• When there is only one disrupted machine, i.e., m1 = 1, the number of jobs that algorithms STDP and STAA are capable of solving within 3600 seconds decreases, and algorithm STAA generates more Pareto-optimal points, as the number of machines increases.
• As stated above, algorithm STAA does not perform better than algorithm STDP as expected.
• As expected, algorithm STDP performs poorly as the number of disrupted machines increases.

6. Conclusion

In this paper we introduce a rescheduling model in which both the original scheduling objective, i.e., the total completion time, and the deviation cost associated with a disruption of the original schedule in the presence of machine breakdowns are taken into account. The disruption cost is measured as the maximum time deviation or the total virtual tardiness with respect to the original schedule. For each variant, we show that the case where the number of machines is considered to be part of the input is N P -hard in the strong sense, and develop a pseudo-polynomial-time solution algorithm for finding the set of Pareto-optimal solutions for the case with a fixed number of machines, establishing that it is

NP-hard in the ordinary sense. For the variant where the disruption cost is modeled as the total virtual tardiness and the machine disruption occurs only on one of the machines, we also develop a two-dimensional FPTAS. Several important issues are worth pursuing in future research.

• Is there a two-dimensional FPTAS or a constant-ratio approximation algorithm for the problems Pm, h_{m1,1} | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Δ_max) and Pm, h_{m1,1} | τ, [B_i, F_i]_{1≤i≤m1} | (Σ_{j=1}^n C_j, Σ_{j=1}^n T_j)?
• Design efficient two-dimensional FPTASs or constant-ratio approximation algorithms for the variant where the disruption cost is modeled as the maximum time deviation and the machine disruption occurs only on one of the machines.
• Design efficient algorithms for the case of the problem with an arbitrary number of machines by using mixed-integer linear programming techniques, such as column generation, Lagrangian relaxation, branch-and-bound, or branch-and-price.
• Extend our model to consider different disruption events (e.g., new order arrivals, processing delays, etc.) and/or different machine environments (e.g., the flowshop).

Acknowledgments

We thank an Editor, an Associate Editor, and three anonymous referees for their helpful comments on earlier versions of our paper. This paper was supported in part by the National Natural Science Foundation of China (grant nos. 11561036, 71501024, and 71301022). Cheng was supported in part by the Hong Kong Polytechnic University under the Fung You King-Wing Hang Bank Endowed Professorship in Business Administration.