Noname manuscript No. (will be inserted by the editor)

Scheduling jobs on parallel machines to minimize a regular step total cost function

Boris Detienne · Stéphane Dauzère-Pérès · Claude Yugma

Received: date / Accepted: date

Abstract This paper deals with a parallel machine scheduling problem whose objective is to minimize a regular step total cost function. A real-world application of the problem is presented, and Mixed Integer Linear Programming models are described for the cases with and without release dates, as well as a dedicated constraint generation procedure. Experimental results are reported and discussed to evaluate the relevance of the different approaches on well-known special cases and on the general case.

Keywords Scheduling · Parallel machines · Total cost minimization · Step cost function

B. Detienne
Laboratoire d'Informatique d'Avignon, Université d'Avignon et des pays de Vaucluse
339 chemin des Meinajaries, Agroparc BP 1228, 84911 Avignon Cedex 9, France
E-mail: [email protected]

S. Dauzère-Pérès and C. Yugma
École des Mines de Saint-Étienne, Centre Microélectronique de Provence, Site Georges Charpak
880 route de Mimet, 13541 Gardanne, France
E-mail: [email protected], [email protected]

1 Introduction

Machine scheduling has been intensively studied in the last decades, due to the presence of these problems in most manufacturing processes, and in numerous and various fields where resources must be allocated to perform a set of tasks. In total cost minimization problems, a cost is associated with each task in the schedule, depending on its completion time. The cost of the schedule is defined as the sum of the costs of all tasks, and has to be minimized. Total cost problems cover a wide variety of objectives; the most famous ones are the minimization of the total flow time (denoted in the literature by Σ_i w_i C_i), of the weighted number of late jobs (Σ_i w_i U_i), of the weighted sum of the total tardiness (Σ_i w_i T_i) or lateness (Σ_i w_i L_i), or of the weighted sum of the total earliness and tardiness (Σ_i (α_i E_i + β_i T_i)). Each of these


objectives leads to different problems depending on the number and type of machines (single machine, parallel related machines, etc.) and on various additional constraints (release dates, precedence constraints, unavailability periods, etc.). The reader can refer to [20], [6] or [24] for an overview of scheduling. This paper considers the case, to our knowledge not studied specifically until now, of a stepwise regular total cost function. In the remainder of this section, we first give a formal statement of the studied problem and a classical time-indexed 0-1 Linear Programming formulation. Then, a short literature review of well-known special cases and of the general case of our problem is presented. The description of an industrial application of the problem follows. Section 2 is dedicated to a 0-1 Linear Programming formulation for the special case where release dates are equal to 0, while Section 3 presents an original and efficient Integer Linear Programming (ILP) formulation for the general case. Section 4 describes an alternative formulation used in a constraint generation approach to solve the general case of the problem more efficiently for special configurations of the instances. Numerical results are reported and discussed in Sections 2, 3 and 4 on well-known special cases as well as on the general case.

1.1 Problem statement

In this paper, we are interested in a generalization of the problems in which the total cost function is regular (non-decreasing in the job completion times): a set of jobs has to be scheduled on different (i.e. unrelated) parallel machines. Jobs are subject to release dates, and the objective is to minimize a regular step cost function of the job completion times. Formally, an instance of our problem is determined by:
• a set M of machines,
• a set J of jobs, where each job i ∈ J has:
  – a release date r_i,
  – a processing time p_i^µ for each machine µ ∈ M,
  – and a cost function defined by an ordered set K_i = (d_i^1, ..., d_i^{s_i}) of jump points and an ordered set Γ_i = (γ_i^1, ..., γ_i^{s_i}) of costs such that γ_i^1 < γ_i^2 < ... < γ_i^{s_i}. A cost of γ_i^l is incurred if job i completes at time t such that d_i^l < t ≤ d_i^{l+1}, l ∈ {1, ..., s_i − 1}. A cost of γ_i^{s_i} is incurred if t > d_i^{s_i}. Without loss of generality, we assume that no cost is incurred if t ≤ d_i^1 (if necessary, data can be modified by adding a constant term to the objective function and adapting the costs consequently).

Our study is limited to the case of regular cost functions, i.e. γ_i^1 < γ_i^2 < ... < γ_i^{s_i}, i ∈ J. The problem is to assign a machine and a starting time to each job, such that the cost of the schedule is minimized. Jobs are scheduled without preemption, and cannot start before their release dates. All data are integer. This problem can be denoted, in the standard three-field notation [15], by R|r_i|Σ f̄_i(C_i), where f̄_i is a non-decreasing step function.

Time-indexed model. The problem can be modeled using the classical time-indexed formulation. Note that this generic model cannot be directly used in practice because of the large number of variables it involves. Indeed, standard mathematical programming solvers are not able to solve most 30-job instances.
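The step cost function f̄_i defined above can be evaluated in a few lines. The following Python helper is a minimal sketch (the names `jump_points` and `costs` are ours, not the paper's), assuming the jump points K_i are given in increasing order:

```python
import bisect

def step_cost(t, jump_points, costs):
    """Cost of a job completing at time t: 0 if t <= d^1, gamma^l if
    d^l < t <= d^{l+1}, and gamma^{s_i} if t > d^{s_i}."""
    # Number of jump points strictly smaller than t gives the step index l.
    l = bisect.bisect_left(jump_points, t)
    return costs[l - 1] if l > 0 else 0
```

For instance, with jump points (5, 8) and costs (3, 7), completing at t = 5 is free, completing at t = 6 costs 3, and completing at t = 9 costs 7.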


Let τ = max_{i∈J} r_i + Σ_{i∈J} max_{µ∈M} p_i^µ denote the latest time instant of the horizon. Let us denote by c_it the cost of completing job i at time instant t. According to the previous notations, we have c_it = γ_i^k, with k such that d_i^{k−1} < t ≤ d_i^k. Let us introduce the decision variable x_it^µ, which is equal to 1 if job i ∈ J completes at time instant t ∈ {1, ..., τ} on machine µ ∈ M, and 0 otherwise. This allows us to write the following time-indexed formulation (TI) of the problem:

(TI): min Σ_{µ∈M} Σ_{i∈J} Σ_{t=0}^{τ} c_it x_it^µ   (1)
s.t.  Σ_{µ∈M} Σ_{t=r_i+p_i^µ}^{τ} x_it^µ = 1,   i ∈ J   (2)
      Σ_{i∈J} Σ_{θ=t+1}^{t+p_i^µ} x_iθ^µ ≤ 1,   µ ∈ M, t ∈ {1, ..., τ}   (3)
      x_it^µ ∈ {0, 1},   µ ∈ M, i ∈ J, t ∈ {1, ..., τ}   (4)

In this model, Constraints (2) ensure that each job has exactly one completion time and is thus processed exactly once, and Constraints (3) ensure that at most one job is processed at a time on each machine. Indeed, Σ_{θ=t+1}^{t+p_i^µ} x_iθ^µ is equal to one if and only if job i is processed on machine µ during time instant t.
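The data of (TI) can be derived mechanically from an instance. The sketch below computes the horizon τ and the cost coefficients c_it exactly as defined above; the instance encoding (`r[i]`, `p[mu][i]`, `K[i]`, `G[i]`) is an illustrative convention of ours, not the paper's:

```python
# Hedged sketch: horizon tau = max_i r_i + sum_i max_mu p_i^mu and
# coefficients c_it of the time-indexed model (TI).
def build_time_indexed_data(r, p, K, G):
    """r[i]: release date; p[mu][i]: processing time of job i on machine mu;
    K[i]: sorted jump points of job i; G[i]: increasing costs of job i."""
    n, m = len(r), len(p)
    tau = max(r) + sum(max(p[mu][i] for mu in range(m)) for i in range(n))
    c = [[0] * (tau + 1) for _ in range(n)]
    for i in range(n):
        for t in range(tau + 1):
            l = sum(1 for d in K[i] if d < t)      # jump points strictly before t
            c[i][t] = G[i][l - 1] if l > 0 else 0  # c_it = gamma_i^l, 0 before d_i^1
    return tau, c
```

On a toy single-machine instance with r = (0, 2), p = (3, 2), one jump point each, this makes the pseudo-polynomial size of (TI) concrete: the c matrix already has n(τ + 1) entries.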

1.2 Literature review

The R|r_i|Σ w_i U_i problem is a special case of R|r_i|Σ f̄_i(C_i) where s_i = 1, ∀i ∈ J. Moreover, R|r_i|Σ f̄_i(C_i) is a generalization of R|r_i|Σ w_i T_i. In this special case, each job i has a first jump point d_i^1 equal to its due date. Another jump point is defined for each time instant following d_i^1, and is associated with a cost increase of w_i units. It is easy to show that R|r_i|Σ w_i L_i and R|r_i|Σ w_i C_i are also special cases of R|r_i|Σ f̄_i(C_i). This paper focuses on the cases where job cost functions have a few steps.

Although many special cases of R|r_i|Σ f̄_i(C_i) have been intensively studied in the literature, we were not able to find any reference on its general form. To the best of our knowledge, the existing methods which can be applied to this problem were developed for the general total cost machine scheduling problem (see for example Jouglet et al. [17] or Tanaka and Fujikama [25]). These approaches, mainly tested on the weighted tardiness problem, exploit dominance rules based on job interchanges, which will probably not be so active in our case: shifting a job completion time rarely changes the cost of the schedule because of the stepwise structure of the cost function.

The problem of minimizing the number of tardy jobs on parallel machines, P||Σ U_i, is NP-hard [13]. For the case of one machine, the problem can be solved in polynomial time by Moore's algorithm [23]. Chang [9] proposes a fast heuristic for minimizing the maximum lateness with a minimum number of tardy jobs. This heuristic can optimally solve instances with up to 50 jobs in less than 50 seconds. When release dates are considered, Baptiste et al. [3] and Dauzère-Pérès and Sevaux [12] propose branch-and-bound algorithms which solve instances with up to 200 jobs. Minimizing the weighted number of tardy jobs on parallel machines is also NP-hard, and Van den Akker et al. [1] propose a column generation approach. Meral and Omer [21] consider unrelated parallel machine scheduling problems that involve the minimization of regular total cost functions, such as the total weighted flow time criterion. They introduce some dominance properties of optimal solutions and propose a branch-and-bound procedure. Based on a maximum flow network approach, an O(n²) algorithm is proposed by Bornstein et al. [4] to minimize the


number of late jobs on parallel machines when preemption is allowed. To our knowledge, M'Hallah and Bulfin [22] propose the most efficient branch-and-bound method for the minimization of the weighted number of late jobs on identical or non-identical parallel machines. Chen [10] develops an iterated local search to minimize the total weighted number of late jobs on unrelated parallel machines without preemption, with sequence-dependent setup times and ready times.

The aim of this paper is to provide different Integer Linear Programming (ILP) formulations for the R|r_i|Σ f̄_i(C_i) problem, which can be of practical use even when solved directly by a standard mathematical programming solver. Moreover, a dedicated constraint generation procedure is described and is shown to be more efficient for specific configurations of the instances.

1.3 Example of industrial application

Let us describe a semiconductor manufacturing process which motivates this scheduling problem. In this context, machines are extremely sensitive and their settings can drift unpredictably, so that products may be out of specification. Let us consider a production process in which lots of various product types are scheduled on several production machines. When they are completed, some of these lots are selected to be inspected on several specialized machines, to check whether the process is out of the specifications for the corresponding product type. If this is the case, the production of the lots of the same product type should be stopped or adjusted. Hence, it is important to react as soon as possible to avoid producing lots that may be scrapped, i.e. to inspect lots before too many lots are produced. We are thus interested in solving an optimization problem in which the schedule of inspection operations has to be determined, so that a set of fixed (already scheduled) production operations are processed with the highest possible quality. The inspection operation of a given lot is associated with the lots of the same type that must be produced. As already mentioned, there is a risk of scrapping a lot if its production starts before the lot of the same product type is inspected. The objective of the problem is thus to minimize the overall risk incurred by production lots, by optimally scheduling the set of inspection operations.

This problem arises from an actual industrial example. Semiconductor manufacturing involves many sensitive operations on lots of (usually 25) wafers, in which a slight change in the processing conditions may result in the loss of some or all wafers of the lot. A strategy commonly used in this situation consists in frequently adjusting the production equipment parameters on the basis of measurements made on products.
The limited capacity of inspection resources leads to restricting the number of measurements, so that one measurement operation is performed for the production of a few, typically fewer than ten, consecutive lots (this sampling rate is, in practice, usually determined beforehand by a sampling strategy [18], [5], [14]). After this number of lots has been processed, further production of lots of the same type on the same machine is considered more or less risky until a measurement operation is performed and the equipment parameters are adjusted according to the results of the measurement. However, the extremely high cost of the production machines leads decision makers to continue producing without allowing any idle time. In this context, it becomes particularly important to schedule the measurement operations efficiently in order to match the production schedule as well as possible.

Fig. 1 Illustration of the problem with one production machine and one inspection machine (fixed production schedule, inspection schedule to be decided, and the step cost function of inspection operation 2: production before the inspection operation incurs a penalty; an inspection operation completed before the production starts incurs no penalty)

Figure 1 illustrates a simple case of our problem, where inspection operations 1 and 2 are performed on the lots processed in production operations P_1^0 and P_2^0. The results of the inspection operations are meant to be used for adjusting process parameters for, respectively, production operations {P_1^1} and {P_2^1, P_2^2, P_2^3}. In the given inspection equipment schedule, product type 1 completes before the beginning of P_1^1, which is then safely executed. However, the inspection of product type 2 ends after production operations P_2^1 and P_2^2 start. These operations then incur a risk of producing products that will not meet quality specifications. As the production schedule is fixed, the total risk incurred by the set of lots related to a given inspection operation is determined by its completion time: if inspection operation 2 ends before the start of P_2^1 (at d_2^1), then the risk is zero; if it completes between d_2^1 and d_2^2, then only P_2^1 is not safe and the risk associated with inspection operation 2 is equal to the expected cost γ_2^1 of executing P_2^1 before the inspection operation is completed. When completing between d_2^2 and d_2^3, inspection operation 2 is only ready for P_2^3, and the expected cost is γ_2^2, etc.

2 Special case: No release dates

This section presents an Integer Linear Programming (ILP) formulation for the special case with no release dates, denoted by R||Σ_i f̄_i(C_i). In practice, this problem can represent the scheduling of a set of inspection operations on lots already processed and present in a queue. The ILP formulation introduced below relies on the following well-known result (see e.g. [16]). Let us consider the problem of scheduling a set of jobs I on a single machine and without preemption, such that each job i ∈ I completes before its deadline d̄_i, with no release date restrictions.

Proposition 1 If there exists at least one feasible schedule in which each job completes before its deadline, then the schedules where jobs are sequenced according to a non-decreasing order of their deadlines are feasible.

This leads to the following characterization of feasible solutions.


Proposition 2 Let σ be the permutation of I corresponding to a non-decreasing order of the deadlines. There exists at least one feasible solution if and only if:

Σ_{r=1}^{i} p_σ(r) ≤ d̄_σ(i),   i ∈ I
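Proposition 2 yields a simple feasibility test: sort the jobs by non-decreasing deadline (EDD) and check every prefix sum of processing times against the corresponding deadline. A minimal sketch, with illustrative names of ours:

```python
# Hedged sketch of the feasibility check of Proposition 2 (single machine,
# no release dates). `jobs` is a list of (processing_time, deadline) pairs.
def edd_feasible(jobs):
    t = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):  # EDD order sigma
        t += p        # completion time of the i-th job in EDD order
        if t > d:     # prefix sum exceeds d-bar_sigma(i): no feasible schedule
            return False
    return True
```

For example, jobs (p, d̄) = {(2, 5), (3, 3)} are feasible in EDD order, while a single job with p = 4 and d̄ = 3 is not.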

2.1 An Integer Linear Programming (ILP) Formulation

The problem R||Σ_i f̄_i(C_i) can be seen as the problem of finding a minimum cost assignment of deadlines (corresponding to the jump points, where the cost functions change), such that there exists at least one feasible schedule meeting the deadlines. Using Proposition 2 for characterizing the set of feasible schedules leads to the ILP model below. This model is based on the notion of occurrences of jobs. Each jump point d_i^l, i ∈ J, l ∈ K_i, is linked with a possible occurrence k of job i, subject to a deadline d_k and whose execution cost is γ_k = γ_i^l, corresponding to the execution of i before d_k = d_i^l. Moreover, we denote by η the total number of occurrences, and we assume that occurrences are indexed according to a non-decreasing order of their deadlines, i.e. d_1 ≤ d_2 ≤ ... ≤ d_η. Let σ(k), k ∈ {1, ..., η}, denote the job corresponding to the kth occurrence. Finally, K̄_i = {k ∈ {1, ..., η} | σ(k) = i}, i ∈ J, is the set of indices of the occurrences linked with job i.

(QTS): min Σ_{µ∈M} Σ_{k∈{1,...,η}} γ_k · u_k^µ   (5)
s.t.  Σ_{µ∈M} Σ_{k∈K̄_i} u_k^µ = 1,   i ∈ J   (6)
      Σ_{l=1}^{k} p_σ(l)^µ · u_l^µ ≤ d_k,   µ ∈ M, k ∈ {1, ..., η}   (7)
      u_k^µ ∈ {0, 1},   µ ∈ M, k ∈ {1, ..., η}   (8)

In this model, decision variable u_k^µ is equal to 1 if and only if occurrence k is processed on machine µ. Constraints (6) ensure that exactly one occurrence of each job is processed, and Constraints (7) ensure that this selection of occurrences can lead to a feasible schedule. This model can be seen as a generalization of a model used in the literature for the minimization of the weighted number of tardy jobs, e.g. in [22]. It is worth noting that ties on deadlines can be broken in an arbitrary way. In this case, the constraint (7) associated with the occurrence of highest rank trivially dominates the constraints associated with the occurrences with the same deadline, which can thus be removed from the model.
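The occurrence data behind (QTS), that is the deadlines d_1 ≤ ... ≤ d_η, the costs γ_k and the mapping σ, can be built with a single sort. A hedged sketch with our own illustrative encoding (`K[i]` jump points, `G[i]` costs):

```python
# Hedged sketch: expanding each job into one occurrence per jump point and
# indexing occurrences by non-decreasing deadline, as required by model (QTS).
def build_occurrences(K, G):
    """K[i]: sorted jump points of job i; G[i]: increasing costs of job i.
    Returns deadlines d_1 <= ... <= d_eta, costs gamma_k and jobs sigma(k)."""
    occ = [(d, G[i][l], i) for i in range(len(K)) for l, d in enumerate(K[i])]
    occ.sort(key=lambda o: o[0])      # ties broken arbitrarily (stable sort)
    deadlines = [o[0] for o in occ]
    gamma = [o[1] for o in occ]
    sigma = [o[2] for o in occ]       # job of the k-th occurrence
    return deadlines, gamma, sigma
```

With jobs 0 and 1 having jump points (3, 6) and (4), this produces η = 3 occurrences with deadlines (3, 4, 6) and σ = (0, 1, 0).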

2.2 Experimental results

Tests were performed on a Personal Computer equipped with 3GB RAM and a 2.5GHz Intel Xeon processor, using the standard mathematical programming solver XPRESS-MP with standard settings. In order to build our test bed, we used a random instance generator that takes as input the number of jobs n, the number of machines m, the number of jump points per job |K|, and a due date factor Z_2. For each of the n jobs, the generator draws m durations from a uniform distribution in {1, ..., 100}, |K| jump points in {p_i^1, ..., nZ_2/m},


and a cost increase in {1, ..., 100} for each jump point. For each value of n and m, 5 instances were generated for each combination of parameters Z_2 ∈ {10, 20, 50} and |K| ∈ {2, 3, 4, 9}. In the industrial application presented in Section 1.3, these values of |K| correspond respectively to the measurement of one lot every 3, 4, 5 and 10 lots.

|K| \ n   10     20     30     50     100    200    300    400    500    Total
2         15     15     15     15     15     15     15     15     15     135/135
3         15     15     15     15     15     15     14     14     3      121/135
4         15     15     15     15     15     15     8      0      0      98/135
9         15     15     15     15     9      0      0      0      0      69/135
Total     60/60  60/60  60/60  60/60  54/60  45/60  37/60  29/60  18/60  423/540

Table 1 Model (QTS). Number of single machine instances solved optimally within 1000 sec.

K \ n   10    20    30    50     100     200     300     400     500
2       0.0   0.0   0.1   0.2    0.5     3.4     10.9    22.7    48.8
3       0.0   0.0   0.1   0.5    4.2     44.7    161.0*  395.5*  641.9*
4       0.0   0.2   0.3   1.3    16.2*   201.9   350.5*  +       +
9       0.1   0.7   3.8   23.7   388.7*  +       +       +       +

Table 2 Model (QTS). Average computing time for single machine instances solved optimally (sec.). * Not all instances were solved within the 1000 sec. time limit. + No instance was solved within the time limit.

We first report the results obtained using the standard solver XPRESS-MP on model (QTS) in the case of a single machine. Tables 1 and 2 give the number of instances solved within a time limit of 1000 seconds, and the average computing time for the instances that are solved optimally. One can observe that, as expected, the number of jump points directly impacts the results, since it is linked to the size of the model. Under the same conditions, the solver applied to the time-indexed model solves less than 25% of the 30-job instances. This can easily be explained by the very large number of variables, as well as by the degeneracy due to the number of variables with the same cost. Tables 3 and 4 report the performance of the solver on parallel unrelated machines, within the same time limit of 1000 seconds. These results show that problems with up to 50 jobs and a sampling rate of 1/5 can be solved optimally without further investigation, in an operational computing time.

Well-known special case: R||Σ w_i U_i. Adapting model (QTS) to this problem is straightforward, since it corresponds to the special case where s_i = 1, ∀i ∈ J (the parameter |K| of the generator is set to 1). This special case of the model has already proved to be of practical use, since the method proposed by M'Hallah and Bulfin [22] relies on a similar formulation and is capable of solving exactly instances with up to 400 jobs and 8 machines. To the best of our knowledge, it is currently the best published method for R||Σ w_i U_i. The direct application of XPRESS-MP to (QTS) is not as efficient, although all tested instances with 500 jobs and 2 machines are solved within a time limit of 1000

K, m \ n   10       20       30       50       100      Total
K=2        60/60    60/60    60/60    60/60    58/60    298/300
  m=2      15       15       15       15       15       75/75
  m=3      15       15       15       15       15       75/75
  m=6      15       15       15       15       13       73/75
  m=9      15       15       15       15       15       75/75
K=3        60/60    60/60    60/60    60/60    48/60    288/300
  m=2      15       15       15       15       14       74/75
  m=3      15       15       15       15       12       72/75
  m=6      15       15       15       15       10       70/75
  m=9      15       15       15       15       12       72/75
K=4        60/60    60/60    60/60    60/60    38/60    278/300
  m=2      15       15       15       15       14       74/75
  m=3      15       15       15       15       8        68/75
  m=6      15       15       15       15       6        66/75
  m=9      15       15       15       15       10       70/75
K=9        60/60    60/60    60/60    44/60    8/60     232/300
  m=2      15       15       15       10       1        56/75
  m=3      15       15       15       8        0        53/75
  m=6      15       15       15       11       2        58/75
  m=9      15       15       15       15       5        65/75
Total      240/240  240/240  240/240  224/240  152/240  1096/1200

Table 3 Model (QTS). Number of parallel unrelated machine instances solved optimally within 1000 sec.

seconds, with an average time of 22 seconds. However, it does not perform as well for the case with 8 machines, since fewer than half of the 400-job instances are solved within the time limit. The results obtained by M'Hallah and Bulfin [22] and our experiments on (QTS) suggest that this model is a good basis for designing dedicated methods for our problem.

3 General case This section introduces an Integer Linear Programming (ILP) model for the general case of our problem (see Section 1.1).

3.1 Dominant set of solutions

The principle used to build this model is similar to the one developed in the previous section: find a dominant order of job occurrences on each machine, and write an ILP model that searches for the best feasible selection of one occurrence (or the corresponding deadline) per job, based on the dominant set of solutions. We show that, by inserting virtual jump points for certain jobs, we can find a dominant order very similar to the one used in the special case without release dates.

Let us first give an intuitive insight into the dominant order we exhibit and use in the model. Let us focus on a single machine, and consider an optimal schedule. Let i and j be two jobs processed consecutively in this solution (i precedes j). For the sake of readability, let d̄_i (resp. d̄_j) denote the jump point associated with job i (resp. job j) immediately following its completion time, which we refer to as its current jump point.

K, m \ n   10    20    30     50      100
K=2        0.0   0.1   0.3    8.9     84.2
  m=2      0.0   0.1   0.3    1.1     12.1
  m=3      0.0   0.1   0.3    4.1     37.1
  m=6      0.0   0.0   0.3    30.1    167.4*
  m=9      0.0   0.0   0.1    0.5     131.3
K=3        0.0   0.2   0.8    31.5    209.8
  m=2      0.0   0.3   0.9    3.1     168.0*
  m=3      0.0   0.3   1.7    43.1    329.0*
  m=6      0.0   0.0   0.6    78.5    244.4*
  m=9      0.0   0.0   0.1    1.2     110.7*
K=4        0.0   0.4   3.0    62.2    228.9
  m=2      0.0   0.4   1.7    18.8    354.8*
  m=3      0.0   1.0   9.2    108.0   285.9*
  m=6      0.0   0.2   0.9    120.3   161.9*
  m=9      0.0   0.1   0.2    1.8     47.4*
K=9        0.1   3.0   74.5   226.4*  479.1*
  m=2      0.2   4.6   32.3   364.4*  913.1*
  m=3      0.2   6.8   163.9  410.3*  +
  m=6      0.0   0.5   101.1  264.6*  642.0*
  m=9      0.0   0.2   0.8    8.3     327.1*

Table 4 Model (QTS). Average computing time for parallel unrelated machine instances solved optimally (sec.). * Not all instances were solved within the 1000 sec. time limit. + No instance was solved within the time limit.

Note that this date determines the cost incurred by each job in the current schedule. Concerning the positions of i and j with respect to their release dates and current jump points, the following three cases can occur.
• Case 1: d̄_i ≤ d̄_j. Jobs i and j are scheduled according to a non-decreasing order of their current jump points (see Figure 2).
• Case 2: r_i > r_j and d̄_i > d̄_j. It can be seen from Figure 3 that i and j can be swapped, so that the schedule before and after them does not change and its cost does not increase (since the job cost functions are regular and none of the two jobs will complete after its current jump point). Moreover, in the resulting schedule, i and j are scheduled according to a non-decreasing order of their current jump points.
• Case 3: r_i ≤ r_j and d̄_i > d̄_j. Jobs i and j are not scheduled according to a non-decreasing order of their current jump points (see Figure 4). Moreover, interchanging their processing order may result in delaying the jobs processed after i and j, or simply in the completion time of job i exceeding its current jump point. We propose to insert a virtual jump point e_i(d̄_j) in the set of jump points of job i, such that e_i(d̄_j) will be the current jump point of job i in this solution, and e_i(d̄_j) precedes d̄_j according to a given strict order ≺, so that i and j are scheduled according to the ≺-order of their current jump points.

Besides, we would like ≺ to be consistent with the usual non-decreasing order of the jump points, in order to deal with cases 1 and 2. We can show that choosing e_i(d̄_j) = d̄_j is valid if ≺ is defined as a non-decreasing order of the jump points, with ties broken according to an arbitrary non-increasing order of the release dates of the corresponding jobs. Thus, in cases 1 and 3, jobs i and j are scheduled according to the newly defined order of their current jump points. In case 2, interchanging i and j leads to another optimal schedule, in which these jobs are scheduled according to the newly defined

Fig. 2 Case 1: d̄_i ≤ d̄_j. Jobs i and j are scheduled according to a non-decreasing order of their current jump points.

Fig. 3 Case 2: r_i > r_j and d̄_i > d̄_j. Jobs i and j can be swapped and scheduled according to a non-decreasing order of their current jump points, without increasing the cost of the schedule.

Fig. 4 Case 3: r_i ≤ r_j and d̄_i > d̄_j. Swapping i and j may increase the cost of the schedule, so we cannot derive another optimal schedule respecting the usual non-decreasing order of the jump points. A virtual jump point e_i(d̄_j) is added that becomes the current jump point of i and precedes d̄_j in a given strict order ≺.

order of their current jump points. We use the fact that there exists at least one optimal solution in which the processed job occurrences are scheduled according to this order, to write an ILP model that determines the optimal selection of job occurrences. Let us formally describe this dominance rule, which relies on the following strict order over the whole set of job occurrences (associated with actual or virtual jump points).

Definition 1 Let i ∈ J, j ∈ J, k_i ∈ K_i, and k_j ∈ K_j. We define the (strict partial) order ≺ by d^{k_i} ≺ d^{k_j} if one of these two relations holds:

d^{k_i} < d^{k_j}   (9)
(d^{k_i} = d^{k_j}) ∧ (r_i < r_j)   (10)

Let us define, for each job i, an ordered set L_i of virtual jump points. A virtual jump point is created for job i for each couple (d^{k_i}, d^{k_j}) of actual jump points associated with job i and another job j, and such that d^{k_i} ≺ d^{k_j} does not hold:

L_i = { d^{k_j} | ∃k_i ∈ K_i, ∃j ∈ J, ∃k_j ∈ K_j, (r_i < r_j) ∧ (d^{k_j} < d^{k_i}) },   i ∈ J
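Definition 1 and the sets L_i translate directly into code. The following sketch is illustrative (function names are ours): the comparator implements relations (9)-(10), and the virtual jump points of job i collect the jump points of later-released jobs that fall strictly before some actual jump point of i.

```python
# Hedged sketch of the strict order of Definition 1 and the sets L_i.
def precedes(d_ki, r_i, d_kj, r_j):
    """d^{k_i} < d^{k_j} (9), or equal jump points with r_i < r_j (10)."""
    return d_ki < d_kj or (d_ki == d_kj and r_i < r_j)

def virtual_jump_points(K, r):
    """K[i]: actual jump points of job i; r[i]: its release date."""
    n = len(K)
    L = [set() for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i and r[i] < r[j]:
                for d_kj in K[j]:
                    # d^{k_j} < d^{k_i} for some actual jump point of i:
                    # d^{k_i} "precedes" d^{k_j} does not hold, so d^{k_j}
                    # becomes a virtual jump point of i.
                    if any(d_kj < d_ki for d_ki in K[i]):
                        L[i].add(d_kj)
    return [sorted(s) for s in L]
```

For example, with K = ((10), (5)) and r = (0, 3), job 0 gains the virtual jump point 5 and job 1 gains none.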


The cost of processing a job before a virtual jump point is the same as the cost of processing the job before its next actual jump point:

γ^{l_i} = min{ γ^k | k ∈ K_i, d^k ≥ d^{l_i} },   i ∈ J, l_i ∈ L_i

Moreover, let us denote by G_i = K_i ∪ L_i the whole set of jump points of job i ∈ J, and let λ = Σ_{i∈J} |G_i|. Hence, by construction, in any schedule, if a job i completes at C_i before a job j completing at C_j ≤ d^{k_j} on the same machine and such that r_i ≤ r_j, then there always exists one jump point e ∈ G_i such that C_i ≤ e ≤ d^{k_j}.

The following propositions lead to a property of optimal schedules of a subset I ⊆ J of jobs on a single machine µ ∈ M. For the sake of readability, p_i = p_i^µ denotes the processing time of job i ∈ I on the considered machine. First, let us recall that the set of active schedules (where jobs are scheduled as soon as possible) is dominant for our problem, since all job cost functions are regular (shifting a job to its left cannot increase the cost of the schedule, so that unforced idle times can be avoided). Thus, any optimal schedule of a subset of jobs can be represented by an ordered sequence of these jobs.

Proposition 3 Let S be an optimal (and feasible) schedule. Let us consider a pair (i, j) of jobs processed on the same machine. Let d^{k_i} (resp. d^{k_j}) denote the jump point associated with job i (resp. job j) immediately following its completion time. If d^{k_i} ≺ d^{k_j} does not hold, then (d^{k_i} ≥ d^{k_j}) and (r_i ≥ r_j).

Proof The logical negation of d^{k_i} ≺ d^{k_j} is:

(d^{k_i} ≥ d^{k_j}) ∧ ((d^{k_i} ≠ d^{k_j}) ∨ (r_i ≥ r_j))
⇔ (d^{k_i} > d^{k_j}) ∨ ((d^{k_i} ≥ d^{k_j}) ∧ (r_i ≥ r_j))
⇔ ((d^{k_i} > d^{k_j}) ∧ (r_i ≥ r_j)) ∨ ((d^{k_i} > d^{k_j}) ∧ (r_i < r_j)) ∨ ((d^{k_i} ≥ d^{k_j}) ∧ (r_i ≥ r_j))
⇔ ((d^{k_i} > d^{k_j}) ∧ (r_i < r_j)) ∨ ((d^{k_i} ≥ d^{k_j}) ∧ (r_i ≥ r_j))

Let us assume that r_i < r_j. Since job i completes before j, there exists at least one jump point e ∈ G_i such that C_i ≤ e ≤ d^{k_j}. Thus, (d^{k_i} > d^{k_j}) ∧ (r_i < r_j) is inconsistent. Hence, the claimed result holds. □

Proposition 4 Let S be an optimal (and feasible) schedule. Let us consider a pair (i, j) of jobs processed consecutively on the same machine (i precedes j). Let d^{k_i} (resp. d^{k_j}) denote the jump point associated with job i (resp. job j) immediately following its completion time. Then, at least one of these two relations holds:
(i) d^{k_i} ≺ d^{k_j};
(ii) i and j can be interchanged so that the cost of the new schedule is not larger than the cost of S (and thus optimal).

Proof Let us assume that ¬(i) holds, and let us prove that (ii) necessarily holds in this case. Let t_0 denote the completion time of the job immediately preceding job i in S (or let t_0 = 0 if there is no such job), let C_i (resp. C_j) denote the completion time of job i (resp. job j) in the current schedule, and let C_i' (resp. C_j') be the completion time of job i (resp. job j) after interchanging i and j.

First, note that interchanging i and j leads to an increase of the cost of the schedule only if, in the new schedule, C_j' > d^{k_j} (j becomes late), C_i' > d^{k_i} (i becomes late) or


C_i' > C_j (this inequality is a necessary condition for an increase of the cost associated with the sequence of jobs following i and j). Since ¬(i) holds, Proposition 3 implies (d^{k_i} ≥ d^{k_j}) and (r_i ≥ r_j). We can consider the two cases:
• If t_0 ≥ r_i, then we have C_j' < C_j ≤ d^{k_j} and C_i' = C_j ≤ d^{k_j} ≤ d^{k_i}. Thus, (ii) holds.
• If t_0 < r_i, then C_j = r_i + p_i + p_j and C_j' = max(t_0, r_j) + p_j. Since r_i + p_i > max(t_0, r_j), we have C_j' < C_j ≤ d^{k_j}. Besides, C_i' = max(C_j', r_i) + p_i. r_i + p_i < C_j is trivial. Moreover, max(t_0, r_j) ≤ r_i, so max(t_0, r_j) + p_j + p_i ≤ r_i + p_i + p_j and consequently C_i' ≤ C_j ≤ d^{k_j} ≤ d^{k_i}. Thus, (ii) holds. □

Proposition 4 implies that the set of sequences of jobs following the ≺-order is dominant for our problem.

3.2 An Integer Linear Programming (ILP) formulation

Proposition 4 shows that there exists at least one optimal solution that can be described by a selection of one jump point and one machine for each job. The corresponding schedule can be obtained by sequencing on each machine the jobs according to the order ≺ of the selected jump points. For each job i ∈ J, let us associate one occurrence k of i with each jump point d_k ∈ G_i, corresponding to the processing of i before d_k and after its preceding jump point. In the following, we assume that the set of all occurrences is ordered according to the order ≺ of their associated jump points. Ties can be broken arbitrarily according to the indices of the jobs. Let σ(k), k ∈ {1, ..., λ}, denote the job corresponding to the kth occurrence, and Ḡ_i = {k ∈ {1, ..., λ} | σ(k) = i} the set of occurrences linked with job i ∈ J. This allows us to write the following model, where decision variable u_k^µ is equal to 1 if occurrence k ∈ {1, ..., λ} is selected for machine µ ∈ M, and 0 otherwise, and t_k^µ, k ∈ {1, ..., λ} and µ ∈ M, represents the job completion time associated with the selected jump point and machine. These variables allow the expression of the conjunctive constraints as well as the respect of the deadlines of the job occurrences.

(QTS^≺): min Σ_{µ∈M} Σ_{k∈{1,...,λ}} γ_k · u_k^µ   (11)
s.t.  Σ_{µ∈M} Σ_{k∈Ḡ_i} u_k^µ = 1,   i ∈ J   (12)
      t_k^µ ≥ t_{k−1}^µ + p_σ(k)^µ · u_k^µ,   µ ∈ M, k ∈ {1, ..., λ}   (13)
      t_k^µ ≥ (r_σ(k) + p_σ(k)^µ) · u_k^µ,   µ ∈ M, k ∈ {1, ..., λ}   (14)
      t_k^µ ≤ d_k,   µ ∈ M, k ∈ {1, ..., λ}   (15)
      u_k^µ ∈ {0, 1},   µ ∈ M, k ∈ {1, ..., λ}   (16)
      t_k^µ ≥ 0,   µ ∈ M, k ∈ {1, ..., λ}   (17)
Constraints (12) ensure that exactly one occurrence is selected for each job, and Constraints (13) that the conjunctive constraints are respected. Constraints (14) guarantee that the release dates are satisfied, and Constraints (15) that the deadlines are met. This model can be seen as a generalization of the master sequence model, introduced by Dauzère-Pérès and Sevaux [11] for minimizing the weighted number of tardy jobs on a single machine.
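As an illustration, the ordered set of occurrences, the mapping σ and the sets Ḡ_i can be built with a single sort. The following is a minimal Python sketch under our own naming conventions (the paper does not provide an implementation):

```python
def build_occurrences(jump_points):
    """jump_points: dict mapping job i to its sorted list of jump points G_i.
    Returns (sigma, deadlines, G_bar): sigma[k] is the job sigma(k) of the
    k-th occurrence (0-based), deadlines[k] its jump point d_k, and
    G_bar[i] the list of occurrence indices of job i."""
    # sort by jump point, ties broken by job index (the <-order)
    occs = sorted((d, i) for i, pts in jump_points.items() for d in pts)
    sigma = [i for _, i in occs]
    deadlines = [d for d, _ in occs]
    G_bar = {i: [] for i in jump_points}
    for k, i in enumerate(sigma):
        G_bar[i].append(k)
    return sigma, deadlines, G_bar
```

For instance, with jobs 1 and 2 having jump points {3, 7} and {3, 5}, the occurrences come out in the order (3, 1), (3, 2), (5, 2), (7, 1).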


If we denote by η the overall number of actual jump points in this model, the number of variables is clearly bounded by η(η − 1)m. Indeed, the total number of virtual and actual jump points is at most η(η − 1)/2. Thus, it is far more compact than the time-indexed formulation, which requires a pseudo-polynomial number of binary variables, depending on the maximum possible schedule length. Besides, linear ordering formulations, which usually require O(n²m) variables, cannot easily be applied to our problem since they need a link between the position of each job and its cost. In addition to its relatively small size, the advantage of our model is that it restricts the solution space by considering only few (in practice) possible positions for each job. However, its main drawback lies in the constants used in Constraints (13) and (14), which degrade the quality of the linear relaxation of the model.

Dominances. The following proposition is the generalization of a dominance rule described in [11]. We do not provide a proof since the two rules are closely related.

Proposition 5 Let i ∈ J and j ∈ J − {i}, d_i^{l_i} ∈ K_i, d_j^{l_j} ∈ K_j, and consider d_j^{l_j+1} ∈ K_j, the actual jump point of j immediately following d_j^{l_j}. Assume that all conditions

r_i < r_j,  r_i + p_i^µ ≥ r_j + p_j^µ,  r_i + p_i^µ + p_j^µ > d_j^{l_j},  r_j + p_j^µ + p_i^µ > d_i^{l_i},  d_i^{l_i} − p_i^µ ≤ d_j^{l_j} − p_j^µ

and γ_j^{l_j+1} ≥ γ_j^{l_j} + max_{l∈{1,...,s_i}} γ_i^l hold. If i is processed on machine µ and completes before d_i^{l_i}, and j completes after d_j^{l_j}, then job i can be replaced by job j on µ and processed somewhere else such that the cost of the schedule does not increase.

The conditions of the proposition merely express the fact that jobs i and j cannot both be processed on machine µ while meeting their jump points. Moreover, processing j before its jump point is less restrictive than processing i before its jump point. Finally, not processing j at this place is more costly than not processing i. In a single machine environment, the proposition allows the deletion of all jump points of K_i prior to d_i^{l_i}. In the parallel machine case, the proposition can be interpreted as follows: In at least one optimal schedule, either i is processed on µ and completes before d_i^{l_i} and j is processed on another machine at a cost smaller than γ_j^{l_j}, or i is processed on another machine or completes after d_i^{l_i}. If all conditions stated in Proposition 5 hold, then the following inequality is valid and can be added to the model:

Σ_{k_i ∈ Ḡ_i | d_{k_i} ≤ d_i^{l_i}} u_{k_i}^µ  ≤  Σ_{µ′∈M−{µ}} Σ_{k_j ∈ Ḡ_j | γ_{k_j} ≤ γ_j^{l_j}} u_{k_j}^{µ′}

The size of the model is directly linked to the overall number of jump points. Thus, it seems profitable to avoid virtual points when possible. Immediate rules prevent adding useless points: If r_i < r_j, d_{k_j} < d_{k_i} and r_i + p_i^µ + p_j^µ > d_{k_j}, then the virtual jump point corresponding to the execution of job i before j such that j completes before d_{k_j} on machine µ is useless, since this configuration is infeasible. Moreover, if k_i^− denotes the jump point of job i immediately preceding k_i and d_{k_i^−} + p_j^µ > d_{k_j}, then, in any feasible schedule where i is processed before job j and job j completes before d_{k_j}, there already exists a jump point between the completion time of i and d_{k_j}, and the corresponding virtual jump point is useless. Besides, it is in some cases always possible to process i between d_{k_j} + 1 and d_{k_i} without increasing the cost of the schedule. Considering the case where i is before j and j completes before d_{k_j} is then not mandatory. Such configurations can be detected by solving an auxiliary 1|r_j|L_max problem with appropriate release and due dates [8]. However, this last procedure successfully reduces the size of the model only when the range of jump points is very scattered.

3.3 Experimental results

In order to build our test bed, we use an additional parameter Z_1 in the instance generator described in Section 2.2. One release date per job is drawn from a uniform distribution {0, . . . , nZ_1/m}, and jump points are drawn from {r_j + p_j, . . . , nZ_2/m}. The values tested for Z_1 are in {5, 10, 20}. The benchmark conditions are the same as for the case without release dates. Under these conditions, the solver applied to the time-indexed formulation (TI) solves only a few 30-job instances. Tables 5 and 6 report the results obtained by applying the solver XPRESS-MP with a standard strategy on the model (QTS≺) and a single machine, while Tables 7 and 8 give the results for unrelated parallel machines. We can see that this straightforward approach for solving the problem is only suitable for instances with up to 30 jobs or a small number of jump points (|K| = 2, |K| = 3).

|K| \ n      10       20       30       50      Total
2            45       45       45       43    178/180
3            45       45       45       42    177/180
4            45       45       45       31    166/180
9            45       45       32        8    130/180
Total    180/180  180/180  167/180  124/180  831/900

Table 5 Number of single machine instances with release dates solved optimally within 1000 sec.

|K| \ n    10     20      30       50
2          0.0    0.2     0.8     8.3*
3          0.1    0.5     4.2    38.5*
4          0.1    1.1    13.7   190.6*
9          0.5   60.8   142.6*  503.3*

Table 6 Average computing time for single machine instances with release dates solved optimally (sec.). * Not all instances were solved within the 1000 sec. time limit.

Well-known special cases: 1|r_j| Σ w_j U_j and R|r_j| Σ w_j U_j. Applied to the problem of minimizing the weighted number of tardy jobs (|K| = 1), the model solves 76 out of the 80 100-job instances tested on a single machine within a time limit of 1000 seconds, and 62 out of the 80 300-job instances within a time limit of 3600 seconds. On unrelated parallel machines, 467 out of 480 40-job instances (resp. 415 out of 480 50-job instances) are solved optimally within a time limit of 1000 seconds. Additional results on this specific problem can be found in Section 4.3. Note that the one-machine instances were generated as described in [11]. The unrelated parallel machine instances are generated in a similar way, except that one processing time per job and machine is drawn from a uniform distribution {1, . . . , 100} (see Section 4.3 for more details).
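The generation scheme for the unrelated parallel machine instances (fully specified in Section 4.3: release dates uniform on {0, . . . , nK_1/m}, due dates offset by at most nK_2/m) can be sketched as follows in Python. The weight distribution {1, . . . , 10} is our assumption, since the weight law is not restated here:

```python
import random

def generate_instance(n, m, K1, K2, seed=0):
    """Random R|r_j|sum w_j U_j instance. Processing times are uniform on
    {1,...,100}; weights uniform on {1,...,10} (an assumption)."""
    rng = random.Random(seed)
    # one processing time per job and machine
    p = [[rng.randint(1, 100) for _ in range(n)] for _ in range(m)]
    r, d, w = [], [], []
    for j in range(n):
        rj = rng.randint(0, n * K1 // m)                 # release date
        pmax = max(p[mu][j] for mu in range(m))          # max over machines
        r.append(rj)
        d.append(rng.randint(rj + pmax, rj + pmax + n * K2 // m))
        w.append(rng.randint(1, 10))
    return p, r, d, w
```

By construction, every job can complete within its execution window on at least one machine, whatever the seed.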


|K| \ m          n=10     n=20     n=30     n=40      Total
|K|=2          135/135  135/135  133/135  121/135    524/540
  m=2              45       45       44       42     176/180
  m=3              45       45       44       35     169/180
  m=6              45       45       45       44     179/180
|K|=3          135/135  135/135  133/135  102/135    505/540
  m=2              45       45       44       34     168/180
  m=3              45       45       44       27     161/180
  m=6              45       45       45       41     176/180
|K|=4          135/135  135/135  119/135   82/135    471/540
  m=2              45       45       36       18     144/180
  m=3              45       45       38       23     151/180
  m=6              45       45       45       41     176/180
|K|=9          135/135  132/135   80/135   43/135    390/540
  m=2              45       44       17        4     110/180
  m=3              45       43       20        5     113/180
  m=6              45       45       43       34     167/180
Total          540/540  537/540  465/540  348/540  1890/2160

Table 7 Number of unrelated parallel machine instances with release dates solved optimally within 1000 sec.

|K| \ m     n=10    n=20     n=30     n=40
|K|=2        0.0     0.7    18.7*    35.7*
  m=2        0.0     0.5     8.6*    37.5*
  m=3        0.0     1.6    47.5*    52.5*
  m=6        0.0     0.1     0.3     20.7*
|K|=3        0.1     1.9    52.4*    74.2*
  m=2        0.1     3.1    39.7*    95.5*
  m=3        0.0     2.5   117.4*   150.0*
  m=6        0.0     0.2     1.2      6.6*
|K|=4        0.1     4.0    53.9*   105.3*
  m=2        0.2     7.1    85.0*   187.5*
  m=3        0.2     4.8    86.1*   200.4*
  m=6        0.0     0.3     1.8*    15.9*
|K|=9        0.5    50.8*  159.1*    95.3*
  m=2        0.8   102.7*  296.0*   343.4*
  m=3        0.7    50.0*  340.2*   341.6*
  m=6        0.1     0.6    20.8*    29.9*

Table 8 Average computing time for unrelated parallel machine instances with release dates solved optimally (sec.). * Not all instances were solved within the 1000 sec. time limit.

4 Constraint Generation Approach

In this section, we propose another integer linear programming model of the problem, with a very large number of constraints. Computational studies are conducted to compare a constraint generation approach with the direct resolution of (QTS≺) by a standard solver.


4.1 Another integer linear programming model

This new approach is based on the observation that the deteriorated results obtained with the model (QTS≺) with release dates, compared to the ones obtained with the model (QTS) without release dates, may come not only from the larger number of jump points, but also from the non-binary variables t_k, k ∈ K ∪ L. These variables may induce poor lower bounds by linear relaxation, because of the structure of the conjunctive constraints (13) and release date constraints (14). Our idea is thus to reformulate (QTS≺) as a pure 0−1 linear program with non-negative coefficients only. However, the resulting model has a large number of constraints, which can be added iteratively through a constraint generation process. This model relies on the following propositions.

Proposition 6 Let δ = (δ(1), . . . , δ(|δ|)) be a sequence of occurrences of jobs scheduled as soon as possible on a given machine µ ∈ M. Then, for each l ∈ {1, . . . , |δ|}, the completion time of occurrence δ(l) is:

C(δ(l)) = max_{k∈{1,...,l}} { r_{σ(δ(k))} + Σ_{r=k}^{l} p_{σ(δ(r))}^µ }

Proof The proposition can be proved by induction, by observing that an occurrence δ(l) starts:
– either at its release date r_{σ(δ(l))},
– or at the completion time of the preceding occurrence, equal to that occurrence's starting time plus its duration p_{σ(δ(l−1))}^µ. □

Then, the next proposition is straightforward.

Proposition 7 Let δ = (δ(1), . . . , δ(|δ|)) be a sequence of occurrences of jobs scheduled as soon as possible on a given machine µ ∈ M. δ is feasible with respect to the deadlines if and only if:

r_{σ(δ(k))} + Σ_{r=k}^{l} p_{σ(δ(r))}^µ ≤ d_{δ(l)},    k ∈ {1, . . . , |δ|}, l ∈ {k, . . . , |δ|}
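The closed form of Proposition 6 can be checked numerically against a direct as-soon-as-possible simulation. A hedged Python sketch (our own naming; occurrences are given as (release date, processing time) pairs in sequence order):

```python
def completion_times_closed_form(seq):
    """Completion times via Proposition 6:
    C(delta(l)) = max over k <= l of r[delta(k)] + sum of p[delta(k..l)]."""
    return [max(seq[k][0] + sum(seq[t][1] for t in range(k, l + 1))
                for k in range(l + 1))
            for l in range(len(seq))]

def completion_times_simulated(seq):
    """Reference: each occurrence starts at max(its release, previous end)."""
    t, out = 0, []
    for release, proc in seq:
        t = max(t, release) + proc
        out.append(t)
    return out
```

For example, the sequence [(0, 3), (5, 2), (4, 4)] yields completion times [3, 7, 11] with both methods: the second occurrence waits for its release date 5, and this wait propagates to the third.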

The new model for the problem of scheduling jobs with regular step cost functions and release dates results directly from Proposition 7. It consists in replacing the conjunctive constraints (13) and release date constraints (14) of (QTS≺) by the characterization of the feasible solutions given in Proposition 7.

(QTS≺_MMKP):   min  Σ_{µ∈M} Σ_{k∈{1,...,λ}} γ_k · u_k^µ                             (18)
s.t.  Σ_{µ∈M} Σ_{k∈Ḡ_i} u_k^µ = 1               i ∈ J                               (19)
      (r_{σ(k)} + p_{σ(k)}^µ) · u_k^µ + Σ_{r=k+1}^{l} p_{σ(r)}^µ · u_r^µ ≤ d_l
                                                µ ∈ M, k ∈ {1, . . . , λ}, l ∈ {k, . . . , λ}   (20)
      u_k^µ ∈ {0, 1}                            µ ∈ M, k ∈ {1, . . . , λ}           (21)

Through this formulation, the problem can be seen as a Multiple choice Multidimensional Knapsack Problem (MMKP) where:


• Each group of items represents one job,
• Each item corresponds to one occurrence of a job,
• Each resource corresponds to the available processing time between one release date and one jump point.

There exists a straightforward linear dominance among Constraints (20): For a given machine, a given release date r_{σ(k)} and a given right-hand side d, the constraint involving the largest number of occurrences dominates the others, since the coefficients of the variables are the same. This dominance is very effective, because a large number of job occurrences may share the same deadline, due to the construction of the set of virtual jump points. Formally, if we denote by G* = {max{l | l ∈ {1, . . . , λ} ∧ d_l = d} | d ∈ K_i, i ∈ J} the set of occurrences leading to dominating constraints, then Constraints (20) can be replaced by:

(r_{σ(k)} + p_{σ(k)}^µ) · u_k^µ + Σ_{r=k+1}^{l} p_{σ(r)}^µ · u_r^µ ≤ d_l,    µ ∈ M, k ∈ {1, . . . , λ}, l ∈ G*, l ≥ k    (22)

It follows that the number of resource constraints in (QTS≺_MMKP) reaches O(mnη). Unfortunately, although MMKP is a well-known problem, not much work has been devoted to its exact resolution.
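The set G* can be computed in a single pass over the ordered deadlines. An illustrative Python sketch, with our own (hypothetical) naming:

```python
def dominant_indices(deadlines):
    """G*: for each distinct deadline value, keep only the last occurrence
    (0-based index) carrying it; its resource constraint dominates those of
    the earlier occurrences sharing the same deadline."""
    g_star = {}
    for l, d in enumerate(deadlines):
        g_star[d] = l            # later occurrences overwrite earlier ones
    return sorted(g_star.values())
```

For deadlines [3, 3, 5, 7, 7], only occurrences 1, 2 and 4 generate constraints.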

4.2 Constraint generation scheme

To cope with the large number of constraints, the approach iteratively solves subproblems with a reduced but increasing set of resource constraints, with the hope that a small set of constraints will be sufficient to obtain a feasible optimal solution of the original problem. More precisely, each master subproblem consists in finding an optimal selection of occurrences (one per job) satisfying a given set of resource constraints (which ensure the satisfaction of the release dates and deadlines). The global feasibility of this selection is checked in a subproblem that aims at identifying a set of violated constraints. This subproblem is solved by sequencing the given occurrences according to the ≺-order, such that release dates are satisfied. If the resulting schedule meets all deadlines, then it is trivially feasible and optimal, since it corresponds to the best possible selection of occurrences (note that the approach becomes a heuristic if the master problem is solved sub-optimally). Otherwise, the first occurrence l in the sequence violating a deadline is identified. The resource constraint associated with the block of tasks composed of l and the occurrences executed before it with no inserted idle time is violated. The procedure keeps seeking such blocks after l, and on each machine. The dominating resource constraints corresponding to the identified blocks (characterized by their machine, first and last occurrences) are then added to the master problem. Algorithm 1 summarizes the scheme, and Algorithm 2 details the feasibility checking procedure. In our implementation, each master problem is solved using a standard ILP solver with a standard strategy since, to our knowledge, no dedicated and significantly more efficient exact algorithm exists for solving MMKP. There is a finite number of constraints in (QTS≺_MMKP), so Algorithm 1 completes in a finite number of steps.
Algorithm 2 runs in O(n), provided that max{l | d_l = d_k}, k ∈ {1, . . . , λ}, is computed beforehand (this can be done by a O(λ) procedure scanning the jump points from the last one to the first one).
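The table of values max{l | d_l = d_k} can indeed be filled by one backward scan over the non-decreasing deadlines. A short sketch (our naming):

```python
def last_with_same_deadline(deadlines):
    """last[k] = index of the last occurrence sharing deadline d_k,
    computed by a single O(lambda) backward scan (deadlines are sorted)."""
    n = len(deadlines)
    last = [n - 1] * n
    for k in range(n - 2, -1, -1):
        # if the next occurrence has the same deadline, inherit its answer
        last[k] = last[k + 1] if deadlines[k] == deadlines[k + 1] else k
    return last
```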


Algorithm 1 Main constraint generation procedure
  {Initialize the set of constraints}
  ResCtrs ← initResCtrs()
  repeat
    {Solve the master problem}
    δ ← solveMasterProblem(ResCtrs)
    {Check feasibility of the current solution}
    violCtr ← checkFeas(δ)
    {Add resource constraints if necessary}
    if violCtr ≠ ∅ then
      ResCtrs ← ResCtrs ∪ violCtr
    end if
  until violCtr = ∅
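To make the scheme concrete, here is a toy single-machine Σ w_j U_j instantiation of Algorithm 1 in Python, with a brute-force master problem and the feasibility check of Algorithm 2 inlined. This is an illustration only: all names are ours, each job has a single occurrence ("on time" or "late"), and a real implementation would solve the master problem with an ILP solver.

```python
from itertools import product

def cg_weighted_tardy(r, p, d, w):
    """Constraint generation for 1|r_j|sum w_j U_j.
    Returns the minimum weighted number of late jobs."""
    n = len(r)
    order = sorted(range(n), key=lambda j: (d[j], j))   # the <-order
    ctrs = []                                           # blocks (start, end)
    while True:
        # Master: cheapest selection satisfying the current resource
        # constraints, in the spirit of constraints (20)
        best_cost, best_sel = None, None
        for sel in product([0, 1], repeat=n):           # sel[k]=1: on time
            ok = True
            for s, e in ctrs:
                js = order[s]
                lhs = (r[js] + p[js]) * sel[s] + sum(
                    p[order[k]] * sel[k] for k in range(s + 1, e + 1))
                if lhs > d[order[e]]:
                    ok = False
                    break
            if ok:
                cost = sum(w[order[k]] for k in range(n) if not sel[k])
                if best_cost is None or cost < best_cost:
                    best_cost, best_sel = cost, sel
        # Subproblem: schedule selected occurrences as soon as possible
        # and collect the violated blocks
        viol, t, start = [], 0, 0
        for k in range(n):
            if not best_sel[k]:
                continue
            j = order[k]
            if t <= r[j]:                # idle time: a new block starts
                t, start = r[j], k
            t += p[j]
            if t > d[j]:                 # deadline missed: record block
                viol.append((start, k))
        if not viol:
            return best_cost             # feasible, hence optimal
        ctrs.extend(viol)
```

On the instance r = (0, 0, 0), p = (2, 2, 2), d = (2, 2, 2), w = (1, 2, 3), the first master keeps all jobs on time, the subproblem generates two violated blocks, and the second master keeps only the job of weight 3 on time, for a cost of 3.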

Algorithm 2 Feasibility checking procedure (input: δ^µ, µ ∈ {1, . . . , m}, the sequences of occurrences scheduled on each machine)
  {Initialize the set of violated constraints}
  viol ← ∅
  {Scan the schedule on each machine}
  for µ ∈ M do
    t ← 0 ; i ← 1 ; start ← i
    while i ≤ |δ^µ| do
      {Adjust the current time instant, and change the}
      {starting occurrence of the current block if necessary}
      if t ≤ r_{σ(δ^µ(i))} then
        t ← r_{σ(δ^µ(i))} ; start ← i
      end if
      {Add the processing time of the current occurrence}
      t ← t + p^µ_{σ(δ^µ(i))}
      {If the current occurrence does not meet its deadline}
      if t > d_{δ^µ(i)} then
        {Add the corresponding dominating constraint}
        end ← max{l | d_l = d_{δ^µ(i)}}
        viol ← viol ∪ {(start, end)}
      end if
      i ← i + 1
    end while
  end for
  return viol
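A direct Python transcription of Algorithm 2, for the general multi-machine case. The data layout is ours (an assumption), and `last` is the precomputed table of last occurrences sharing each deadline:

```python
def check_feasibility(schedules, release, proc, deadline, last):
    """schedules[mu]: occurrence indices sequenced on machine mu;
    release[k], deadline[k]: release date and jump point of occurrence k;
    proc[mu][k]: processing time of occurrence k on machine mu;
    last[k]: index of the last occurrence sharing deadline[k].
    Returns the violated dominating blocks (mu, first occurrence, last)."""
    viol = set()
    for mu, seq in enumerate(schedules):
        t, start = 0, 0
        for pos, k in enumerate(seq):
            if t <= release[k]:          # idle time: a new block starts
                t, start = release[k], pos
            t += proc[mu][k]             # completion time of occurrence k
            if t > deadline[k]:          # deadline missed: record the block
                viol.add((mu, seq[start], last[k]))
    return viol
```

Note that the scan continues after a violation, so several blocks per machine can be reported in a single call, as in Algorithm 2.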

4.2.1 Additional constraints

In order to reduce the number of iterations of the main constraint generation procedure, it seems necessary to guide the resolution of the first master problem towards a globally feasible solution. Preliminary studies (notably on the linear relaxations of (QTS≺_MMKP) and (QTS≺)) did not allow us to find good sets of initial resource constraints. Instead, we used a more generic approach, successfully applied in scheduling, known as energetic reasoning [19], [2]. In order to apply the required energy consumption concept to our problem, we define

a^µ_{k,i,l} = max(0, p^µ_{σ(k)} − max(0, d_k − d_l) − max(0, r_i − r^µ_k)),    µ ∈ M, i ∈ J, k ∈ {1, . . . , λ}, l ∈ {k, . . . , λ}

which is equal to the minimum number of time units of occurrence k that must be processed in the interval [r_i, d_l] if k is selected, where r^µ_k is the earliest possible starting time of k on machine µ. Assuming that k is associated with a jump point d_i^s of a job i ∈ J, one can verify that r^µ_k = 0 if s = 1, or r^µ_k = d_i^{s−1} − p_i^µ + 1 if s > 1. The set of additional constraints added at the beginning of the process is then

Σ_{k∈{1,...,η}} a^µ_{k,i,l} · u^µ_k ≤ d_l − r_i,    µ ∈ M, i ∈ J, l ∈ {1, . . . , η}
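The coefficient a^µ_{k,i,l} is a one-liner once the earliest start r^µ_k is known (a hedged sketch, our naming):

```python
def energetic_coefficient(p, d_k, d_l, r_i, r_k):
    """Minimum number of time units of occurrence k (processing time p,
    deadline d_k, earliest start r_k on the machine) that must lie inside
    the interval [r_i, d_l] if k is selected."""
    # at most max(0, d_k - d_l) units can run after d_l,
    # and at most max(0, r_i - r_k) units before r_i
    return max(0, p - max(0, d_k - d_l) - max(0, r_i - r_k))
```

For instance, with p = 5, d_k = 10, d_l = 8, r_i = 3 and r_k = 2, at most 2 units fit after d_l and 1 unit before r_i, so at least 2 units must be processed inside [3, 8].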

4.2.2 Deriving bounds to speed up the resolution of the master problem

Once scheduled, the sequence of occurrences given as an output of the master problem provides a straightforward upper bound, which can be added to the next master programs in order to prune the search tree. Besides, the optimal value of each master problem is a lower bound on the optimum of the original problem. However, using this bound directly as a constraint of the next master program sometimes seems to alter the performance of the standard ILP solver, probably because of a lack of differentiation in the truncated search space (whose linear relaxation value is lower than this bound), which alters the choice of good branching decisions. In order to use this information, we simply stop the solver search procedure each time a feasible solution of the current master program is found whose cost is equal to this bound.

4.3 Numerical results

For the sake of conciseness, we focus on the results obtained by the constraint generation approach applied to the problem R|r_j| Σ w_j U_j. As mentioned in Section 3.3, the instances of our test bed were randomly generated in a similar way as in [11]. More precisely, the generator takes the following parameters as input: the number n of jobs, the number m of machines, the release date factor K_1 and the due date factor K_2. For each job and each machine, a processing time is drawn from a uniform distribution {1, . . . , 100}. To each job i is assigned a release date r_i drawn from a uniform distribution {0, . . . , nK_1/m}, and a due date d_i drawn from a uniform distribution {r_i + max_{µ∈M} p_i^µ, . . . , r_i + max_{µ∈M} p_i^µ + nK_2/m}. Parameter K_1 controls the dispersion of the release dates, while parameter K_2 controls the size of the job execution windows. The parameters used to build our test bed are the combinations of n ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100}, m ∈ {2, 3, 6}, K_1 ∈ {1, 5, 10, 20} and K_2 ∈ {1, 5, 10, 20}. Ten instances are generated for each combination of these parameters, leading to a total of 4800 instances.

We compare the performance of the constraint generation approach to the direct application of a standard solver on the model (QTS≺). The mathematical programming solver IBM ILOG CPLEX 12 is used with standard settings, both for solving the master programs of the constraint generation procedure and for the direct approach. A time limit of 1000 seconds per instance is fixed, and tests are performed on a Microsoft Windows XP laptop equipped with a 2.5GHz dual-core processor and 3.5GB RAM. Table 9 reports the percentage of instances solved optimally by each method, and Table 10 reports the average computing time. From these tables, we see that, globally, our constraint generation method can be favorably compared to the direct approach in


Method        m       n=10   n=20   n=30   n=40   n=50   n=60   n=70   n=80   n=90   n=100  Average
Const. gen.   2      100.0% 100.0% 100.0% 100.0%  98.1%  95.0%  86.3%  80.0%  61.9%  58.1%  87.9%
method        3      100.0% 100.0% 100.0% 100.0%  97.5%  91.9%  76.3%  64.4%  56.3%  53.1%  83.9%
              6      100.0% 100.0% 100.0% 100.0% 100.0%  97.5%  91.3%  81.9%  67.5%  65.0%  90.3%
              Avg.   100.0% 100.0% 100.0% 100.0%  98.5%  94.8%  84.6%  75.4%  61.9%  58.8%  87.4%
(QTS≺)        2      100.0% 100.0% 100.0% 100.0%  97.5%  91.3%  87.5%  81.3%  80.0%  77.5%  91.5%
              3      100.0% 100.0%  99.4%  98.8%  85.6%  76.3%  68.8%  60.0%  51.9%  43.1%  78.4%
              6      100.0% 100.0% 100.0% 100.0%  96.9%  84.4%  80.0%  70.6%  66.3%  63.1%  86.1%
              Avg.   100.0% 100.0%  99.8%  99.6%  93.3%  84.0%  78.8%  70.6%  66.0%  61.3%  85.3%

Table 9 Percentage of instances solved optimally for each method, by number of jobs and machines.

Method        m      n=10  n=20  n=30  n=40  n=50  n=60   n=70   n=80   n=90  n=100  Average
Const. gen.   2       0.0   0.2   0.7   4.5  13.3  58.5  109.2  139.4  139.7  119.8   46.8
method        3       0.1   0.2   0.6   6.0  26.2  79.2   56.0   89.9   89.6   99.6   32.6
              6       0.1   0.1   0.2   0.7   2.5  17.0   33.1   39.1   56.6   52.3   16.1
              Avg.    0.1   0.1   0.5   3.7  13.5  50.5   66.7   90.1   94.1   88.0   31.7
(QTS≺)        2       0.0   0.1   0.4   3.2  11.8  24.2   50.5   46.0   30.2   40.7   17.4
              3       0.1   0.1   1.0  19.6  40.7  71.1   46.2   77.2   55.2   37.7   28.5
              6       0.1   0.2   0.3   4.8  19.5  28.6   47.1   28.3   36.0   23.5   16.6
              Avg.    0.1   0.1   0.6   9.1  23.3  39.8   48.1   48.1   39.1   33.1   20.5

Table 10 Average computing time (in seconds) for each method. Only instances solved by both methods are considered.

terms of number of instances solved to optimality when n ∈ {30, 40, 50, 60, 70, 80} or with 3 and 6 machines. However, the direct approach provides faster optimal solutions when it runs successfully. Tables 11 and 12 show the sensitivity of each method relatively to the release date and due date factors. It appears that the constraint generation approach performs well when release dates are grouped (K1 = 1), while being competitive for other configurations. It can be explained by the fact that, when K1 = 1, the part of the horizon where there are release dates is relatively small and is covered by the first scheduled jobs. It follows that only constraints involving the release dates of these jobs are relevant. This idea is confirmed by the results in Table 13, reporting the average number of constraints added by the constraint generation method. This table, together with Table 14, shows that the number of cuts and iterations required by the procedure is very small compared to the overall number of constraints. We can conclude, from these numerical experiments, that our constraint generation method, although it does not require heavy implementation efforts, can favorably be compared to the direct use of a standard solver on the (QT S ≺ ) model when due dates are grouped and when dealing with more than 2 machines. Moreover, it is still



Method        K1      K2=1    K2=5   K2=10  K2=20  Average
Const. gen.    1     100.0%  96.3%  93.3%  99.0%  97.2%
method         5      93.3%  78.3%  77.3%  87.0%  84.0%
              10      92.3%  75.3%  76.7%  80.3%  81.2%
              20     100.0%  82.0%  76.7%  90.3%  87.3%
              Avg.    96.4%  83.0%  81.0%  89.2%  87.4%
(QTS≺)         1      98.7%  80.3%  74.3%  77.3%  82.7%
               5      93.0%  78.3%  75.3%  76.7%  80.8%
              10      95.0%  82.3%  81.7%  81.7%  85.2%
              20     100.0%  90.7%  85.3%  94.7%  92.7%
              Avg.    96.7%  82.9%  79.2%  82.6%  85.3%

Table 11 Percentage of instances solved optimally, over all numbers of jobs and machines, by each method, by release date and due date factors.

Method        K1     K2=1  K2=5  K2=10  K2=20  Average
Const. gen.    1      4.6   6.4    9.6    9.2    7.2
method         5     31.2  64.4   55.1   27.6   43.6
              10     31.8  97.0   52.2   30.7   51.5
              20      5.2  49.5   47.3   11.7   26.5
              Avg.   17.7  53.2   40.9   19.4   31.7
(QTS≺)         1     21.4  42.2   49.1   36.7   36.3
               5      9.8  32.0   32.1   24.4   23.6
              10     11.7  19.3   19.8   22.1   17.9
              20      2.5   7.4    9.3    2.9    5.2
              Avg.   11.3  25.2   27.4   20.8   20.5

Table 12 Average computing time (in seconds), over all numbers of jobs and machines, for each method, by release date and due date factors. Only instances solved by both methods are considered.

K1 \ n    10    20    30    40    50    60    70    80    90   100  Average
1        0.9   1.7   2.1   3.9   5.6   9.2  11.2  12.1  13.7  16.4    7.5
5        1.1   3.4   5.8  11.7  18.2  27.6  38.2  46.9  51.8  52.7   21.2
10       1.3   4.1   9.3  20.2  31.7  46.8  57.8  66.3  70.1  91.2   32.3
20       1.2   5.2  13.5  24.9  36.2  49.8  53.2  64.5  68.1  84.6   36.1
Average  1.1   3.6   7.7  15.2  22.7  32.7  38.2  43.9  46.0  54.8   23.7

Table 13 Average number of constraints, over all numbers of machines, added by the constraint generation approach for instances solved optimally, by release date factor and number of jobs.

K1 \ n    10   20   30   40   50   60   70   80   90  100  Average
1        1.1  1.5  1.5  2.1  2.6  3.6  4.0  4.2  4.5  4.9    3.0
5        1.2  2.0  2.7  3.9  4.9  6.0  7.6  8.1  8.5  8.2    4.8
10       1.3  2.2  3.3  5.1  6.4  8.1  8.7  9.6  8.5 11.2    5.7
20       1.3  2.6  4.2  5.2  6.2  7.2  7.1  8.4  8.6 10.1    5.7
Average  1.2  2.1  2.9  4.1  5.0  6.2  6.7  7.2  7.1  8.0    4.7

Table 14 Average number of iterations, over all numbers of machines, of the constraint generation approach for instances solved optimally, by release date factor and number of jobs.


competitive for the other instance configurations of R|r_j| Σ w_j U_j. Preliminary results, which we do not report here for the sake of conciseness, indicate that the behavior of the method on the more general problem is as good when the number of initial constraints (which grows quadratically with the number of jobs) is sufficiently small. Thus, problems with a large number of jobs may induce too large integer linear programs, even at the first iteration of the process. Further studies on the set of initial constraints could be conducted to bypass this difficulty.

5 Conclusion

Mixed Integer Linear Programming models have been proposed for an original scheduling problem encountered in semiconductor manufacturing, where inspection operations are scheduled subject to a fixed production schedule. The results obtained with a standard mathematical programming solver on the general problem, as well as on some of its special cases, support the development of dedicated solving methods based on these models, or on general heuristics or metaheuristics. A first attempt is proposed using a constraint generation approach, with promising results.

Acknowledgements This work was supported by STMicroelectronics and the research project Rousset 2003-2009, financed by the Communauté du Pays d'Aix, Conseil Général des Bouches du Rhône and Conseil Régional Provence Alpes Côte d'Azur.

References

1. Van den Akker, J.M., Hoogeveen, J.A. and Van de Velde, S.L., Parallel machine scheduling by column generation, Operations Research, 47, 862-872 (1999).
2. Baptiste, Ph., Le Pape, C. and Nuijten, W., Satisfiability tests and time-bound adjustments for cumulative scheduling problems, Annals of Operations Research, 92(0), 305-333 (1999).
3. Baptiste, P., Peridy, L. and Pinson, E., A branch and bound to minimize the number of late jobs on a single machine with release time constraints, European Journal of Operational Research, 144, 1-11 (2003).
4. Bornstein, C.T., Alcoforado, L.F. and Maculan, N., A graph-oriented approach for the minimization of the number of late jobs for the parallel machines scheduling problem, European Journal of Operational Research, 165, 649-656 (2005).
5. Bousetta, A. and Cross, A., Adaptive sampling methodology for in-line defect inspection, IEEE/SEMI Advanced Manufacturing Conference, 25-31 (2005).
6. Brucker, P., Scheduling Algorithms, Springer-Verlag, Berlin (2007).
7. Carlier, J., Problèmes d'ordonnancement à contraintes disjonctives, Thèse de troisième cycle, Université de Paris VI (1975).
8. Carlier, J., The one-machine sequencing problem, European Journal of Operational Research, 11, 42-47 (1982).
9. Chang, P.C. and Su, L.H., Scheduling n jobs on one machine to minimize the maximum lateness with a minimum number of tardy jobs, Computers and Industrial Engineering, 40, 349-360 (2001).
10. Chen, C.-L., An iterated local search for unrelated parallel machines problem with unequal ready times, IEEE International Conference on Automation and Logistics, Qingdao, China, 2044-2047 (2008).
11. Dauzère-Pérès, S. and Sevaux, M., Using Lagrangean relaxation to minimize the weighted number of late jobs on a single machine, Naval Research Logistics, 50, 273-288 (2003).
12. Dauzère-Pérès, S. and Sevaux, M., An exact method to minimize the number of tardy jobs in single machine scheduling, Journal of Scheduling, 7, 405-420 (2004).
13. Garey, M.R. and Johnson, D.S., Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco (1979).
14. Good, R. and Purdy, M., An MILP approach to wafer sampling and selection, IEEE Transactions on Semiconductor Manufacturing, 20(4), 400-407 (2007).
15. Graham, R.L., Lawler, E.L., Lenstra, J.K. and Rinnooy Kan, A.H.G., Optimization and approximation in deterministic sequencing and scheduling: A survey, Annals of Discrete Mathematics, 4, 287-326 (1979).
16. Jackson, J.R., Scheduling a production line to minimize maximum tardiness, Research Report 43, Management Science Research Project, University of California, Los Angeles (1955).
17. Jouglet, A., Savourey, D., Carlier, J. and Baptiste, P., Dominance-based heuristics for one-machine total cost scheduling problems, European Journal of Operational Research, 184, 879-899 (2008).
18. Lee, S.B., Lee, T.-Y., Liao, J. and Chang, Y.-C., A capacity-dependence dynamic sampling strategy, Proceedings of the International Symposium on Semiconductor Manufacturing Conference, 312-314 (2003).
19. Lopez, P., Erschler, J. and Esquirol, P., Ordonnancement de tâches sous contraintes: une approche énergétique, Automatique, Productique, Informatique Industrielle, 26, 453-481 (1992).
20. Leung, J.Y.-T. (Ed.), Handbook of Scheduling: Algorithms, Models, and Performance Analysis, Chapman & Hall / CRC (2004).
21. Azizoğlu, M. and Kırca, Ö., Scheduling jobs on unrelated parallel machines to minimize regular total cost functions, IIE Transactions, 31, 153-159 (1999).
22. M'Hallah, R. and Bulfin, R.L., Minimizing the weighted number of tardy jobs on parallel processors, European Journal of Operational Research, 160, 471-484 (2005).
23. Moore, J.M., An n job, one machine sequencing algorithm for minimizing the number of late jobs, Management Science, 15, 102-109 (1968).
24. Pinedo, M.L., Scheduling: Theory, Algorithms, and Systems, Springer-Verlag, Berlin (2005).
25. Tanaka, S. and Fujikuma, S., An efficient exact algorithm for general single-machine scheduling with machine idle time, IEEE International Conference on Automation Science and Engineering, 371-376 (2008).