Computers & Operations Research 33 (2006) 994 – 1009 www.elsevier.com/locate/cor

Scheduling two parallel machines with a single server: the general case

Amir H. Abdekhodaee a, Andrew Wirth b,∗, Heng-Soon Gan a

a Operations Research Group, CSIRO Mathematics and Information Sciences, Private Bag 10, Clayton South MDC 3169, Australia
b Department of Mechanical and Manufacturing Engineering, The University of Melbourne, Parkville, VIC 3010, Australia

Available online 17 September 2004

Abstract

This paper considers the problem of scheduling two-operation non-preemptable jobs on two identical semiautomatic machines. A single server is available to carry out the first (or setup) operation. The second operation is executed automatically, without the server. The general problem of makespan minimization is NP-hard in the strong sense. In earlier work, we showed that the equal total length problem is polynomial time and we also provided efficient and effective solutions for the special cases of equal setup and equal processing times. Most of the cases analyzed thus far have fallen into the category of regular problems. In this paper we build on this earlier work to deal with the general case. Various approaches will be considered. One may reduce the problem to a regular one by amalgamating jobs, or we may apply the earlier heuristics to (possibly regular) job clusters. Alternately we may apply a greedy heuristic, a metaheuristic such as a genetic algorithm or the well known Gilmore–Gomory algorithm to solve the general problem. We report on the performance of these various methods.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Parallel machines; Single server; Setup

1. Introduction

This paper considers the general problem, P2, S1|pi, si|Cmax, Brucker et al. [1], of scheduling two-operation non-preemptable jobs on two identical semi-automatic machines. A single server is available

∗ Corresponding author. Tel.: +61-3-8344-4852; fax: +61-3-9347-8784.

E-mail addresses: [email protected] (A.H. Abdekhodaee), [email protected] (A. Wirth), [email protected] (H.-S. Gan).

0305-0548/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.cor.2004.08.013


to carry out the first (or setup) operation. The server can handle at most one job at a time. The second operation is executed automatically, on the same machine, without the server. The processing operation must be carried out immediately after the setup. Our objective is to minimize makespan. Let pi and si denote the processing and setup times, respectively, of job i.

It is well known, Brucker et al. [1], that P2, S1|pi, si|Cmax is unary NP-hard, and so is P2, S1|si = s|Cmax. Brucker et al. [1] contains a full discussion of the computational complexity of these and many related problems. They show, for example, that P2, S1|pi = p|Cmax is binary NP-hard, but on the other hand they prove that P, S1|pi = p, ri, si = s|Cmax, where the ri denote setup release dates, is polynomial time. Brucker et al. [1] also consider various other scenarios and objective functions. Not surprisingly, all but the simplest cases are NP-hard. One special case which is polynomial time is P2, S1|pi + si = a|Cmax; see Abdekhodaee and Wirth [2] for further details.

Most of the literature on the general problem has dealt with issues of computational complexity. Computational results appear to be restricted to a few papers: Koulamas [3], who studied the closely related problem of minimizing total idle time (excluding the last inevitable idle time), and Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4], who considered the regular case, where pi − pj ≤ sj for all i, j, as well as the equal processing and equal setup times cases.

In this paper, we build on this earlier work to deal with the general case. Various approaches will be considered. One may reduce the problem to a regular one by merging jobs, or we may partition the jobs into (possibly regular) subsets, apply the earlier heuristics and then sequence these partial solutions using the well known Gilmore–Gomory algorithm. We may also apply a greedy heuristic or a metaheuristic such as a genetic algorithm to solve the general problem.
The structure of this paper is as follows: ﬁrst we review some of the results of Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4] and discuss the regularity condition and its schematic representation in the processing time—setup time plane. Then we brieﬂy consider how the general problem may be reduced to a (possibly) regular one by amalgamating or clustering jobs. A range of heuristics, based on the above considerations as well as two versions of a greedy procedure and a genetic algorithm are presented. We then introduce a version of the Gilmore–Gomory algorithm appropriate to this problem. We compare the performance of these various approaches for a range of different scenarios and draw some conclusions.

2. The processing–setup time plane

We briefly recall some definitions from Abdekhodaee and Wirth [2] and a result from Abdekhodaee et al. [4]. Assume that we have n jobs with setup times si and processing times pi for i = 1, . . . , n. Let ti be the start time of job i and ci its completion time, so ci = ti + si + pi. Denote the length of job i by ai = si + pi. Fig. 1 illustrates this. As stated earlier, processing does not require the server. We also introduce the convention that uppercase terms shall refer to the set of jobs after it has been scheduled; thus Ci shall refer to the completion time of the ith scheduled job and S1 is the setup time of the job that is scheduled first. We assume that no job is unnecessarily delayed (Fig. 2).

We say a set of jobs is regular if pi ≤ aj for all i, j. If the jobs, sorted by setup start times, are processed alternately on the two machines we say the processing or schedule is alternating. We assume, without loss of generality, that the first job is started on machine one and that


Fig. 1. Machine idle time.

Fig. 2. Server waiting time.

T1 = 0. Let Ii, the ith machine idle time, be the time the machine which has just finished the ith job is idle before it starts its next job. Denote by Wi the server waiting time between the end of the setup of the (i + 1)th and the start of the (i + 2)th scheduled jobs. We wish to minimize makespan, that is, max i=1,...,n Ci. Let pmax = maxi {pi} and define pmin similarly. For the regular case, we recall the following lower bound result, Abdekhodaee et al. [4].

Proposition 1. For a regular set of jobs

    makespan ≥ (1/2) ( Σ_{i=1}^{n} ai + s1 + s2 + pmin − pmax ).

If the number of jobs, n, is even, processing times are equal and s1 ≤ s2 ≤ · · · ≤ sn then

    makespan ≥ (1/2) ( Σ_{i=1}^{n} ai + s1 + s2 + Δ ),

where Δ = min_{A,A′} | Σ_{i∈A} si − Σ_{j∈A′} sj |, A ∪ A′ = {3, 4, . . . , n}, A ∩ A′ = ∅ and A ≠ A′.

For the general case the following is a makespan lower bound:

    max { (1/2) Σ_{i=1}^{n} ai + smin , Σ_{i=1}^{n} si + pmin }.
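The general-case bound is straightforward to compute for any instance. A minimal sketch, using our own representation of jobs as (setup, processing) pairs; the function name is ours, not from the paper:

```python
def general_lower_bound(jobs):
    """General-case makespan lower bound:
    max( (1/2) * sum(a_i) + s_min,  sum(s_i) + p_min ),
    where a_i = s_i + p_i and jobs is a list of (s, p) pairs."""
    total_length = sum(s + p for s, p in jobs)
    s_min = min(s for s, _ in jobs)
    p_min = min(p for _, p in jobs)
    total_setup = sum(s for s, _ in jobs)
    return max(total_length / 2 + s_min, total_setup + p_min)
```

The first term says half the total work plus the earliest setup must fit on some machine; the second says the server must perform every setup and then wait for at least the shortest processing time.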


Fig. 3. Processing time versus setup time (zones 1–5 of the regular region; r = min_i (si + pi)).

Abdekhodaee and Wirth [2] report on the performance, for the general case, of two greedy heuristics. We shall return to these heuristics later. Abdekhodaee et al. [4] consider the computational complexity of two further special cases, namely the equal processing time and equal setup time problems, and test the performance of various heuristics for these cases. These three special cases thus correspond to lines of gradient −1, and vertical and horizontal lines, respectively, in Fig. 3. These two papers effectively solve these cases and, more generally, contain various useful results about regular problems. Regularity helps to simplify the problem, allows us to explore conditions under which the problem may be polynomially solvable and makes it possible to code the problem as a relatively neat integer programming model. As stated earlier, regular means that:

    pi ≤ pj + sj    ∀ i, j.

The region which satisfies the above inequality is the shaded part of Fig. 3. If the shortest length job in the regular region is assumed to have length r then no job can lie in the region below pi + si = r. Furthermore, any job in the shaded part of Fig. 3 has a processing time which is less than or equal to r. This regular region can itself be divided into various zones. In zone 1, sj ≤ r/2 ≤ pi ∀ i, j. In zone 2, r/2 ≤ si ≤ pi. On the other hand, each job in zone 3 has a larger setup time than its processing time, r/2 ≤ pi ≤ si. In zone 4, pi ≤ r/2 ≤ sj ∀ i, j. Finally, zone 5 depicts the jobs for which pi ≤ r ≤ sj ∀ i, j. Each zone may demand a special heuristic which may not necessarily be effective for other regions. For example, if a set of jobs lies completely in zone 4 or in zone 5 then any alternating schedule with the least processing time job as the final job and an arbitrary sequence prior to that provides an optimal solution. On the other hand, if all jobs lie in zone 1, the longest processing time heuristic may be effective.

The general problem may not satisfy the regularity condition, so the assumption of alternating jobs between machines may not hold. Many jobs, for example, could be processed on one machine while the other machine is still processing a single job. Koulamas [3] has shown that any sequence can be reduced to another sequence which satisfies the regularity condition. Of course, such a reduction may result in a degradation of schedule performance.
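The regularity condition can be checked directly from the job list. A minimal sketch; the function name and the (setup, processing) pair representation are ours, not from the paper:

```python
def is_regular(jobs):
    """Regularity check: p_i <= s_j + p_j for all i, j, i.e. no processing
    time exceeds the shortest job length r = min_i (s_i + p_i)."""
    shortest_length = min(s + p for s, p in jobs)
    return max(p for _, p in jobs) <= shortest_length
```

Checking the maximum processing time against the minimum job length suffices, since the pairwise condition holds exactly when it holds for that extreme pair.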


In the rest of the paper, we explore various heuristics for solving the general problem. Firstly, we consider several ways of amalgamating jobs to achieve regularity. Then we review the greedy heuristics mentioned earlier. Next we introduce a genetic algorithm and ﬁnally we compare the performance of the above heuristics with the application of the well known Gilmore–Gomory procedure to job clustering.

3. Job amalgamation

We present three methods, including the one due to Koulamas [3], which reduce a general problem to a regular one.

3.1. Job addition

We repeatedly aggregate short jobs together until a regular set is achieved. Fig. 4 illustrates this procedure. Suppose we have a nonregular set of jobs. The regularity region is defined by pj + sj ≥ pmax. Jobs for which r ≤ pj + sj < pmax can be combined to form single jobs so that the resulting set is regular (Fig. 5). As the main objective is to reduce machine interference, it is advisable to select the job with the largest processing time as the final job. This follows from the fact that if a set of jobs is to be considered as a single job, the setup of this combined job is s1 + p1 + · · · + sn−1 + pn−1 + sn and its processing time is pn. Therefore, we should select a final job which has the highest possible processing time and the lowest idle time. Note that even if all the jobs in the nonregular region are combined, the resulting job may still not fall in the regular region.

3.2. Job subtraction

This procedure pairs a job with a long processing time with a number of short jobs as illustrated in Fig. 6. Thus, we effectively cut off long processing times and move the right hand boundary of the regular region to the left, with the aim of achieving a regular set of jobs.
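The job-addition step of Section 3.1 can be sketched as follows. The (setup, processing) pair representation and the function name are our own; this is an illustration of the combination rule, not the authors' implementation:

```python
def amalgamate(jobs):
    """Combine a list of (s, p) jobs into one composite job ('job addition').
    All but the last job's setup and processing are folded into the composite
    setup, so the job with the largest processing time is scheduled last."""
    jobs = sorted(jobs, key=lambda job: job[1])       # largest p goes last
    *head, (s_last, p_last) = jobs
    setup = sum(s + p for s, p in head) + s_last      # s1+p1+...+s(n-1)+p(n-1)+sn
    return (setup, p_last)
```

Placing the largest processing time last maximizes the composite job's processing time, which is what pushes the combined job toward the regular region.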

Fig. 4. Regularity region.


Fig. 5. Job addition.


Fig. 6. Job subtraction.

3.3. Koulamas' reduction

A method developed by Koulamas [3] implements both of the above concepts. The procedure is presented below.

Reduction algorithm
• Step 1: Sort all jobs i = 1, . . . , n in non-increasing order of pi.
• Step 2: Sort all jobs j = 1, . . . , n in non-decreasing order of sj + pj.
• Step 3: Do i = 1, . . . , n
  Do j = 1, . . . , n
    If sj + pj ≤ pi , then delete job j; update pi = pi − pj − sj. Endif;
  End Do j
End Do i.
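The three steps above can be sketched in code. This is our own reading of the reduction, with jobs held as (setup, processing) pairs; the bookkeeping of which jobs were absorbed into which is an addition for illustration:

```python
def koulamas_reduction(jobs):
    """Sketch of Koulamas' reduction: fold short jobs into the automatic
    processing span of long jobs until the surviving set tends to regularity.
    Returns a list of (s, p, absorbed_indices) for the surviving jobs."""
    s = [sj for sj, _ in jobs]
    p = [pj for _, pj in jobs]
    alive = [True] * len(jobs)
    absorbed = [[] for _ in jobs]
    # Step 1: indices in non-increasing order of processing time.
    by_p = sorted(range(len(jobs)), key=lambda i: -p[i])
    # Step 2: indices in non-decreasing order of job length s + p.
    by_len = sorted(range(len(jobs)), key=lambda j: s[j] + p[j])
    # Step 3: absorb any job j whose whole length fits inside p_i.
    for i in by_p:
        if not alive[i]:
            continue
        for j in by_len:
            if alive[j] and j != i and s[j] + p[j] <= p[i]:
                alive[j] = False
                absorbed[i].append(j)
                p[i] -= p[j] + s[j]
    return [(s[i], p[i], absorbed[i]) for i in range(len(jobs)) if alive[i]]
```

Each absorbed job's setup is served, and its processing completed, while the host job's second operation is still running on the other machine, which is why the host's remaining processing time shrinks by sj + pj.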


Fig. 7. Job clustering (equal setup case).

This approach trims the general region from both sides so that the final result forms a regular set. Its computational complexity is O(n²).

3.4. Job clustering

Rather than combining jobs to create other jobs, we now construct job clusters each of which is regular. We illustrate this for the case of equal setups; however, the technique may be applied generally. Consider a particular instance of equal setup times, which can be expressed on a processing time versus setup time diagram as illustrated in Fig. 7. For any two jobs whose processing time difference is less than or equal to the setup time, the set is regular and efficient heuristics can be used to solve this problem. We can exploit this characteristic of the problem and introduce a method in which jobs are grouped into regular sets. We divide the jobs into setup-length intervals. For example, start from the shortest processing time job, and successively add the next shortest processing time job to the first cluster, provided the difference in processing times between the first job in the cluster and the last job to be added does not exceed the constant setup value. The selected jobs form a regular set and can be scheduled by previously developed approaches. If necessary, we repeat the procedure until every job is in a regular cluster. One advantage of this method is that as well as attempting to minimize makespan we can also go some way towards achieving flowtime minimization.

Alternately we could construct clusters of jobs of near equal length, or partition the processing time–setup time plane in other ways. For example, we used the L-shaped regions defined by {(p, s) : kδ < p, s and (p < (k + 1)δ or s < (k + 1)δ)}, k = 0, 1, . . . . We then apply appropriate heuristics to each cluster and finally combine the partial solutions using another procedure, such as Gilmore–Gomory, as discussed below.
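The equal-setup clustering rule above can be sketched as follows. The function name and list-of-indices return format are our own; the clustering criterion (processing-time spread within a cluster bounded by the common setup time) is as described in the text:

```python
def cluster_equal_setup(processing_times, s0):
    """Group jobs with equal setup time s0 into regular clusters: within a
    cluster, no processing time exceeds the first (shortest) one by more
    than s0.  Returns clusters as lists of job indices."""
    order = sorted(range(len(processing_times)),
                   key=lambda i: processing_times[i])
    clusters, current = [], []
    for i in order:
        if current and processing_times[i] - processing_times[current[0]] > s0:
            clusters.append(current)       # start a new cluster
            current = []
        current.append(i)
    if current:
        clusters.append(current)
    return clusters
```

Each cluster can then be scheduled with the regular-case heuristics and the partial schedules sequenced by the Gilmore–Gomory procedure of Section 6.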


4. Greedy heuristics

Abdekhodaee and Wirth [2] apply two versions of the greedy heuristic to the general case. We recall these two and also mention some alternative versions. The basic aim at each step of the heuristic is, if possible, not to generate any machine idle time or, alternately, any server waiting time. In the first step of our greedy heuristic, jobs are arranged in a list according to some criterion. Then the last eligible job in the list of unsequenced jobs is selected for the next position. An eligible job is one that would not create any idle time or waiting time. If no eligible jobs exist we select the first job in the list. The list may be arranged in increasing or decreasing order of setup times, processing times or job lengths. We may also extend this method to generate two separate lists, sort the lists and then select jobs alternately from the lists. The initial lists may be generated by, say, using the longest processing time heuristic applied to job lengths. Extensive testing, in Abdekhodaee [5], of these alternative versions of the greedy heuristic indicates that overall the best versions are the two listed below.

4.1. Forward heuristic

This version of the heuristic aims, myopically, to minimize machine idle time.

The forward heuristic
• Step 1: Sort the jobs in increasing order of setup times. Set δ to zero.
• Step 2: Find the job with the largest setup less than or equal to δ; if there is no such job, select the shortest setup time job. Place it on the next available machine. Let k (j) be the last job scheduled on machine 2 (1). (If k or j = 0 then Ck2 or Cj1 = 0.)
• Step 3: If Cj1 ≤ Ck2 , then δ = Ck2 − max(Ck2 − pk , Cj1 ), else δ = Cj1 − max(Cj1 − pj , Ck2 ).
• Step 4: Repeat steps 2 and 3 until all jobs are sequenced.

4.2. Backward heuristic

If all the server waiting times are zero and if the last job has the shortest processing time then, for the regular case, the solution can be shown to be optimal.
This result motivates the following version of the greedy heuristic.

Backward heuristic
• Step 1: Arrange the jobs in increasing order of processing times and place them in the unsequenced list.
• Step 2: Sequence the shortest processing time job in the final position.
• Step 3: Set δ to the setup time of the last sequenced job.
• Step 4: From the unsequenced jobs, find the job with the largest processing time less than or equal to δ. If there is no such job, select the shortest processing time job. Position the selected job just prior to the most recently sequenced job.
• Step 5: Repeat steps 3 and 4 until all jobs are sequenced.
• Step 6: Starting with the first job, schedule the resulting job sequence to the first available machine at the earliest possible time.
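The backward heuristic has two parts: constructing the sequence from the rear, and then list-scheduling it forward (step 6). A minimal sketch under our own assumptions (jobs as (setup, processing) pairs, single server simulated explicitly); this is an illustration, not the authors' implementation:

```python
def backward_heuristic(jobs):
    """Build the sequence from the rear so that each job's processing time
    fits under the setup time of its successor where possible."""
    unseq = sorted(range(len(jobs)), key=lambda i: jobs[i][1])  # by p
    sequence = [unseq.pop(0)]            # shortest processing time goes last
    while unseq:
        delta = jobs[sequence[0]][0]     # setup of most recently placed job
        fit = [i for i in unseq if jobs[i][1] <= delta]
        chosen = max(fit, key=lambda i: jobs[i][1]) if fit else unseq[0]
        unseq.remove(chosen)
        sequence.insert(0, chosen)       # place just before the previous job
    return sequence

def schedule(jobs, sequence):
    """List-schedule a sequence: first available machine, earliest start,
    with the single server performing setups one at a time."""
    machine_free = [0, 0]
    server_free = 0
    for i in sequence:
        m = 0 if machine_free[0] <= machine_free[1] else 1
        s, p = jobs[i]
        start = max(machine_free[m], server_free)  # need machine AND server
        server_free = start + s                    # server busy during setup
        machine_free[m] = start + s + p            # processing is automatic
    return max(machine_free)                       # makespan
```

The scheduler is reusable for any sequence, so it also serves to evaluate the sequences produced by the forward heuristic or the genetic algorithm of Section 5.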


5. A genetic algorithm

As the application of genetic algorithms to discrete optimization problems is now well known, we shall restrict our discussion to the following points.

5.1. Individual representation

One of the important issues in using GAs is deciding how chromosomes, or solutions, should be encoded. Many GA applications use a binary representation. Perhaps this is due to the fact that early implementations were in binary coding, and also that it might be difficult to apply the original crossover operators to larger cardinality codings. However, in scheduling problems that involve orderings or permutations, real number representations have also been widely used. There are various real number representations, such as adjacency listing and position listing representations. In adjacency listing representations, which are widely used for travelling salesperson problems, the position of each job in relation to neighboring jobs is important and the position of a job in the string is not emphasized. Position listing, on the other hand, is concerned with the position of each job in the string. Therefore, any string refers to the original permutation and shows the current positions. If the original permutation is (a, b, c, d) and the current string is (c, b, a, d), the position listing would be (3, 2, 1, 4). We shall use the permutation representation, where (c, b, a, d) means that the jobs are processed in that order as a machine becomes available.

5.2. Population

We need to strike a balance between efficiency (computational time) and effectiveness (nearness to optimality) in choosing the population size. Based on a number of earlier studies, such as Crauwels [6] who set the population size at n, Abdekhodaee [5] examined population sizes from n/2 to 2n.

5.3. Fitness function

It is clear that the way the fitness function is defined may have a role in the survival of solutions to the next generation. We used the objective function, makespan, as the fitness function.

5.4. Crossover

Crossover is the reproduction mechanism of a typical genetic algorithm. We used the following three methods.

5.4.1. Partially matched crossover (PMX)

In partially matched crossover a section of each string is selected by marking two cut points. There is a one to one matching between elements in similar positions within this section. Consider, for example, the two parents:

A = 2 3 1 5 7 8 4 9 6,

B = 1 6 7 3 4 2 8 9 5.

A.H. Abdekhodaee et al. / Computers & Operations Research 33 (2006) 994 – 1009

1003

Then the matching is 5↔3, 7↔4 and 8↔2. The segments between the marked points are swapped:

A = X X X 3 4 2 X X X,

B = X X X 5 7 8 X X X.

The remainder is filled in using A and B where there is no conflict; otherwise the matching is used:

A = X X 1 3 4 2 X 9 6,

B = 1 6 X 5 7 8 X 9 X

and finally:

A = 8 5 1 3 4 2 7 9 6,
B = 1 6 4 5 7 8 2 9 3.
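The repair step above can be expressed compactly. A sketch of PMX with explicit cut points (the function signature is ours); it reproduces the worked example when cutting between positions 3 and 6:

```python
def pmx(parent_a, parent_b, cut1, cut2):
    """Partially matched crossover: swap the segment [cut1:cut2], then
    repair conflicts outside the segment via the segment's matching."""
    def make_child(host, donor):
        child = list(host)
        child[cut1:cut2] = donor[cut1:cut2]
        # map each copied-in gene back to the gene it displaced in host
        mapping = {donor[i]: host[i] for i in range(cut1, cut2)}
        for i in list(range(cut1)) + list(range(cut2, len(host))):
            gene = host[i]
            while gene in child[cut1:cut2]:   # conflict: follow the matching
                gene = mapping[gene]
            child[i] = gene
        return child
    return make_child(parent_a, parent_b), make_child(parent_b, parent_a)
```

The while loop follows chained matchings (e.g. a gene mapping into another segment gene) until a non-conflicting gene is found, so both children remain valid permutations.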

5.4.2. Order crossover (OX)

In OX, sections of strings are selected and copied into the children. For example, if

A = 2 3 1 5 7 8 4 9 6,

B = 1 6 7 3 4 2 8 9 5,

then A = X X X 5 7 8 X X X,

B = X X X 3 4 2 X X X.

Then, starting after the second marked point we ﬁll in the missing elements by using the sequence from the other parent. Thus for A we use: 8-9-5-1-6-7-3-4-2, deleting any numbers already used. We obtain: A = 3 4 2 5 7 8 9 1 6, B = 5 7 8 3 4 2 9 6 1.


5.4.3. Union crossover #2 (UX2)

The following steps summarize this procedure:
Step 1: Choose a sub-string of elements from a parent and write it to S1.
Step 2: Write the remaining elements to S2 in the order they appear in the other parent.
Step 3: Randomly select S1 or S2 and copy its first element to the child. Delete this element from S1 (or S2).
Step 4: Repeat step 3 until all elements have been selected.

5.5. Mutation

Mutation operators are simple moves, commonly applied to a single string or solution, designed to bring more diversity into the search procedure. Two important mutations are the swap and insert mutations. In the swap mutation, two jobs exchange their positions in the sequence. In the insert mutation, one job is deleted from its position and inserted in another part of the sequence. The insert mutation appears to perform better than the swap mutation for our problem.

5.6. Selection

In contrast with the crossover and mutation operators, which create diversity in the solution pool, selection is the procedure that regulates and limits the number of solutions that are passed to the next generation. If


the limits applied by the selection procedure are too strict then the algorithm tends to converge to a local optimum. On the other hand, if the selection procedure is too relaxed then we would be unlikely to attain an efficient GA. We used the tournament selection procedure: a subset of a certain size, say k, of the population is selected randomly, and from this subset the chromosome with the best fitness survives to the next generation. This process is repeated until sufficiently many chromosomes are selected.

6. The Gilmore–Gomory procedure

Gilmore and Gomory [7] introduced an interesting solvable scheduling problem which can be described both as a sequence dependent setup problem and as a special case of the travelling salesperson problem (TSP). Suppose a machine starts in state b0 (say the temperature in a furnace) and for each job i the machine must be in state ai before the job and state bi after the job. If job j is done immediately after i the setup time is |bi − aj|. Suppose further that the final state, after all jobs are done, must be a0. So the total setup time is |b0 − a1| + |b1 − a2| + |b2 − a3| + · · · + |bn−1 − an| + |bn − a0| if the sequence is {1, 2, . . . , n}. This is clearly a special case of the TSP. The Gilmore–Gomory procedure solves this problem optimally in O(n log n) time.

Consider a group of jobs for which there is no idle time between jobs. Such a group of jobs is said to have a head, αi, which is the setup of the first job of the group, and a tail, βi, the difference in completion times of the last jobs of the group on the two machines. This is illustrated in Fig. 8. If we have a set of groups of jobs for which sliding of jobs within each group and between groups is not permitted then we have the following result, where we set α0 = β0 = 0.

Proposition 2.
For a collection of groups of jobs for which sliding of jobs within and between groups is not permitted, makespan minimization is equivalent to minimizing Σ_{i=0}^{n−1} |βi − αi+1| + |βn − α0|, where n is the number of groups.

Proof. The proof is by induction. If n = 1 then:

    total machine idle time = |α1 − 0| + |0 − β1| = α1 + β1.

Suppose the result is true for n = k; then we have:

    total machine idle time for k groups = Σ_{i=0}^{k−1} |βi − αi+1| + |βk − 0| = Σ_{i=0}^{k−1} |βi − αi+1| + βk.

If n = k + 1 then we have the following cases:
(i) αk+1 ≤ βk : negative term(s) to be added = −αk+1 , positive term(s) to be added = βk+1.

Fig. 8. The head and tail of a job group.


(ii) αk+1 > βk : negative term(s) to be added = −βk , positive term(s) to be added = αk+1 − βk + βk+1.

In either case, total machine idle time = Σ_{i=0}^{k} |βi − αi+1| + βk+1. Since minimizing total idle time is equivalent to minimizing makespan, the proof is complete.

The objective function in the above proposition is identical to that of Gilmore and Gomory. So it is clear that if we could identify appropriate groups of jobs, then we would have no difficulty in solving the problem optimally. Even if some groups allow for sliding, the above function still provides an upper bound for the problem of makespan minimization (since sliding does not increase, and generally decreases, makespan).

6.1. The improvement procedure

We may apply a version of the Gilmore–Gomory procedure to the initial set of jobs, as discussed below, or use the procedure as an improvement heuristic. Thus, a problem is first solved through the use of some other heuristic, for example greedy. Then any subsequence not containing any idle time can be identified as a block. These blocks of jobs are then rearranged using the Gilmore and Gomory algorithm. Finally, we allow sliding, thereby possibly decreasing the makespan further. We found that this procedure may improve the solution by up to 2 or 3 percent.

6.2. The iterative procedure

This heuristic applies Gilmore–Gomory iteratively, starting with the initial set of jobs. At the initial stage each job constitutes a separate group with the head equal to the setup and the tail equal to the processing time. After the first application of Gilmore–Gomory the jobs sequenced first and second are identified as a single group and Gilmore–Gomory is applied to the resulting n − 1 groups. The process is repeated until we end up with a single group of n jobs. Starting with the first job, we then schedule the resulting job sequence to the first available machine at the earliest possible time. The computational complexity of this procedure is O(n² log n).
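The objective of Proposition 2 is cheap to evaluate for any ordering of groups. A minimal sketch (heads and tails passed as plain lists, with the dummy group α0 = β0 = 0 added internally; the function name is ours):

```python
def total_idle_time(heads, tails):
    """Total machine idle time for a sequence of job groups with heads
    alpha_i and tails beta_i (Proposition 2), with alpha_0 = beta_0 = 0
    for the dummy initial group."""
    alpha = [0] + list(heads)
    beta = [0] + list(tails)
    n = len(heads)
    return (sum(abs(beta[i] - alpha[i + 1]) for i in range(n))
            + abs(beta[n] - alpha[0]))
```

Minimizing this quantity over orderings is exactly the Gilmore–Gomory problem with states (ai, bi) = (αi, βi), which is why the procedure applies directly.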
Alternately, we may apply the Gilmore–Gomory procedure once only. As will be illustrated below, this has been found to be adequate, especially for larger values of n.

7. Heuristic performance

The performance of the various heuristics presented above appears to depend on the number of jobs and on the server load, L, the ratio of the mean setup time to mean processing time, E(si)/E(pi), for the job set. For example, for large L, most heuristics perform particularly well. It is particularly informative, as will be shown below, to investigate the performance for L = 1. We generate the processing and setup times using uniform distributions. Thus, for example, if the server load is 0.6 and the processing times are generated from a discrete uniform distribution U(0, 100), then the setups are generated from a uniform distribution U(0, 60). This is for the case when setup and processing times are uncorrelated. In the case where they are correlated, si = L · pi.
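The instance-generation scheme just described can be sketched as follows (the function name, the `seed` parameter and the (setup, processing) pair output are our own conveniences):

```python
import random

def generate_instance(n, load, correlated=False, seed=None):
    """Generate a test instance as in Section 7: processing times from a
    discrete uniform U(0, 100); setups either s_i = load * p_i (correlated)
    or drawn from a uniform U(0, 100 * load) (uncorrelated)."""
    rng = random.Random(seed)
    p = [rng.randint(0, 100) for _ in range(n)]
    if correlated:
        s = [load * pi for pi in p]
    else:
        s = [rng.uniform(0, 100 * load) for _ in range(n)]
    return list(zip(s, p))
```

With `load = 0.6` this reproduces the U(0, 100) / U(0, 60) example above; setting `correlated=True` gives the si = L · pi case.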


Fig. 9. Job clustering for heuristic L-GG (L-shaped regions of width δ about the line s = p).

The performance of the following heuristics was compared:

Forward heuristic. This heuristic was described in Section 4 and will be designated by F hereafter.

Backward heuristic. This heuristic was also described in Section 4 and will be referred to as B in the following discussions.

Job clustering and the Gilmore and Gomory procedure. Many versions of this method can be considered, but we will only consider clustering the jobs into L-shaped regions, defined by {(p, s) : kδ < p, s and (p < (k + 1)δ or s < (k + 1)δ)}, k = 0, 1, . . . , as shown in Fig. 9, where δ = max(amin, 30) is used in our simulation. This version is designated by L-GG. For each region we calculate the server load and apply F if the server load does not exceed 1, otherwise we use B. (This choice is based on previous numerical analysis, as discussed in Abdekhodaee and Wirth [2].) Consider the sequence thus constructed. The first maximal subsequence which contains no idle time (other than I0) is now considered a group. We then calculate the load of the remaining jobs (that is, those not yet grouped) in this region, apply the appropriate heuristic and repeat the grouping procedure. We form the ungrouped set from the jobs not successfully grouped at this stage. This procedure is repeated for all the regions and finally for the jobs in the ungrouped set. Jobs still belonging to the ungrouped set at the end of this process are considered as single-job groups. The Gilmore and Gomory procedure is then applied to all the groups.

Genetic algorithm. This will be designated by GA. Based on previous experience outlined in Abdekhodaee [5], we selected a population size of 100, mutation rate of 0.44, generation gap of 0.85, crossover rate of 0.26, average cut of 0.82 and tournament size of 50 as the default GA parameters. Each run of the GA consisted of 250 generations.

Iterative Gilmore and Gomory procedure. This will be designated GG(k), where k is the number of times the Gilmore and Gomory procedure is used.
We will use GG(n − 1) and GG(1) in our simulations.

We used the Makespan/Lower bound ratio as the performance measure of a schedule, where the lower bound is as stated in Proposition 1. We will first compare the performance of the above heuristics for the cases of 50 jobs and 100 jobs, with loads varying from 0.1 to 2 at intervals of 0.1 and with the setup and

Fig. 10. Worst-case Makespan/Lower bound across loads 0.1 to 2 for 50 jobs.

Fig. 11. Average Makespan/Lower bound across loads 0.1 to 2 for 50 jobs.

processing times uncorrelated. The processing times are generated from a discrete uniform distribution U(0, 100). For each load, 50 sets of n jobs were generated, where n ∈ {50, 100}. The worst case and mean performance over these 50 sets of n jobs for each load were evaluated. The results for n = 50 are shown in Figs. 10 and 11, and for n = 100 in Figs. 12 and 13.

Fig. 12. Worst-case Makespan/Lower bound across loads 0.1 to 2 for 100 jobs.

Fig. 13. Average Makespan/Lower bound across loads 0.1 to 2 for 100 jobs.

The above results indicate that for server loads less than about 0.8 the genetic algorithm appears to provide the best performance. For larger server loads Gilmore–Gomory, in particular GG(n − 1), is best. We note that for very small loads our problem reduces essentially to the classic P2||Cmax. For this case


many heuristics, such as multifit, are readily available. For loads substantially larger than 1 the problem is solved by any sequence provided the last job is the one with minimal processing time. Zone 5 of Fig. 3 illustrates this.

8. Conclusions

The above results, together with the earlier work in Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4], appear to provide comprehensive heuristic solutions to the general problem P2, S1|pi, si|Cmax. By choosing the appropriate heuristic we are able to achieve a solution which is on average (in the worst case) within 2% (respectively, 5%) of the lower bound. Naturally, many questions, such as how to deal with more than two machines and with more complex setup and processing time patterns, remain to be addressed.

References

[1] Brucker P, Dhaenens-Flipo C, Knust S, Kravchenko SA, Werner F. Complexity results for parallel machine problems with a single server. Journal of Scheduling 2002;5:429–57.
[2] Abdekhodaee AH, Wirth A. Scheduling parallel machines with a single server: some solvable cases and heuristics. Computers and Operations Research 2002;29:295–315.
[3] Koulamas CP. Scheduling two parallel semiautomatic machines to minimise machine interference. Computers and Operations Research 1996;23:945–56.
[4] Abdekhodaee AH, Wirth A, Gan HS. Equal processing and equal setup times cases of scheduling parallel machines with a single server. Computers and Operations Research 2004;31:1867–89.
[5] Abdekhodaee AH. Scheduling jobs with setup times on parallel machines with a single server. PhD dissertation, University of Melbourne; 1999.
[6] Crauwels H. A comparative study of local search methods for one machine sequencing problems. PhD dissertation, Katholieke Universiteit Leuven; 1998.
[7] Gilmore PC, Gomory RE. Sequencing a one state variable machine: a solvable case of the travelling salesman problem. Operations Research 1964;12:655–79.



to carry out the first (or setup) operation. The server can handle at most one job at a time. The second operation is executed automatically, and on the same machine, without the server. The processing operation must be carried out immediately after the setup. Our objective is to minimize makespan. Let pi and si denote the processing and setup times, respectively, of job i.

It is well known, Brucker et al. [1], that P2, S1|pi, si|Cmax is unary NP-hard, as is P2, S1|si = s|Cmax. Brucker et al. [1] contains a full discussion of the computational complexity of these and many related problems. They show, for example, that P2, S1|pi = p|Cmax is binary NP-hard, but on the other hand they prove that P, S1|pi = p, ri, si = s|Cmax, where the ri denote setup release dates, is polynomial time. Brucker et al. [1] also consider various other scenarios and objective functions. Not surprisingly, all but the simplest cases are NP-hard. One special case which is polynomial time is P2, S1|pi + si = a|Cmax; see Abdekhodaee and Wirth [2] for further details.

Most of the literature on the general problem has dealt with issues of computational complexity. Computational results appear to be restricted to a few papers: Koulamas [3], who studied the closely related problem of minimizing total idle time (excluding the last inevitable idle time), and Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4], who considered the regular case, where pi − pj ≤ sj for all i, j, as well as the equal processing and equal setup times cases.

In this paper, we build on this earlier work to deal with the general case. Various approaches will be considered. One may reduce the problem to a regular one by merging jobs, or we may partition the jobs into (possibly regular) subsets, apply the earlier heuristics and then sequence these partial solutions using the well known Gilmore–Gomory algorithm. We may also apply a greedy heuristic or a metaheuristic such as a genetic algorithm to solve the general problem.
The structure of this paper is as follows: first we review some of the results of Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4] and discuss the regularity condition and its schematic representation in the processing time–setup time plane. Then we briefly consider how the general problem may be reduced to a (possibly) regular one by amalgamating or clustering jobs. A range of heuristics based on the above considerations, as well as two versions of a greedy procedure and a genetic algorithm, are presented. We then introduce a version of the Gilmore–Gomory algorithm appropriate to this problem. We compare the performance of these various approaches for a range of different scenarios and draw some conclusions.

2. The processing–setup time plane

We briefly recall some definitions from Abdekhodaee and Wirth [2] and a result from Abdekhodaee et al. [4]. Assume that we have n jobs with setup times si and processing times pi for i = 1, . . . , n. Let ti be the start time of job i and ci its completion time, so ci = ti + si + pi. Denote the length of job i by ai = si + pi. Fig. 1 illustrates this. As stated earlier, processing does not require the server. We also introduce the convention that uppercase terms refer to the set of jobs after it has been scheduled; thus Ci is the completion time of the ith scheduled job and S1 is the setup time of the job that is scheduled first. We assume that no job is unnecessarily delayed (Fig. 2). We say a set of jobs is regular if pi ≤ aj for all i, j. If the jobs, sorted by setup start times, are processed alternately on the two machines we say the processing or schedule is alternating. We assume, without loss of generality, that the first job is started on machine one and that

Fig. 1. Machine idle time. (Gantt-style diagram: setups Si, processing times Pi, machine idle times Ii and start times Ti on the two machines.)

Fig. 2. Server waiting time. (Diagram: server waiting times Wi between consecutive setups.)

T1 = 0. Let Ii, the ith machine idle time, be the time the machine which has just finished the ith job is idle before it starts its next job. Denote by Wi the server waiting time between the end of the setup of the (i + 1)th and the start of the (i + 2)th scheduled jobs. We wish to minimize makespan, that is max_{i=1,...,n} Ci. Let pmax = max_i {pi} and define pmin similarly. For the regular case, we recall the following lower bound result, Abdekhodaee et al. [4].

Proposition 1. For a regular set of jobs,

makespan ≥ (1/2) Σ_{i=1}^{n} a_i + s_1 + s_2 + p_min − p_max.

If the number of jobs, n, is even, processing times are equal and s_1 ≤ s_2 ≤ · · · ≤ s_n, then

makespan ≥ (1/2) Σ_{i=1}^{n} a_i + s_1 + s_2 + Δ,

where Δ = min_{A,A'} |Σ_{i∈A} s_i − Σ_{j∈A'} s_j|, with A ∪ A' = {3, 4, . . . , n}, A ∩ A' = ∅ and A ≠ A'.

For the general case the following is a makespan lower bound:

makespan ≥ max{ (1/2) Σ_{i=1}^{n} a_i + s_min, Σ_{i=1}^{n} s_i + p_min }.
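The general lower bound above is straightforward to compute directly; a minimal sketch (function and variable names are ours, not the paper's):

```python
def general_lower_bound(jobs):
    """General makespan lower bound:
    max( (1/2) * sum(a_i) + s_min, sum(s_i) + p_min ),
    where a_i = s_i + p_i. jobs is a list of (setup, processing) pairs."""
    s_min = min(s for s, p in jobs)
    p_min = min(p for s, p in jobs)
    half_total_length = 0.5 * sum(s + p for s, p in jobs)
    total_setup = sum(s for s, p in jobs)
    return max(half_total_length + s_min, total_setup + p_min)
```

This is the denominator of the Makespan/Lower bound performance ratio used in Section 7.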


Fig. 3. Processing time versus setup time. (Zones 1–5 of the regular region; boundary r = min_i(s_i + p_i).)

Abdekhodaee and Wirth [2] report on the performance, for the general case, of two greedy heuristics. We shall return to these heuristics later. Abdekhodaee et al. [4] consider the computational complexity of two further special cases, namely the equal processing time and equal setup time problems, and test the performance of various heuristics for these cases. These three special cases thus correspond to lines of gradient −1, and vertical and horizontal lines, respectively, in Fig. 3. These two papers effectively solve these cases and, more generally, contain various useful results about regular problems. Regularity helps to simplify the problem, allows us to explore conditions under which the problem may be polynomially solvable and makes it possible to code the problem as a relatively neat integer programming model. As stated earlier, regular means that

p_i ≤ p_j + s_j for all i, j.

The region which satisfies the above inequality is the shaded part of Fig. 3. If the shortest length job in the regular region is assumed to have length r then no job can lie in the region below p_i + s_i = r. Furthermore, any job in the shaded part of Fig. 3 has a processing time which is less than or equal to r. The regular region itself can be divided into various zones. In zone 1, s_j ≤ r/2 ≤ p_i for all i, j. In zone 2, r/2 ≤ s_i ≤ p_i. On the other hand, each job in zone 3 has a larger setup time than its processing time: r/2 ≤ p_i ≤ s_i. In zone 4, p_i ≤ r/2 ≤ s_j for all i, j. Finally, zone 5 depicts the jobs for which p_i ≤ r ≤ s_j for all i, j. Each zone may demand a special heuristic which may not necessarily be effective for other regions. For example, if a set of jobs lies completely in zone 4 or in zone 5 then any alternating schedule with the least processing time job as the final job, and an arbitrary sequence prior to that, provides an optimal solution. On the other hand, if all jobs lie in zone 1, the longest processing time heuristic may be effective.

The general problem may not satisfy the regularity condition, so the assumption of alternating jobs between machines may not hold. Many jobs, for example, could be processed on one machine while the other machine is still processing a single job. Koulamas [3] has shown that any sequence can be reduced to another sequence which satisfies the regularity condition. Of course, such a reduction may result in a degradation of schedule performance.


In the rest of the paper, we explore various heuristics for solving the general problem. Firstly, we consider several ways of amalgamating jobs to achieve regularity. Then we review the greedy heuristics mentioned earlier. Next we introduce a genetic algorithm and ﬁnally we compare the performance of the above heuristics with the application of the well known Gilmore–Gomory procedure to job clustering.

3. Job amalgamation

We present three methods, including the one due to Koulamas [3], which reduce a general problem to a regular one.

3.1. Job addition

We repeatedly aggregate short jobs together until a regular set is achieved. Fig. 4 illustrates this procedure. Suppose we have a nonregular set of jobs. The regularity region is defined by p_j + s_j ≥ p_max. Jobs for which r ≤ p_j + s_j < p_max can be combined to form single jobs so that the resulting set is regular (Fig. 5). As the main objective is to reduce machine interference, it is advisable to select the job with the largest processing time as the final job. This follows from the fact that if a set of jobs is to be considered as a single job, the setup of this combined job is s_1 + p_1 + · · · + s_{n−1} + p_{n−1} + s_n and its processing time is p_n. Therefore, we should select a job which has the highest possible processing time and the lowest idle time. Note that even if all the jobs in the nonregular region are combined, the resulting job may still not fall in the regular region.

3.2. Job subtraction

This procedure pairs a job with a long processing time with a number of short jobs, as illustrated in Fig. 6. Thus, we effectively cut off long processing times and move the right hand boundary of the regular region to the left, with the aim of achieving a regular set of jobs.
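The job-addition idea can be sketched as follows. This is a simplified sketch under our own naming: it merges all jobs outside the regularity region into one job, placing the one with the largest processing time last so that only its processing remains exposed.

```python
def job_addition(jobs):
    """Merge the jobs lying outside the regularity region (s + p < p_max)
    into a single job: its setup absorbs the full lengths of all but the
    last merged job, and its processing time is that of the merged job
    with the largest processing time. jobs: list of (setup, processing)."""
    p_max = max(p for s, p in jobs)
    short = [(s, p) for s, p in jobs if s + p < p_max]
    regular = [(s, p) for s, p in jobs if s + p >= p_max]
    if len(short) <= 1:
        return jobs                      # nothing to merge
    short.sort(key=lambda sp: sp[1])     # largest processing time goes last
    *rest, last = short
    merged_setup = sum(s + p for s, p in rest) + last[0]
    return regular + [(merged_setup, last[1])]
```

As the text notes, the merged job is not guaranteed to land in the regular region.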

Fig. 4. Regularity region. (Jobs with p + s < p_max lie outside the regularity region.)

Fig. 5. Job addition.

Fig. 6. Job subtraction. (Processing time beyond the regular boundary is cut off.)

3.3. Koulamas' reduction

A method developed by Koulamas [3] implements both of the above concepts. The procedure is presented below.

Reduction algorithm
• Step 1: Sort all jobs i = 1, . . . , n in non-increasing order of p_i.
• Step 2: Sort all jobs j = 1, . . . , n in non-decreasing order of s_j + p_j.
• Step 3: Do i = 1, . . . , n
    Do j = 1, . . . , n
      If s_j + p_j ≤ p_i, then delete job j; update p_i = p_i − p_j − s_j. Endif
    End Do j
  End Do i.
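A direct transcription of the reduction algorithm (our naming; jobs given as (setup, processing) pairs):

```python
def koulamas_reduction(jobs):
    """Koulamas' reduction: a short job whose whole length s_j + p_j fits
    inside the processing time p_i of a longer job is absorbed into it
    (deleted, with p_i shortened accordingly), pushing the set towards
    regularity. Returns the reduced list of (setup, processing) pairs."""
    s = [sj for sj, pj in jobs]
    p = [pj for sj, pj in jobs]
    order_p = sorted(range(len(jobs)), key=lambda i: -p[i])          # Step 1
    order_len = sorted(range(len(jobs)), key=lambda j: s[j] + p[j])  # Step 2
    deleted = set()
    for i in order_p:                                                # Step 3
        if i in deleted:
            continue
        for j in order_len:
            if j != i and j not in deleted and s[j] + p[j] <= p[i]:
                deleted.add(j)            # job j runs inside job i's processing
                p[i] -= p[j] + s[j]
    return [(s[i], p[i]) for i in range(len(jobs)) if i not in deleted]
```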

Fig. 7. Job clustering. (Equal setup times s_0; jobs grouped into setup-length intervals of processing time.)

This approach trims the general region from both sides so that the final result forms a regular set. Its computational complexity is O(n²).

3.4. Job clustering

Rather than combining jobs to create other jobs, we now construct job clusters each of which is regular. We illustrate this for the case of equal setups; however, the technique may be applied generally. Consider a particular instance of equal setup times, which can be expressed on a processing time versus setup time diagram as illustrated in Fig. 7. For any two jobs whose difference in processing times is less than or equal to the setup time, the set is regular and efficient heuristics can be used to solve this problem. We can exploit this characteristic and introduce a method in which jobs are grouped into regular sets. We divide the jobs into setup-length intervals. For example, start from the shortest processing time job, and successively add the next shortest processing time job to the first cluster, provided the difference in processing times between the first job in the cluster and the last job to be added does not exceed the constant setup value. The selected jobs form a regular set and can be scheduled by previously developed approaches. If necessary, we repeat the procedure until every job is in a regular cluster. One advantage of this method may be that, as well as attempting to minimize makespan, we can also go some way towards achieving flowtime minimization.

Alternately, we could construct clusters of jobs of near equal length, or partition the processing time–setup time plane in other ways. For example, we used the L-shaped regions defined by {(p, s) : kδ < p, s and (p or s) < (k + 1)δ}, k = 0, 1, . . ., for a suitable constant δ. We then apply appropriate heuristics to each cluster and finally combine the partial solutions using another procedure, such as Gilmore–Gomory, as discussed below.
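One way to realize the L-shaped partition is to index each job by the multiple of δ just below min(p, s). This is a sketch under our own naming; the paper leaves the implementation open.

```python
from collections import defaultdict

def l_shaped_clusters(jobs, delta):
    """Assign job (s, p) to region k = floor(min(s, p) / delta): both
    coordinates then exceed k*delta while at least one lies below
    (k + 1)*delta, matching the L-shaped regions described above."""
    clusters = defaultdict(list)
    for s, p in jobs:
        clusters[int(min(s, p) // delta)].append((s, p))
    return dict(clusters)
```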


4. Greedy heuristics

Abdekhodaee and Wirth [2] apply two versions of the greedy heuristic to the general case. We recall these two and also mention some alternative versions. The basic aim at each step of the heuristic is, if possible, not to generate any machine idle time or, alternately, any server waiting time. In the first step of our greedy heuristic, jobs are arranged in a list according to some criterion. Then the last eligible job in the list of unsequenced jobs is selected for the next position. An eligible job is one that would not create any idle time or waiting time. If no eligible job exists we select the first job in the list. The list may be arranged in increasing or decreasing order of setup times, processing times or job lengths. We may also extend this method to generate two separate lists, sort the lists and then select the jobs alternately from the lists. The initial lists may be generated by, say, using the longest processing time heuristic applied to job lengths. Extensive testing, in Abdekhodaee [5], of these alternative versions of the greedy heuristic indicates that overall the best versions are the two listed below.

4.1. Forward heuristic

This version of the heuristic aims, myopically, to minimize machine idle time. Let λ denote the current slack, and let C_j^1 (C_k^2) and p_j (p_k) denote the completion and processing times of the last job scheduled on machine 1 (2).

The forward heuristic
• Step 1: Sort the jobs in increasing order of setup times. Set λ to zero.
• Step 2: Find the job with the largest setup less than or equal to λ; if there is no such job, select the shortest setup time job. Place it on the next available machine. Let k (j) be the last job scheduled on machine 2 (1). (If k or j = 0 then C_k^2 or C_j^1 = 0.)
• Step 3: If C_j^1 ≤ C_k^2, then λ = C_k^2 − max(C_k^2 − p_k, C_j^1); else λ = C_j^1 − max(C_j^1 − p_j, C_k^2).
• Step 4: Repeat steps 2 and 3 until all jobs are sequenced.

4.2. Backward heuristic

If all the server waiting times are zero and if the last job has the shortest processing time then, for the regular case, the solution can be shown to be optimal.
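A runnable sketch of the forward heuristic (variable names, including the slack lam of Steps 1–3, are ours):

```python
def forward_heuristic(jobs):
    """Greedy forward heuristic. jobs: (setup, processing) pairs.
    Returns (sequence, makespan). lam is the slack of Step 3: the window
    in which a setup can be accommodated without creating machine idle time."""
    remaining = sorted(jobs)                      # Step 1: increasing setup time
    last = [(0.0, 0.0), (0.0, 0.0)]               # per machine: (completion, processing)
    server_free = 0.0
    lam = 0.0
    sequence = []
    while remaining:
        fits = [sp for sp in remaining if sp[0] <= lam]
        job = fits[-1] if fits else remaining[0]  # Step 2: largest setup <= lam, else shortest
        remaining.remove(job)
        m = 0 if last[0][0] <= last[1][0] else 1  # next available machine
        start = max(server_free, last[m][0])      # server does one setup at a time
        server_free = start + job[0]
        last[m] = (start + job[0] + job[1], job[1])
        sequence.append(job)
        (c1, p1), (c2, p2) = last                 # Step 3: update the slack
        lam = c2 - max(c2 - p2, c1) if c1 <= c2 else c1 - max(c1 - p1, c2)
    return sequence, max(last[0][0], last[1][0])
```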
This result motivates the following version of the greedy heuristic.

Backward heuristic
• Step 1: Arrange the jobs in increasing order of processing times and place them in the unsequenced list.
• Step 2: Sequence the shortest processing time job in the final position.
• Step 3: Set λ to the setup time of the last sequenced job.
• Step 4: From the unsequenced jobs, find the job with the largest processing time less than or equal to λ. If there is no such job, select the shortest processing time job. Position the selected job just prior to the most recently sequenced job.
• Step 5: Repeat steps 3 and 4 until all jobs are sequenced.
• Step 6: Starting with the first job, schedule the resulting job sequence to the first available machine at the earliest possible time.
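The backward heuristic in the same style (again a sketch with our naming; Step 6's scheduling pass is the same dispatch rule as in the forward heuristic and is omitted):

```python
def backward_heuristic(jobs):
    """Build the sequence from the back (Steps 1-5): the shortest
    processing time job goes last, and each earlier position receives the
    job whose processing time best fits under the setup of the job that
    follows it. jobs: (setup, processing) pairs; returns the sequence."""
    unsequenced = sorted(jobs, key=lambda sp: sp[1])   # increasing processing time
    sequence = [unsequenced.pop(0)]                    # shortest processing job last
    while unsequenced:
        cover = sequence[0][0]                         # setup of the job scheduled after
        fits = [sp for sp in unsequenced if sp[1] <= cover]
        job = fits[-1] if fits else unsequenced[0]     # largest fitting p, else shortest
        unsequenced.remove(job)
        sequence.insert(0, job)                        # place just before it
    return sequence
```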


5. A genetic algorithm

As the application of genetic algorithms to discrete optimization problems is now well known, we shall restrict our discussion to the following points.

5.1. Individual representation

One of the important issues in using GAs is deciding how chromosomes, or solutions, should be encoded. Many GA applications use a binary representation, perhaps because early implementations used binary coding and because it may be difficult to apply the original crossover to larger cardinality codings. However, in scheduling problems that involve orderings or permutations, real number representations have also been widely used, such as adjacency listing and position listing representations. In adjacency listing representations, which are widely used for travelling salesperson problems, the position of each job in relation to neighboring jobs is important and the position of a job in the string is not emphasized. Position listing, on the other hand, is concerned with the position of each job in the string; a string refers to the original permutation and shows the current positions. If the original permutation is (a, b, c, d) and the current string is (c, b, a, d), the position listing would be (3, 2, 1, 4). We shall use the permutation representation, where (c, b, a, d) means that the jobs are processed in that order as a machine becomes available.

5.2. Population

We need to strike a balance between efficiency (computational time) and effectiveness (nearness to optimality) in choosing the population size. Based on a number of earlier studies, such as Crauwels [6], who set the population size at n, Abdekhodaee [5] examined population sizes from n/2 to 2n.

5.3. Fitness function

The way the fitness function is defined may affect the survival of solutions to the next generation. We used the objective function, makespan, as the fitness function.

5.4. Crossover

Crossover is the reproduction mechanism of a typical genetic algorithm. We used the following three methods.

5.4.1. Partially matched crossover (PMX)

In partially matched crossover a section of each string is selected by marking two cut points. There is a one to one matching between elements in similar positions within this section. Consider, for example, the two parents:

A = 2 3 1 5 7 8 4 9 6,

B = 1 6 7 3 4 2 8 9 5.


Then the matching is 5 ↔ 3, 7 ↔ 4 and 8 ↔ 2. The segments between the marked points are swapped: A = X X X 3 4 2 X X X,

B = X X X 5 7 8 X X X.

The remainder is ﬁlled in using A and B if there is no conﬂict otherwise the matching is used: A = X X 1 3 4 2 X 9 6,

B = 1 6 X 5 7 8 X 9 X

and finally: A = 8 5 1 3 4 2 7 9 6, B = 1 6 4 5 7 8 2 9 3.
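The PMX mechanics above can be written compactly; a sketch (with explicit cut points supplied by the caller rather than chosen at random):

```python
def pmx(a, b, lo, hi):
    """Partially matched crossover: each child keeps the other parent's
    segment over positions [lo, hi); genes outside the segment come from
    its own parent, rerouted through the positional matching on conflict."""
    def make_child(parent, segment_src):
        seg = segment_src[lo:hi]
        mapping = dict(zip(seg, parent[lo:hi]))   # e.g. 3 -> 5, 4 -> 7, 2 -> 8
        child = list(parent)
        child[lo:hi] = seg
        for i in list(range(lo)) + list(range(hi, len(parent))):
            g = parent[i]
            while g in seg:                       # conflict: follow the matching
                g = mapping[g]
            child[i] = g
        return child
    return make_child(a, b), make_child(b, a)
```

Applied to the parents A and B above with cut points 3 and 6, this reproduces the two children of the worked example.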

5.4.2. Order crossover (OX) In OX, sections of strings are selected and copied into the children. For example, if A = 2 3 1 5 7 8 4 9 6,

B = 1 6 7 3 4 2 8 9 5,

then A = X X X 5 7 8 X X X,

B = X X X 3 4 2 X X X.

Then, starting after the second marked point we ﬁll in the missing elements by using the sequence from the other parent. Thus for A we use: 8-9-5-1-6-7-3-4-2, deleting any numbers already used. We obtain: A = 3 4 2 5 7 8 9 1 6, B = 5 7 8 3 4 2 9 6 1.

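Order crossover admits an equally short sketch (again with explicit cut points; naming is ours):

```python
def order_crossover(a, b, lo, hi):
    """OX: the child keeps a[lo:hi]; the remaining positions are filled,
    starting after the second cut point and wrapping around, with the
    genes of b in the order they appear in b starting after that point."""
    n = len(a)
    keep = set(a[lo:hi])
    donors = [b[(hi + i) % n] for i in range(n)]  # b's genes from after the cut
    fill = [g for g in donors if g not in keep]   # delete numbers already used
    child = list(a)
    for j in range(n - (hi - lo)):
        child[(hi + j) % n] = fill[j]
    return child
```

With the same parents and cut points 3 and 6 this yields the two children (1) and (2) of the worked example.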

5.4.3. Union crossover #2 (UX2) The following steps summarize this procedure: Step 1: Choose a sub-string of elements from a parent and write to S1. Step 2: Write the remaining elements to S2 in the order they appear in the other parent. Step 3: Randomly select S1 or S2, copy the ﬁrst element to the child. Delete this element from S1 (or S2). Step 4: Repeat step 3 until all elements have been selected. 5.5. Mutation Mutation operators are simple moves, commonly applied on a single string or solution, designed to bring more diversity into the search procedure. Two important mutations are swap and insert mutations. In the swap mutation, two jobs change their positions in the sequence. In the insert mutation one job is deleted from its position and is inserted in another part of the sequence. The insert mutation appears to perform better than the swap mutation for our problem. 5.6. Selection In contrast with crossover and mutation operators which create diversity in the solution, selection is the procedure that regulates and limits the number of solutions that are passed to the next generation. If


the limits applied by the selection procedure are too strict then the algorithm tends to converge to a local optimum. On the other hand, if the selection procedure is too relaxed then we would be unlikely to attain an efficient GA. We used the tournament selection procedure: a subset of the population of a certain size, say k, is selected randomly and, from this subset, the chromosome with the best fitness survives to the next generation. This process is repeated until sufficiently many chromosomes are selected.

6. The Gilmore–Gomory procedure

Gilmore and Gomory [7] introduced an interesting solvable scheduling problem which can be described both as a sequence dependent setup problem and as a special case of the travelling salesperson problem (TSP). Suppose a machine starts in state b0 (say the temperature in a furnace) and for each job i the machine must be in state ai before the job and state bi after it. If job j is done immediately after job i the setup time is |bi − aj|. Suppose further that the final state, after all jobs are done, must be a0. So if the sequence is {1, 2, . . . , n} the total setup time is

|b0 − a1| + |b1 − a2| + |b2 − a3| + · · · + |bn−1 − an| + |bn − a0|.

This is clearly a special case of the TSP. The Gilmore–Gomory procedure solves this problem optimally in O(n log n) time.

Consider a group of jobs with no idle time between jobs. Such a group is said to have a head, αi, which is the setup of the first job of the group, and a tail, βi, the difference in completion times of the last jobs of the group on the two machines. This is illustrated in Fig. 8. If we have a set of groups of jobs for which sliding of jobs within each group and between groups is not permitted, then we have the following result, where we set α0 = β0 = 0.

Proposition 2. For a collection of groups of jobs for which sliding of jobs within and between groups is not permitted, makespan minimization is equivalent to minimizing

Σ_{i=0}^{n−1} |βi − αi+1| + |βn − α0|,

where n is the number of groups.

Proof. The proof is by induction. If n = 1 then

total machine idle time = |α1 − 0| + |0 − β1| = α1 + β1.

Suppose the result is true for n = k; then

total machine idle time for k groups = Σ_{i=0}^{k−1} |βi − αi+1| + |βk − α0| = Σ_{i=0}^{k−1} |βi − αi+1| + βk.

If n = k + 1 then we have the following cases:
(i) αk+1 ≤ βk: negative term(s) to be added = −αk+1; positive term(s) to be added = βk+1.

Fig. 8. The head and tail of a job group. (Head αi; tail βi.)

(ii) αk+1 > βk: negative term(s) to be added = −βk; positive term(s) to be added = αk+1 − βk + βk+1.

In either case, total machine idle time = Σ_{i=0}^{k} |βi − αi+1| + βk+1. Since minimizing total idle time is equivalent to minimizing makespan, the proof is complete.

The objective function in the above proposition is identical to that of Gilmore and Gomory. So it is clear that if we could identify appropriate groups of jobs, then we would have no difficulty in solving the problem optimally. Even for groups that may allow for sliding, the above function still provides an upper bound for the problem of makespan minimization (since sliding does not increase, and generally decreases, makespan).

6.1. The improvement procedure

We may apply a version of the Gilmore–Gomory procedure to the initial set of jobs, as discussed below, or use the procedure as an improvement heuristic. Thus, a problem is first solved through the use of some other heuristic, for example greedy. Then any subsequence not containing any idle time can be identified as a block. These blocks of jobs are then rearranged using the Gilmore and Gomory algorithm. Finally, we allow sliding, thereby possibly decreasing the makespan further. We found that this procedure may improve the solution by up to 2 or 3 percent.

6.2. The iterative procedure

This heuristic applies Gilmore–Gomory iteratively, starting with the initial set of jobs. At the initial stage each job constitutes a separate group with the head equal to the setup and the tail equal to the processing time. After the first application of Gilmore–Gomory the jobs sequenced first and second are identified as a single group and Gilmore–Gomory is applied to the resulting n − 1 groups. The process is repeated until we end up with a single group of n jobs. Starting with the first job, we then schedule the resulting job sequence to the first available machine at the earliest possible time. The computational complexity of this procedure is O(n² log n).
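The objective of Proposition 2 can be evaluated directly from the heads and tails; a small sketch (groups is a sequence of (α, β) pairs in schedule order, with α0 = β0 = 0 implicit):

```python
def total_idle(groups):
    """Total machine idle time for a fixed ordering of groups:
    sum_{i=0}^{n-1} |beta_i - alpha_{i+1}| + |beta_n - alpha_0|."""
    alpha = [0.0] + [a for a, b in groups]
    beta = [0.0] + [b for a, b in groups]
    n = len(groups)
    return sum(abs(beta[i] - alpha[i + 1]) for i in range(n)) + abs(beta[n] - alpha[0])
```

Minimizing this quantity over orderings of the groups is exactly the Gilmore–Gomory TSP with states ai = αi and bi = βi.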
Alternately, we may apply the Gilmore–Gomory procedure once only. As will be illustrated below, this has been found to be adequate, especially for larger values of n.

7. Heuristic performance

The performance of the various heuristics presented above appears to depend on the number of jobs and on the server load L, the ratio of the mean setup time to the mean processing time, E(si)/E(pi), for that job set. For example, for large L, most heuristics perform particularly well. It is particularly informative, as will be shown below, to investigate the performance for L = 1.

We generate the processing and setup times using uniform distributions. For example, if the server load is 0.6 and processing times are generated from a discrete uniform distribution U(0, 100), then setups are generated from a uniform distribution U(0, 60). This is for the case where setup and processing times are uncorrelated. In the case where they are correlated, si = L · pi.
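The test instances just described can be generated as follows (a sketch; function and parameter names are ours):

```python
import random

def generate_instance(n, load, correlated=False, p_max=100, seed=None):
    """Processing times from a discrete uniform U(0, p_max); setups either
    from U(0, load * p_max) (uncorrelated case) or s_i = load * p_i
    (correlated case). Returns a list of (setup, processing) pairs."""
    rng = random.Random(seed)
    ps = [rng.randint(0, p_max) for _ in range(n)]
    if correlated:
        ss = [load * p for p in ps]
    else:
        ss = [rng.randint(0, int(load * p_max)) for _ in range(n)]
    return list(zip(ss, ps))
```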

Fig. 9. Job clustering for heuristic L−GG. (L-shaped regions of width δ about the line s = p in the processing time–setup time plane.)

The performance of the following heuristics was compared.

Forward heuristic. This heuristic was described in Section 4 and will be designated by F hereafter.

Backward heuristic. This heuristic was also described in Section 4 and will be referred to as B in the following discussions.

Job clustering and the Gilmore and Gomory procedure. Many versions of this method can be considered, but we will only consider clustering the jobs into the L-shaped regions defined by {(p, s) : kδ < p, s and (p or s) < (k + 1)δ}, k = 0, 1, . . ., as shown in Fig. 9, where δ = max(amin, 30) is used in our simulation. This version is designated by L−GG. We calculate the server load for each region and apply F if the server load does not exceed 1; otherwise we use B. (This choice is based on previous numerical analysis, as discussed in Abdekhodaee and Wirth [2].) Consider the sequence thus constructed. The first maximal subsequence which contains no idle time (other than I0) is now considered a group. We calculate the load of the remaining jobs (that is, those not yet grouped) in this region, apply the appropriate heuristic and repeat the grouping procedure. We form the ungrouped set from the jobs not successfully grouped at this stage. This procedure is repeated for all the regions and finally for the jobs in the ungrouped set. Jobs still belonging to the ungrouped set at the end of this process are considered as single-job groups. The Gilmore and Gomory procedure is then applied to all the groups.

Genetic algorithm. This will be designated by GA. Based on previous experience outlined in Abdekhodaee [5], we selected a population size of 100, a mutation rate of 0.44, a generation gap of 0.85, a crossover rate of 0.26, an average cut of 0.82 and a tournament size of 50 as the default GA parameters. Each run of the GA consisted of 250 generations.

Iterative Gilmore and Gomory procedure. This will be designated GG(k), where k is the number of times the Gilmore and Gomory procedure is used. We will use GG(n − 1) and GG(1) in our simulations.

We used the Makespan/Lower bound ratio as the performance measure of a schedule, where the lower bound is as stated in Proposition 1. We will first compare the performance of the above heuristics for the cases of 50 jobs and 100 jobs, with loads varying from 0.1 to 2 at intervals of 0.1 and with the setup and

Fig. 10. Worst-case Makespan/Lower bound across loads 0.1 to 2 for 50 jobs. (Series: F, B, GG(1), GA, GG(n−1), L−GG.)

Fig. 11. Average Makespan/Lower bound across loads 0.1 to 2 for 50 jobs.

processing times uncorrelated. The processing times are generated from a discrete uniform distribution U(0, 100). For each load, 50 sets of n jobs are generated, where n ∈ {50, 100}. The worst case and mean performance of these 50 sets of n jobs for each load were evaluated. The results for n = 50 are shown in Figs. 10 and 11, and those for n = 100 in Figs. 12 and 13.

Fig. 12. Worst-case Makespan/Lower bound across loads 0.1 to 2 for 100 jobs.

Fig. 13. Average Makespan/Lower bound across loads 0.1 to 2 for 100 jobs.

The above results indicate that for server loads less than about 0.8 the genetic algorithm appears to provide the best performance. For larger server loads Gilmore–Gomory, in particular GG(n − 1), is best. We note that for very small loads our problem reduces essentially to the classic P 2||Cmax . For this case


many heuristics, such as multifit, are readily available. For loads substantially larger than 1 the problem is solved by any sequence provided the last job is the one with minimal processing time. Zone 5 of Fig. 3 illustrates this.

8. Conclusions

The above results, together with the earlier work in Abdekhodaee and Wirth [2] and Abdekhodaee et al. [4], appear to provide comprehensive heuristic solutions to the general problem P2, S1|pi, si|Cmax. By choosing the appropriate heuristic we are able to achieve a solution which is on average (in the worst case) within 2% (respectively, 5%) of the lower bound. Naturally, many questions, such as how to deal with more than two machines and with more complex setup and processing time patterns, remain to be addressed.

References

[1] Brucker P, Dhaenens-Flipo C, Knust S, Kravchenko SA, Werner F. Complexity results for parallel machine problems with a single server. Journal of Scheduling 2002;5:429–57.
[2] Abdekhodaee AH, Wirth A. Scheduling parallel machines with a single server: some solvable cases and heuristics. Computers and Operations Research 2002;29:295–315.
[3] Koulamas CP. Scheduling two parallel semiautomatic machines to minimise machine interference. Computers and Operations Research 1996;23:945–56.
[4] Abdekhodaee AH, Wirth A, Gan HS. Equal processing and equal setup times cases of scheduling parallel machines with a single server. Computers and Operations Research 2004;31:1867–89.
[5] Abdekhodaee AH. Scheduling jobs with setup times on parallel machines with a single server. PhD dissertation, University of Melbourne; 1999.
[6] Crauwels H. A comparative study of local search methods for one machine sequencing problems. PhD dissertation, Katholieke Universiteit Leuven; 1998.
[7] Gilmore PC, Gomory RE. Sequencing a one state variable machine: a solvable case of the travelling salesman problem. Operations Research 1964;12:655–79.