Journal of Intelligent Manufacturing, 13, 61-67, 2002. © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.

A modified genetic algorithm for the flow shop sequencing problem to minimize mean flow time

LIXIN TANG (1) and JIYIN LIU (2)

(1) Department of System Engineering, Northeastern University, Shenyang, China
(2) Department of Industrial Engineering and Engineering Management, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong

Received January and accepted November 2000

A modified genetic algorithm (MGA) is developed for solving the flow shop sequencing problem with the objective of minimizing mean flow time. To improve the general genetic algorithm (GA) procedure, two additional operations are introduced into the algorithm. One replaces the worst solutions in each generation with the best solutions found in previous generations. The other improves the most promising solution, through local search, whenever the best solution has not been updated for a certain number of generations. Computational experiments on randomly generated problems are carried out to compare the MGA with the general GA and with special-purpose heuristics. The results show that the MGA is superior to the general GA in solution quality with similar computation times. The MGA solutions are also better than those given by the special-purpose heuristics, though the MGA takes longer computation times.

Keywords: Flow shop scheduling, mean flow time, modified genetic algorithm

1. Introduction

Genetic algorithms (GAs) were developed in the 1960s (Holland, 1975) as an adaptive search technique for optimization based on the mechanism of natural selection and evolution. Since then they have been applied to a wide variety of optimization problems. A survey of applications of GAs can be found in Goldberg (1989). Much early work focused on exploring new application areas for GAs, suggesting genetic representations of the problems, and ensuring compatibility of crossover functions with the genetic representations. Many efforts have also been made to improve the performance of the method to better suit the problems, for example on gene pool selection, genetic operator selection, and parameter setting. Although the GA is widely applicable and its performance has been well demonstrated, it remains a difficult task in practical applications to seek a good balance between solution quality and computation time. Further improvements are needed to make it

more cost (time) effective. This paper attempts to improve the GA procedure by modifying its evolution process and to apply the modified GA to the flow shop sequencing problem to minimize the mean flow time.

Many multi-stage production systems can be modeled as flow shops. Therefore, flow shop sequencing problems have been extensively studied. Most of the studies try to minimize the makespan. In today's highly competitive global market, quick response to customer orders is becoming increasingly important. This requires us to use more relevant criteria in making scheduling decisions. An appropriate criterion is to minimize the mean flow time (or total flow time).

The flow shop sequencing problem with the mean flow time criterion can be stated as follows. n jobs are to be processed on a series of m machines. All the jobs are available at time 0. Each machine can process at most one job at a time, and each job can be processed on at most one machine at any time. No preemption is allowed. The processing time t(i, j) is given for any job i on any machine j. Given a permutation (processing sequence) of the jobs {J1, J2, ..., Jn}, the


completion times of the jobs on the machines and the mean flow time C̄ of the jobs in the flow shop can be calculated as follows:

    C(J1, 1) = t(J1, 1)
    C(Ji, 1) = C(J(i-1), 1) + t(Ji, 1),    i = 2, ..., n
    C(J1, j) = C(J1, j-1) + t(J1, j),      j = 2, ..., m
    C(Ji, j) = max{C(J(i-1), j), C(Ji, j-1)} + t(Ji, j),    i = 2, ..., n; j = 2, ..., m

    C̄ = (1/n) Σ_{i=1}^{n} C(Ji, m)

The problem is to find a permutation of the jobs so as to minimize the mean flow time. Optimal solutions to this problem have been sought by Ignall and Schrage (1965), Bansal (1977), and Szwarc (1983). These optimal methods are not applicable to large-sized problems due to the NP-hardness of the problem (Lenstra et al., 1977). Special-purpose heuristic algorithms have been proposed by Gupta (1972), Miyazaki et al. (1978), Ho and Chang (1991), Rajendran and Chaudhuri (1991), and Rajendran (1993). These heuristics can generate reasonably good solutions in a very short time. However, they cannot improve the solutions any further even if more time is allowed. On the other hand, though GAs are improvement algorithms, the general GA may take an impractically long time to reach a solution comparable to these heuristic solutions. In this paper, we propose a modified genetic algorithm (MGA) to produce improved solutions within reasonable computation time. It is intended to achieve a better balance between solution quality and computation time.

In the following sections, we first present the MGA procedure (Section 2). The implementation of the procedure for the flow shop sequencing problem is described in Section 3. Computational experiments are then reported in Section 4. Finally, conclusions are given in Section 5.
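As an illustration of the recursion above, the following sketch (our own helper, not code from the paper) computes the mean flow time of a given permutation:

```python
def mean_flow_time(perm, t):
    """Mean flow time of a permutation in a flow shop.

    perm : processing sequence, a list of job indices (J1, ..., Jn)
    t    : t[i][j] is the processing time of job i on machine j
    """
    n, m = len(perm), len(t[0])
    # C[i][j]: completion time of the i-th job in the sequence on machine j
    C = [[0.0] * m for _ in range(n)]
    for i, job in enumerate(perm):
        for j in range(m):
            done_prev_machine = C[i][j - 1] if j > 0 else 0.0  # job leaves machine j-1
            machine_free = C[i - 1][j] if i > 0 else 0.0       # machine j freed by previous job
            C[i][j] = max(done_prev_machine, machine_free) + t[job][j]
    return sum(C[i][m - 1] for i in range(n)) / n

# A tiny 2-job, 2-machine example
times = [[3, 2],   # job 0 on machines 1 and 2
         [1, 4]]   # job 1 on machines 1 and 2
print(mean_flow_time([1, 0], times))   # 6.0
print(mean_flow_time([0, 1], times))   # 7.0
```

Even on this tiny instance the sequence matters: processing job 1 first reduces the mean flow time from 7.0 to 6.0.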

2. The modified genetic algorithm

2.1. The framework

The general procedure of a GA can be illustrated as in Fig. 1. When a GA is used to solve an optimization problem, it is hoped that it converges quickly. The

Fig. 1. The general GA procedure.

selection and the crossover operations are often designed to achieve this. On the other hand, premature convergence (being trapped in a local optimum) should be avoided. The mutation operation is included for this purpose. However, it is often difficult for the general GA procedure to achieve both short computation time and good solution quality. The MGA presented here is intended to obtain a better trade-off between these two conflicting criteria.

In the MGA procedure, two new operations, called filtering and cultivation, are added. The first operation is added after the selection operation. It replaces the worst solutions selected with the best solutions obtained in previous generations so as to speed up convergence. The other operation is added after the calculation of the fitness values of the new generation to "cultivate" the best solution in the population when there is a sign of premature convergence. The details of these new operations are described below.
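As a concrete illustration, the following minimal sketch shows where filtering and cultivation fit in the evolution loop. It is our own simplification on a toy permutation objective: selection is reduced to a binary tournament and reproduction to a single insertion move, standing in for the PMX crossover and mutation used in the actual implementation.

```python
import random

def modified_ga_sketch(cost, n, pop_size=20, max_gen=200, patience=10, seed=0):
    """Toy MGA loop: a general GA plus the filtering and cultivation steps."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best, stall = list(min(pop, key=cost)), 0
    for gen in range(max_gen):
        # selection (simplified here to a binary tournament)
        parents = [list(min(rng.sample(pop, 2), key=cost)) for _ in range(pop_size)]
        # filtering: replace the two worst selected parents with the current
        # population's best and the best solution recorded so far
        parents.sort(key=cost)
        parents[-1] = list(min(pop, key=cost))
        parents[-2] = list(best)
        # reproduction (an insertion move standing in for crossover + mutation)
        pop = []
        for p in parents:
            child = list(p)
            i, j = rng.randrange(n), rng.randrange(n)
            child.insert(j, child.pop(i))
            pop.append(child)
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best, stall = list(gen_best), 0
        else:
            stall += 1
        # cultivation: one round of adjacent pairwise exchange on the stalled
        # best solution, monitored only in the second half of the run
        if stall > patience and gen > max_gen // 2:
            for i in range(n - 1):
                cand = list(best)
                cand[i], cand[i + 1] = cand[i + 1], cand[i]
                if cost(cand) < cost(best):
                    best = cand
            stall = 0
    return best

# toy objective: total displacement of each element from its sorted position
cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
print(modified_ga_sketch(cost, 6))
```

The two extra steps are deliberately cheap: filtering costs one sort per generation, and cultivation runs only when the best solution has stalled.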


2.2. Filtering

In the selection step of a GA, solutions are selected probabilistically. Although the probabilities are assigned so that good solutions have a better chance of being selected, there is no guarantee that the best solution will be selected. To increase the chance of approaching the optimal solution quickly, in the filtering step of each generation the two worst selected solutions are filtered out before the genetic operations. Their positions are filled with the best solution of the current population and the best solution recorded so far. The following are among the reasons for adding this new step:

(1) According to the GA convergence theorem, if the best solution is retained in each generation, the GA converges to the optimal solution as the number of generations approaches infinity.
(2) Based on general knowledge of natural genetic evolution, if an excellent individual is put into a population, the evolution of the population will be improved.
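In code, the filtering step might look like the following sketch (function and variable names are ours, not from the paper):

```python
def filter_step(selected, pop_best, best_so_far, cost):
    """Replace the two worst selected solutions with the best solution of the
    current population and the best solution recorded so far."""
    out = sorted(selected, key=cost)   # best first, worst last
    out[-1] = list(best_so_far)
    out[-2] = list(pop_best)
    return out

# toy cost of a permutation: total displacement from the sorted order
cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
selected = [[2, 1, 0], [0, 2, 1], [1, 0, 2], [0, 1, 2]]
print(filter_step(selected, [0, 1, 2], [0, 1, 2], cost))
# → [[0, 1, 2], [0, 2, 1], [0, 1, 2], [0, 1, 2]]
```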


2.3. Cultivation

Computation shows that when the best solution remains unimproved for a certain number of generations in a GA process, the solution quality becomes difficult to improve further even if the iteration continues. Therefore, we add the cultivation operation to overcome this. When the number of iterations without improvement of the best solution is greater than a pre-specified constant, premature convergence can be assumed and the cultivation operation is activated. It improves the best solution recorded so far using one round of adjacent pairwise exchange search. The improved solution is put into the next iteration. Since premature convergence is less likely to occur at an early stage of the GA procedure, the monitoring of the best solutions may be started only after the iterations have passed half of the maximum number of generations.

From the filtering operation, we know that the cultivated solution will be passed to the next generation. It brings new features to the population (just like mutation). If the lack of improvement over several generations is due to premature convergence, the new features may make the population start improving again. We note that although cultivation and mutation both diversify the solutions in the population, cultivation always improves the best solution while mutation may improve or deteriorate the solution.

3. Implementation of the MGA for the flow shop sequencing problem

We implemented the MGA for the flow shop sequencing problem based on the structure described above. Implementation details are as follows.

3.1. Genetic representation of solutions

In order to apply a GA to a sequencing problem, a solution is usually represented by a sequence of natural numbers. Each number represents the job in that position of the sequence. This type of genetic representation is used in our implementation of the MGA.

3.2. Creation of initial population

Most GAs in other contexts assume that the initial population is chosen completely at random. However, the efficiency of GAs can be greatly increased by selecting a good initial population. To increase the quality of the initial population, one-fifteenth of the randomly generated solutions in the initial population were improved by one round of local search. The first round of local search can make obvious improvements to an initial solution with minimal computational effort. If the search continues, the effort increases while further improvements are not distinct.

3.3. The fitness function

The selection operation in GAs requires the fitness function to be a maximization function, so that the probability of a solution being selected is proportional to its fitness value. Since our sequencing problem is a minimization problem, the following transformation, following Gupta et al. (1993), is used:

    f(x) = Cmax - C(x)   when C(x) < Cmax
    f(x) = 0             otherwise

where C(x) is the objective value of a solution for our sequencing problem, f(x) is the fitness value of the solution, and Cmax is an estimate of the maximum value among the objective values of all job sequences. Cmax is given as an input parameter in the implementation of the MGA.
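The transformation can be written directly in code (a one-line sketch; the Cmax estimate is supplied by the caller):

```python
def fitness(C_x, C_max):
    """f(x) = Cmax - C(x) when C(x) < Cmax, and 0 otherwise
    (the transformation of Gupta et al., 1993)."""
    return C_max - C_x if C_x < C_max else 0.0

print(fitness(120.0, 500.0))   # 380.0
print(fitness(650.0, 500.0))   # 0.0
```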


3.4. Crossover

Crossover combines elements from two parent (current) solutions to produce child (new) solutions. We use the partially matched crossover (PMX) (Goldberg and Lingle, 1985), which is commonly used for sequencing problems.

3.5. Mutation

Mutation increases the variety in the population in order to escape from local optima. Mutation methods for sequencing problems can be classified into two types: (a) the exchanging method, in which two elements in the sequence are chosen at random and exchanged; and (b) the inserting method, in which one element is chosen randomly and shifted a random number of positions to the right or left. The inserting method is used in our implementation.

3.6. Selection

The selection operation selects solutions from the population according to their fitness values. In many cases, the same individual is selected many times. This reduces the variety of the parents for reproduction and may cause premature convergence. To avoid this, we set a limit in our implementation of the MGA so that any individual can be selected at most twice in each generation.

3.7. Parameter setting

Parameters in the MGA are set as follows: population size 80; probability of crossover 0.99; probability of mutation 0.12; and maximum number of generations 1000.

4. Computational experiments

To evaluate the performance of the MGA, computational experiments were carried out on a large number of randomly generated problem instances. Two recent special-purpose heuristic algorithms, Ho and Chang (1991) and Rajendran (1993), and a general GA were used for comparison. The general GA procedure was implemented in the same way as the MGA except for the two additional operations. The parameters were set the same as in the MGA.

4.1. Generation of problem instances

Problem instances of different sizes were generated for the experiments. To make the problem instances representative, the number of jobs was chosen at five levels ranging from 50 to 150 and the number of machines was chosen at four levels ranging from 5 to 20. Therefore, we had 20 different problem sets. For each problem set, 10 problem instances were generated. In total, 200 problem instances were generated and used in the experiments. The processing times were randomly generated from the uniform distribution over the range 1-500. All the problems were solved using the above-mentioned methods. The performances of these methods were compared in terms of their solution quality and computation time.

4.2. Solution quality

Because none of these algorithms can guarantee an optimal solution to the problem, we do not have an absolute measure of the optimality performance of the algorithms. Therefore, we used a relative quality measure C/C*, where C* is the best among the results for a problem instance given by the four algorithms and C is the result for the instance given by the algorithm being evaluated. The average solution qualities of these algorithms against different problem parameters are shown in Tables 1 and 2, respectively. "GA" in the tables represents the general GA. From the results in the tables, we can see that:

* The MGA performed consistently better than the general GA. This demonstrates that the modifications we made to the GA procedure improve the effectiveness of the algorithm. The average improvement is significant (14.8%).
* The solution quality of the MGA is not affected by the problem structure or problem size as much as that of the general GA. This indicates that the MGA procedure is more robust.


Table 1. Average solution quality with different numbers of jobs

Number of jobs    MGA      GA       Rajendran   Ho and Chang
50                1.0000   1.1245   1.0314      1.1547
75                1.0000   1.1445   1.0318      1.1643
100               1.0000   1.1499   1.0277      1.1761
125               1.0000   1.1601   1.0287      1.1924
150               1.0000   1.1616   1.0243      1.1870
Overall average   1.0000   1.1481   1.0288      1.1749

* The MGA solution quality is better than that of the special-purpose heuristics: 2.88% better than Rajendran and 17.49% better than Ho and Chang on average.
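The instance-generation scheme of Section 4.1 can be sketched as follows (the helper name is ours; the paper gives only the distribution):

```python
import random

def generate_instance(n_jobs, n_machines, rng=random):
    """Random flow shop instance as described in Section 4.1: integer
    processing times drawn uniformly from the range 1-500."""
    return [[rng.randint(1, 500) for _ in range(n_machines)]
            for _ in range(n_jobs)]

# one instance from the smallest problem set (50 jobs, 5 machines)
inst = generate_instance(50, 5, random.Random(42))
```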

4.3. Computation time

Tables 3 and 4 show the average running times of the algorithms against different numbers of jobs and different numbers of machines, respectively. All the times shown are in seconds on a Pentium PC. From the figures in Tables 3 and 4, we have the following observations about the computation time performance of the algorithms:

* The computation time of all these algorithms increases as the problem size increases. The computation times of the MGA and the GA are affected more by the number of jobs than by the number of machines.
* The MGA takes a similar amount of computation time to the general GA, while both GA procedures take much longer than the heuristic algorithms.

Looking at both the solution quality and the computation time performances, the advantages of the special-purpose heuristics and of the GAs can easily be seen. Heuristics can generate reasonable solutions very quickly, and GAs have the potential to obtain much higher quality solutions with more computational effort. While further studies can be done both on developing more effective heuristics and on further improving the GA procedure, an interesting future research topic is how to utilize the quick heuristic solutions in the GA process to make it more computationally efficient.

5. Conclusions

In this paper, an MGA framework has been proposed and implemented for the flow shop sequencing problem with the objective of minimizing mean flow time. Two new operations have been introduced into the framework. In the implementation of the MGA for the flow shop scheduling problem, improvements have also been made to the generation of the initial population. Compared with the general GA, the MGA can converge faster while keeping diversity of solutions in the population. Computational experiments on a large number of representative problem instances showed that the solution quality of the MGA is significantly better than that of the general GA with similar computational effort. With longer computation time, the MGA also produces better solutions than the two special-purpose heuristics tested. The MGA

Table 2. Average solution quality with different numbers of machines

Number of machines   MGA      GA       Rajendran   Ho and Chang
5                    1.0000   1.1717   1.0212      1.2590
10                   1.0000   1.1529   1.0302      1.1727
15                   1.0000   1.1402   1.0314      1.1438
20                   1.0000   1.1278   1.0324      1.1241
Overall average      1.0000   1.1481   1.0288      1.1749


Table 3. Average computation time (s) with different numbers of jobs

Number of jobs   MGA       GA        Rajendran   Ho and Chang
50               62.425    64.775    0.375       0.250
75               107.475   106.875   0.925       0.425
100              166.125   155.900   1.975       0.600
125              242.325   212.150   3.650       0.900
150              338.575   274.250   6.125       1.225

Table 4. Average computation time (s) with different numbers of machines

Number of machines   MGA       GA        Rajendran   Ho and Chang
5                    97.620    114.620   1.220       0.320
10                   155.980   147.660   2.140       0.580
15                   212.980   179.580   3.080       0.800
20                   266.960   209.300   4.000       1.020

framework is general enough to be applied to other optimization problems. A similar procedure has been applied to the single machine scheduling problem with ready times (Liu and Tang, 1999). Given that the special-purpose heuristics generate reasonably good solutions very quickly, an interesting future research topic is how to utilize the quick heuristic solutions in the GA process to make it more computationally efficient.

Acknowledgment

This research is supported by the National Natural Science Foundation of China (Grant No. 79700006), the National 863/CIMS Research Scheme of China (Grant No. 863-511-708-009), and a HKUST Direct Allocation Grant (Grant No. DAG94/95.E19).

References

Bansal, S. P. (1977) Minimizing the sum of completion times of n jobs over m machines in a flowshop - a branch and bound approach. AIIE Transactions, 9, 306-311.
Goldberg, D. E. (1989) Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA.
Goldberg, D. E. and Lingle, R. (1985) Alleles, loci and the traveling salesman problem. In Grefenstette (ed.), Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Pittsburgh, PA, pp. 154-159.
Gupta, J. N. D. (1972) Heuristic algorithms for multistage flowshop scheduling problem. AIIE Transactions, 4, 11-18.
Gupta, M. C., Gupta, Y. P. and Kumar, A. (1993) Minimizing flow time variance in a single machine system using genetic algorithms. European Journal of Operational Research, 81, 289-303.
Ho, J. C. and Chang, Y.-L. (1991) A new heuristic for the n-job, m-machine flow-shop problem. European Journal of Operational Research, 52, 194-202.
Holland, J. H. (1975) Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor.
Ignall, E. and Schrage, L. (1965) Application of the branch and bound technique to some flowshop scheduling problems. Operations Research, 13, 400-412.
Lenstra, J. K., Rinnooy Kan, A. H. G. and Brucker, P. (1977) Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1, 343-362.
Liu, J. and Tang, L. (1999) A modified genetic algorithm for single machine scheduling. Computers and Industrial Engineering, 37, 43-46.
Miyazaki, S., Nishiyama, N. and Hashimoto, F. (1978) An adjacent pairwise approach to the mean flowtime scheduling problem. Journal of the Operations Research Society of Japan, 21, 287-299.
Rajendran, C. (1993) Heuristic algorithm for scheduling in a flowshop to minimize total flowtime. International Journal of Production Economics, 29, 65-73.

Rajendran, C. and Chaudhuri, D. (1991) An efficient heuristic approach to the scheduling of jobs in a flowshop. European Journal of Operational Research, 61, 318-325.


Szwarc, W. (1983) The flowshop problem with mean completion time criterion. AIIE Transactions, 15, 172-176.

A modi®ed genetic algorithm for the ¯ow shop sequencing problem to minimize mean ¯ow time L I X I N TA N G 1 and J I Y I N L I U 2 1

Department of System Engineering, Northeastern University, Shenyang, China Department of Industrial Engineering and Engineering Management, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong 2

Received January and accepted November 2000

A modi®ed genetic algorithm (MGA) is developed for solving the ¯ow shop sequencing problem with the objective of minimizing mean ¯ow time. To improve the general genetic algorithm (GA) procedure, two additional operations are introduced into the algorithm. One replaces the worst solutions in each generation with the best solutions found in previous generations. The other improves the most promising solution, through local search, whenever the best solution has not been updated for a certain number of generations. Computational experiments on randomly generated problems are carried out to compare the MGA with the general GA and special-purpose heuristics. The results show that the MGA is superior to general GA in solution quality with similar computation times. The MGA solutions are also better than those given by special-purpose heuristics though MGA takes longer computation time. Keywords: Flow shop scheduling, mean ¯ow time, modi®ed genetic algorithm

1. Introduction Genetic algorithms (GAs) were developed in the 1960s (Holland, 1975), as an adaptive searching technique for optimization, based on the mechanism of natural selection and evolution. Since then they have been applied to a wide variety of optimization problems. A survey on applications of GAs can be found in Goldberg (1989). Much early work is focused on exploring new application areas of GA, suggesting genetic representations of the problems and ensuring compatibility of crossover functions with the genetic representations. Many efforts have also been made on improving the performance of the method to better suit the problems, for example, on gene pool selection, genetic operator selection, parameter setting, etc. Although GA is widely applicable and its performance has been well demonstrated, it still remains a dif®cult task in practical applications to seek a good balance between solution quality and computation time. Further improvements are needed to make it

more cost (time) effective. This paper attempts to improve the GA procedure by modifying its evolution process and to apply the modi®ed GA to the ¯ow shop sequencing problem to minimize the mean ¯ow time. Many multi-stage production systems can be modeled as ¯ow shops. Therefore, ¯ow shop sequencing problems have been extensively studied. Most of the studies try to minimize the makespan. In today's highly competitive global market, quick response to customer orders becomes increasing important. This requires us to use more relevant criteria in making scheduling decisions. An appropriate criterion is to minimize the mean ¯ow time (or total ¯ow time). The ¯ow shop sequencing problem with mean ¯ow time criterion can be stated as follows. n jobs are to be processed on a series of m machines. All the jobs are available at time 0. Each machine can process at most one job at a time, and each job can be processed on at most one machine at any time. No preemption is allowed. The processing time t i; j is given for any job i on any machine j. Given a permutation ( processing sequence) of the jobs fJ1 ; J2 ; . . . ; Jn g, the

62

Tang and Liu

completion times of the jobs on the machines and the mean ¯ow time C of the jobs in the ¯ow shop can be calculated as follows: C J1 ; 1 t J1 ; 1 C Ji ; 1 C Ji 1 ; 1 t Ji ; 1;

i 2; . . . ; n

C J1 ; j C J1 ; j

j 2; . . . ; m

1 t J1 ; j;

C Ji ; j maxfC Ji 1 ; j; C Ji ; j i 2; . . . ; n; j 2; . . . ; m C

1g t Ji ; j

n 1X C i; m n i1

The problem is to ®nd a permutation of jobs so as to minimize the mean ¯ow time. Optimal solution to this problem has been attempted by Ignall and Schrage (1965), Bansal (1977), and Szwarc (1983). These optimal methods are not applicable to large sized problems due to the NP-hardness of the problem (Lenstra et al., 1977). Special purpose heuristic algorithms have been proposed by Gupta (1972), Miyazaki et al. (1978), Ho and Chang (1991) and Rajendran and Chaudhuri (1991), and Rajendran (1993). These heuristics can generate reasonable good solutions in a very short time. However, they themselves cannot improve the solutions any further even if time is allowed. On the other hand, though GAs are improvement algorithms, it may take unpractical long time for the general GA to get a solution comparable to these heuristic solutions. In this paper, we propose a modi®ed genetic algorithm (MGA) to produce improved solutions with reasonable computation time. It is intended to achieve a better balance between solution quality and computation time. In the following sections, we ®rst present the MGA procedure (Section 2). The implementation of the procedure for the ¯ow shop sequencing problem is described in Section 3. Computational experiments are then reported in Section 4. Finally, conclusions are given in Section 5.

2. The modi®ed genetic algorithm 2.1. The framework The general procedure of GA can be illustrated as in Fig. 1. When GA is used to solve an optimization problem, it is hoped to converge quickly. The

Fig. 1. The general GA procedure.

selection and the crossover operations are often designed to achieve this. On the other hand, premature convergence (being trapped in local optimum) should be avoided. The mutation operation is included for this purpose. However, it is often dif®cult for the general GA procedure to achieve both (short computational time and good solution quality). The MGA presented here intends to obtain a better trade-off between these two con¯ict criteria. In the MGA procedure, two new operations, called ®ltering and cultivation, are added. The ®rst operation is added after the selection operation. It replaces the worst solutions selected with the best solutions obtained in the previous generations so as to speed up the convergence. The other operation is added after the calculation of ®tness values of the new generation to ``cultivate'' the best solution in the population when there is a sign of premature convergence. The details of these new operations are described below.

A modi®ed genetic algorithm for the ¯ow shop sequencing problem

2.2. Filtering In the selection step of GA, solutions are selected with probability. Although the probabilities are so assigned that good solutions have better chance to be selected, there is no guarantee that the best solution will be selected. To increase the chance for the optimal solution being approached quickly, in the ®ltering step of each generation, the two worst selected solutions are ®ltered out before genetic operations. Their positions are ®lled with the best solution of the current population and the best solution recorded so far. The following are among the reasons for adding this new step: (1) According to GA convergence theorem in the general sense, if the best solution is held in each generation, then the GA converges to optimal solution when the number of generations approaches in®nite. (2) Based on the general knowledge of natural genetic evolution, if an excellent individual is put into a population, evolution of the population will be improved.

63

population, cultivation always improves the best solution while mutation may improve or deteriorate the solution.

3. Implementation of the MGA for the ¯ow shop sequencing problem We implemented the MGA for the ¯ow shop sequencing problem based on the structure described above. Implementation details are as follows. 3.1. Genetic representation of solutions In order to apply GA to a sequencing problem, a solution is usually represented by a sequence of natural numbers. Each number represents the job in that position of the sequence. This type of genetic representation is used here in the implementation of the MGA. 3.2. Creation of initial population

2.3. Cultivation Computation shows that when the best solution keeps unimproved for a certain number of generations in a GA process, the solution quality will be dif®cult to be improved further even if the iteration continues. Therefore, we add the cultivation operation to overcome this. When the number of iterations without improving the best solution is greater than a pre-speci®ed constant, premature convergence can be assumed and the cultivation operation is activated. It improves the best solution recorded so far using one round of adjacent pairwise exchange search. The improved solution is put into next iteration. Since premature convergence is less likely to occur at early stage of the GA procedure, The monitoring of the best solutions may be started only after the iterations have passed half of the maximum generation. From the ®ltering operation, we know that the solution cultivated will be passed to the next generation. It brings new features to the population ( just like mutation). If the fact of no improvement for certain generations is due to premature convergence, the new feature may make the population start improving again. We note that although cultivation and mutation both diversify the solutions in the

Most GAs in other contexts assume that the initial population is chosen completely at random. However, the ef®ciency of GAs can be greatly increased by selecting a good initial population. To increase the quality of the initial population, one-®fteenth of the randomly generated solutions in the initial population was improved by one round of local search. The ®rst round of local search can make obvious improvements on the initial solution with minimal computation effort. If the search continues, the efforts will increase while further improvements are not distinct. 3.3. The ®tness function The selection operation in GAs requests the ®tness function to be a maximization function so that the probability for a solution being selected is proportional to its ®tness value. Since our sequencing problem is a minimization problem, the following transformation, following Gupta et al. (1993), is used. f x Cmax C x when C x5Cmax 0 otherwise where C x is the objective value of a solution for our sequencing problem, f x is the ®tness value of the

64 solution, Cmax is an estimation of the maximum value among the objective values of all the job sequences. Cmax is given as an input parameter in the implementation of the MGA.

Tang and Liu

3.4. Crossover

Crossover combines elements from two parent (current) solutions to produce child (new) solutions. We use the partially matched crossover (PMX) (Goldberg and Lingle, 1985), which is commonly used for sequencing problems.

3.5. Mutation

Mutation increases the variety in the population in order to escape from local optima. Mutation methods for sequencing problems can be classified into two types: (a) the exchanging method, in which two elements in the sequence are chosen at random and exchanged; and (b) the inserting method, in which one element is chosen randomly and shifted a random number of positions to the right or left. The inserting method is used in our implementation.

3.6. Selection

The selection operation selects solutions from the population according to their fitness values. In many cases, the same individual is selected many times. This reduces the variety of the parents for reproduction and may cause premature convergence. To avoid this, we set a limit in our implementation of the MGA so that any individual can be selected at most twice in each generation.

3.7. Parameter setting

Parameters in the MGA are set as follows: population size 80; probability of crossover 0.99; probability of mutation 0.12; and maximum number of generations 1000.

4. Computational experiments

To evaluate the performance of the MGA, computational experiments were carried out on a large number of randomly generated problem instances. Two recent special-purpose heuristic algorithms, Ho and Chang (1991) and Rajendran (1993), and a general GA were used for comparison. The general GA procedures were implemented in the same way as the MGA except for the two additional operations, and the parameters were set the same as in the MGA.

4.1. Generation of problem instances

Problem instances of different sizes were generated for the experiments. To make the problem instances representative, the number of jobs was chosen at five levels ranging from 50 to 150, and the number of machines at four levels ranging from 5 to 20, giving 20 different problem sets. For each problem set, 10 problem instances were generated; in total, 200 problem instances were generated and used in the experiments. The processing times were randomly drawn from the uniform distribution over the range 1–500. All the problems were solved using the above-mentioned methods, and the performances of the methods were compared in terms of solution quality and computation time.

4.2. Solution quality

Because none of these algorithms can guarantee an optimal solution to the problem, we do not have an absolute measure of the optimality performance of the algorithms. Therefore, we used a relative quality measure C/C*, where C* is the best among the results for a problem instance given by the four algorithms and C is the result for the instance given by the algorithm being evaluated. The average solution qualities of these algorithms against different problem parameters are shown in Tables 1 and 2, respectively. "GA" in the tables represents the general GA. From the results in the tables, we can see that:

* The MGA performed consistently better than the general GA. This demonstrates that the modifications we made to the GA procedure improve the effectiveness of the algorithm. The average improvement is significant (14.8%).
* The solution quality of the MGA is not affected by the problem structure or problem size as much as that of the general GA. This indicates that the MGA procedure is more robust.
* The MGA solution quality is better than that of the special-purpose heuristics: 2.88% better than Rajendran and 17.49% better than Ho and Chang on average.

Table 1. Average solution quality with different number of jobs

Number of jobs     MGA      GA       Rajendran   Ho and Chang
50                 1.0000   1.1245   1.0314      1.1547
75                 1.0000   1.1445   1.0318      1.1643
100                1.0000   1.1499   1.0277      1.1761
125                1.0000   1.1601   1.0287      1.1924
150                1.0000   1.1616   1.0243      1.1870
Overall average    1.0000   1.1481   1.0288      1.1749
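The relative quality measure C/C* of Section 4.2 can be computed per instance as in this minimal sketch; the function name and the sample objective values are illustrative, not data from the paper.

```python
def relative_quality(results):
    """results[alg] = objective value C for one instance, per algorithm.
    Returns {alg: C / C*}, where C* is the best (smallest) value found."""
    c_star = min(results.values())
    return {alg: c / c_star for alg, c in results.items()}

# Hypothetical objective values for one instance:
scores = relative_quality({"MGA": 1000, "GA": 1148, "Rajendran": 1029})
```

The best algorithm on an instance therefore scores exactly 1.0, and the table entries are averages of these ratios over the 10 instances of each problem set.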

Table 2. Average solution quality with different number of machines

Number of machines   MGA      GA       Rajendran   Ho and Chang
5                    1.0000   1.1717   1.0212      1.2590
10                   1.0000   1.1529   1.0302      1.1727
15                   1.0000   1.1402   1.0314      1.1438
20                   1.0000   1.1278   1.0324      1.1241
Overall average      1.0000   1.1481   1.0288      1.1749

4.3. Computation time

Tables 3 and 4 show the average running times of the algorithms against different numbers of jobs and different numbers of machines, respectively. All the times shown are in seconds on a Pentium PC.

Table 3. Average computation time with different number of jobs

Number of jobs   MGA       GA        Rajendran   Ho and Chang
50               62.425    64.775    0.375       0.250
75               107.475   106.875   0.925       0.425
100              166.125   155.900   1.975       0.600
125              242.325   212.150   3.650       0.900
150              338.575   274.250   6.125       1.225

Table 4. Average computation time with different number of machines

Number of machines   MGA       GA        Rajendran   Ho and Chang
5                    97.620    114.620   1.220       0.320
10                   155.980   147.660   2.140       0.580
15                   212.980   179.580   3.080       0.800
20                   266.960   209.300   4.000       1.020

From the figures in Tables 3 and 4, we have the following observations about the computation time performance of the algorithms:

* The computation time of all these algorithms increases as the problem size increases. The computation times of the MGA and GA are more affected by the number of jobs than by the number of machines.
* The MGA takes a similar amount of computation time to the general GA, while both GA procedures take much longer than the heuristic algorithms.

Looking at both the solution quality and the computation time performance, the advantages of the special-purpose heuristics and of the GAs can easily be seen. Heuristics can generate reasonable solutions very quickly, and GAs have the potential to obtain much higher quality solutions with more computational effort. While further studies can be done both on developing more effective heuristics and on further improving the GA procedure, an interesting future research topic is how to utilize the quick heuristic solutions in the GA process to make it more computationally efficient.

5. Conclusions

In this paper, an MGA framework has been proposed and implemented for the flow shop sequencing problem with the objective of minimizing mean flow time. Two new operations have been introduced into the framework. In the implementation of the MGA for the flow shop scheduling problem, improvements have also been made in the generation of the initial population. Compared to the general GA, the MGA can converge faster while keeping the diversity of solutions in the population. Computational experiments on a large number of representative problem instances showed that the solution quality of the MGA is significantly better than that of the general GA with similar computational effort. With longer computation time, the MGA also produces better solutions than the two special-purpose heuristics tested. The MGA framework is general enough to be applied to other optimization problems; a similar procedure has been applied to the single machine scheduling problem with ready times (Liu and Tang, 1999). Given the fact that the special-purpose heuristics generate reasonably good solutions very quickly, an interesting future research topic is how to utilize the quick heuristic solutions in the GA process to make it more computationally efficient.

Acknowledgment

This research is supported by the National Natural Science Foundation of China (Grant No. 79700006), the National 863/CIMS Research Scheme of China (Grant No. 863-511-708-009), and a HKUST Direct Allocation Grant (Grant No. DAG94/95.E19).

References

Bansal, S. P. (1977) Minimizing the sum of completion times of n jobs over m machines in a flowshop: a branch and bound approach. AIIE Transactions, 9, 306-311.
Goldberg, D. E. (1989) Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA.
Goldberg, D. E. and Lingle, R. (1985) Alleles, loci and the traveling salesman problem. In Grefenstette (ed.), Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Pittsburgh, PA, pp. 154-159.
Gupta, J. N. D. (1972) Heuristic algorithms for multistage flowshop scheduling problem. AIIE Transactions, 4, 11-18.
Gupta, M. C., Gupta, Y. P. and Kumar, A. (1993) Minimizing flow time variance in a single machine system using genetic algorithms. European Journal of Operational Research, 81, 289-303.
Ho, J. C. and Chang, Y.-L. (1991) A new heuristic for the n-job, m-machine flow-shop problem. European Journal of Operational Research, 52, 194-202.
Holland, J. H. (1975) Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor.
Ignall, E. and Schrage, L. (1965) Application of the branch and bound technique to some flowshop scheduling problems. Operations Research, 13, 400-412.
Lenstra, J. K., Rinnooy Kan, A. H. G. and Brucker, P. (1977) Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1, 343-362.
Liu, J. and Tang, L. (1999) A modified genetic algorithm for single machine scheduling. Computers and Industrial Engineering, 37, 43-46.
Miyazaki, S., Nishiyama, N. and Hashimoto, F. (1978) An adjacent pairwise approach to the mean flowtime scheduling problem. Journal of the Operations Research Society of Japan, 21, 287-299.
Rajendran, C. (1993) Heuristic algorithm for scheduling in a flowshop to minimize total flowtime. International Journal of Production Economics, 29, 65-73.
Rajendran, C. and Chaudhuri, D. (1991) An efficient heuristic approach to the scheduling of jobs in a flowshop. European Journal of Operational Research, 61, 318-325.
Szwarc, W. (1983) The flowshop problem with mean completion time criterion. AIIE Transactions, 15, 172-176.