May 12, 2009 20:6 WSPC/APJOR

00215.tex

Asia-Pacific Journal of Operational Research Vol. 26, No. 2 (2009) 161–184
© World Scientific Publishing Co. & Operational Research Society of Singapore

A PARTICLE SWARM OPTIMIZATION ALGORITHM ON JOB-SHOP SCHEDULING PROBLEMS WITH MULTI-PURPOSE MACHINES

PISUT PONGCHAIRERKS∗
Industrial Engineering Program, Sirindhorn International Institute of Technology,
Thammasat University, Pathum Thani 12121, Thailand
[email protected]

VORATAS KACHITVICHYANUKUL
Industrial Engineering and Management, School of Engineering and Technology,
Asian Institute of Technology, Pathum Thani 12120, Thailand
[email protected]

Received 15 May 2007
Accepted 26 April 2008

This paper is a contribution to research that aims to provide an efficient optimization algorithm for job-shop scheduling problems with multi-purpose machines (MPMJSP). To meet this objective, the paper proposes a new variant of the particle swarm optimization algorithm, called GLN-PSOc, which extends the standard particle swarm optimization algorithm by using multiple social learning topologies in its evolutionary process. GLN-PSOc is a metaheuristic that can be applied to many types of optimization problems, MPMJSP being one of them. To apply GLN-PSOc to MPMJSP, a procedure that maps a particle position into a solution of MPMJSP is proposed. Throughout this paper, GLN-PSOc combined with this procedure is named MPMJSP-PSO. The performance of MPMJSP-PSO is evaluated on well-known benchmark instances, and the numerical results show that MPMJSP-PSO performs well in terms of solution quality, finding new best known solutions for some instances of the test problems.

Keywords: Particle swarm optimization; job-shop scheduling; MPMJSP; makespan.

1. Introduction

Scheduling is a decision-making process that is important in most manufacturing and service industries. It is used in many areas such as production planning, transportation and logistics, and communication and information processing. Many

∗Corresponding author.


scheduling problems are complex and cannot be solved to optimality in polynomial time. One of these is the job-shop scheduling problem with multi-purpose machines (MPMJSP). The problem comes with a given set of jobs, where each job consists of a chain of operations, and a set of multi-purpose machines, each equipped with different tools that enable it to serve more than one purpose. Associated with each operation is a set of machines that can process it, and each operation must be processed during an uninterrupted time period of a given length. MPMJSP attempts to minimize the latest completion time of the given jobs; this latest completion time is called the makespan throughout this paper.

The particle swarm optimization algorithm (PSO) is one of the well-known metaheuristics. In order to improve its search performance, this paper proposes a new variant of the particle swarm optimization algorithm, called GLN-PSOc, which extends the standard particle swarm optimization algorithm by using multiple social learning topologies in its evolutionary process. GLN-PSOc is a generic algorithm that can be applied to many types of optimization problems, MPMJSP being one of them. In order to apply GLN-PSOc to MPMJSP, a mapping procedure is proposed; GLN-PSOc combined with this mapping procedure is named MPMJSP-PSO throughout this paper. One advantage of the proposed algorithm is that it always generates feasible solutions without requiring any repair method. The performance of MPMJSP-PSO is evaluated on well-known benchmark instances and compared with the results of a tabu-search algorithm from the literature; the numerical results show that MPMJSP-PSO performs well in terms of solution quality.

The remainder of this paper is organized as follows. Section 2 describes the job-shop scheduling problem with multi-purpose machines (MPMJSP), the MPMJSP benchmark instances, the class of parameterized active schedules, and relevant existing research on PSO. Section 3 presents the PSO variant proposed in this research, called GLN-PSOc. Section 4 presents a procedure for mapping GLN-PSOc to MPMJSP. Section 5 presents the parameter settings for the proposed algorithm. Section 6 presents the performance evaluation of MPMJSP-PSO (i.e., GLN-PSOc combined with the mapping procedure of Sec. 4). Finally, Sec. 7 concludes the findings of the research.

2. Literature Review

This section describes the job-shop scheduling problem with multi-purpose machines (MPMJSP) together with its benchmark instances, and reviews relevant PSO research, in order to provide a common understanding of the past development of the scheduling problem and of the algorithms developed prior to the algorithm proposed in the later sections of this paper.


2.1. Problem description

The job-shop scheduling problem with multi-purpose machines (MPMJSP) can be stated as follows: there are n jobs J1, J2, J3, . . . , Jn and m machines M1, M2, M3, . . . , Mm. Each job Ji consists of m operations Oi1, Oi2, Oi3, . . . , Oij, . . . , Oim, where Oij represents the jth operation of the ith job. This order of operations is predefined and cannot be changed. Furthermore, operation Oij has to be processed by exactly one machine from a set of machines Aij ⊆ {M1, M2, M3, . . . , Mm} during pij time units without preemption. The following constraints must be satisfied: (1) no machine can handle two or more jobs simultaneously at any given time; (2) no job can be processed on more than one machine at any given time. The problem is to find a feasible allocation of all operations to time intervals on the given machines such that the makespan is minimized; such an allocation of operations to time intervals is known as a schedule.

Several previous studies address this problem. Brucker and Schlie (1990) derive a polynomial-time algorithm for the case of 2 jobs and m machines. Sotskov (1991) shows the problem to be NP-hard whenever there are at least three machines. Jurisch (1992) presents a branch and bound algorithm for several groups of problem instances of different sizes in the numbers of machines and jobs; the results show that the branch and bound algorithm yields good solutions only for small instances, and in many cases it cannot improve the initial solution within a predefined time limit. Hurink et al. (1994), on the other hand, introduce a tabu-search heuristic for large instances, which yields good results on several benchmark problems. Interesting MPMJSP literature also includes Brucker et al. (1997) and Brucker (2004).
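To make the problem data concrete, the sketch below shows one way an MPMJSP instance might be held in code. The structure and field names are our own illustration, not taken from the paper, and the tiny 2-job/2-machine instance is hypothetical.

```python
# A hypothetical 2-job, 2-machine MPMJSP instance. Each job is a list of
# operations in their fixed order; each operation O_ij carries the set A_ij
# of machines that may process it and its processing time p_ij. Field names
# ("eligible", "time") are illustrative, not from the paper.
instance = {
    "machines": ["M1", "M2"],
    "jobs": {
        "J1": [  # operations O11, O12 in their predefined order
            {"eligible": {"M1"}, "time": 3},        # O11: only M1 can process it
            {"eligible": {"M1", "M2"}, "time": 2},  # O12: a multi-purpose choice
        ],
        "J2": [
            {"eligible": {"M2"}, "time": 4},        # O21
            {"eligible": {"M1", "M2"}, "time": 1},  # O22
        ],
    },
}
```

A schedule then assigns each operation one machine from its eligible set and a start time satisfying constraints (1) and (2); the makespan is the latest completion time over all jobs.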

2.2. MPMJSP benchmark instances

Three sets of benchmark instances from Jurisch (1992) are used to evaluate the performance of the algorithm proposed in this paper. Each set includes 43 instances, and the sets have different characteristics, as follows:

EDATA set: The instances in the EDATA set are very similar to classical job-shop scheduling problems. The number of operations which can be processed by different machines is small, and only a small number of machines can process an operation.

RDATA set: In each instance of the RDATA set, the number of operations which can be processed by different machines is large, and the maximum number of machines which can process an operation is small.


VDATA set: In this set, each instance satisfies the property that both the number of operations which can be processed by different machines and the maximum number of machines which can process an operation are large.

To describe the characteristics of these data sets, let |Aij|avg denote the average number of machines which can process an operation, and |Aij|max the maximum number of machines which can process an operation. The main characteristics of these sets of instances are summarized in Table 1.

2.3. Parameterized active schedules

A schedule is an allocation of the operations to given time intervals on the machines. Three well-known classes of schedules are defined below:

Active schedule: a feasible schedule in which no operation can be started earlier without delaying some other operation.

Non-delay schedule: an active schedule in which no machine is kept idle when it could start processing some operation.

Parameterized active schedule: an active schedule in which a machine may be left idle between operations only within a predefined length of time.

Although an optimal schedule always lies in the set of all active schedules, this set is very large and contains many schedules with unnecessary delay times, and hence of low quality in terms of makespan. On the other hand, the set of all non-delay schedules is much smaller, but it may not contain an optimal schedule. In order to adjust the solution space, several studies, such as Storer et al. (1992) and Bierwirth and Mattfeld (1999), used the concept of parameterized active schedules. The basic idea of parameterized active schedules is to control the maximum delay time allowed for each operation so that the solution space is either reduced or enlarged. The set of parameterized active schedules is efficient in its form and can improve the performance of metaheuristic approaches.
However, previous research on the class of parameterized active schedules covers only general job-shop scheduling problems (Storer et al., 1992; Bierwirth and Mattfeld, 1999; Mattfeld and Bierwirth, 2004; Gonçalves et al., 2005). These works paved the way for further studies on MPMJSP.

Table 1. Characteristics of the benchmark instances of Jurisch (1992).

Set of instances    Characteristics
EDATA               |Aij|avg = 1.15, and |Aij|max = 2 when m ≤ 6, |Aij|max = 3 when m ≥ 10
RDATA               |Aij|avg = 2, and |Aij|max = 3
VDATA               |Aij|avg = 0.5 m, and |Aij|max = 0.8 m


2.4. Previous studies of particle swarm optimization

Particle swarm optimization (PSO) is a population-based stochastic search algorithm. Since PSO was first introduced by Kennedy and Eberhart (1995), it has been successfully applied to various combinatorial problems such as the traveling salesman problem (Clerc, 2004), the flow-shop scheduling problem (Liao et al., 2007; Tasgetiren et al., 2007; Liu et al., 2007), and the job-shop scheduling problem (Xia and Wu, 2005). The underlying motivation for the development of the PSO algorithm is the social behavior of animals, such as fish schooling and bird flocking, and its implementation is based on the social sharing of information among individuals in a population. The algorithm starts with a population of particles; each particle consists of a potential solution (called a position in PSO) and a velocity. Each particle moves around the D-dimensional search space with a velocity that is dynamically adjusted based on its own moving experience and that of its neighbors. To describe the PSO algorithm, the following notations and terminologies are defined:

Uniform[0, 1]: u represents a uniform random number on the interval [0, 1]; this interval is used throughout this paper.

Particle: an element of the population. It has a position and a velocity; it knows its position and the objective function value of that position; it also remembers its personal best position, as well as its neighbors' best positions and their objective function values.

Population: a set of K particles with indices {1, . . . , r, . . . , K} in the PSO system, where r ∈ Z+; also known as a swarm.

Position: Xi(t) denotes the position of the ith particle at the tth iteration, represented by a D-dimensional point Xi(t) = (xi1(t), . . . , xid(t), . . . , xiD(t)), where xid(t) is the position value of the dth dimension of the ith particle at the tth iteration. Each position Xi(t) represents a solution of a specific problem.

Fitness value: f(Xi(t)) represents the objective function value of the solution decoded from the position Xi(t), also called the fitness value of the particle position Xi(t).

Velocity: Vi(t) denotes the velocity of the ith particle at the tth iteration, represented by a D-dimensional vector Vi(t) = (vi1(t), . . . , vid(t), . . . , viD(t)), where vid(t) is the velocity value of the dth dimension of the ith particle at the tth iteration. Vi(t + 1) can be viewed as the rate at which the ith particle moves from position Xi(t) to position Xi(t + 1).

Maximum velocity: Vmax defines a bound on the velocities; each vid(t) must lie within the range [−Vmax, Vmax].


Inertia weight: w(t) is a parameter employed to control the impact of the particles' previous velocities on their current velocities.

Personal best position: Pi denotes the position found by the ith particle that has the best fitness value, represented by Pi = (pi1, . . . , pid, . . . , piD).

Global best position: Pg is the position with the best fitness value found by any particle in the population, where Pg = (pg1, . . . , pgd, . . . , pgD).

Local best position: Pli represents the local best position of the ith particle, represented by the D-dimensional point Pli = (pli1, . . . , plid, . . . , pliD), i.e., plid is the local best position of the ith particle in the dth dimension. Pli is the best position found by the k adjacent neighbors of the ith particle, where k is an odd positive integer. To define the k adjacent neighbors of a particle, the particle indices {1, . . . , r, . . . , K} are arranged on a cycle, as illustrated in Fig. 1. The set of k adjacent neighbors of the ith particle then consists of the closest (k − 1)/2 neighboring particles in the counterclockwise direction from the ith particle, the closest (k − 1)/2 neighboring particles in the clockwise direction, and the ith particle itself. For example, if the population size K = 10 and the number of adjacent neighbors k = 5, then the 5 adjacent neighbors of the 1st particle are particles 9, 10, 1, 2, and 3; those of the 2nd particle are particles 10, 1, 2, 3, and 4; those of the 3rd particle are particles 1, 2, 3, 4, and 5; and so on.

Near neighbor best position: Pni represents the near neighbor best position (Veeramachaneni et al., 2003) of the ith particle, a D-dimensional point defined as follows. For a minimization problem, Pni = (pni1, . . . , pnid, . . . , pniD) with pnid = pjd chosen to maximize the Fitness-Distance-Ratio FDR(j, i, d) given in Eq. (2.1):

FDR(j, i, d) = (f(Xi) − f(Pj)) / |pjd − xid|,  such that j ≠ i, with i and d as predefined,  (2.1)

where pjd is the personal best position of the jth particle in the dth dimension and f(Pj) is its fitness value.

[Fig. 1. Adjacent neighborhood: particle indices 1, 2, . . . , K arranged on a cycle.]
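As a minimal sketch of how the near neighbor best value of Eq. (2.1) might be computed for a minimization problem (0-based particle indices; function and argument names are our own):

```python
def near_neighbor_best(i, d, x, fx, pbest, fp):
    """Near neighbor best value p_nid for particle i in dimension d:
    among all j != i, pick p_jd maximizing
    FDR(j, i, d) = (f(X_i) - f(P_j)) / |p_jd - x_id|  (Eq. (2.1)).
    x/fx are current positions and their fitness values; pbest/fp are
    personal best positions and their fitness values."""
    best_val, best_fdr = None, float("-inf")
    for j in range(len(pbest)):
        if j == i:
            continue
        dist = abs(pbest[j][d] - x[i][d])
        if dist == 0:
            continue  # FDR is undefined when the positions coincide in dimension d
        fdr = (fx[i] - fp[j]) / dist
        if fdr > best_fdr:
            best_fdr, best_val = fdr, pbest[j][d]
    return best_val
```

Skipping the zero-distance case is our own guard; the paper leaves that degenerate case unspecified.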


Acceleration constant: the acceleration constant tied to each best position (personal best, global best, local best, and near neighbor best) is a parameter that weights that position's influence on the velocity value: cp is the acceleration constant of the personal best position, cg of the global best position, cl of the local best position, and cn of the near neighbor best position.

The standard PSO algorithm was originally developed by Kennedy and Eberhart (1995) and later modified by Shi and Eberhart (1998) with the introduction of an inertia weight w; this has become the most widely used version of PSO. The standard PSO uses only the global best position to define the relationship among particles within the population. It uses Eqs. (2.2) and (2.3) to compute the new velocities and positions, respectively:

vid(t + 1) = w(t)vid(t) + cp u(pid − xid(t)) + cg u(pgd − xid(t)),
vid(t + 1) = −Vmax if vid(t + 1) ≤ −Vmax; vid(t + 1) = Vmax if vid(t + 1) ≥ Vmax,  (2.2)

xid(t + 1) = xid(t) + vid(t + 1).  (2.3)
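The update of Eqs. (2.2) and (2.3) can be sketched for one particle as follows. This is our illustrative reading, drawing a fresh uniform random number per term (common PSO practice; the paper's u notation leaves this implicit) and using 0-based dimension indices:

```python
import random

def pso_step(x, v, pbest, gbest, w, cp, cg, vmax):
    """One standard-PSO update: new velocity from inertia, personal-best
    and global-best terms (Eq. (2.2)), clamped to [-Vmax, Vmax], then
    added to the position (Eq. (2.3))."""
    for d in range(len(x)):
        v[d] = (w * v[d]
                + cp * random.random() * (pbest[d] - x[d])
                + cg * random.random() * (gbest[d] - x[d]))
        v[d] = max(-vmax, min(vmax, v[d]))  # velocity clamp of Eq. (2.2)
        x[d] = x[d] + v[d]                  # Eq. (2.3)
    return x, v
```

Note that when the particle sits exactly at both best positions, the social terms vanish and only the inertia term remains.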

During the past years, researchers have explored several variants of the PSO algorithm (e.g., the local best PSO, FDR-PSO, and GLN-PSO) by modifying Eq. (2.2), which updates the velocities in the standard PSO. The local best PSO (Kennedy, 1999) is a variant of the standard PSO that uses the local best position instead of the global best position. The equation for updating the velocity in the local best PSO is:

vid(t + 1) = w(t)vid(t) + cp u(pid − xid(t)) + cl u(plid − xid(t)),
with vid(t + 1) clamped to [−Vmax, Vmax] as in Eq. (2.2).  (2.4)

For FDR-PSO (Veeramachaneni et al., 2003), the particle velocity at the next iteration is the sum of four terms, based on the current velocity, the personal best position, the global best position, and the near neighbor best position. The equation for updating the velocity in FDR-PSO is:

vid(t + 1) = w(t)vid(t) + cp u(pid − xid(t)) + cg u(pgd − xid(t)) + cn u(pnid − xid(t)),
with vid(t + 1) clamped to [−Vmax, Vmax] as in Eq. (2.2).  (2.5)

GLN-PSO (Pongchairerks and Kachitvichyanukul, 2005, 2006) simultaneously uses the personal best position, global best position, local best position, and near


neighbor best position to update the particle velocities. The main modification is the additional social learning terms in Eq. (2.6). The advantages of these additional terms are as follows. First, they help avoid quick clustering of the swarm, which would trap it in a local optimum, by using several neighborhood best positions as reference points. Second, the local best and near neighbor best positions work similarly to a division of the particles into subpopulations with differently defined neighborhoods. The swarm can thus explore various regions of the search space simultaneously and perform well on complex problems. In GLN-PSO, the particle velocity for the next iteration is derived from five terms: the current velocity, personal best position, global best position, local best position, and near neighbor best position. The equation for updating the velocity in GLN-PSO is:

vid(t + 1) = w(t)vid(t) + cp u(pid − xid(t)) + cg u(pgd − xid(t)) + cl u(plid − xid(t)) + cn u(pnid − xid(t)),
with vid(t + 1) clamped to [−Vmax, Vmax] as in Eq. (2.2).  (2.6)
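The ring neighborhood of Fig. 1 and the resulting local best position used in Eqs. (2.4) and (2.6) can be sketched as follows. This is a minimal illustration for a minimization problem; the 1-based indexing mirrors the paper, and the helper names are our own:

```python
def ring_neighbors(i, K, k):
    """Indices (1-based, on a cycle of K particles) of the k adjacent
    neighbors of particle i: (k - 1)/2 on each side, plus i itself."""
    half = (k - 1) // 2
    return [(i - 1 + off) % K + 1 for off in range(-half, half + 1)]

def local_best(i, K, k, pbest, fp):
    """Local best position P_li of particle i: the personal best position
    with the lowest fitness fp (minimization) among its k ring-adjacent
    neighbors. pbest and fp use a dummy entry at index 0 so indices match
    the paper's 1-based convention."""
    j = min(ring_neighbors(i, K, k), key=lambda r: fp[r])
    return pbest[j]
```

With K = 10 and k = 5 this reproduces the paper's example: the neighbors of particle 1 are particles 9, 10, 1, 2, and 3.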

3. GLN-PSOc Algorithm

GLN-PSOc is a variant of GLN-PSO that randomly selects some particles and updates their positions using a crossover operator instead of their velocities. The purpose of this modification is to maintain the diversity of the swarm and keep it from premature convergence to a local optimum. The implementation procedure of GLN-PSOc is explained below.

At each iteration of GLN-PSOc, each particle is randomly chosen, based on a pre-specified probability pc, to determine whether the traditional GLN-PSO equations or the alternative equations, which perform a crossover of the particle's current position with the global best position, will be used to update it. That is, pc is the probability of choosing Eqs. (3.3) and (3.4), and its complement 1 − pc is the probability of choosing Eqs. (3.1) and (3.2). At each iteration, each particle selected with probability 1 − pc updates its velocity and position by Eqs. (3.1) and (3.2):

vid(t + 1) = w(t)vid(t) + cp u(pid − xid(t)) + cg u(pgd − xid(t)) + cl u(plid − xid(t)) + cn u(pnid − xid(t)),
with vid(t + 1) clamped to [−Vmax, Vmax] as in Eq. (2.2),  (3.1)

xid(t + 1) = xid(t) + vid(t + 1).  (3.2)


On the other hand, each particle selected with probability pc has its new position generated by a uniform crossover of its current position with the global best position; its velocity is not updated in this iteration. The formulae below give the velocity and position of such a particle:

vid(t + 1) = vid(t),  (3.3)

xid(t + 1) = xid(t) if u < pu; xid(t + 1) = pgd otherwise,  (3.4)

where pu is the probability that xid(t + 1) equals xid(t), and its complement 1 − pu is the probability that xid(t + 1) equals pgd. In this paper, pu = 0.7.

The procedure for the GLN-PSOc algorithm is presented below:

(1) Set the current iteration t = 1. Initialize the positions and velocities of K particles in a D-dimensional search space;
(2) For each particle, decode its position into a solution of the specific problem and evaluate the objective function value of that solution as the fitness value of the position. The decoding procedure is problem-specific; the procedure for MPMJSP is elaborated in Sec. 4 of this paper;
(3) Update the personal best positions;
(4) Update the global best position;
(5) Update the local best positions;
(6) Update the near neighbor best positions;
(7) For each particle, determine whether Eqs. (3.1) and (3.2) or Eqs. (3.3) and (3.4) will be used to update its velocity and position, based on the predefined probability pc;
(8) Update the velocities and positions of all particles based on the selection in Step 7;
(9) If the stopping criterion is met, stop. Otherwise, set t = t + 1 and repeat from Step 2.
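Steps (7) and (8) above can be sketched for a single particle as follows. This is an illustrative reading of Eqs. (3.1)–(3.4) (function and argument names are our own), assuming the best positions have already been updated in Steps (3)–(6) and drawing a fresh uniform random number per term:

```python
import random

def glnpsoc_step(x, v, pbest, gbest, lbest, nbest,
                 w, cp, cg, cl, cn, vmax, pc, pu=0.7):
    """One GLN-PSOc update for a single particle. With probability 1 - pc,
    apply the velocity/position update of Eqs. (3.1)-(3.2); with probability
    pc, keep the velocity (Eq. (3.3)) and build the new position by uniform
    crossover with the global best (Eq. (3.4)), keeping x_id with
    probability pu (pu = 0.7 in the paper)."""
    if random.random() < pc:
        # Crossover branch, Eqs. (3.3)-(3.4): velocity is left unchanged
        x = [x[d] if random.random() < pu else gbest[d] for d in range(len(x))]
    else:
        # GLN-PSO branch, Eqs. (3.1)-(3.2): four social learning terms
        for d in range(len(x)):
            v[d] = (w * v[d]
                    + cp * random.random() * (pbest[d] - x[d])
                    + cg * random.random() * (gbest[d] - x[d])
                    + cl * random.random() * (lbest[d] - x[d])
                    + cn * random.random() * (nbest[d] - x[d]))
            v[d] = max(-vmax, min(vmax, v[d]))  # clamp to [-Vmax, Vmax]
            x[d] = x[d] + v[d]
    return x, v
```

Setting pc = 0 recovers plain GLN-PSO; pc = 1 would update positions by crossover only, with velocities frozen.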

4. Application of GLN-PSOc to MPMJSP

The GLN-PSOc algorithm presented in Sec. 3 is a generic optimizer and is thus applicable to many types of optimization problems. The key issue in a GLN-PSOc application is how to decode a particle position into a solution of the specific problem. This section therefore introduces a mapping procedure for applying GLN-PSOc to MPMJSP; the combination is called MPMJSP-PSO. The particle representation used in the MPMJSP-PSO algorithm is a random keys representation, similar to that of the genetic algorithm proposed by Bean (1994).


A random keys representation encodes a solution with random numbers that are used as sort keys to decode the solution. The important feature of random keys is that every particle position represents a feasible solution, without requiring any repair method. In MPMJSP-PSO, the particle positions are decoded into parameterized active schedules; the procedure is described in Sec. 4.2.

The implementation procedure of MPMJSP-PSO is similar to that of GLN-PSOc and is specified in three key steps, i.e., population initialization, solution decoding, and fitness value evaluation. These key steps are described in turn as follows.

4.1. Population initialization

This corresponds to Step 1 of the GLN-PSOc procedure (Sec. 3). In this research, the number of dimensions D of MPMJSP-PSO is set to the total number of operations of the MPMJSP instance (i.e., D = m × n). The initial velocities vid(1) and positions xid(1) are randomly generated in a specified interval for all i = 1, . . . , K and d = 1, . . . , D. In this paper, xid(1) = u and vid(1) = 0.

4.2. Solution decoding

The procedure to decode a particle position into a parameterized active schedule consists of two steps. The first step decodes a particle position into an operation-based permutation; the second step then decodes the permutation obtained from the first step into a parameterized active schedule.

4.2.1. Decoding a particle position into an operation-based permutation

This section presents a method to decode a particle position X = (x1, x2, x3, . . . , xmn) into an operation-based permutation π = (π1, . . . , πa, . . . , πmn), which is a representation of operation priorities. For an n-job/m-machine instance, the operation-based permutation (Gen et al., 1994) is a sequence of mn integers in which each job index appears exactly m times.
To simplify the decoding method, let X be presented by the table below, where the first row presents the dimension indices and the second row presents the values of the dimensions; thus, the table has mn columns.

dimension:  1    2    3    ...  mn
x:          x1   x2   x3   ...  xmn

Similarly, π can be presented by the table below.

dimension:  1    2    3    ...  mn
π:          π1   π2   π3   ...  πmn


The method to decode a particle position into an operation-based permutation is presented below:

(1) Sort the X dimensions (i.e., the values in the first row) by their values (i.e., the values in the second row) in ascending order;
(2) To construct the operation-based permutation, the values of the π dimensions are then assigned through the following steps:
(a) Arrange the π dimensions in the same order as the X dimensions sorted in Step (1);
(b) Assign 1 as the values of the π dimensions in the first m columns, 2 in the second m columns, 3 in the third m columns, and so on.

Some examples of operation-based permutations decoded from particle positions by this method are presented below.

For a 2-job, 2-machine instance, suppose the particle position X = (0.2, 0.7, 0.8, 0.4) has to be decoded into an operation-based permutation. The position X is presented by the table below:

dimension:  1    2    3    4
x:          0.2  0.7  0.8  0.4

Based on the decoding method above, the operation-based permutation is built as follows:

Step (1) — Sort the X dimensions by their values in ascending order:

dimension:  1    4    2    3
x:          0.2  0.4  0.7  0.8

Step (2) — Construct the operation-based permutation.

a. Arrange the π dimensions in the same order as the X dimensions sorted in Step (1):

dimension:  1    4    2    3
π:          —    —    —    —

b. Assign the π dimension values:

dimension:  1    4    2    3
π:          1    1    2    2

Equivalently, in dimension order:

dimension:  1    2    3    4
π:          1    2    2    1

As a result, X = (0.2, 0.7, 0.8, 0.4) → π = (1, 2, 2, 1).

For a 3-job, 2-machine instance, suppose the particle position X = (0.9, 0.2, 0.7, 0.6, 0.8, 0.3) is to be decoded into an operation-based permutation.


Thus, the particle position is presented by the table below:

dimension:  1    2    3    4    5    6
x:          0.9  0.2  0.7  0.6  0.8  0.3

Then the operation-based permutation is constructed, based on the decoding method, as follows:

Step (1) — Sort the X dimensions by their values in ascending order:

dimension:  2    6    4    3    5    1
x:          0.2  0.3  0.6  0.7  0.8  0.9

Step (2) — Construct the operation-based permutation.

a. Arrange the π dimensions in the same order as the X dimensions sorted in Step (1):

dimension:  2    6    4    3    5    1
π:          —    —    —    —    —    —

b. Assign the π dimension values:

dimension:  2    6    4    3    5    1
π:          1    1    2    2    3    3

Equivalently, in dimension order:

dimension:  1    2    3    4    5    6
π:          3    1    2    2    3    1

As a result, X = (0.9, 0.2, 0.7, 0.6, 0.8, 0.3) → π = (3, 1, 2, 2, 3, 1).

4.2.2. Decoding an operation-based permutation into a parameterized active schedule

An operation-based permutation π, decoded from a particle position X as described in Sec. 4.2.1, represents the operation priorities for constructing a parameterized active schedule. The relationship between the operation priorities and the operation-based permutation π is as follows: a number i appearing in the permutation π stands for job Ji; for example, 1 stands for job J1, 2 for job J2, 3 for job J3, and so on. Scanning the permutation digits from left to right, the jth occurrence of a job index refers to the jth operation of that job, and an operation appearing at an earlier position in the operation-based permutation is assigned a higher priority.
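The decoding of Sec. 4.2.1 together with the priority interpretation above can be sketched as follows (helper names are our own; the assertions replay the paper's two worked examples):

```python
def decode_position(x, m):
    """Decode a particle position into an operation-based permutation
    (Sec. 4.2.1): sort the dimensions by value, label the first m ranks 1,
    the next m ranks 2, and so on, then read the labels back in dimension
    order."""
    order = sorted(range(len(x)), key=lambda d: x[d])   # Step (1)
    pi = [0] * len(x)
    for rank, d in enumerate(order):                    # Step (2)
        pi[d] = rank // m + 1                           # job index for this slot
    return pi

def priority_list(pi):
    """Interpret a permutation as operation priorities: the j-th occurrence
    (left to right) of job i is operation O_ij; earlier means higher priority."""
    seen, ops = {}, []
    for i in pi:
        seen[i] = seen.get(i, 0) + 1
        ops.append((i, seen[i]))  # (job, operation index) = O_ij
    return ops

# The paper's worked examples:
assert decode_position([0.2, 0.7, 0.8, 0.4], m=2) == [1, 2, 2, 1]
assert decode_position([0.9, 0.2, 0.7, 0.6, 0.8, 0.3], m=2) == [3, 1, 2, 2, 3, 1]
assert priority_list([1, 2, 2, 1]) == [(1, 1), (2, 1), (2, 2), (1, 2)]
```

For π = (1, 2, 2, 1), the priority order O11 > O21 > O22 > O12 matches the 2-job/2-machine example below.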


Some examples showing the interpretation of a permutation π as operation priorities are presented below:

2-job/2-machine instance: π = (1, 2, 2, 1) → priority of O11 > priority of O21 > priority of O22 > priority of O12.

3-job/2-machine instance: π = (3, 1, 2, 2, 3, 1) → priority of O31 > priority of O11 > priority of O21 > priority of O22 > priority of O32 > priority of O12.

Given the above decoding process, the mapping procedure of MPMJSP-PSO proceeds as follows. MPMJSP-PSO uses the operation priorities given by the permutation π to construct a parameterized active schedule. In this paper, the method to construct a parameterized active schedule is modified from that of Bierwirth and Mattfeld (1999). The method schedules one operation at a time; an operation is schedulable if all the operations that precede it have already been scheduled. The method iterates for m × n stages, since there are m × n operations. Following the notation of French (1987), at stage t, let

Pt = the partial schedule of the (t − 1) scheduled operations,
St = the set of operations schedulable at stage t, i.e., all the operations that must precede those in St are in Pt,
pij = the processing time of operation Oij,
σij = the earliest time at which operation Oij in St could be started,
φij = the earliest time at which operation Oij in St could be completed, where φij = σij + pij, and
δ = the bound on the length of time a machine is allowed to remain idle.

Note that δ ∈ [0, 1] is a parameter that must be predefined by the user; it is an additional parameter, on top of those of the GLN-PSOc algorithm, that defines MPMJSP-PSO. At the boundaries, the MPMJSP-PSO algorithm with δ = 0 generates non-delay schedules, while with δ = 1 it generates active schedules. The method to construct a parameterized active schedule using the operation priorities given by π is presented below:

(1) Let t = 1 with P1 being null.
Initially, St includes all operations with no predecessors; (2) Find φ∗ = minOij ∈St {φij }, σ ∗ = minOij ∈St {σij }, and the machine M ∗ on which φ∗ occurs. (If there are at least two choices for M ∗ , select the lowest-index machine among those choices);

(3) Select an operation O∗ ∈ St such that
— O∗ is processed on M∗, and
— σO∗ < σ∗ + δ(φ∗ − σ∗) for δ > 0, or σO∗ = σ∗ for δ = 0, where σO∗ denotes the σ of O∗
(if there are at least two choices for O∗, select the highest-priority operation among them according to the sequence in π);
(4) Move to the next stage by the following steps:
(4.1) Create Pt+1 by adding O∗ to Pt;
(4.2) Create St+1 by deleting O∗ from St and adding the direct successor of operation O∗;
(4.3) Set t = t + 1;
(5) Repeat from step (2) until a complete schedule is constructed.

4.3. Evaluate fitness value of particle position

At each iteration, in Step 2 of the GLN-PSOc procedure, the fitness value of a particle position f(Xi) is equal to the makespan of the schedule constructed in Sec. 4.2.

5. Parameter Setting

As already mentioned, GLN-PSOc proposed in Sec. 3 is a variant of GLN-PSO (Pongchairerks and Kachitvichyanukul, 2006) that randomly selects some particles and updates their positions with crossover operators, and MPMJSP-PSO is GLN-PSOc combined with the mapping procedure of Sec. 4. This paper therefore takes the parameter values of MPMJSP-PSO from the previous literature (Pongchairerks and Kachitvichyanukul, 2006), except for pc (the crossover probability) and δ (the parameter in the mapping procedure). The parameter values taken from that literature are shown below:

— Population size K = 40;
— Maximum iteration T = 1000;
— Number of adjacent neighbors k = 7;
— Acceleration constants cp = 0.5, cg = 0.5, cl = 1.5, cn = 1.5;
— Maximum velocity Vmax = the width of the initial position interval divided by 4 = (1 − 0)/4 = 0.25;
— Inertia weight w linearly decreased from 0.9 at t = 1 to 0.4 at t = T;
— The algorithm stops when either the maximum iteration is reached or the optimal solution is found.

The probability pc is then set to 0.3, the value that gave the best result in an experiment over five values of pc (pc ∈ {0.1, 0.2, 0.3, 0.4, 0.5}) and the 129 instances of Jurisch


(1992), which adds up to 645 runs. In this experiment, the algorithm searches the set of all non-delay schedules (thus δ = 0), and the other parameters are fixed at the values mentioned above. Finally, the values of δ that return the best results are fitted to the model shown in Eq. (5.1), which uses the ratio n/m as the factor in its terms. To construct the model, an experiment is run on 21 values of δ (δ ∈ {0.0, 0.025, 0.05, 0.075, . . . , 0.5}) and the 129 instances of Jurisch (1992), with the other parameters, including pc, fixed at the values mentioned above.

δ = 0.05(n/m)² − 0.4(n/m) + 0.8       for EDATA
δ = 0.025(n/m)² − 0.225(n/m) + 0.45   for RDATA        (5.1)
δ = −0.05(n/m) + 0.1                  for VDATA

with δ set to 0 whenever the expression above gives δ ≤ 0.

6. Performance Evaluation

This section evaluates the performance of MPMJSP-PSO under the following conditions:

— The parameter values used in the experiment are taken from Sec. 5;
— Three sets of benchmark instances are used: EDATA, RDATA, and VDATA (Jurisch, 1992);
— MPMJSP-PSO is run 30 times on each instance. The results presented in this section contain the best solution value found in 30 runs (Best), the average of the solution values found in 30 runs (Average), and the average computation time in seconds until the stopping criterion is met (T̄);
— The MPMJSP-PSO results are obtained on a Pentium M 1.60 GHz processor, with the algorithm coded in C#;
— The MPMJSP-PSO results are compared with those of the tabu-search algorithm N1-1000 (Hurink et al., 1994).

6.1. Results for the instances of EDATA set

The results of MPMJSP-PSO for the EDATA instances are presented in Table 2. MPMJSP-PSO gives better results on 21 instances, the two algorithms give equal results on 10 instances, and N1-1000 gives better results on 11 instances. Based on the results shown in Table 2 (highlighted in the table), MPMJSP-PSO performs better than N1-1000 on the EDATA instances.
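The piecewise δ model of Eq. (5.1) above is simple to apply in code. The sketch below is an illustration only (the function name is ours, not from the paper); the coefficients are copied directly from Eq. (5.1), and a non-positive value is clamped to zero as the equation specifies:

```python
def delta(n, m, data_set):
    """Return the schedule parameter delta of Eq. (5.1) for an
    n-job, m-machine instance of the given benchmark set."""
    r = n / m  # the model uses the job-to-machine ratio as its factor
    if data_set == "EDATA":
        d = 0.05 * r**2 - 0.4 * r + 0.8
    elif data_set == "RDATA":
        d = 0.025 * r**2 - 0.225 * r + 0.45
    elif data_set == "VDATA":
        d = -0.05 * r + 0.1
    else:
        raise ValueError("unknown benchmark set: " + data_set)
    return max(d, 0.0)  # Eq. (5.1) sets delta = 0 whenever delta <= 0
```

For a 5-machine, 20-job EDATA instance (n/m = 4), the model gives δ = 0.05·16 − 0.4·4 + 0.8 = 0, i.e. the search is restricted to non-delay schedules.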
Moreover, MPMJSP-PSO found new best known solutions for four instances: l29, l32, l36, and l39. The results for the EDATA benchmark instances are summarized in Table 3. The first column shows the problem type, and the second shows the number of instances of that type. The third column presents the average deviation (%) between the best known solution value and the best solution value found by each algorithm, and the fourth column shows the number of instances for which each algorithm found the best known solution.
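The deviation column described above is a standard relative gap; it can be reproduced with a few lines. The helper below is hypothetical (not from the paper) and simply computes the gap of a found makespan from the best known value, here illustrated with the m06 and m10 rows of the N1-1000 column in Table 2:

```python
def deviation_pct(found, best_known):
    """Relative gap (%) of a found makespan from the best known value."""
    return 100.0 * (found - best_known) / best_known

# Example with the m06 and m10 EDATA rows (N1-1000 column of Table 2):
gaps = [deviation_pct(57, 55), deviation_pct(917, 871)]
avg_gap = sum(gaps) / len(gaps)  # average deviation over the instances
```

Averaging such per-instance gaps over each problem type yields the percentages reported in Tables 3, 5, and 7.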

Table 2. Results for the MPMJSP benchmark instances of EDATA set.

Instance   Size (m × n)   Best known   N1-1000   ---- MPMJSP-PSO ----
                                                 Best    Average   T̄ (sec.)
m06        6×6            55           57        55*     55.0      0.02
m10        10×10          871          917       892     925.5     62.9
m20        5×20           1088         1109      1116    1159.6    66.8
l01        5×10           609          611       609*    609.6     6.2
l02        5×10           655          655*      655*    674.6     18.7
l03        5×10           550          573       567     567.9     19.0
l04        5×10           568          578       582     588.2     20.1
l05        5×10           503          503*      503*    503.0     0.6
l06        5×15           833          833*      833*    833.0     1.3
l07        5×15           762          765       765     778.6     36.7
l08        5×15           845          845*      845*    845.9     14.6
l09        5×15           878          878*      878*    878.4     14.4
l10        5×15           866          866*      866*    866.0     3.1
l11        5×20           (1103)       1106      1103*   1105.5    61.7
l12        5×20           960          960*      960*    960.0     4.7
l13        5×20           1053         1053*     1053*   1053.0    9.4
l14        5×20           1123         1151      1123*   1123.0    18.8
l15        5×20           1111         1111*     1111*   1125.0    60.1
l16        10×10          892          924       893     914.1     58.0
l17        10×10          707          757       707*    727.2     57.3
l18        10×10          842          864       847     851.7     60.5
l19        10×10          796          850       820     830.0     59.1
l20        10×10          857          919       859     877.1     58.4
l21        10×15          (1054)       1066      1057    1079.9    128.0
l22        10×15          (905)        919       912     926.4     129.2
l23        10×15          (972)        980       994     1012.9    130.0
l24        10×15          (929)        952       939     963.4     127.8
l25        10×15          (960)        970       974     996.5     127.7
l26        10×20          (1149)       1169      1173    1198.4    218.2
l27        10×20          (1222)       1230      1247    1270.9    220.5
l28        10×20          (1180)       1204      1195    1230.0    229.0
l29        10×20          (1175)←      1210      1175*   1210.5    231.3
l30        10×20          (1234)       1253      1262    1296.8    223.3
l31        10×30          (1552)       1596      1620    1648.2    489.0
l32        10×30          (1743)←      1769      1743*   1763.2    475.5
l33        10×30          1547         1575      1578    1610.8    474.4
l34        10×30          (1604)       1627      1662    1676.5    476.2
l35        10×30          1736         1736*     1736*   1755.6    450.0
l36        15×15          (1202)←      1247      1202*   1225.5    272.5
l37        15×15          (1418)       1453      1425    1448.0    271.8
l38        15×15          (1176)       1185      1209    1240.2    272.7
l39        15×15          (1220)←      1226      1220*   1256.4    273.8
l40        15×15          (1195)       1214      1197    1229.4    277.3

( ) represents a best known solution value for which no provably optimal solution exists. A solution is bold if it is the best in the comparison and is marked * if it equals the upper bound value. A best known solution is marked with ← if it was found by this research.

Table 3. Summary of the results for EDATA benchmark instances.

Problem type   # Instances   --- Deviation (%) ---    # Best known found
(m × n)                      N1-1000   MPMJSP-PSO     N1-1000   MPMJSP-PSO
6×6            1             3.6       0.0            0         1
5×10           5             1.3       1.1            2         3
5×15           5             0.1       0.1            4         4
5×20           6             0.8       0.4            3         5
10×10          6             5.4       1.1            0         1
10×15          5             1.4       1.2            0         0
10×20          5             1.8       1.5            0         1
10×30          5             1.5       2.0            1         2
15×15          5             1.8       0.7            0         2
All            43            1.9       1.0            10        19

[Fig. 2. Convergence rate of MPMJSP-PSO for EDATA set: average deviation (%) from the best known solution versus iteration (1–1000), for problem types 10×10, 10×15, and 15×15.]

Based on the information shown in Table 3, MPMJSP-PSO gives better results for seven problem types in EDATA, while N1-1000 performs better for only one problem type. The convergence behavior of MPMJSP-PSO for each problem type in the EDATA set is shown in Fig. 2, which plots the average deviation between the best known solution values and the solution values found by MPMJSP-PSO against the number of iterations for the three problem types 10 × 10, 10 × 15, and 15 × 15. For the 10 × 10 type, convergence is very fast over the first 200 iterations and nearly stable thereafter. The larger 10 × 15 and 15 × 15 types converge quickly over the first 400 iterations and more slowly over the next 200; only after the 600th iteration does the convergence rate become nearly stable.
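To complement these results, the decoding and schedule-construction procedure of Sec. 4.2 that underlies them can be sketched compactly. The code below is an illustrative reimplementation, not the authors' code: every name is ours, each operation is given a single eligible machine (the classical job-shop special case of MPMJSP), and, as a simplification, σ∗ is taken over the operations on M∗ only so that the candidate set is never empty:

```python
def decode(pi):
    """Read a permutation with repetition left to right: the k-th
    occurrence of job j denotes operation O_jk.  Returns the operations
    in priority order."""
    seen, order = {}, []
    for job in pi:
        seen[job] = seen.get(job, 0) + 1
        order.append((job, seen[job]))  # (job, operation index)
    return order

def build_schedule(jobs, pi, delta):
    """jobs[j] = list of (machine, processing time) in routing order.
    Builds a parameterized active schedule in the spirit of steps
    (1)-(5) of Sec. 4.2; returns (makespan, start-time dict)."""
    priority = {op: r for r, op in enumerate(decode(pi))}
    next_op = {j: 1 for j in jobs}    # next unscheduled operation per job
    job_ready = {j: 0 for j in jobs}  # completion time of job's last op
    mc_ready = {}                     # completion time of machine's last op
    start = {}
    n_ops = sum(len(route) for route in jobs.values())
    for _ in range(n_ops):
        sched = []                    # the schedulable set S_t
        for j, k in next_op.items():
            if k <= len(jobs[j]):
                mc, p = jobs[j][k - 1]
                s = max(job_ready[j], mc_ready.get(mc, 0))
                sched.append(((j, k), mc, s, s + p))
        phi = min(c for (_, _, _, c) in sched)   # earliest completion phi*
        m_star = min(mc for (_, mc, _, c) in sched if c == phi)
        cand = [x for x in sched if x[1] == m_star]
        sig = min(s for (_, _, s, _) in cand)    # sigma*, restricted to M*
        # keep operations startable within sigma* + delta * (phi* - sigma*)
        cand = [x for x in cand if x[2] <= sig + delta * (phi - sig)]
        (j, k), mc, s, c = min(cand, key=lambda x: priority[x[0]])
        start[(j, k)] = s
        job_ready[j] = c
        mc_ready[mc] = c
        next_op[j] = k + 1
    return max(job_ready.values()), start
```

Tie-breaking among candidates follows the priority order induced by π, and δ interpolates between non-delay-style (δ = 0) and active (δ = 1) schedule generation, mirroring the boundary behavior described in Sec. 4.2.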


6.2. Results for the instances of RDATA set

Table 4 presents the results of MPMJSP-PSO for the RDATA instances. On this set, MPMJSP-PSO finds new best known solutions for three instances: m20, l12, and l15. The results for the RDATA instances are summarized in Table 5. Based on the information shown in Table 5, MPMJSP-PSO gives better results than N1-1000 for the three problem types where the number of machines is small (five or six), while N1-1000 gives better results for the five problem types where the number of machines is large (ten or more). The convergence behavior of MPMJSP-PSO for the problem types 10 × 10, 10 × 15, and 15 × 15 is presented in Fig. 3. For the 10 × 10 type, convergence is very fast over the first 200 iterations and more stable thereafter; for the larger problem types, however, convergence is noticeably slower than for the smaller one.

6.3. Results for the instances of VDATA set

Table 6 presents the results of MPMJSP-PSO for the VDATA instances. On this set, MPMJSP-PSO finds a new best known solution for the instance l14. The results for the VDATA instances are summarized in Table 7. Again, MPMJSP-PSO gives better results for the three problem types where the number of machines is small (five or six), while N1-1000 gives better results for the five problem types where the number of machines is large (ten or more). The convergence behavior of

[Fig. 3. Convergence rate of MPMJSP-PSO for RDATA set: average deviation (%) from the best known solution versus iteration, for problem types 10×10, 10×15, and 15×15.]

Table 4. Results for the MPMJSP benchmark instances of RDATA set.

Instance   Size (m × n)   Best known   N1-1000   ---- MPMJSP-PSO ----
                                                 Best    Average   T̄ (sec.)
m06        6×6            47           47*       47*     47.0      0.1
m10        10×10          (707)        737       724     740.2     61.2
m20        5×20           (1025)←      1028      1025*   1027.9    63.2
l01        5×10           (571)        574       574     581.8     19.2
l02        5×10           (530)        535       534     537.6     18.7
l03        5×10           (478)        481       480     485.6     18.8
l04        5×10           502          509       506     512.3     19.0
l05        5×10           457          460       459     461.5     19.0
l06        5×15           (800)        801       800*    802.7     38.2
l07        5×15           (750)        752       750*    753.0     39.0
l08        5×15           (766)        767       767     769.3     39.2
l09        5×15           (854)        859       854*    857.6     40.2
l10        5×15           (805)        806       806     809.6     39.4
l11        5×20           1071         1073      1072    1074.0    66.1
l12        5×20           (936)←       937       936*    938.6     66.4
l13        5×20           1038         1039      1039    1041.0    66.5
l14        5×20           1070         1071      1070*   1073.2    66.8
l15        5×20           (1090)←      1093      1090*   1093.5    66.4
l16        10×10          717          717*      732     766.0     63.8
l17        10×10          646          646*      654     663.3     62.9
l18        10×10          666          674       694     706.4     63.1
l19        10×10          (703)        725       730     759.4     63.0
l20        10×10          756          756*      756*    767.9     63.0
l21        10×15          (861)        861*      916     935.4     132.7
l22        10×15          (782)        790       839     861.6     134.4
l23        10×15          (867)        884       892     922.3     133.1
l24        10×15          (818)        825       870     887.9     132.3
l25        10×15          (811)        823       858     880.5     133.3
l26        10×20          (1073)       1086      1114    1133.8    227.3
l27        10×20          (1101)       1109      1141    1172.7    226.8
l28        10×20          (1087)       1097      1135    1161.4    226.5
l29        10×20          (1005)       1016      1046    1067.4    226.9
l30        10×20          (1099)       1105      1148    1176.2    198.9
l31        10×30          (1527)       1532      1549    1560.1    490.0
l32        10×30          (1667)       1668      1691    1701.0    491.4
l33        10×30          (1502)       1511      1530    1539.7    490.4
l34        10×30          (1540)       1542      1556    1567.5    490.9
l35        10×30          (1552)       1559      1577    1593.7    498.6
l36        15×15          (1037)       1054      1119    1159.1    277.5
l37        15×15          (1095)       1122      1190    1216.8    277.2
l38        15×15          (983)        1004      1063    1093.3    277.7
l39        15×15          (1041)       1041*     1131    1149.9    277.7
l40        15×15          (980)        1009      1057    1095.1    277.5

( ) represents a best known solution value for which no provably optimal solution exists. A solution is bold if it is the best in the comparison and is marked * if it equals the upper bound value. A best known solution is marked with ← if it was found by this research.


Table 5. Summary of the results for RDATA benchmark instances.

Problem type   # Instances   --- Deviation (%) ---    # Best known found
(m × n)                      N1-1000   MPMJSP-PSO     N1-1000   MPMJSP-PSO
6×6            1             0.0       0.0            1         1
5×10           5             0.8       0.6            0         0
5×15           5             0.3       0.1            0         3
5×20           6             0.2       0.0            0         4
10×10          6             1.4       2.3            3         1
10×15          5             1.1       5.7            1         0
10×20          5             0.9       4.1            0         0
10×30          5             0.3       1.5            0         0
15×15          5             1.8       8.2            1         0
All            43            0.8       2.7            6         9

MPMJSP-PSO for the problem types 10 × 10, 10 × 15, and 15 × 15 is presented in Fig. 4. The figure reveals similar convergence behavior across the three problem types. Compared with the earlier Figs. 2 and 3, however, all problem types in Fig. 4 converge more slowly; the reason is that the initial solution values for VDATA are already very close to the best known solution values. In summary, MPMJSP-PSO gives better results than N1-1000 for most problem types in EDATA. For RDATA and VDATA, MPMJSP-PSO gives better results for the problem types where the number of machines is small, while N1-1000 performs better for the problem types where the number of machines is large. On average, the convergence rate of MPMJSP-PSO is fast during the first 200 iterations, slows from the 201st to the 600th iteration, and becomes stable thereafter.

7. Conclusions

This paper set out to provide an efficient optimization method for job-shop scheduling problems with multi-purpose machines (MPMJSP). To this end, the research proposed a variant of the particle swarm optimization algorithm named GLN-PSOc, which is a generic optimizer. To apply GLN-PSOc to MPMJSP, the authors designed a procedure that maps the position of a particle into a solution of the MPMJSP problem; GLN-PSOc combined with this mapping procedure is named MPMJSP-PSO throughout this paper. MPMJSP-PSO was tested on three sets of benchmark instances, namely EDATA, RDATA, and VDATA, taken from Jurisch (1992), and the test results were compared with those of the efficient tabu-search approach N1-1000 of Hurink et al. (1994). The test results show that MPMJSP-PSO outperforms N1-1000 for most problem types in the EDATA set. For RDATA and VDATA, the results of MPMJSP-PSO are better than those of N1-1000 for the problem types where the number of machines is small.

Table 6. Results for the MPMJSP benchmark instances of VDATA set.

Instance   Size (m × n)   Best known   N1-1000   ---- MPMJSP-PSO ----
                                                 Best    Average   T̄ (sec.)
m06        6×6            47           47*       47*     47.0      0.03
m10        10×10          655          655*      655*    655.0     5.2
m20        5×20           1022         1023      1024    1024.9    64.8
l01        5×10           570          573       571     573.8     19.2
l02        5×10           529          531       530     532.8     19.1
l03        5×10           (478)        482       479     484.1     19.2
l04        5×10           502          504       504     507.7     18.9
l05        5×10           (459)        464       460     463.2     19.0
l06        5×15           799          802       799*    801.2     37.2
l07        5×15           (750)        751       750*    752.0     38.5
l08        5×15           (766)        766*      766*    767.8     39.3
l09        5×15           (854)        854*      855     855.9     39.3
l10        5×15           804          805       805     806.5     39.4
l11        5×20           1071         1073      1071*   1072.6    62.7
l12        5×20           936          940       936*    937.4     63.2
l13        5×20           1038         1040      1038*   1039.8    65.2
l14        5×20           1070←        1071      1070*   1071.8    66.1
l15        5×20           (1090)       1091      1090*   1091.3    66.5
l16        10×10          717          717*      717*    717.0     11.2
l17        10×10          646          646*      646*    646.0     0.8
l18        10×10          663          663*      663*    663.0     0.8
l19        10×10          617          617*      619     634.2     65.2
l20        10×10          756          756*      756*    756.0     0.2
l21        10×15          (810)        826       819     827.1     136.9
l22        10×15          (742)        745       755     761.4     136.8
l23        10×15          (823)        826       828     837.1     136.8
l24        10×15          (788)        796       790     795.4     137.3
l25        10×15          (763)        770       775     781.6     136.9
l26        10×20          (1056)       1058      1058    1064.4    237.2
l27        10×20          (1088)       1088*     1091    1096.8    234.3
l28        10×20          (1071)       1073      1076    1083.4    234.2
l29        10×20          (995)        995*      1003    1007.9    233.4
l30        10×20          (1070)       1071      1078    1084.0    235.2
l31        10×30          (1521)       1521*     1524    1528.5    503.7
l32        10×30          (1658)       1658*     1664    1668.5    501.3
l33        10×30          (1498)       1498*     1503    1506.4    450.0
l34        10×30          (1536)       1536*     1541    1555.0    514.4
l35        10×30          (1550)       1553      1555    1558.7    502.7
l36        15×15          948          948*      955     962.8     306.9
l37        15×15          986          986*      993     1010.2    289.6
l38        15×15          943          943*      943*    945.8     246.7
l39        15×15          922          922*      945     956.4     287.3
l40        15×15          955          955*      955*    955.5     167.2

( ) represents a best known solution value for which no provably optimal solution exists. A solution is bold if it is the best in the comparison and is marked * if it equals the upper bound value. A best known solution is marked with ← if it was found by this research.


Table 7. Summary of the results for VDATA benchmark instances.

Problem type   # Instances   --- Deviation (%) ---    # Best known found
(m × n)                      N1-1000   MPMJSP-PSO     N1-1000   MPMJSP-PSO
6×6            1             0.0       0.0            1         1
5×10           5             0.6       0.2            0         0
5×15           5             0.1       0.0            2         3
5×20           6             0.2       0.0            0         5
10×10          6             0.0       0.1            5         5
10×15          5             0.9       1.1            0         0
10×20          5             0.1       0.5            2         0
10×30          5             0.0       0.3            4         0
15×15          5             0.0       0.8            5         2
All            43            0.2       0.4            19        16

[Fig. 4. Convergence rate of MPMJSP-PSO for VDATA set: average deviation (%) from the best known solution versus iteration, for problem types 10×10, 10×15, and 15×15.]

Moreover, MPMJSP-PSO also found new best known solutions for eight instances across the three data sets.

References

Bean, J (1994). Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing, 6, 154–160.
Bierwirth, C and DC Mattfeld (1999). Production scheduling and rescheduling with genetic algorithms. Evolutionary Computation, 7, 1–17.
Brucker, P, B Jurisch and A Kramer (1997). Complexity of scheduling problems with multi-purpose machines. Annals of Operations Research, 70, 57–73.
Brucker, P (2004). Multi-purpose machine. In Scheduling Algorithms. Springer-Verlag, pp. 289–312.
Brucker, P and R Schlie (1990). Job-shop scheduling with multi-purpose machines. Computing, 45, 369–375.


Clerc, M (2004). Discrete particle swarm optimization illustrated by the traveling salesman problem. New Optimization Techniques in Engineering, 219–239.
French, S (1987). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop. New York: John Wiley & Sons.
Gen, M, Y Tsujimura and E Kubota (1994). Solving job-shop scheduling problem using genetic algorithms. In Proceedings of the 16th International Conference on Computers and Industrial Engineering, pp. 576–579.
Gonçalves, JF, JJM Mendes and MGC Resende (2005). A hybrid genetic algorithm for the job shop scheduling problem. European Journal of Operational Research, 165, 77–95.
Hurink, J, B Jurisch and M Thole (1994). Tabu search for the job-shop scheduling problem with multi-purpose machines. OR Spektrum, 15, 205–215.
Jurisch, B (1992). Scheduling Jobs in Shops with Multi-Purpose Machines. Dissertation, Fachbereich Mathematik/Informatik, Universität Osnabrück, Germany.
Kennedy, J (1999). Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. Congress on Evolutionary Computation, 1931–1938.
Kennedy, J and RC Eberhart (1995). Particle swarm optimization. IEEE International Conference on Neural Networks, 4, 1942–1948.
Liao, C-J, C-T Tseng and P Luarn (2007). A discrete version of particle swarm optimization for flowshop scheduling problems. Computers & Operations Research, 34, 3099–3111.
Liu, B, L Wang and Y-H Jin (2007). An effective hybrid particle swarm optimization for no-wait flow shop scheduling. International Journal of Advanced Manufacturing Technology, 31, 1001–1011.
Mattfeld, DC and C Bierwirth (2004). An efficient genetic algorithm for job shop scheduling with tardiness objectives. European Journal of Operational Research, 155, 616–630.
Pongchairerks, P and V Kachitvichyanukul (2005). A non-homogeneous particle swarm optimization with multiple social structures. Proceedings of the International Conference on Simulation and Modelings, Thailand, A5-02.
Pongchairerks, P and V Kachitvichyanukul (2006). Particle swarm optimization with multiple social learning structures. 36th CIE Conference on Computers & Industrial Engineering, Taipei, Taiwan, 1556–1567.
Sotskov, YN (1991). Stability of an optimal schedule. European Journal of Operational Research, 55, 91–102.
Shi, Y and RC Eberhart (1998). A modified particle swarm optimizer. IEEE International Conference on Evolutionary Computation, 69–73.
Storer, RH, SD Wu and R Vaccari (1992). New search spaces for sequencing problems with application to job shop scheduling. Management Science, 38, 1495–1509.
Tasgetiren, MF, Y-C Liang, M Sevkli and G Gencyilmaz (2007). A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research, 177, 1930–1947.
Veeramachaneni, K, T Peram, C Mohan and LA Osadciw (2003). Optimization using particle swarms with near neighbor interactions. Proceedings of the Genetic and Evolutionary Computation Conference, 110–122.
Xia, W-J and Z-M Wu (2005). A hybrid particle swarm optimization approach for the job-shop scheduling problem. International Journal of Advanced Manufacturing Technology, 29, 360–366.

Pisut Pongchairerks is a full-time lecturer in the Industrial Engineering Program at Sirindhorn International Institute of Technology (SIIT), Thammasat University. He holds B.Eng. and M.Eng. degrees in Industrial Engineering from Kasetsart University and a D.Eng. in Industrial Engineering & Management from the Asian Institute of Technology. His research interests focus on production planning and scheduling, supply chains, and applied operations research.

Voratas Kachitvichyanukul is an Associate Professor in Industrial Engineering & Management, School of Engineering and Technology, Asian Institute of Technology, Thailand. He received a Ph.D. from the School of Industrial Engineering at Purdue University in 1982. He has extensive experience in simulation modeling of manufacturing systems. He has worked for Fortune 500 companies such as Compaq Computer Corporation and Motorola Incorporated, and for SEMATECH as technical coordinator of the future factory program. His teaching and research interests include planning and scheduling, high-performance computing, and applied operations research with special emphasis on industrial systems.
