Int J Adv Manuf Technol DOI 10.1007/s00170-015-8023-0

ORIGINAL ARTICLE

A novel search algorithm based on waterweeds reproduction principle for job shop scheduling problem Lin Cheng1 · Qingzhen Zhang1 · Fei Tao1 · Kun Ni1 · Yang Cheng1

Received: 10 June 2015 / Accepted: 25 October 2015 © Springer-Verlag London 2015

Abstract Along with the rapid development of new information technology, scheduling plays an increasingly important role in manufacturing systems. A new search algorithm which imitates the reproduction principle of waterweeds in searching for water sources is proposed for solving job shop scheduling problems (JSSPs). Inspired by the swarm intelligence in waterweeds' collaborative behavior and inheriting their strong survivability, the new waterweeds (WW) algorithm, with few user-defined parameters and a simple structure, shows remarkable performance in solving continuous unconstrained optimization problems, as demonstrated by two experiments on five well-known benchmark functions. Furthermore, to meet the special needs of JSSP solving, a series of modifications is introduced into the original WW algorithm, and computational experiments on a set of problem instances indicate that the new discrete WW algorithm is competitive in effectiveness and efficiency with other classical JSSP solving methods in the literature. The successful application of the WW algorithm to JSSPs illustrates its bright prospects in the manufacturing field and other related optimization areas.

Keywords Manufacturing scheduling · Swarm intelligence · Waterweeds reproduction principle · Waterweeds algorithm · Numerical optimization · Job shop scheduling problem

Fei Tao
[email protected]

1 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China

1 Introduction

In the past decades, big data [36] and cloud technology [58, 64] have been widely studied and applied in many fields, as they provide a new way to realize the full sharing, circulation, on-demand use, and optimal allocation of various manufacturing resources and capabilities [62, 75]. At the same time, the growth of market globalization competition and customer demand diversification has led to an increasing demand for agility, networking, service, greenness, and socialization in manufacturing [61]. Benefiting from improvements in computer technology, it is easy to collect a massive amount of data from the entire process of the design, production, and service of a product [56]. Researchers have devoted themselves to constructing data-based cloud manufacturing service systems that carry out data collection, data management, information extraction, and optimization decisions. Facing massive data processing and complex, changeable customer demands, intelligent optimization algorithms [57] (such as neural networks and genetic algorithms) have in recent years played an increasingly prominent role in data mining and utilization, because of their unique ability to accomplish data processing and analysis without a complete object model [53]. Production scheduling, which is essential to improving the degree of resource utilization and the level of operational management [59], has become one of the most important issues in planning and controlling manufacturing systems [43]. The job shop scheduling problem (JSSP) originated from the machine manufacturing field, and its research findings


are widely applied in transportation, computer systems, logistics, and other fields. The problem can be briefly described as follows [73]: there are a set of n jobs and m machines. Each job consists of a sequence of operations, and each operation is processed on the machines in a particular order. Every machine can process at most one operation at a time, without interruption. The problem aims to find a schedule that minimizes the makespan (Cmax), the time at which all operations are completed. The JSSP is well known to be NP-hard [22] and belongs to the most difficult combinatorial problems. Because of its remarkable practical and theoretical significance, over the past four decades the JSSP has attracted plenty of scholars with different solution approaches, and the algorithms can be roughly divided into two categories: classical mathematical optimization algorithms and approximation methods. In the early days, researchers tried to solve the JSSP within a reasonable computational time by exact algorithms such as dynamic programming (DP) and branch and bound (BB) [65]. The BB method, which uses a dynamically constructed tree structure to represent the solution space of all feasible sequences, was successfully applied to small instances by Brooks and White [7] and was improved by a series of further studies [2, 4, 8, 9, 19, 26, 35, 41, 46]. Although the improvements effectively shortened the computational time, they still could not solve instances larger than 250 operations in reasonable time, and results deteriorate sharply as the instance scale increases. This forces more and more researchers to turn their attention to approximation methods. Approximation methods, including priority dispatching rules (PDRs), the shifting bottleneck approach, meta-heuristic methods, and so on, have been developed in recent decades and have become quite good alternatives for solving the JSSP.
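To make the problem statement concrete, the makespan of a fixed operation sequence can be computed with a simple list-scheduling decoder. This is only an illustrative sketch; the operation-sequence encoding used here is not the representation adopted later in this paper.

```python
from collections import defaultdict

def makespan(jobs, op_sequence):
    """Compute Cmax for a semi-active schedule.

    jobs[i] is a list of (machine, processing_time) pairs in the required
    technological order; op_sequence is a list of job ids, where the k-th
    occurrence of job i stands for its k-th operation.
    """
    job_ready = defaultdict(int)   # completion time of each job's last op
    mach_ready = defaultdict(int)  # completion time of each machine's last op
    next_op = defaultdict(int)     # index of the next operation per job
    for j in op_sequence:
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])   # both must be free
        job_ready[j] = mach_ready[m] = start + p
        next_op[j] += 1
    return max(job_ready.values())

# a hypothetical 2-job, 2-machine instance
jobs = [[(0, 3), (1, 2)],   # job 0: machine 0 then machine 1
        [(1, 4), (0, 1)]]   # job 1: machine 1 then machine 0
print(makespan(jobs, [0, 1, 0, 1]))
```

For the sequence [0, 1, 0, 1] the decoder yields Cmax = 6.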
PDRs are popular for daily production scheduling owing to their easy implementation and fast solution speed [13, 51]. However, PDRs only consider the current state of the problem and ignore its global characteristics; their poor solution quality and performance degradation on large-scale problems limit their further development. Cooperating with meta-heuristic methods, such as the genetic algorithm (GA), simulated annealing (SA), and tabu search (TS), is the future research direction for PDRs. A more sophisticated algorithm, which builds up and improves a schedule by iteratively solving a single bottleneck machine problem, is the shifting bottleneck approach (SB) proposed by Adams et al. [1]. More recently, along with pervasive applications of fast and inexpensive computing technology, meta-heuristic methods have become extremely popular in solving practical JSSPs [49]. Representative algorithms and their improved versions include, but are not limited to, GA [17, 28,

38, 44, 47, 66], artificial bee colony (ABC) [74], particle swarm optimization (PSO) [37, 42, 50, 69], artificial immune system (AIS) [10], ant colony optimization (ACO) [27, 70], and memetic algorithms (MA) [21, 34]. These meta-heuristic algorithms are widely applicable to various instances and are especially good at solving complex problems unapproachable by traditional techniques. Moreover, they are capable of producing high-quality solutions with reasonable computational effort. Among meta-heuristics, neighborhood search algorithms, mainly GA, SA, and TS, perform outstandingly on JSSPs. Their general idea is to modify current solutions by a neighborhood modification principle, which improves local search capability but also makes it easy to fall into local optima. In recent years, more and more scholars have solved JSSPs by combining neighborhood search methods with other meta-heuristic algorithms. Swarm intelligence algorithms draw the most attention due to their excellent global search capability. For example, D.Y. Sha et al. [50] modified the particle position exchange principle based on the tabu list concept, and the hybrid algorithm performed well in subsequent tests. Hongwei Ge et al. [23] introduced a simulated annealing process to help the PSO algorithm avoid being trapped in local minima. A neighborhood property of the problem was discovered by Rui Zhang et al. [68], and a special tree search algorithm based on this property was devised to enhance the exploitation capability of ABC. Similar algorithms also include the knowledge-based ant colony optimization algorithm (KBACO) [70], the artificial immune algorithm (AIA) [3], and others [54, 60]. Swarm intelligence (SI) algorithms, as an independent research branch in the optimization field, are famous for their outstanding global search capability and have attracted many researchers in related fields in recent years [29, 55, 63, 75].
Bonabeau defined swarm intelligence as "any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies" [11]. In nature, evolutionary pressure forces organisms (e.g., birds, insects, and plants) to develop highly optimized organs and skills that give them advantages in fighting for food, territory, and mates, and these organs and skills can be refined into optimization algorithms [15]. Based on the swarm collaborative behavior observed in simple living things, many famous SI algorithms have been presented, such as PSO [31], ACO [14], bacterial foraging optimization [45], fish school search [6, 40], glowworm swarm optimization [33], the bees algorithm (BA) [48], and the firefly algorithm (FA) [71]. Compared with algorithms that imitate animal behavior, few studies pay attention to plants, since the movement of plants is visually slow and easily neglected.


Tong Li et al. [39] proposed a global optimization algorithm, the plant growth simulation algorithm (PGSA), by simulating the plant growth process for integer programming global optimization problems. Another is the flower pollination algorithm (FPA), proposed by X.S. Yang [72] in 2014. Both algorithms showed excellent solution properties in subsequent instance tests. In the natural environment, weeds are widely distributed plants with strong survivability. They thrive while resources last and die when resources run out. Most commendably, even facing harsh living conditions, weeds can always find the most resourceful districts and quickly spawn new groups around them. A single weed does not exhibit any intelligent behavior, but when weeds live together, the intelligence ascribable to their swarm collaborative behaviors has ensured their survival for millions of years without extinction. Inspired by this idea, this paper proposes a new search algorithm for numerical optimization problems. Overall, this paper focuses on two issues: (1) based on analyzing the waterweeds reproductive process and extracting an effective evolutionary principle, the new waterweeds algorithm (WW) is given with detailed mathematical derivation and algorithm structure; (2) the performance of the new algorithm is tested on several continuous optimization problems (OPs) and famous JSSPs. This paper is organized as follows: Section 2 expounds the reproductive process from one generation of waterweeds to the next and redefines it as an efficient algorithmic evolutionary principle. The new waterweeds algorithm is proposed in Section 3, together with its detailed mathematical derivation and algorithm structure. The parameter sensitivity and solution performance of the WW algorithm are tested on five well-known continuous benchmark functions in Section 4 and on a large number of JSSP instances in Section 5.
Simulation results and comparison analysis demonstrate the universality and superiority of the new algorithm. Finally, conclusions and future work are presented in Section 6.

Fig. 1 Waterweeds widely distribute in nature

Fig. 2 Water sources

2 Collaborative behavior in waterweeds swarm

Facing unpredictable changes in nature, weeds live with high swarm intelligence and survival resilience, as shown in Fig. 1. They survive in groups and are distributed everywhere: in vast grasslands, around a mountain spring, or between two stones. The only certainty is that they always multiply at the richest resource area in a region. Weeds breed by sexual or asexual reproduction. Weeds complete sexual reproduction through four stages: flowering, fertilization, seed setting, and germination. Seeds, which not only inherit characteristics from their parent plants but also have some new features, germinate into the new generation of weeds. New weeds are expected to find more resourceful water sources and reproduce new groups. The inheritance and variation during the reproduction process contribute to the weeds' swarm intelligence and guarantee their sustainable survival. Through observation, weeds show the following behaviors during reproduction.

Behavior 1: Weeds rapidly breed at rich resource areas and die out when the water is exhausted;
Behavior 2: Weeds grow better at more resourceful areas;
Behavior 3: The weed with better growth status is able to open more flowers and produce more seeds;
Behavior 4: Seeds inherit most of their characteristics from their parent weeds, and this is a process of intelligence accumulation;

Fig. 3 Waterweeds

Behavior 5: Seeds also have new characteristics, since they are affected by their uncertain father weeds, which provide pollen, and by some unpredictable natural factors, such as leaf branches, wind, water flushing, and so on;
Behavior 6: Weeds produce a large number of seeds every year, but not all seeds germinate; only seeds located in a suitable environment are fortunate enough to grow into new weeds;
Behavior 7: The Darwinian principle of survival of the fittest is effective in the process of weed reproduction; resources are limited in a given growth environment, and only a certain number of weeds with the best growth status can survive and have the chance to produce seeds;
Behavior 8: The life of one weed is limited, and it will die and be substituted by a new weed eventually, no matter how strong it is.

Fig. 4 Seeds

2.1 Waterweeds reproduction principle

Based on the collaborative behavior in weeds swarms, a set of typical weeds reproduction principles is proposed here. To simplify the question, the new principle is presented on the basis of four assumptions and two definitions:

(1) In this paper, weeds refer to a particular class of plants, and they breed by sexual reproduction;
(2) Weeds are monoecious; in other words, one weed bears both female flowers and male flowers. Female flowers take pollen to fertilize seeds, and male flowers offer pollen for female flowers;
(3) A male flower only offers pollen to female flowers of other weeds. This mechanism improves the interaction efficiency between different individuals;
(4) Water is the sum of all factors that influence the growth state of weeds; in other words, the growth status of one weed is determined only by the richness degree of the redefined water at its location.

Definition 1 In view of the assumption that the redefined water is the only factor that affects the growth of weeds, this paper names the weeds waterweeds and the algorithm the waterweeds algorithm.

Definition 2 The waterweeds reproduction principle comprises a series of modes which contribute to swarm intelligence in the process of waterweeds searching for water sources and reproducing around them.

In the waterweeds reproduction principle, there are three essential components: water sources, parent waterweeds, and seeds, symbolically displayed in Figs. 2, 3, and 4, respectively.

(1) Water sources: Throughout the growth space, water sources with different richness degrees of redefined water are distributed. Since water is the only factor affecting waterweeds' growth status under consideration, water sources with richer water tend to be surrounded by more luxuriant waterweeds.
(2) Parent waterweeds: Limited by water restrictions, only a fixed number of waterweeds survive, with different growth statuses. A waterweed grows better at a richer water source and worse at a poorer one, so the growth status of waterweeds carries the resource information of the water sources. Parent waterweeds grow at the known richest water sources, and only they are qualified to produce seeds. Since waterweeds are hermaphroditic, one waterweed can not only produce seeds but also offer pollen to others: it is a mother waterweed when it blooms and a father waterweed when it offers pollen. A mother waterweed is fertilized by a randomly chosen father waterweed and produces a certain number of seeds. The exact number of seeds is determined by its growth status; accordingly, mother waterweeds with better growth status produce more seeds, and the strongest waterweed produces the most. The life of one waterweed is limited, and it will die and be substituted by a new waterweed eventually.
(3) Seeds: Most of a seed's characteristics are inherited from its mother waterweed; as a result, seeds are distributed around their mothers. At the same time, seeds' locations are influenced by their father waterweeds, and they tend to be located near their fathers. This behavior can be thought of as the process of swarm intelligence transmission and accumulation. On the other hand, seeds' locations also have a certain degree of randomness because of unpredictable factors.
The randomness improves the global searching capability of waterweeds. Seeds are expected to germinate into new waterweeds and represent discovery


probability of new and better water sources. Not all seeds are fortunate enough to germinate into waterweeds; for one mother waterweed, only the seed staying in the most suitable environment will grow into a new waterweed. New waterweeds, also called candidate parent waterweeds, compete with their corresponding mother waterweeds, and the winners become the parent waterweeds of the next generation. Therefore, in the process of one seed growing into a parent waterweed, it has to come through two natural selections. Summing up the above description, seven modes are followed in the waterweeds reproduction principle:

1. Parent waterweeds survive at the known richest water sources;
2. Waterweeds grow better at richer water sources; the growth states of waterweeds indicate the quality information of the water sources;
3. A stronger mother waterweed is able to produce more seeds; the exact number of seeds is determined by the relative growth status of the waterweeds in the present generation;
4. Seeds are distributed around their mother waterweeds and tend to approach their father waterweeds, but the uncertain amplitude of the movement toward the fathers and some other unpredictable natural factors give their locations a certain level of randomness;
5. Seed germination needs suitable external conditions, and only the seed staying at the richest water source is fortunate enough to germinate into a new candidate waterweed;
6. Limited by water restrictions, only a fixed number of waterweeds can blossom and yield seeds; old mother waterweeds compete with their strongest respective child waterweeds, and the winners become the parent waterweeds of the next generation;
7. If one parent waterweed has survived for a certain number of generations, it dies a natural death and is substituted by its strongest child waterweed. This mode helps waterweeds avoid sinking into a local region and greatly enhances their capability of exploring for rich water sources.
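The seven modes above can be condensed into one generation of a minimal continuous WW loop. This is an illustrative sketch for a minimization problem: the pull toward a random father and the Gaussian noise magnitude are our assumptions, not the paper's exact update equations.

```python
import random

def ww_generation(parents, ages, f, Mns, lifelimit, lo, hi):
    """One generation following modes 1-7 (minimization sketch)."""
    fitness = [f(p) for p in parents]
    order = sorted(range(len(parents)), key=lambda i: fitness[i])
    new_parents, new_ages = list(parents), list(ages)
    for rank, i in enumerate(order):
        n_seeds = max(1, Mns - rank)      # mode 3: better rank, more seeds
        father = random.choice([j for j in range(len(parents)) if j != i])
        seeds = [[min(hi, max(lo, x + random.random() * (xf - x)
                              + random.gauss(0, 0.1)))
                  for x, xf in zip(parents[i], parents[father])]
                 for _ in range(n_seeds)]  # mode 4: near mother, toward father
        best_seed = min(seeds, key=f)      # mode 5: only the best germinates
        # modes 6-7: compete with the mother, or forced natural death
        if f(best_seed) < fitness[i] or ages[i] >= lifelimit:
            new_parents[i], new_ages[i] = best_seed, 0
        else:
            new_ages[i] += 1
    return new_parents, new_ages
```

Iterating this generation on a simple cost function (e.g., the Sphere function) steadily improves the best parent waterweed.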
For ease of understanding, the waterweeds reproduction principle is transformed into a flow diagram in Fig. 5. As seen from the diagram, the principle of waterweeds searching for water is simple in structure. First, a fixed number of initial parent waterweeds are distributed randomly within the growth range, and they survive with different growth statuses because of their different water conditions. Then, according to their different growth statuses, the mother waterweeds produce correspondingly different numbers of seeds, and the seeds at the richest water sources germinate into candidate parent waterweeds. Third, the new candidate waterweeds compete with their old parent waterweeds, and the winners become the parent waterweeds of the next generation. Additionally, if one parent waterweed has survived for a certain number of generations, it is replaced by its strongest child waterweed.

A simple example with three parent waterweeds is given in Fig. 6, which describes the process of waterweeds reproduction. In (A), there are three parent waterweeds, named P1 (parent waterweed), P2, and P3. If this is the first generation of the cycle, P1-P3 are initialized randomly within the feasible range. Suppose waterweed P1 survives at the best water source and grows in the best status, P2 at the second-best water source, and P3 at the worst one, with the worst growth. In (B), since P1 is the strongest waterweed among P1-P3, it produces the most seeds randomly around it, S11-S14 (seeds). At the same time, waterweed P2 produces two seeds, S21 and S22, and waterweed P3 produces only one seed, S31, because of its poor growth status. Among S11-S14, S14 stays at the richest water source and is fortunate enough to grow into CP1 (candidate parent waterweed); similarly, S21 germinates into CP2 and S31 into CP3. Then, in (C), the new candidate waterweeds compete with their respective mother waterweeds, and the ones with better growth status win the competition. However, when a mother waterweed has survived for a certain number of generations, it is too old to seed and abandons the competition. This phenomenon is called the waterweed natural death strategy in this paper, and it helps the waterweeds jump out of local optima. As seen in (D), CP1, CP2, and P3 become the parent waterweeds of the next generation and are renamed the new P1-P3. Additionally, P3 gets one year older.
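The competition and natural-death steps of this example reduce to a small decision rule. The sketch below assumes minimization (lower cost = better growth status); the tie-breaking is an assumption.

```python
def compete(parent_cost, candidate_cost, age, lifelimit):
    """Competition from the worked example: the candidate parent
    waterweed replaces its mother if it grows better, or if the mother
    has reached the natural-death age; otherwise the mother survives
    and gets one year older."""
    if candidate_cost < parent_cost or age >= lifelimit:
        return "candidate", 0     # new parent, age reset to zero
    return "parent", age + 1      # old parent survives, one year older
```

Applied to the example, CP1 and CP2 beat P1 and P2, while P3 survives against CP3 and simply ages by one year; once P3's age reaches lifelimit, its strongest child takes over regardless of cost.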
Exploration means the capability of hunting for possible optimal positions throughout the search space, and it is vital to maintaining the diversity of the group. Exploitation, in turn, takes advantage of current group information and helps the algorithm focus on the areas around the individuals with higher fitness values. In practice, exploration and exploitation capabilities are equally important but also contradictory. Insufficient exploration results in poor global convergence ability, while a lack of exploitation leads to a low convergence rate and accuracy. Maintaining the balance between exploration and exploitation is a basic requirement of an algorithm, because excessive pursuit of one capability will deteriorate the other. In the waterweeds reproduction principle, the mechanism that seeds locate around their mother waterweeds and try to approach their father waterweeds (both mother waterweeds and father waterweeds stay at the present best locations)


guarantees the exploitation capability of the waterweeds. Meanwhile, the randomness of seeds' locations and the parent waterweed natural death strategy ensure the new principle's exploration capability. The basic properties on which self-organization relies in the new principle are given as follows:

(1) Positive feedback: Parent waterweeds grow better at richer water sources and are strong enough to produce more seeds.
(2) Negative feedback: A parent waterweed is abandoned once it has survived for a certain number of generations.
(3) Fluctuations: Seeds are produced randomly around their mother waterweeds.
(4) Multiple interactions: The growth status of waterweeds reveals the richness information of water sources, and nature can make greedy selections based on Kelvin's law.

Fig. 5 Flow diagram of waterweeds reproduction principle

3 Waterweeds algorithm

Waterweeds possess outstanding biological swarm intelligence in searching for superior water sources. Mathematically, it is a process in which individuals collaborate to search for the highest-gain point within the feasible space. In purpose, this process achieves the same goal as other swarm intelligence algorithms. Inspired by the waterweeds reproduction principle, a novel intelligent swarm


algorithm, the waterweeds algorithm, is proposed in this paper for numerical optimization problems. The general form of a numerical optimization problem is given in expression (1):

min f(X), s.t. Xmin ≤ X ≤ Xmax   (1)

The algorithm uses few user-defined parameters: Npw (the number of parent waterweeds), Mns (the maximum number of seeds one mother waterweed produces), maxcycle (the maximum number of cycles), and lifelimit (the upper life limit that one parent waterweed survives for).

3.1 Initialization operator of parent waterweeds

4 Experiments on continuous benchmark functions

4.1 Experiment I

Function f3(x) is the Griewank function, shifted so that its optimum lies at (100, ..., 100); the number of its local minima increases with the dimension (n > 30), and consequently this function is strongly multimodal. Function f4(x) is the Rastrigin function, which produces many local minima by adding cosine modulation to the Sphere function. It is difficult to find the optimal solution of this function, because an optimization algorithm is easily trapped in a local optimum on its way towards the global optimum. Function f5(x) is the Rosenbrock valley function, a well-known classic optimization problem. Since its global optimum lies inside a long, narrow, parabolic-shaped flat valley, the variables are strongly dependent and the gradients generally do not point towards the optimum; it is difficult to converge to the global optimum, and the function is frequently used to test the performance of optimization algorithms.

In experiment I, the user-defined parameters are taken as follows: Mns = 10, maxcycle = 2000, lifelimit = 50; f1(x) has two parameters, f2(x) has five parameters, and f3(x), f4(x), and f5(x) have 50 parameters each. To analyze the behavior of the WW algorithm, it has been run 30 times with different parent waterweed sizes (Npw = 10, 20, 50, 100), and the means of the best values are presented in Table 2. The convergence curves under the different swarm sizes are shown in Figs. 9, 10, 11, 12, and 13 as well. As seen from the table and figures, the WW algorithm shows better convergence performance as the size increases. On f1(x) and f3(x), the algorithm gets obviously superior results with sizes 50 and 100, while size causes little impact on f2(x), since f2(x) is relatively simple to solve. Size 10 shows obvious instability compared with sizes 20, 50, and 100 on f4(x), while size has a significant effect on algorithm performance on f5(x). In summary, on the one hand, too small a swarm size results in instability; on the other hand, there is limited room for improvement with too big a swarm size. A size of 50 is acceptable.

4.2 Experiment II

Fig. 8 Detailed pseudo-code of the WW algorithm

For further estimation of the WW algorithm's performance, its simulation results are compared with those of other classical algorithms: DE, PSO, and EA as presented by Krink et al. [32], and ABC as presented by Karaboga et al. [30]. The numerical values of the WW control parameters adopted in the simulation studies, together with the values assigned to the control parameters of DE, PSO, and EA in ref. [32] and of ABC in ref. [30], are given in Table 3. To provide a fair comparison, the total number of fitness function f(X) evaluations is kept almost the same for each algorithm: TEN = EEN · maxcycle.

Table 1 Numerical benchmark functions

Function | Ranges | Dimension | Minimum value | Optimum solution
f1(x) = 0.5 + (sin²(√(x1² + x2²)) − 0.5) / (1 + 0.001(x1² + x2²))² | [-100, 100] | 2 | 0 | (0, 0)
f2(x) = Σ_{i=1}^{n} xi² | [-100, 100] | 5 | 0 | (0, ..., 0)
f3(x) = (1/4000) Σ_{i=1}^{n} (xi − 100)² − Π_{i=1}^{n} cos((xi − 100)/√i) + 1 | [-100, 100] | 50 | 0 | (100, ..., 100)
f4(x) = Σ_{i=1}^{n} (xi² − 10cos(2πxi) + 10) | [-100, 100] | 50 | 0 | (0, ..., 0)
f5(x) = Σ_{i=1}^{n−1} (100(x_{i+1} − xi²)² + (xi − 1)²) | [-100, 100] | 50 | 0 | (1, ..., 1)
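For reference, the five benchmark functions of Table 1 can be written in their standard textbook forms, which match the ranges and optima listed there:

```python
import math

def f1(x):  # Schaffer function, n = 2
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def f2(x):  # Sphere function
    return sum(v * v for v in x)

def f3(x):  # Griewank function shifted to optimum (100, ..., 100)
    s = sum((v - 100) ** 2 for v in x) / 4000
    p = math.prod(math.cos((v - 100) / math.sqrt(i + 1))
                  for i, v in enumerate(x))
    return s - p + 1

def f4(x):  # Rastrigin function
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f5(x):  # Rosenbrock valley function
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Evaluating each function at its optimum solution from Table 1 returns the listed minimum value 0.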

Here, TEN represents the total number of function evaluations, EEN indicates the number of evaluations in one cycle, and maxcycle is the number of iterations. In the WW algorithm with Mns = 10 and Npw = 50, the function is evaluated about 250 times per cycle, so EEN ≈ 250. In experiment II, the total numbers of evaluations are 100,000 for the first two functions (f1(x) and f2(x)) and 500,000 for the other three functions (f3(x), f4(x), and f5(x)), respectively, as in ref. [32]. Each algorithm was repeated 30 times, and the statistics of the means and standard deviations are given in Table 4. Values less than 10−12 are reported as 0, but the results of the WW algorithm are given in detail, and 0 in the WW column represents a value less than MATLAB's realmin, 2.2251 · 10−308. The table shows that DE, EA, ABC, and WW obtained competitive results on f1(x) and f2(x), while PSO obtained worse ones. On f3(x) and f4(x), DE, ABC, and WW demonstrated matching performance, while PSO and EA performed unsatisfactorily. On the last function, f5(x), WW produced the second-best results among the five algorithms. In conclusion, the WW algorithm performs outstandingly among the algorithms considered in the present investigation and shows good application prospects for solving continuous numerical optimization problems.
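The evaluation-budget bookkeeping can be verified with a line of arithmetic. The interpretation of EEN ≈ 250 as roughly Npw · Mns / 2 evaluations per cycle is our reading; the paper only states the approximate value.

```python
# Fair-comparison budget: TEN = EEN * maxcycle.
# With Npw = 50 parents and Mns = 10, the paper gives EEN ~ 250
# evaluations per cycle (about Npw * Mns / 2 on average).
EEN = 250
for TEN in (100_000, 500_000):
    print(TEN, "evaluations ->", TEN // EEN, "cycles")
```

The two budgets of experiment II thus correspond to about 400 and 2000 cycles, respectively.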

Table 2 Mean of best function values by WW algorithm under different sizes

Function | Size 10 | Size 20 | Size 50 | Size 100
f1(x) | 1.6224E-003 | 6.4776E-004 | 0.0000E+000 | 0.0000E+000
f2(x) | 1.6686E-276 | 1.0869E-285 | 2.5876E-292 | 7.6446E-295
f3(x) | 1.6450E-003 | 0.0000E+000 | 0.0000E+000 | 0.0000E+000
f4(x) | 3.3165E-002 | 0.0000E+000 | 0.0000E+000 | 0.0000E+000
f5(x) | 1.1095E+001 | 1.5401E+000 | 2.2314E-001 | 9.5881E-002

5 Discrete waterweeds algorithm for JSSPs

The job shop scheduling problems (JSSPs) are among the most difficult combinatorial optimization problems, and successful application to JSSPs will further validate the effectiveness of the WW algorithm. Since the original WW algorithm is designed for continuous optimization problems while JSSPs are integer ones, it is apparently unsuitable to employ the original WW algorithm for JSSPs in the same form. Thus, a series of improvements, including discrete coding, a new evolution strategy, and a neighborhood structure, are introduced into the original WW algorithm. The new discrete WW algorithm is expected to perform outstandingly in tests on a set of benchmark instances.

5.1 Position representation

In JSSPs, a proper parent-waterweed representation of the solution is the primary and key issue. In this paper, a preference list-based representation is adopted, which assigns a preference list to each machine. For an n-job, m-machine problem, a solution to the JSSP can be represented as a matrix whose i-th row is the preference list of machine i, as expressed in Eq. 7:

        | x11 x12 x13 · · · x1n |
        | x21 x22 x23 · · · x2n |
  X^k = | x31 x32 x33 · · · x3n |    (7)
        |          ...          |
        | xm1 xm2 xm3 · · · xmn |

where k is the waterweed number and xij ∈ {1, 2, · · · , n} denotes the job at location j in the preference list of machine i. The total number of possible schedules is (n!)^m,
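The size of the preference-list space (n!)^m grows explosively even for small instances, as a quick check shows (FT06 is the 6-job, 6-machine benchmark instance cited above):

```python
from math import factorial

def possible_schedules(n, m):
    """Number of distinct preference-list matrices for n jobs
    and m machines: (n!)**m."""
    return factorial(n) ** m

# FT06: 6 jobs on 6 machines -- already on the order of 1.4e17
print(possible_schedules(6, 6))
```

This is why exhaustive enumeration is hopeless and why restricting the search to the much smaller active-schedule subspace pays off.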

Fig. 9 Convergence curves under different sizes of f1(x)

Fig. 11 Convergence curves under different sizes of f3(x)

and only active schedules, in which no operation can be started earlier without delaying some other operation, are of interest [20]. The optimal JSSP solution is an active schedule. Taking three typical JSSP instances, FT06, FT10, and FT20, as examples [18], the proportion of the active schedule space within the whole possible space is given in Table 5. In the table, WRN is the number of random waterweeds X and ASN represents the number of active ones. Two conclusions can be drawn from this test: (1) even if the values of n and m are small, the number of possible schedules is too big to exhaust all alternatives; (2) the proportion of active schedules among possible schedules is extremely small, so locking the search into the active solution space will remarkably improve the search speed.

In this paper, Giffler and Thompson's heuristic (the G&T algorithm) is introduced to recode one waterweed into an active schedule, which is described as follows [24, 50]:
Notation:
(i, j): the operation of job i that needs to be processed on machine j;
H: the partial schedule that contains the scheduled operations;
S: the set of schedulable operations;
JC(i): the last completion time of job i;
MC(j): the last completion time of machine j;
p(i, j): the processing time of operation (i, j);
f(i, j): the earliest time at which operation (i, j) could be finished, f(i, j) = max(JC(i), MC(j)) + p(i, j).

Fig. 10 Convergence curves under different sizes of f2(x)

Fig. 12 Convergence curves under different sizes of f4(x)

Fig. 13 Convergence curves under different sizes of f5(x)

Process 5.1: G&T algorithm
Step 1: Initialize H = φ; S is initialized to contain all operations without predecessors.
Step 2: If any two operations in S are to be processed on the same machine, the one with lower priority is deleted from S based on the preference list Xkj, where Xkj is the jth row of Xk.
Step 3: Compute f(i, j) = max(JC(i), MC(j)) + p(i, j), determine f∗ = min(i,j)∈S f(i, j), and denote the corresponding operation by (i∗, j∗).
Step 4: Add (i∗, j∗) to H and assign JC(i∗) = f∗ and MC(j∗) = f∗.
Step 5: If a complete schedule has been generated, stop. Else, delete (i∗, j∗) from S, include its immediate successor in S, and go to Step 2.

For example, consider two jobs and two machines, as shown in Table 6; the preference schedule is

  Xk = | 2 1 |        (8)
       | 1 2 |
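Process 5.1 can be sketched in Python as follows. This is a minimal decoder, not the paper's implementation: job and machine indices are 0-based, `gt_schedule`, `jobs`, and `pref` are illustrative names, and the 2 × 2 worked example is used with p(2,1) = 3, the value implied by f(2,1) = 7 in the iteration trace.

```python
def gt_schedule(jobs, pref):
    """Active-schedule decoder in the spirit of Process 5.1 (G&T algorithm).
    jobs[i] : ordered (machine, ptime) operations of job i (0-indexed).
    pref[m] : preference list of job indices for machine m.
    Returns (schedule, makespan); schedule maps (job, machine) -> (start, end)."""
    n = len(jobs)
    nxt = [0] * n                # index of each job's next unscheduled operation
    JC, MC, sched = {}, {}, {}   # last completion time of each job / machine
    while any(nxt[i] < len(jobs[i]) for i in range(n)):
        # Steps 1/5: schedulable set = next operation of every unfinished job
        S = [(i,) + jobs[i][nxt[i]] for i in range(n) if nxt[i] < len(jobs[i])]
        # Step 2: among operations competing for one machine, keep only the
        # one ranked highest in that machine's preference list
        cand = [(i, m, p) for i, m, p in S
                if i == min((ii for ii, mm, _ in S if mm == m), key=pref[m].index)]
        # Step 3: pick the candidate with the earliest finish time f(i, j)
        i, m, p = min(cand, key=lambda o: max(JC.get(o[0], 0), MC.get(o[1], 0)) + o[2])
        start = max(JC.get(i, 0), MC.get(m, 0))
        sched[(i, m)] = (start, start + p)      # Step 4: schedule (i*, j*)
        JC[i] = MC[m] = start + p
        nxt[i] += 1
    return sched, max(JC.values())

# The 2 x 2 worked example (0-indexed; p(2,1) = 3 as implied by f(2,1) = 7):
jobs = [[(0, 5), (1, 4)],        # job 1: machine 1 then machine 2
        [(1, 4), (0, 3)]]        # job 2: machine 2 then machine 1
pref = [[1, 0], [0, 1]]          # Eq. 8: machine 1 prefers job 2, machine 2 job 1
sched, cmax = gt_schedule(jobs, pref)
print(cmax)                      # 16, matching the Gantt chart of Fig. 15
```

Running the decoder on the example reproduces the iteration trace below: (2,2) at time 0–4, (2,1) at 4–7, (1,1) at 7–12, and (1,2) at 12–16.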

Table 3 Parameter values used in the experiments

Algorithm | Parameter values
DE | popSize = 50, CF = 0.8, f = 0.5
PSO | popSize = 20, ω = 1.0→0.7, ϕmin = 0, ϕmax = 2.0
EA | popSize = 100, pe = 1.0, pm = 0.3, σm = 0.01, n = 10
ABC | popSize = 100, no = 50, ne = 50, ns = 1, limit = ne·D
WW | Npw = 50, Mns = 10, lifelimit = 50

popSize, population size; CF, crossover factor for DE; f, scaling factor; ω, inertia weight; ϕmin, ϕmax, lower and upper bounds of the random velocity rule weight; pe, crossover rate for EA; pm, mutation rate; σm, mutation variance; n, elite size; no, onlooker number; ne, employed bee number; ns, scout number; D, dimension of the problem; Npw, number of parent waterweeds; Mns, max number of seeds; lifelimit, the upper life limit for which one waterweed can survive.

Table 4 The statistical results obtained by the DE, PSO, EA, ABC, and WW algorithms

Function | DE | PSO | EA | ABC | WW
f1(x) | 0 ± 0 | 0.00453 ± 0.00090 | 0 ± 0 | 0 ± 0 | 0 ± 0
f2(x) | 0 ± 0 | 2.51130·10^−8 ± 0.00090 | 0 ± 0 | 0 ± 0 | 4.9940·10^−60 ± 1.3801·10^−59
f3(x) | 0 ± 0 | 1.54900 ± 0.06695 | 0.00624 ± 0.00138 | 0 ± 0 | 0 ± 0
f4(x) | 0 ± 0 | 13.1162 ± 1.44815 | 32.6679 ± 1.94017 | 0 ± 0 | 0 ± 0
f5(x) | 35.31760 ± 0.27444 | 5124.45 ± 2929.47 | 79.8180 ± 10.4477 | 0.1331 ± 0.2622 | 0.223 ± 0.233

Iteration 2
Step 2: Operations (1,1) and (2,1) are both processed on machine 1, and job 2 has higher priority, so (1,1) is deleted from S and S = {(2, 1)};
Step 3: f(2, 1) = 7, f∗ = f(2, 1) and (i∗, j∗) = (2, 1);
Step 4: H = {(2, 2), (2, 1)}, JC(2) = 7 and MC(1) = 7;
Step 5: Update S = {(1, 1)}.
Iteration 3
Step 2: S remains unchanged since it only has one operation;
Step 3: f(1, 1) = 12, f∗ = f(1, 1) and (i∗, j∗) = (1, 1);
Step 4: H = {(2, 2), (2, 1), (1, 1)} and JC(1) = 12, MC(1) = 12;
Step 5: Update S = {(1, 2)}.
Iteration 4
Step 2: S remains unchanged since it only includes one operation;
Step 3: f(1, 2) = 16, f∗ = f(1, 2) and (i∗, j∗) = (1, 2);
Step 4: H = {(2, 2), (2, 1), (1, 1), (1, 2)} and JC(1) = 16, MC(2) = 16;
Step 5: Update S = φ. A complete schedule has been generated, and the algorithm stops.

For the above example, the real operation process is given in Fig. 15. The preference matrix is renovated to

  Xk = | 2 1 |        (9)
       | 2 1 |

The new schedule not only is active but also tends to have a small makespan.

5.2 Position crossover strategy

Table 5 Proportion of the active-schedule space in the whole possible space

Problem | Size (n × m) | PSB | WRN | ASN | Proportion
FT06 | 6 × 6 | 1.3931·10^17 | 1.0·10^6 | 4 | 4·10^−6
FT10 | 10 × 10 | 3.9594·10^65 | 1.0·10^6 | 0 | 0
FT20 | 20 × 5 | 8.5236·10^91 | 1.0·10^6 | 4 | 4·10^−6

Size n × m represents an n-job m-machine problem; PSB, number of possible schedules, calculated by (n!)^m; WRN, number of random waterweeds tested; ASN, number of active schedules in the test; Proportion, ASN/WRN.

The purpose of the crossover strategy is to produce new and better generations by utilizing the location information of the present parent waterweeds, which can be regarded as the fertilization process in a waterweeds swarm. In the discrete WW algorithm, due to the particularity of JSSPs, which are integer programming problems and more prone to falling into local optima, the candidate parent waterweed CXk directly upgrades to a parent waterweed. This modification aims to enhance the exploration capacity of the discrete WW algorithm. In this paper, PXk represents the kth present best waterweed and GX the globally known best one. With reference to the crossover research in GAs by Falkenauer and Bouffouix [16], a specially designed crossover strategy, in which a new Xk can acquire information from a randomly chosen PXl (l ≠ k) with a
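The row-transplant crossover of Section 5.2 (Process 5.2 below) can be sketched as follows; `crossover`, its argument names, and the copy-on-write behavior are illustrative assumptions, and the G&T repair of the offspring is only indicated by a comment:

```python
import random

def crossover(Xk, PX, pc=0.2, rng=random):
    """Process 5.2 (sketch): with probability pc, copy one machine's
    preference list from a randomly chosen parent in PX into Xk.
    A G&T decode (Process 5.1) would then repair Xk into an active schedule."""
    if rng.random() < pc:                 # Step 1: crossover happens with prob. pc
        j = rng.randrange(len(Xk))        # Step 2: random machine row
        l = rng.randrange(len(PX))        # Step 3: random parent (l != k left to caller)
        Xk = [row[:] for row in Xk]       # copy, so the mother waterweed is untouched
        Xk[j] = PX[l][j][:]               # Step 4: transplant the preference row
    return Xk
```

With pc = 0 the mother is returned unchanged; with pc = 1 exactly one row is always replaced.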

Table 6 A 2 × 2 example

Job | Machine sequence | Processing times
1 | 1; 2 | p(1, 1) = 5, p(1, 2) = 4
2 | 2; 1 | p(2, 2) = 4, p(2, 1) = 3

certain probability pc, is raised; the calculation process can be depicted as the following subprocess:
Process 5.2: Position crossover strategy
Step 1: If rand < pc, go to Step 2; else go to Step 5;
Step 2: Randomly choose a row j in Xk, marked as Xkj, j ∈ {1, 2, · · · , m};
Step 3: Randomly choose one waterweed l from the PX set, marked as PXl, with l ∈ {1, 2, · · · , Npw} and l ≠ k;
Step 4: Set Xkj = PXlj, recalculate Xk by the G&T algorithm, and obtain the new active Xk and Cmax(Xk);
Step 5: End.

5.3 Neighborhood strategies

A neighborhood structure is a mechanism which obtains a new set of neighbor seeds by applying a perturbation to the mother waterweed. Since the neighborhood structure directly affects the efficiency of the discrete WW algorithm, unnecessary and infeasible moves must be eliminated whenever possible. The N4 neighborhood structure proposed by Dell'Amico and Trubian [12] has been successfully applied to JSSPs.
(1) Neighborhood structures
Let ij denote the jth operation of job i. PM(ij) and SM(ij) represent the immediate predecessor and the immediate successor of ij on its machine, respectively. Besides, PM(ij) = 0 or SM(ij) = 1 when

Fig. 14 The operation process of the 2 × 2 example

ij has no predecessor or successor. The neighborhood structure designed in this paper is described as follows:
Process 5.3: Neighborhood structure
Step 1: Calculate the critical path of the parent waterweed Xk, cp = {i1j1, i2j2, · · · , iqjq}; set s = 1;
Step 2: seedk(s) = Xk;
Step 3: Randomly choose an index c, c ∈ {1, 2, · · · , q};
Step 4: If the two successive operations icjc and ic+1jc+1 are handled on the same machine, go to Step 5; otherwise, go to Step 3;
Step 5: If neither PM(icjc) nor SM(ic+1jc+1) belongs to cp, the move consists only of the reversal of icjc and ic+1jc+1, and the interchange is memorized; otherwise, go to Step 6;
Step 6: The node PM(icjc) or SM(ic+1jc+1) which belongs to cp is combined with the block {ic+1jc+1, icjc}, and the new possible sequence replaces the old one. The move is memorized;
Step 7: Within one iteration, identical seeds of one mother waterweed are redundant, so any repeated move is abandoned and the algorithm returns to Step 3;
Step 8: Modify seedk(s) guided by the memorized move;

Fig. 15 The real operation process of the 2 × 2 example


Step 9: If s < Ns(k), set s = s + 1 and go to Step 2; else, stop.
(2) Critical paths calculation
The calculation of critical paths is the basis of the neighborhood modifications, and an active schedule usually has multiple critical paths. This article provides a sub-algorithm which obtains all paths in one calculation. S(ij) and C(ij) respectively denote the start and completion time of ij.
Process 5.4: Critical paths calculation
Step 1: Take an active waterweed Xk; K = {ij | i = 1, 2, · · · , n; j = 1, 2, · · · , m}, Q = φ;
Step 2: For every α ∈ K with C(α) ≠ Cmax(Xk), if no β ∈ K satisfies C(α) = S(β), delete α from K and update K. Repeat the above operation until all remaining α satisfy the condition;
Step 3: Search K for operations meeting the condition S(α) = 0 and mark them as {α(1), α(2), · · · , α(nf)}. Set Q(1) = α(1), Q(2) = α(2), · · · , Q(nf) = α(nf); α1 = α(1); PC = 1, the index of the path under calculation; PN = nf, the number of all paths;
Step 4: Search K for operations meeting the condition C(αk) = S(αk+1) and mark them as {α(1), α(2), · · · , α(nf)}, where nf is the number of operations satisfying this condition; set k = k + 1. Set Q(PC) = [Q(PC) α(1)], Q(PN + 1) = [Q(PC) α(2)], · · · , Q(PN + nf − 1) = [Q(PC) α(nf)]; αk = α(1), PN = PN + nf − 1.
Step 5: If C(αk) ≠ Cmax(Xk), go to Step 4; else, go to Step 6.
Step 6: If PC < PN, set PC = PC + 1, let k take the length of Q(PC), set αk = Q(PC, k), and go to Step 4; else, stop the algorithm.
After the above calculation in Process 5.4, Q(1), Q(2), · · · , Q(PN) represent

PN critical paths of Xk. Listing all elements {α1, α2, · · · , αend} of Q(i), i ∈ {1, 2, · · · , PN}, the chain α1 → α2 → · · · → αend constitutes the ith critical path of Xk.
(3) Tabu list
Inspired by the tabu search algorithm, a tabu list is introduced into the discrete WW algorithm with the aim of preventing the search process from turning back to solutions visited in previous steps. The attributes of past moves are stored in the tabu list, and new moves replace the oldest ones. Moves recorded in the tabu list are not permitted, and this systematic use of stored information significantly improves the efficiency of the exploration process of the discrete WW algorithm. The detailed operating procedure is revealed in Process 5.5:
Process 5.5: Tabu list and its implementation
Step 1: Take the mother waterweed Xk and its seeds seedk(s), s ∈ {1, 2, · · · , Ns(k)};
Step 2: Reorder seedk(s) according to their fitness; a better seed has a smaller index;
Step 3: If Cmax(seedk(1)) < Cmax(Xk), Xk = seedk(1); else, go to Step 4;
Step 4: For s = 1 to Ns(k): if seedk(s) is not in the tabu list, set Xk = seedk(s) and break the cycle;
Step 5: Store the move of seedk(s) into the tabu list.

5.4 Algorithm structure and convergence proof

The discrete WW algorithm inherits the main structure of the original WW algorithm, as well as most of its formation mechanisms. At the same time, subject to the specific characteristics of JSSPs, a series of modifications are introduced into the discrete WW algorithm, mainly including the position representation, position crossover strategy, and neighborhood strategies. Ultimately, a new discrete WW algorithm specially applied to JSSPs is proposed, and its structure is displayed in Fig. 16.
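The seed-acceptance rule of Process 5.5 can be sketched as follows. This is a minimal illustration: the `(cost, move)` seed encoding, the `accept_seed` name, and the fixed tabu length are assumptions, while the improving-seed shortcut of Step 3 and the best-non-tabu rule of Step 4 follow the process description.

```python
from collections import deque

def accept_seed(parent_cost, seeds, tabu, tabu_len=8):
    """Seed acceptance of Process 5.5 (sketch): seeds are (cost, move) pairs.
    The best seed is taken unconditionally if it improves on the mother
    waterweed (Step 3); otherwise the best non-tabu seed is taken (Step 4)."""
    seeds = sorted(seeds)                       # Step 2: better (smaller cost) first
    if seeds[0][0] < parent_cost:               # Step 3: strictly improving seed
        chosen = seeds[0]
    else:                                       # Step 4: best non-tabu seed
        chosen = next((s for s in seeds if s[1] not in tabu), None)
        if chosen is None:
            return None, tabu                   # every seed is tabu
    tabu = deque(tabu, maxlen=tabu_len)         # Step 5: memorize the move,
    tabu.append(chosen[1])                      # oldest entries fall off the list
    return chosen, tabu
```

Accepting an improving seed even when its move is tabu acts like the aspiration criterion of classical tabu search.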


Fig. 16 Discrete WW algorithm structure

The convergence property of the new discrete WW algorithm is given by the following theorem.

Theorem 1 For an arbitrary initial feasible solution, the discrete WW algorithm converges to the globally minimal solution with probability 1 within limited iterations, i.e., lim_{g→∞} P(g) = 1, where P represents the probability of finding the global optimal solution.

Proof Based on Dell'Amico and Trubian's theory [12], any feasible solution X0 can be led to the global minimal solution X∗ by a finite sequence of moves. This transform process can be expressed as X0 → X1 → X2 → · · · → Xt = X∗. Let q∗ denote the length of the longest critical path of X. Referring to Process 5.3, if icjc and ic+1jc+1 are handled on the same machine, there are at most six combinations among icjc, ic+1jc+1, PM(ij), and SM(ij). Therefore, the probability that Xk+1 is obtained from Xk by a single neighborhood move satisfies

0 < P(Xk → Xk+1) ≥ pc · 1/(6q∗) = σc > 0        (10)

Similarly, P(X0 → X∗) denotes the probability that X0 is transformed into X∗ after t moves, and we have

P(X0 → X∗) ≥ (σc)^t = σt > 0        (11)

Likewise, the probability of losing X∗ is

0 < Pnot(X0 → X∗) < 1 − σt < 1        (12)

Averaging this probability over every step gives

0 < Pnot(Xk → Xk+1) < (1 − σt)^(1/t) < 1        (13)

After g neighborhood moves, the probability of missing X∗ satisfies

lim_{g→∞} Pnot(g) < lim_{g→∞} (1 − σt)^(g/t) = 0        (14)

So

lim_{g→∞} P(g) = 1 − lim_{g→∞} Pnot(g) = 1        (15)

Based on the above proof, the conclusion can be drawn that the discrete WW algorithm can lead any initial waterweed to the best water source within limited generations.

5.5 Computational results and analysis

The discrete WW algorithm has been tested on a set of 43 typical problem instances: three from Fisher and Thompson (FT06, FT10, and FT20) and 40 from Lawrence (LA01–LA40). These problems, with various sizes and hardness levels, are provided by the OR-Library (http://mscmga.ms.ic.ac.uk/info.htma). The optimal solution values of 42 of the 43 instances have been confirmed, while problem LA29 is still open. The parameters of the discrete WW algorithm are determined experimentally, and the exact values used in the tests are given as follows: Npw = 5, pc = 0.2, Mns = 100, lifelimit = 50, maxcycle = 100 · n, where n is the number of jobs. In order to obtain meaningful results, multiple runs are necessary for each instance since the proposed algorithm is not deterministic. The program was run 10 times from different starting solutions on each problem, and the computational results are given in Table 7. The proposed discrete WW algorithm is compared with classical algorithms in the literature, including SBII (Balas and Vazacopoulos [5]), TSAB (Pezzella and Merelli [47]), TSSB (Sun et al. [52]), GASA (Wang and Zheng [67]), HGS-Param (Gonçalves et al. [25]), and HPSO (Sha and Hsu [50]). As seen from the table, WW obtains the BKS (best known solution or best known lower bound) in 34 of the 43 instances, which is fewer than TSAB with 37 and TSSB with 36 but better than SBII with 20, HGS-Param with 10, and HPSO with 31. Tabu search is still one of the most effective methods for solving JSSPs, and WW also shows outstanding general characteristics. Gap is the relative deviation of the obtained optimum (Opt) from the BKS, calculated by the expression Gap = (Opt − BKS)/BKS · 100 %. WW shows excellent accuracy, with a smaller average Gap, 0.12302 %, than the 1.3838 % of SBII, 0.2734 % of GASA, 7.4021 % of

Table 7 Computational results by the discrete WW algorithm and other classical algorithms

Problem | Size | BKS | SBII | TSAB | TSSB | GASA | HGS-Param | HPSO | WW Opt | WW avg
FT06 | 6×6 | 55 | 55 | 55 | 55 | 55 | 55 | 55 | 55 | 55
FT10 | 10×10 | 930 | 930 | 930 | 930 | 930 | 930 | 930 | 930 | 935
FT20 | 20×5 | 1165 | 1178 | 1165 | 1165 | 1165 | 1165 | 1165 | 1165 | 1165
LA01 | 10×5 | 666 | 666 | 666 | 666 | 666 | 666 | 666 | 666 | 666
LA02 | 10×5 | 655 | 669 | 655 | 655 | – | 655 | 655 | 655 | 655
LA03 | 10×5 | 597 | 605 | 597 | 597 | – | 597 | 597 | 597 | 597
LA04 | 10×5 | 590 | 593 | 590 | 590 | – | 590 | 590 | 590 | 590
LA05 | 10×5 | 593 | 593 | 593 | 593 | – | 593 | 593 | 593 | 593
LA06 | 15×5 | 926 | 926 | 926 | 926 | 926 | 926 | 926 | 926 | 926
LA07 | 15×5 | 890 | 890 | 890 | 890 | – | 890 | 890 | 890 | 890
LA08 | 15×5 | 863 | 863 | 863 | 863 | – | 863 | 863 | 863 | 863
LA09 | 15×5 | 951 | 951 | 951 | 951 | – | 951 | 951 | 951 | 951
LA10 | 15×5 | 958 | 959 | 958 | 958 | – | 958 | 958 | 958 | 958
LA11 | 20×5 | 1222 | 1222 | 1222 | 1222 | 1222 | 1222 | 1222 | 1222 | 1222
LA12 | 20×5 | 1039 | 1039 | 1039 | 1039 | – | 1039 | 1039 | 1039 | 1039
LA13 | 20×5 | 1150 | 1150 | 1150 | 1150 | – | 1150 | 1150 | 1150 | 1150
LA14 | 20×5 | 1292 | 1292 | 1292 | 1292 | – | 1292 | 1292 | 1292 | 1292
LA15 | 20×5 | 1207 | 1207 | 1207 | 1207 | – | 1207 | 1207 | 1207 | 1207
LA16 | 10×10 | 945 | 978 | 945 | 945 | 945 | 945 | 945 | 945 | 945
LA17 | 10×10 | 784 | 787 | 784 | 784 | – | 784 | 784 | 784 | 784.2
LA18 | 10×10 | 848 | 859 | 848 | 848 | – | 848 | 848 | 848 | 847
LA19 | 10×10 | 842 | 860 | 842 | 842 | – | 842 | 842 | 842 | 842
LA20 | 10×10 | 902 | 914 | 902 | 902 | – | 902 | 907 | 907 | 908.3
LA21 | 15×10 | 1046 | 1084 | 1047 | 1058 | 1042 | 1046 | 1046 | 1046 | 1048.5
LA22 | 15×10 | 927 | 944 | 927 | 927 | – | 927 | 935 | 935 | 940.2
LA23 | 15×10 | 1032 | 1032 | 1032 | 1032 | – | 1032 | 1032 | 1032 | 1032
LA24 | 15×10 | 935 | 976 | 939 | 938 | – | 935 | 953 | 937 | 938.4
LA25 | 15×10 | 977 | 1017 | 977 | 979 | – | 977 | 986 | 977 | 979.1
LA26 | 20×10 | 1218 | 1224 | 1218 | 1218 | 1218 | 1218 | 1218 | 1218 | 1218
LA27 | 20×10 | 1235 | 1291 | 1236 | 1235 | – | 1235 | 1256 | 1236 | 1241.9
LA28 | 20×10 | 1216 | 1250 | 1216 | 1216 | – | 1216 | 1232 | 1216 | 1216
LA29 | 20×10 | 1152 | 1239 | 1160 | 1168 | – | 1163 | 1196 | 1160 | 1167.4
LA30 | 20×10 | 1355 | 1355 | 1355 | 1355 | – | 1355 | 1355 | 1355 | 1355
LA31 | 30×10 | 1784 | 1784 | 1784 | 1784 | 1784 | 1784 | 1784 | 1784 | 1784
LA32 | 30×10 | 1850 | 1850 | 1850 | 1850 | – | 1850 | 1850 | 1850 | 1850
LA33 | 30×10 | 1719 | 1719 | 1719 | 1719 | – | 1719 | 1719 | 1719 | 1719
LA34 | 30×10 | 1721 | 1721 | 1721 | 1721 | – | 1721 | 1721 | 1721 | 1721
LA35 | 30×10 | 1888 | 1888 | 1888 | 1888 | – | 1888 | 1888 | 1888 | 1888
LA36 | 15×15 | 1268 | 1305 | 1268 | 1292 | 1268 | 1268 | 1279 | 1279 | 1282.3
LA37 | 15×15 | 1397 | 1423 | 1407 | 1411 | – | 1397 | 1408 | 1407 | 1411.2
LA38 | 15×15 | 1196 | 1255 | 1196 | 1201 | – | 1196 | 1219 | 1196 | 1203.1
LA39 | 15×15 | 1233 | 1273 | 1233 | 1240 | – | 1233 | 1246 | 1242 | 1245.6
LA40 | 15×15 | 1222 | 1269 | 1229 | 1233 | – | 1224 | 1241 | 1129 | 1231.53

Average Gap | | | 1.3838 % | 0.0501 % | 0.1015 % | 0.2734 % | 7.4021 % | 0.3719 % | 0.12302 % |
No. of instances | | | 43 | 43 | 43 | 11 | 43 | 43 | 43 |
No. of BKS obtained | | | 20 | 37 | 36 | 9 | 10 | 31 | 34 |


HGS-Param, and 0.3719 % of HPSO. On the other hand, WW failed to find the optimal solutions for problems LA20, LA22, LA24, LA27, LA29, LA36, and LA37, which are considered particularly hard. This shows that an increasing problem scale leads to a reduction of the algorithm's accuracy, just as for all the other algorithms; the average Opt over 10 runs further confirms this trend. Improving the ability of the WW algorithm to solve large-scale JSSPs is a good research direction. On the whole, WW is successfully applied to solving JSSPs, and the computational results and comparisons with other classical methods demonstrate its competitive effectiveness and efficiency.
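The Gap measure used throughout Table 7 can be computed directly from the definition above:

```python
def gap(opt, bks):
    """Relative deviation Gap = (Opt - BKS) / BKS * 100 (in percent)."""
    return (opt - bks) / bks * 100.0

# e.g., SBII on FT20: Opt = 1178 against BKS = 1165
print(round(gap(1178, 1165), 4))   # 1.1159
```

Averaging this quantity over the 43 instances gives the "Average Gap" row of the table.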

6 Conclusion and future work

Waterweeds show great swarm intelligence in searching for rich water sources. Inspired by the collaborative behavior in a waterweeds swarm, a new search algorithm called the waterweeds algorithm is proposed. Around this new algorithm, this paper mainly carried out the following works:
1. A set of waterweeds reproduction principles and modes is condensed from the collaborative behavior in a waterweeds swarm, and the basic properties on which self-organization relies confirm their potential as an algorithm.
2. Through the design of a series of parameters and operators, the waterweeds reproduction principles are transformed into a quantitative optimization algorithm. The new waterweeds algorithm, with few user-defined parameters and a simple structure, is tested on five continuous optimization problems and shows competitive performance in comparison with other advanced algorithms.
3. Furthermore, taking into account that job shop scheduling problems belong to a special class of optimization problems, a series of modifications, including discrete coding, a new evolution strategy, and a neighborhood structure, are introduced into the original waterweeds algorithm. In the subsequent comparison tests with other classical algorithms on benchmark instances, the new discrete waterweeds algorithm obtained competitive results in both effectiveness and accuracy.
Job shop scheduling is a typical optimization problem in industrial production under a cloud environment. Since JSSPs are considered among the most difficult problems to deal with, the waterweeds algorithm's successful application to JSSPs shows great prospects in other related areas.
The waterweeds algorithm is a new algorithm; hence, a lot of work, including performance testing and mechanism

improvement, still needs to be done. Several important subjects of ongoing work are given as follows:
1. In this paper, only a very limited set of problems was tested with the WW algorithm; one direction of further investigation is to apply the algorithm to more benchmark functions and determine for which types of problems the WW algorithm is more efficient.
2. In the WW algorithm, waterweeds stay at different water sources and grow in different statuses, but their probabilities of being selected as father waterweeds are the same. It is worth testing the performance of a mechanism in which a waterweed with better growth status produces more pollen and is more likely to be selected as a father waterweed.
3. The WW algorithm also has good prospects in solving constrained optimization problems, so its performance evaluation there is another task.
4. The WW algorithm shows outstanding capability in solving JSSPs, but its performance declines sharply as the size of the problem increases. Carrying out more effective corrections to the WW algorithm for large-scale JSSPs is another good research direction.

Acknowledgments This article is partly supported by the National Natural Science Foundation of China (Grants 51522501 and 51475032), Beijing Natural Science Foundation (Grant 4152032), Beijing Youth Talent Plan (Grant 29201411), the Fundamental Research Funds for the Central Universities in China, and the 863 Program project in China under Grant No. 2013AA041302. The authors thank the anonymous referees and the editor for their valuable suggestions which helped to improve the quality of the final manuscript.

References 1. Adams J, Balas E, Zawack D (1988) The shifting bottleneck procedure for job shop scheduling. Manag Sci 34(3):391–401 2. Applegate D, Cook W (1991) A computational study of the jobshop scheduling problem. ORSA J Comput 3(2):149–156 3. Bagheri A, Zandieh M, Mahdavi I, Yazdani M (2010) An artificial immune algorithm for the flexible job-shop scheduling problem. Futur Gener Comput Syst 26(4):533–541 4. Balas E (1969) Machine sequencing via disjunctive graphs: an implicit enumeration algorithm. Oper Res 17(6):941–957 5. Balas E, Vazacopoulos A (1998) Guided local search with shifting bottleneck for job shop scheduling. Manag Sci 44(2): 262–275 6. Bastos Filho CJ, de Lima Neto FB, Lins AJ, Nascimento AI, Lima MP (2009) Fish school search. In: Nature-inspired algorithms for optimisation. Springer, pp 261–277 7. Brooks GH, White CR (1965) An algorithm for finding optimal or near optimal solutions to production scheduling problem. J Ind Eng 16(1):34

8. Brucker P, Jurisch B, Sievers B (1994) A branch and bound algorithm for the job-shop scheduling problem. Discret Appl Math 49(1):107–127 9. Carlier J, Pinson É (1989) An algorithm for solving the job-shop problem. Manag Sci 35(2):164–176 10. Coello CAC, Rivera DC, Cortes NC (2003) Use of an artificial immune system for job shop scheduling. In: Artificial Immune Systems. Springer, pp 1–10 11. De Castro LN, Von Zuben FJ (1999) Artificial immune systems: Part I–basic theory and applications. Universidade Estadual de Campinas, Dezembro de, Tech Rep 12. Dell'Amico M, Trubian M (1993) Applying tabu search to the jobshop scheduling problem. Ann Oper Res 41(3):231–252 13. Dominic PD, Kaliyamoorthy S, Kumar MS (2004) Efficient dispatching rules for dynamic job shop scheduling. Int J Adv Manuf Technol 24(1-2):70–75 14. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst, Man, and Cybern Part B: Cybern 26(1):29–41 15. Engelbrecht AP (2005) Fundamentals of computational swarm intelligence, vol 1. Wiley, Chichester 16. Falkenauer E, Bouffouix S (1991) A genetic algorithm for job shop. In: Proceedings of the IEEE International Conference on robotics and automation, pp 824–829 17. Fayad C, Petrovic S (2005) A fuzzy genetic algorithm for real-world job shop scheduling. In: Innovations in applied artificial intelligence, Springer, pp 524–533 18. Fisher H, Thompson GL (1963) Probabilistic learning combinations of local job-shop scheduling rules. Industrial scheduling 3:225–251 19. Florian M, Trepant P, McMahon G (1971) An implicit enumeration algorithm for the machine sequencing problem. Manag Sci 17(12):B–782 20. French S (1982) Sequencing and scheduling: an introduction to the mathematics of the job-shop, vol 683. Ellis Horwood, Chichester 21. Gao L, Zhang G, Zhang L, Li X (2011) An efficient memetic algorithm for solving the job shop scheduling problem. Comput Ind Eng 60(4):699–705 22.
Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1(2):117–129 23. Ge H, Du W, Qian F (2007) A hybrid algorithm based on particle swarm optimization and simulated annealing for job shop scheduling. In: 3rd international conference on natural computation, ICNC 2007. IEEE, vol 3, pp 715–719 24. Giffler B, Thompson GL (1960) Algorithms for solving production-scheduling problems. Oper Res 8(4):487–503 25. Gonçalves JF, de Magalhães Mendes JJ, Resende MG (2005) A hybrid genetic algorithm for the job shop scheduling problem. Eur J Oper Res 167(1):77–95 26. Greenberg HH (1968) A branch-bound solution to the general scheduling problem. Oper Res 16(2):353–361 27. Huang RH (2010) Multi-objective job-shop scheduling with lot-splitting production. Int J Prod Econ 124(1):206–213 28. Jawahar N, Aravindan P, Ponnambalam S (1998) A genetic algorithm for scheduling flexible manufacturing systems. Int J Adv Manuf Technol 14(8):588–607 29. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Tech. rep., Technical report-tr06, Erciyes university, engineering faculty, computer engineering department 30. Karaboga D, Basturk B (2008) On the performance of artificial bee colony (abc) algorithm. Appl Soft Comput 8(1):687–697

31. Kennedy J, Kennedy JF, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann 32. Krink T, Filipic B, Fogel GB (2004) Noisy optimization problems—a particular challenge for differential evolution? In: Congress on evolutionary computation, CEC2004. IEEE, vol 1, pp 332–339 33. Krishnanand K, Ghose D (2006) Glowworm swarm based optimization algorithm for multimodal functions with collective robotics applications. Multiagent and Grid Systems 2(3):209–222 34. Lacomme P, Larabi M, Tchernev N (2013) Job-shop based framework for simultaneous scheduling of machines and automated guided vehicles. Int J Prod Econ 143(1):24–34 35. Lageweg B, Lenstra J, Rinnooy Kan A (1977) Job-shop scheduling by implicit enumeration. Manag Sci 24(4):441–450 36. Li J, Tao F, Cheng Y, Zhao L (2015) Big data in product lifecycle management. Int J Adv Manuf Technol 81:667–684 37. Li Jq, Yx Pan (2013) A hybrid discrete particle swarm optimization algorithm for solving fuzzy job shop scheduling problem. Int J Adv Manuf Technol 66(1–4):583–596 38. Li JQ, Pan QK, Suganthan P, Chua T (2011) A hybrid tabu search algorithm with an efficient neighborhood structure for the flexible job shop scheduling problem. Int J Adv Manuf Technol 52(5-8):683–697 39. Li T, Wang C, Wang W, Su W (2005) A global optimization bionics algorithm for solving integer programming-plant growth simulation algorithm. Syst Eng Theory Pract 25(1):76–85 40. de Lima Neto F, Lins A, Nascimento AI, Lima MP et al (2008) A novel search algorithm based on fish school behavior. In: IEEE international conference on systems, man and cybernetics, SMC 2008. IEEE, pp 2646–2651 41. Martin PD (1996) A time-oriented approach to computing optimal schedules for the job-shop scheduling problem. Cornell University 42. Moslehi G, Mahnam M (2011) A pareto approach to multiobjective flexible job-shop scheduling problem using particle swarm optimization and local search. Int J Prod Econ 129(1): 14–22 43. 
Naderi B, Ghomi SF, Aminnayeri M, Zandieh M (2011) Scheduling open shops with parallel machines to minimize total completion time. J Comput Appl Math 235(5):1275–1287 44. Nowicki E, Smutnicki C (1996) A fast taboo search algorithm for the job shop problem. Manag Sci 42(6):797–813 45. Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst 22(3): 52–67 46. Perregaard M, Clausen J (1998) Parallel branch-and-bound methods for the job-shop scheduling problem. Ann Oper Res 83: 137–160 47. Pezzella F, Merelli E (2000) A tabu search method guided by shifting bottleneck for the job shop scheduling problem. Eur J Oper Res 120(2):297–310 48. Pham D, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2005) The bees algorithm. technical note. Manufacturing Engineering Centre, Cardiff University, UK, pp 1–57 49. Ponnambalam S, Aravindan P, Rajesh S (2000) A tabu search algorithm for job shop scheduling. Int J Adv Manuf Technol 16(10):765–771 50. Sha D, Hsu CY (2006) A hybrid particle swarm optimization for job shop scheduling problem. Comput Ind Eng 51(4):791–808 51. Subramaniam V, Ramesh T, Lee G, Wong Y, Hong G (2000) Job shop scheduling with dynamic fuzzy selection of dispatching rules. Int J Adv Manuf Technol 16(10):759–764

52. Sun D, Batta R, Lin L (1995) Effective job shop scheduling through active chain manipulation. Comput Oper Res 22(2):159–172 53. Tao F, Zhao D, Hu Y, Zhou Z (2008) Resource service composition and its optimal-selection based on particle swarm optimization in manufacturing grid system. IEEE Trans Ind Inf 4(4):315–327 54. Tao F, Zhang L, Zhang Z, Nee A (2010) A quantum multiagent evolutionary algorithm for selection of partners in a virtual enterprise. CIRP Ann Manuf Technol 59(1):485–488 55. Tao F, Zhao D, Yefa H, Zhou Z (2010) Correlation-aware resource service composition and optimal-selection in manufacturing grid. Eur J Oper Res 201(1):129–143 56. Tao F, Zhang L, Venkatesh V, Luo Y, Cheng Y (2011) Cloud manufacturing: a computing and service-oriented manufacturing model. Proc IME B J Eng Manufact 225(10):1969–1976 57. Tao F, LaiLi Y, Xu L, Zhang L (2013) Fc-paco-rm: a parallel method for service composition optimal-selection in cloud manufacturing system. IEEE Trans Ind Inf 9(4):2023–2033 58. Tao F, Cheng Y, Da Xu L, Zhang L, Li BH (2014a) Cciot-cmfg: cloud computing and internet of things-based cloud manufacturing service system. IEEE Trans Ind Inf 10(2):1435–1442 59. Tao F, Feng Y, Zhang L, Liao T (2014b) Clps-ga: a case library and pareto solution-based hybrid genetic algorithm for energy-aware cloud service scheduling. Appl Soft Comput 19:264–279 60. Tao F, Laili Y, Liu Y, Feng Y, Wang Q, Zhang L, Xu L (2014c) Concept, principle and application of dynamic configuration for intelligent algorithms. IEEE Syst J 8(1):28–42 61. Tao F, Zuo Y, Da Xu L, Zhang L (2014) Iot-based intelligent perception and access of manufacturing resource toward cloud manufacturing. IEEE Trans Ind Inf 10(2):1547–1557 62. Tao F, Cheng Y, Zhang L, Nee A (2015) Advanced manufacturing systems: socialization characteristics and trends. J Intell Manuf:1–16 63.
Tao F, Li C, Liao T, Laili Y (2015) Bgm-bla: a new algorithm for dynamic migration of virtual machines in cloud computing 64. Tao F, Zhang L, Liu Y, Cheng Y, Wang L, Xu X (2015) Manufacturing service management in cloud manufacturing: overview and future research directions. J Manuf Sci Eng Trans ASME 137(4):040912-1–040912-11 65. Topaloglu S, Kilincli G (2009) A modified shifting bottleneck heuristic for the reentrant job shop scheduling problem with makespan minimization. Int J Adv Manuf Technol 44(7-8):781–794 66. Vilcot G, Billaut JC (2011) A tabu search algorithm for solving a multicriteria flexible job shop scheduling problem. Int J Prod Res 49(23):6963–6980 67. Wang L, Zheng DZ (2001) An effective hybrid optimization strategy for job-shop scheduling problems. Comput Oper Res 28(6):585–596 68. Wang L, Zhou G, Xu Y, Wang S, Liu M (2012) An effective artificial bee colony algorithm for the flexible job-shop scheduling problem. Int J Adv Manuf Technol 60(1–4):303–315 69. Xia WJ, Wu ZM (2006) A hybrid particle swarm optimization approach for the job-shop scheduling problem. Int J Adv Manuf Technol 29(3–4):360–366 70. Xing LN, Chen YW, Wang P, Zhao QS, Xiong J (2010) A knowledge-based ant colony optimization for flexible job shop scheduling problems. Appl Soft Comput 10(3):888–896 71. Yang XS (2010) Nature-inspired metaheuristic algorithms. Luniver Press 72. Yang XS (2012) Flower pollination algorithm for global optimization. In: Unconventional Computation and Natural Computation, Springer, pp 240–249 73. Zhang C, Li P, Guan Z, Rao Y (2007) A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem. Comput Oper Res 34(11):3229–3242 74. Zhang R, Song S, Wu C (2013) A hybrid artificial bee colony algorithm for the job shop scheduling problem. Int J Prod Econ 141(1):167–178 75. Zou X, Tao F, Jiang P, Gu S, Qiao K, Zuo Y, Xu L (2015) A new approach for data processing in supply chain network based on FPGA. Int J Adv Manuf Technol 1–12