
Hybrid Flow Shop with Unrelated Machines, Setup Time, and Work in Progress Buffers for Bi-Objective Optimization of Tortilla Manufacturing

Victor Hugo Yaurima-Basaldua 1, Andrei Tchernykh 2,3,*, Francisco Villalobos-Rodríguez 1 and Ricardo Salomon-Torres 1

1 Software Engineering, Sonora State University, San Luis Rio Colorado, Sonora 83455, Mexico; [email protected] (V.H.Y.-B.); [email protected] (F.V.-R.); [email protected] (R.S.-T.)
2 Computer Science Department, CICESE Research Center, Ensenada 22860, Mexico
3 School of Electrical Engineering and Computer Science, South Ural State University, Chelyabinsk 454080, Russia
* Correspondence: [email protected]; Tel.: +521-646-178-6994

Received: 27 February 2018; Accepted: 1 May 2018; Published: 9 May 2018


Abstract: We address a scheduling problem in an actual environment of the tortilla industry. Since the problem is NP-hard, we focus on suboptimal scheduling solutions. We concentrate on a complex multistage, multiproduct, multimachine, and batch production environment considering completion time and energy consumption optimization criteria. The production of wheat-based and corn-based tortillas of different styles is considered. The proposed bi-objective algorithm is based on the known Nondominated Sorting Genetic Algorithm II (NSGA-II). To tune it up, we apply a multifactorial analysis of variance. A branch and bound algorithm is used to assess the obtained performance. We show that the proposed algorithms can be efficiently used in a real production environment. The mono-objective and bi-objective analyses provide a good compromise between saving energy and efficiency. To demonstrate the practical relevance of the results, we examine our solution on real data. We find that it can save 48% of production time and 47% of electricity consumption over the actual production.

Keywords: multiobjective genetic algorithm; hybrid flow shop; setup time; energy optimization; production environment

1. Introduction

Tortillas are a very popular food, a favorite snack and meal option in various cultures and countries. There are two types of tortillas: wheat-based and corn-based. Their overall consumption is growing all over the world. Originally, tortillas were made by hand: grinding corn into flour, mixing the dough, and pressing to flatten it. In fact, many small firms today still use labor-intensive processes, requiring people to perform a majority of the tasks such as packaging, loading, baking, and distributing. This trend is expected to change over the next years.

Due to its considerable practical significance, optimization of tortilla production is important. To improve the production timing parameters (setup, changeover, waiting), operational cost (energy consumption, repair, service), throughput, etc., careful analysis of the process and of advanced scheduling approaches is needed. This industry is a typical case of a hybrid flow shop, which is a complex combinatorial optimization problem that arises in many manufacturing systems. In the classical flow shop, a set of jobs has to pass through various stages of production. Each stage can have several machines. There is also the flexibility of incorporating different-capacity machines, turning this design into hybrid flow shop scheduling (HFS).

Algorithms 2018, 11, 68; doi:10.3390/a11050068



The production decisions schedule work to machines in each stage to determine the processing order according to one or more criteria. Hybrid flow shop scheduling problems are commonly encountered in manufacturing environments. A large number of heuristics for different HFS configurations considering realistic problems have been proposed [1–6].

In spite of significant research efforts, some aspects of the problem have not yet been sufficiently explored. For example, previous works deal mostly with improving the efficiency of production, where objective functions are usually used to minimize the total production time (makespan), the total tardiness, etc. [7]. Although production efficiency is essential, it is not the only factor to consider in manufacturing operations. One of the main motivations of this work is to reduce energy consumption. In recent years, it has been recognized that high energy consumption can increase operational cost and cause negative environmental impacts [8].

In this paper, we propose a genetic algorithm that contributes to the solution of a bi-objective HFS problem of a real tortilla production environment, with unrelated machines, setup time, and work in progress buffers, taking into account the total completion time and energy consumption of the machines.

The rest of the paper is organized as follows. After presenting the description and characteristics of the production line and jobs in Section 2, we discuss related work in Section 3. We describe the model of the problem in Section 4. We present the details of the proposed solution in Section 5 and the algorithm calibration in Section 6. We discuss computational experiments and results in Section 7. Finally, we conclude with a summary and an outlook in Section 8.

2. Production Process

Figure 1 shows the eight main stages of the production process: (1) Preparation, (2) Mixing, (3) Division and Rounding, (4) Repose, (5) Press, (6) Bake, (7) Cooling, (8) Stacking and Packing.

Figure 1. Tortilla production line.

In this paper, we consider a hybrid flow shop with six stages (from 2 to 7), with different numbers of machines in stages 2, 4, and 6. In the production of wheat flour tortillas, the raw ingredients prepared in Stage 1 are transferred to a large mixer (Stage 2), where they are combined to create the dough.

Corn tortilla production starts with a process called nixtamalization, in which the corn is soaked in an alkali solution of lime (calcium hydroxide) and hot water. The dough is divided into fragments with round shapes (Stage 3). These fragments repose in a controlled environment to obtain a certain consistency as to the temperature and humidity (Stage 4). Next, they are pressed to take the shape of a tortilla (Stage 5). Rows of tortillas are heated in stoves (Stage 6). The next steps are to cool (Stage 7), stack, and pack them (Stage 8).


The process is distributed among flow lines with several machines per stage (Figure 2). It is operated in a semi-automated way controlled by operators.

Figure 2. Six-stage hybrid flow shop.

Tables 1 and 2 present the capacities of the machines in each stage. These are represented by the amount of work (tortillas in kilograms) that can be processed on each machine.

Table 1. The processing capacity of machines in Stage 2 (kg per minute).

Machines    1     2     3     4     5     6
Jobs 1–5    80    80    100   130   160   200

Table 2. The processing capacity of machines in Stages 3–7 (kg per minute).

Stage       3      4      5              6      7
Machines    1–6    1–6    1–3    4–6     1–6    1–6
Job 1       1.3    200    7.2    9.6     9.6    9.6
Job 2       1.3    200    6.4    8.8     8.8    8.8
Job 3       0.44   200    2.4    4       4      4
Job 4       0.31   200    3.2    4.8     4.8    4.8
Job 5       1.3    200    4      8.8     8.8    8.8

Table 1 demonstrates that Stage 2 has six machines with processing capabilities of 80, 80, 100, 130, 160, and 200 kg per minute that can process jobs from 1 to 5. Machines 1 and 2 have the same capacities.

Table 2 shows that Stages 3, 4, 6, and 7 have identical machines, and Stages 2 and 5 have unrelated machines. Table 3 shows the power of the machines in each stage in kW per minute. The consumption is independent of the type of job to be processed. Machines in Stage 4 do not consume energy; the equipment is cabinets, where dough balls remain for a given period.

Table 3. Energy consumption of machines (kW per minute).

Stage       2      3      4     5      6      7
Machine 1   0.09   0.03   0     0.62   0.04   0.12
Machine 2   0.09   0.03   0     0.62   0.04   0.12
Machine 3   0.11   0.03   0     0.62   0.04   0.12
Machine 4   0.14   0.03   0     0.9    0.04   0.12
Machine 5   0.23   0.03   0     0.9    0.04   0.12
Machine 6   0.23   0.03   0     0.9    0.04   0.12


Table 4 shows the characteristics of the jobs. We consider a workload with five jobs for the production of the five types of tortillas most demanded by the industry: Regular, Integral, Class 18 pcs, Taco 50, and MegaBurro. Each job is characterized by its type, group, and the batches required for production. It has several assigned batches. This quantity must be within a certain range obtained from the standard deviations of the average monthly production in kilograms of tortillas for a year.

Table 4. Job characteristics.

Job j   Name           Type   Group   Batch Limits
1       Regular        1      1       356–446
2       Integral       2      2       34–61
3       Class 18 pcs   3      1       7–10
4       Taco 50        4      1       13–22
5       MegaBurro      5      1       12–20

The batch indicates the number of 10 kg tasks that have to be processed in the job. A job with one batch indicates processing 10 kg, and with five batches indicates processing 50 kg. The group indicates the compatibility of the job tasks. Job tasks belonging to the same group may be processed simultaneously. For instance, they can be processed on all unrelated machines of Stage 2 if their sizes are less than or equal to the capacity in kilograms of the machines (Table 1). The job tasks of the same type of dough can be combined, forming compatibility groups. In this case, the jobs of Types 1, 3, 4, and 5 belong to Group 1, and jobs of Type 2 belong to Group 2.

Table 5 shows processing times in minutes for every 10 kg of tortilla for a job type. The 10 kg represents a task. We see that in Stages 2 and 4, all machines process jobs for the same durations of 1 and 45 min, respectively, since the time does not depend on the job type.

Table 5. Processing time P_{ik,j} of job j at machine k in stage i for each 10 kg.

Stage       2      3      4      5              6       7
Machines    1–6    1–6    1–6    1–3    4–6     1–6     1–6
Job 1       1      1.25   45     1.43   1.07    12.15   17.25
Job 2       1      1.25   45     1.48   1.11    12.15   17.25
Job 3       1      4.38   45     3.69   2.77    21.38   26.48
Job 4       1      3.13   45     3.00   2.25    24.50   29.60
Job 5       1      0.42   45     2.28   1.14    18.24   23.34

Table 6 shows the setup time of machines at each stage. At stages from 3 to 7, the setup time is independent of the task sequence, job type, and weight.

Table 6. Sequence setup time for all machines in stage i.

Stage        2     3     4     5     6    7
Setup time   st2   0.7   0.7   0.5   0    0

At Stage 2, the setup time is calculated as follows:

st2 = (35 + ⌈tw/10⌉ × 5)/60,   (1)

where tw is the total weight in kilograms of the tasks to be processed by machines in the same time period. These tasks have to belong to jobs of the same group.
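As a quick illustration, Equation (1) can be evaluated directly. The sketch below (Python; the function name is ours, and the ceiling of tw/10 is our reading of the batch definition) computes the Stage 2 setup time in minutes for a group of tasks:

```python
import math

def stage2_setup_time(total_weight_kg: float) -> float:
    """Stage 2 setup time in minutes per Equation (1).

    35 s of base adjustment plus 5 s per started 10 kg batch,
    converted from seconds to minutes.
    """
    batches = math.ceil(total_weight_kg / 10)  # tasks are 10 kg batches
    return (35 + batches * 5) / 60

# A 70 kg group (7 batches): (35 + 7 * 5) / 60 = 70/60 ≈ 1.17 min
print(stage2_setup_time(70))
```

For comparison, this is noticeably larger than the fixed 0.5–0.7 min setups of Stages 3–5 (Table 6), which is why Stage 2 setups dominate the sequencing decisions there.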


3. Related Works

3.1. Energy-Aware Flow Shop Scheduling

Investment in new equipment and hardware can certainly contribute to energy savings [9]. The use of "soft" techniques to achieve the same objective is also an effective option [10]. Advanced scheduling could play an important role in reducing the energy consumption of manufacturing processes. Meta-heuristics, e.g., genetic algorithms (GA) [11], particle swarm optimization [12], and simulated annealing [13], are very popular in the design of production systems. However, studies of flow shop scheduling problems for energy saving appear to be limited [14,15]. In fact, most production systems that allow changing the speed of the machines belong to the mechanical engineering industry and usually take the configuration of a job shop instead of a flow shop [16].

One of the first attempts to reduce energy consumption through production scheduling can be found in the work of Mouzon et al. [17]. The authors collected operational statistics for four machines in a flow shop and found that nonbottleneck machines consume a considerable amount of energy when left idle. As a result, they proposed a framework for scheduling power on and off events to control machines, achieving a reduction of total energy consumption. Dai et al. [18] applied this on/off strategy in a flexible flow shop, obtaining satisfactory solutions to minimize the total energy consumption and makespan. Zhang and Chiong [16] proposed a multiobjective genetic algorithm incorporating two strategies for local improvements to specific problems, to minimize power consumption based on machine speed scaling. Mansouri et al. [19] analyzed the balance between minimizing the makespan, a measure of the level of service, and energy consumption in a permutation flow shop with a sequence of two machines.
The authors developed a mixed-integer linear multiobjective optimization model to find the Pareto frontier composed of energy consumption and makespan. Hecker et al. [20] used evolutionary algorithms for scheduling in a no-wait hybrid flow shop to optimize the allocation of tasks in production lines of bread, using particle swarm optimization and ant colony optimization. Hecker et al. [21] used a modified genetic algorithm, ant colony optimization, and a random search procedure to study the makespan and total idle time of machines in a hybrid permutation flow model. Liu and Huang [14] studied a scheduling problem with batch processing machines in a hybrid flow shop with energy-related criteria and total weighted tardiness. The authors applied the Nondominated Sorting Genetic Algorithm II (NSGA-II). Additionally, for completion time problems, Yaurima et al. [3] proposed heuristic and meta-heuristic methods to solve hybrid flow shop (HFS) problems with unrelated machines, sequence-dependent setup time (SDST), availability constraints, and limited buffers.

3.2. Multiobjective Optimization

Different optimization criteria are used to improve models of a hybrid flow shop. An overview of these criteria can be found in [4]. Genetic algorithms have received considerable attention as an approach to multiobjective optimization [6,22,23].

Deb et al. [24] proposed a computationally efficient multiobjective algorithm called NSGA-II (Nondominated Sorting Genetic Algorithm II), which can find Pareto-optimal solutions. The computational complexity of the algorithm is O(MN²), where M is the number of objectives and N is the population size. In this article, we apply it to solve our problem with two criteria. An experimental analysis of two crossover operators, three mutation operators, three crossover and mutation probabilities,


and three population sizes is presented. For each case of machines per stage, the goal is to find the most desirable solutions that provide the best results considering both objectives.

3.3. Desirability Functions

Obtaining solutions on the Pareto front does not by itself resolve the multiobjective problem. There are several methodologies to incorporate preferences into the search process [25]. These methodologies give guidance or recommendations concerning the selection of a final justifiable solution from the Pareto front [26]. One of these methodologies is Desirability Functions (DFs), which map the value of each objective to a desirability, that is, to values on a unitless scale in the domain [0, 1]. Let us consider that the domain of the objective f_i is Z_i ⊆ R; then a DF is defined as any function d_i : Z_i → [0, 1] that specifies the desirability of different regions of the domain Z_i for objective f_i [25].

The Desirability Index (DI) combines the individual values of the DFs into one preferential value in the range [0, 1]. The higher the values of the DFs, the greater the DI. The individual in the final front with the highest DI is the best candidate to be selected by the decision maker [27]. In the algorithm calibration, desirability functions are applied to the final solutions of the Pareto front to get the best combination of parameters.

4. Model

Many published works address heuristics for flow shops that model real production environments [3,23,28–30]. We consider the following problem: A set J = {1, 2, 3, 4, 5} of jobs available at time 0 must be processed on a set of 6 consecutive stages S = {2, 3, 4, 5, 6, 7} of the production line, subject to minimization of the total processing time Cmax and the energy consumption of the machines Eop. Each stage i ∈ S consists of a set Mi = {1, . . . , mi} of parallel machines, where |Mi| ≥ 1. The sets M2 and M5 have unrelated machines, and the sets M3, M4, M6, and M7 have identical machines.
We consider three different cases of mi ∈ {2, 4, 6} machines in each stage. Each job j ∈ J belongs to a particular group Gk. Jobs 1, 3, 4, 5 ∈ J belong to G1; job 2 ∈ J belongs to G2. Each job is divided into a set Tj of tj tasks, Tj = {1, 2, . . . , tj}. Jobs and tasks in the same group can be processed simultaneously only in stage 2 ∈ S, according to the capacity of the machine. In stages 3, 4, 5, 6, 7 ∈ S, the tasks must be processed by exactly one machine in each stage (Figure 2).

Each task t ∈ Tj consists of a set of batches of 10 kg each. The tasks can have different numbers of batches. The set of all the tasks of all the jobs is given by T = {1, 2, . . . , Q}. Each machine must process an amount of a batches at the time of assigning a task. Let p_{il,q} be the processing time of the task q ∈ Tj on the machine l ∈ Mi at stage i. Let S_{il,qp} be the setup (adjustment) time of the machine l at stage i to process the task p ∈ Tj after processing task q. Each machine has a process buffer for temporarily storing the waiting tasks, called "work in progress". The setup times in stage 2 ∈ S are considered to be sequence dependent; in stages 3, 4, 5, 6, 7 ∈ S, they are sequence independent.

Using the well-known three-field notation α | β | γ for scheduling problems introduced in [31], the problem is denoted as follows:

FH6, (RM(2), (PM(i))_{i=3,4}, RM(5), (PM(i))_{i=6,7}) | S_sd(2), S_si(3–7) | Cmax, Eop.   (2)
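To make the notation concrete, the instance described above can be captured in a few literal structures. The sketch below (Python; all names are illustrative, not taken from the paper) records the stage set, which stages have unrelated versus identical machines, and the compatibility groups:

```python
# Illustrative encoding of the problem instance (names are ours, not the paper's).
STAGES = [2, 3, 4, 5, 6, 7]           # set S of consecutive stages
UNRELATED_STAGES = {2, 5}              # M2, M5: unrelated parallel machines
IDENTICAL_STAGES = {3, 4, 6, 7}        # M3, M4, M6, M7: identical machines

JOBS = [1, 2, 3, 4, 5]
GROUPS = {1: [1, 3, 4, 5],             # G1: compatible dough types
          2: [2]}                      # G2: Integral, processed separately

MACHINES_PER_STAGE_CASES = [2, 4, 6]   # the three cases of m_i considered

def group_of(job: int) -> int:
    """Return the compatibility group G_k that a job belongs to."""
    return next(g for g, members in GROUPS.items() if job in members)
```

Only group G1 jobs may share a Stage 2 machine within one time period, which is exactly the constraint the setup formula (1) relies on.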

Encoding

Each solution (chromosome) is encoded as a permutation of integers. Each integer represents a batch (10 kg) of a given job. The enumeration of each batch is given in a range specified by the order of the jobs and the number of batches.


Batches are listed according to the job to which they belong. Table 7 shows an example for the five jobs. If Job 1 has nine batches, they are listed from 1 to 9. If Job 2 has 12 batches, they are listed from 10 to 21. Reaching Job 5, we have accumulated 62 batches.

Table 7. Example of enumeration of batches for jobs.

Job           1      2       3       4       5
Batches       9      12      16      15      10
Batch index   1–9    10–21   22–37   38–52   53–62

Figure 3 shows an example of the representation. Groups are composed according to the number of machines in the stage (mi). The batches of each job are randomly distributed between the groups. To obtain the number of batches per group (a), we divide the total amount of batches by the number of machines per stage.

Let us consider six stages with m1 = 4 machines per stage. The chromosome is composed of 4 groups. Let us assume that we have 62 batches for 5 jobs. This is equal to 62 (batches)/4 (machines per stage) = 15.5 (batches per group). The result is rounded to the nearest higher integer, ending with a = 16 batches per group. An example of the processing of tasks on Machine 1 is shown in Figure 4.

Figure 3. Example of the representation of the chromosome for mi machines per stage.
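The grouping step above can be sketched as follows (Python; a hypothetical helper, not the authors' implementation): batches are enumerated consecutively per job, shuffled, and split into groups of a = ⌈Q/mi⌉ batches.

```python
import math
import random

def build_chromosome(batches_per_job, machines_per_stage, seed=None):
    """Randomly distribute enumerated batches into machine groups.

    batches_per_job: e.g. {1: 9, 2: 12, 3: 16, 4: 15, 5: 10} (Table 7).
    Returns machines_per_stage groups of at most a = ceil(total / m) batch indices.
    """
    rng = random.Random(seed)
    # Batches are enumerated consecutively per job: Job 1 -> 1..9, Job 2 -> 10..21, ...
    total = sum(batches_per_job.values())
    permutation = list(range(1, total + 1))
    rng.shuffle(permutation)
    a = math.ceil(total / machines_per_stage)  # batches per group, rounded up
    return [permutation[i:i + a] for i in range(0, total, a)]

groups = build_chromosome({1: 9, 2: 12, 3: 16, 4: 15, 5: 10}, 4, seed=0)
# 62 batches over 4 machines -> a = 16; group sizes are 16, 16, 16, 14.
```

Note that with rounding up, the last group absorbs the remainder, so group sizes may differ by a few batches, which matches the worked example of 62 batches and a = 16.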

Figure 4. Example of tasks on Machine 1:

Original sequence: 56 23 38 35 1 42 61 14 51 11 5 59 48 15 19 16
Ordered batches:   1 5 | 23 35 | 38 42 51 | 48 | 56 61 59 | 14 11 15 19 16
Jobs:              1     3      4          4     5          2
Proc. order:       1                       2                3

In the ordered sequence, "Jobs" indicates the jobs to which the batches belong according to their enumeration. These groups indicate the number of total tasks for each job from left to right. For example, in the ordered sequence, Job 4 has two tasks; the first one "4-1" has three batches (38, 42, 51) and the second one "4-2" has one batch (48).

"Proc. Order" indicates the order of processing of the task groups. Assuming that a machine in Stage 1 has to process up to 70 kg (7 batches), Group 1 is composed of the sets of tasks 1-1 (1, 5), 3-1 (23, 35), and 4-1 (38, 42, 51) with seven batches. When the capacity of the machine is filled, Group 2 is formed for the tasks 4-2 (48) and 5-1 (56, 61, 59). Assuming that the batches of Job 2 are not compatible with any of the remaining four jobs, these batches are processed separately from the other two groups, forming the task 2-1 with five batches (14, 11, 15, 19, 16). When the first group's tasks have completed their processing in the machine, the second group is processed, and so on.

The tasks processed in the group are assigned to the machines of the following stages individually according to their order of completion in the first stage. When the tasks of a group (for example, Group 1: 1-1, 3-1, 4-1) are processed together, the completion time is the same for all. Therefore,


the three tasks will be ready at the same time to be assigned individually to machines available in the next stage, until the last (6th) stage.

In the post-second stages, the tasks that are ready to be assigned are in the process buffer. The task that has the longest processing time is assigned first to an available machine. The calculation of the total completion time Cmax is as follows. The time Ci,p for completing task p in stage i is calculated according to the formula

Ci,p = min_{1 ≤ l ≤ mi} { max{ Ci,q + S_{il,qp}; C_{i−1,p} } + p_{il,p} },   (3)

where q is the task processed immediately before p on machine l.

The maximum completion time of all tasks and jobs is calculated as

$$C_{max} = \max_{p=1,\dots,Q} \left\{ C_{7,p} \right\}. \qquad (4)$$

Q indicates the total number of tasks for all jobs, and C7,p is the completion time of task p ∈ Tj in the last stage 7 ∈ S. The total energy consumption Eop of the execution of all tasks is calculated as

$$E_{op} = \sum_{q=1}^{Q} p_{i_l,q} \cdot E_{i_l}, \qquad (5)$$

where Eil indicates the electrical power consumption of machine l in stage i, and pil,q refers to the processing time of the task q ∈ T.

5. Bi-Objective Genetic Algorithm

5.1. Nondominated Sorting Genetic Algorithm II (NSGA-II)

The NSGA-II algorithm is used to assign tasks to machines so that Cmax and Eop are minimized. NSGA-II is a multiobjective genetic algorithm characterized by elitism and a crowding distance that maintains the diversity of the population, in order to find as many Pareto-optimal solutions as possible [24]. It generates a population of N individuals, where each individual represents a possible solution. Individuals evolve through genetic operators to find optimal or near-optimal solutions. Three operators are usually applied—tournament selection (using a crowded tournament operator), crossover, and mutation—to generate another N individuals, or children. Mixing these two populations creates a new population of size 2N. Then, the best individuals are taken according to their fitness value by sorting the population into nondominated fronts. Individuals from the best nondominated fronts are taken first, one by one, until N individuals are selected. The crowding distance is then compared to preserve diversity in the population. The tournament operator compares two solutions and chooses as the winner the one located on the better Pareto front; if the participating solutions are on the same front, the one with the larger crowding distance wins. Later, the algorithm applies the basic genetic operators and promotes the next generation cycle with the configurations that occupy the best fronts, preserving diversity through the crowding distance. A job is assigned to a machine at a given stage by taking into consideration processing speeds, setup times, machine availability, and energy consumption. Each solution is encoded as a permutation of integers.
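As a rough illustration of this elitist selection step (the paper's implementation was written in R; the Python below and all its names are our own sketch, not the authors' code), nondominated sorting and crowding-distance truncation for two minimized objectives such as (Cmax, Eop) can be written as:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Split objective tuples into Pareto fronts F1, F2, ... (lists of indices)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding_distance(points, front):
    """Larger distance = more isolated solution, preferred for diversity."""
    dist = dict.fromkeys(front, 0.0)
    for k in range(len(points[front[0]])):  # one pass per objective
        order = sorted(front, key=lambda i: points[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points kept
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        for prev, cur, nxt in zip(order, order[1:], order[2:]):
            dist[cur] += (points[nxt][k] - points[prev][k]) / span
    return dist

def select(points, n):
    """Keep the n best of the merged 2N population, whole fronts first."""
    chosen = []
    for front in nondominated_fronts(points):
        if len(chosen) + len(front) <= n:
            chosen.extend(front)
        else:
            dist = crowding_distance(points, front)
            chosen.extend(sorted(front, key=lambda i: -dist[i])[: n - len(chosen)])
            break
    return chosen
```

`select` fills whole fronts in order and breaks the last, partially fitting front by descending crowding distance, which is exactly the elitist truncation described above.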
5.2. Crossover Operators

Crossover operators produce new solutions (children) by combining individuals (parents) of the population. The crossover operator is applied with a certain probability. In this paper, we consider two operators: the partially mapped crossover and the order crossover.


Partially mapped crossover (PMX) [32,33]: This operator uses two cut points (Figure 5). The part of the first parent between the two cut points is copied to the children. Then, the remaining positions of the children are filled by a mapping between the two parents, so that their absolute positions are inherited where possible from the second parent.


Figure 5. Partially mapped crossover (PMX) operator.
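A minimal sketch of PMX on integer permutations (the function name and the zero-based cut indices are our own conventions, not the paper's):

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially mapped crossover: the slice [cut1:cut2] of parent1 is copied
    to the child; every other position takes parent2's value, following the
    mapping between the two slices whenever that value is already placed."""
    child = [None] * len(parent1)
    child[cut1:cut2] = parent1[cut1:cut2]
    # mapping from parent1's slice values to parent2's slice values
    mapping = dict(zip(parent1[cut1:cut2], parent2[cut1:cut2]))
    for i in list(range(cut1)) + list(range(cut2, len(parent1))):
        gene = parent2[i]
        while gene in child[cut1:cut2]:  # resolve conflicts through the mapping
            gene = mapping[gene]
        child[i] = gene
    return child
```

For example, `pmx([1,2,3,4,5,6,7,8], [3,7,5,1,6,8,2,4], 3, 6)` keeps the slice `[4, 5, 6]` from the first parent and yields a valid permutation with the remaining values in the second parent's positions.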

Figure 6 shows an example of the PMX crossover. The cut points in both parents serve to form the child chromosome. The 16 listed batches of the 4 jobs are randomly distributed on each parent chromosome. When crossed, they form the child chromosome with 16 batches.

Figure 6. PMX crossover operator example with batch index.

Figures 7 and 8 show representations of tasks for two and four machines per stage. The original sequence is ordered considering the enumeration of the jobs and their batches. Each machine is assigned batches to process, which form the tasks of each job. Each machine must process a certain set of tasks; each set is assigned a processing order (Proc. Order). These sets are formed according to the processing compatibility. The cross parents are composed of the batches of the jobs listed in Table 8.

Table 8. Enumeration of the batches for jobs.

Job   Batches   Batch Index
1     4         1–4
2     4         5–8
3     4         9–12
4     4         13–16

Figure 7. Chromosome representation example with two machines per stage.

Figure 8. Chromosome representation example with four machines per stage.


The tasks of Jobs 1, 3, and 4 can be processed in the same set at the same time. The tasks of Job 2 are put into a separate set because they are not compatible with the other jobs. For example, in Figure 7, the first two tasks (1-1, 4-1) are processed by Machine 1, then the second set with one task (2-1). In Machine 2, two tasks of the first set (1-2 and 3-1) are processed, assuming that the machine has the capacity to process only up to 6 batches (60 kg). The second set will then be followed by one task (4-1).

Figure 8 shows four machines per stage. Four batches are assigned to each machine. Machines 1 and 2 process two sets of tasks. Machines 3 and 4 process only one set. Upon completion of the tasks processed in sets in Step 1, they will be processed individually in the later stages.

Order crossover (OX) [32,34–37]: Two cut points are selected randomly; the part of the first parent located between these two cut points is copied to the child. The child's other positions are left blank. The values copied to the child are deleted from the second parent. The remaining positions of the child are then filled, starting at the blank spaces and considering the order found in the second parent (Figure 9).

Figure 9. Order crossover (OX) operator.
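A minimal sketch of OX as described above, filling the child's blank positions in order with the second parent's remaining values (names and index conventions are ours):

```python
def ox(parent1, parent2, cut1, cut2):
    """Order crossover: copy parent1's slice [cut1:cut2], then fill the
    remaining (blank) positions with parent2's values in the order they
    appear in parent2, skipping values already copied."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    copied = set(child[cut1:cut2])
    fill = [g for g in parent2 if g not in copied]  # parent2's order, minus the slice
    for i in range(size):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child
```

For instance, `ox([1,2,3,4,5,6,7,8], [3,7,5,1,6,8,2,4], 3, 6)` keeps `[4, 5, 6]` and fills the blanks with `3, 7, 1, 8, 2` in the second parent's order.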

5.3. Mutation Operators

Mutation operators produce small changes in individuals according to a probability. This operator helps to prevent falling into local optima and extends the search space of the algorithm. Three mutation operators are considered: Displacement, Exchange, and Insertion.

Displacement is a generalization of the insertion mutation, in which instead of moving a single value, several values are moved (Figure 10). The Exchange operator selects two random positions and exchanges their values (Figure 11). In Insertion, a value is selected randomly and inserted at an arbitrary position (Figure 12).

Figure 10. Displacement mutation operator.

Figure 11. Exchange mutation operator.

Figure 12. Insertion mutation operator.
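The three mutation operators can be sketched on a permutation as follows (our own minimal renditions, not the paper's R code; in particular we read Displacement as relocating a randomly chosen contiguous segment):

```python
import random

def displacement(perm):
    """Move a randomly chosen contiguous segment to another position."""
    p = perm[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    segment = p[i:j + 1]
    del p[i:j + 1]
    k = random.randint(0, len(p))
    return p[:k] + segment + p[k:]

def exchange(perm):
    """Swap the values at two randomly chosen positions."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insertion(perm):
    """Remove one randomly chosen value and reinsert it elsewhere."""
    p = perm[:]
    i = random.randrange(len(p))
    gene = p.pop(i)
    p.insert(random.randint(0, len(p)), gene)
    return p
```

Each operator returns a new list that is still a permutation of the input, so the batch encoding remains feasible after mutation.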

6. Tuning Up the Parameters

6.1. Parameter Calibration

Calibration is a procedure for choosing the algorithm parameters that provide the best result in the response variables. A computational experiment for calibration includes the following steps: (a) each instance or workload is run with all possible combinations of parameters; (b) the best solution is obtained; (c) the relative difference of each algorithm from the best solution is calculated; (d) multifactorial Analysis of Variance (ANOVA) with a 95% confidence level is applied to find the parameter that most influences the solution; (e) the set of the best values is selected for each parameter.

Table 9 shows the parameters used for calibration. A total of 3 × 2 × 3 × 3 × 3 = 162 different algorithms or combinations are considered. Thirty runs for each combination are performed, with 162 × 30 = 4860 experiments in total. For each of the 30 runs, a different workload is taken. For each of the five jobs, 30 batches are generated randomly from a uniform distribution according to their limits (Table 4).

Each batch is assigned to a workload, obtaining 30 loads from 5 jobs. The variation of processing capabilities (in minutes) of the machines in Stages 2 and 5 is generated according to Tables 1 and 2. Stage 2 has 6 machines with different processing capabilities in kilograms. Stage 5 has 6 machines.


According to the selection of a number of machines per stage, the set of machines is chosen. The machines are numbered from 1 to 6. By considering different numbers of machines per stage (2, 4, or 6), we conduct 4860 × 3 = 14,580 experiments. In each of the 30 runs of each combination, the best individual applying the desirability function is obtained. The resulting values of each objective (Cmax and Eop) are averaged over the 30 runs, obtaining a single value for each objective.

Table 9. Parameters used for calibration.

Factors                 Levels
Population              20, 30, 50
Crossover operators     OX, PMX
Crossover probability   0.5, 0.7, 0.9
Mutation operators      Displacement, Exchange, Insertion
Mutation probability    0.05, 0.1, 0.2
Selection               Binary tournament
Stop criterion          50 iterations
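The full-factorial design of Table 9 can be enumerated directly; a sketch with our own variable names:

```python
from itertools import product

# Factor levels taken from Table 9
populations = [20, 30, 50]
crossover_ops = ["OX", "PMX"]
crossover_probs = [0.5, 0.7, 0.9]
mutation_ops = ["Displacement", "Exchange", "Insertion"]
mutation_probs = [0.05, 0.1, 0.2]

# Cartesian product of all levels: 3 * 2 * 3 * 3 * 3 = 162 combinations;
# 30 workload runs per combination gives the 4860 experiments of the text.
combinations = list(product(populations, crossover_ops, crossover_probs,
                            mutation_ops, mutation_probs))
```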

The performance of each algorithm is calculated as the relative increment of the best solution (RIBS). The RIBS is calculated with the following formula:

$$\mathrm{RIBS} = \frac{Heu_{sol} - Best_{sol}}{Best_{sol}} \times 100, \qquad (6)$$

where Heusol is the value of the objective function obtained by the algorithm, and Bestsol is the best value obtained during the execution of all possible combinations of parameters. All experiments were performed on a PC with an Intel Core i3 CPU and 4 GB of RAM. The programming language used to encode the algorithm is R, which provides advantages including scalability and libraries [38]. The calibration of the algorithm lasted approximately 15 days and 9 h.
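A minimal sketch of Equation (6); for instance, an algorithm that reaches 150 against a best value of 100 has a RIBS of 50%:

```python
def ribs(heu_sol, best_sol):
    """Relative increment of the best solution, Equation (6), in percent."""
    return (heu_sol - best_sol) / best_sol * 100
```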

6.2. Residual Analysis

To verify that the data are normalized, the suppositions of the suitability of the model—normality, homoscedasticity, and independence of the residuals—are verified. The residuals are calculated according to the following formula [39]:

$$e_i = y_i - \bar{y}_i, \quad i = 1, 2, 3, \dots, n, \qquad (7)$$

where yi is the RIBS for run i and ȳi is the average of the experiment. Figure 13 shows the graph of the normal probability of residuals for 6 machines per stage. As can be seen (Figure 13), the graph complies with the assumption of normality. The same results are obtained for 2 and 4 machines per stage.

Figure 13. Graph of the normal probability of residuals for six machines per stage.
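The residuals of Equation (7) can be sketched as follows (names are ours; each residual is the deviation of a run's RIBS from the experiment's mean):

```python
def residuals(ribs_values):
    """Residuals of Equation (7): deviation of each run's RIBS from the mean."""
    mean = sum(ribs_values) / len(ribs_values)
    return [y - mean for y in ribs_values]
```

By construction the residuals sum to (numerically) zero, which is a quick sanity check before the normality plot.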



6.3. Variance Analysis

Variance analysis is applied to evaluate the statistical difference between the experimental results and to observe the effect of the parameters on the quality of the results. It is used to determine the factors that have a significant effect and to discover the most important ones. The parameters of the problem are considered as factors and their values as levels. We assume that there is no interaction between factors.

The F-Ratio is the ratio between the factor mean square and the residual mean square. A high F-Ratio means that the factor significantly affects the response. The p-value shows the statistical significance of the factors: p-values less than 0.05 indicate a statistically significant effect on the response variable (RIBS) with a 95% confidence level. According to the F-Ratio and p-value, the most important factors in the case of two machines per stage are the mutation operator and the crossover operator. DF shows the number of degrees of freedom (Table 10).

Table 10. Relative increment of the best solution (RIBS) analysis of variance for two machines per stage.

Source                     Sum of Squares   DF    Mean Square   F-Ratio   p-Value
A: Crossover               1747             1     1747          9.755     0.00213
B: Mutation                11,967           1     11,967        66.833    9.58 × 10^−14
C: Crossover probability   1519             1     1519          8.481     0.00411
D: Mutation probability    654              1     654           3.652     0.05782
E: Population              1732             1     1732          9.673     0.00222
Residuals                  27,934           156   179
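The F-Ratios in Tables 10–12 follow directly from the sums of squares and degrees of freedom (factor mean square divided by residual mean square); a sketch, checked against the mutation and crossover rows of Table 10:

```python
def f_ratio(ss_factor, df_factor, ss_residual, df_residual):
    """F-Ratio = (SS_factor / DF_factor) / (SS_residual / DF_residual)."""
    return (ss_factor / df_factor) / (ss_residual / df_residual)
```

For the mutation factor of Table 10, 11,967 / (27,934 / 156) ≈ 66.83, matching the tabulated value.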

In the case of four machines per stage, the most important factors are the mutation operator and the crossover probability (Table 11). For six machines per stage, the most important factors are the mutation operator and the mutation probability (Table 12).

Table 11. RIBS analysis of variance for four machines per stage.

Source                     Sum of Squares   DF    Mean Square   F-Ratio   p-Value
A: Crossover               760              1     760           4.602     0.03349
B: Mutation                13,471           1     13,471        81.565    6.09 × 10^−16
C: Crossover probability   1540             1     1540          31.223    1.00 × 10^−7
D: Mutation probability    5157             1     5157          9.326     0.00266
E: Population              3395             1     3395          20.557    1.15 × 10^−5
Residuals                  25,765           156   165

Table 12. RIBS analysis of variance for six machines per stage.

Source                     Sum of Squares   DF    Mean Square   F-Ratio   p-Value
A: Crossover               2587             1     2587          18.39     3.14 × 10^−5
B: Mutation                17,785           1     17,785        126.47    2 × 10^−16
C: Crossover probability   2886             1     2886          20.52     1.16 × 10^−5
D: Mutation probability    4840             1     4840          34.42     2.58 × 10^−8
E: Population              2881             1     2881          20.49     1.18 × 10^−5
Residuals                  21,937           156   165

Figures 14–17 show means and 95% Least Significant Difference (LSD) confidence intervals. Figure 14 shows the results obtained for the mutation operators for two, four, and six machines per stage. We can see that the Displacement operator is the best mutation among the three tested operators. Figure 15 shows the results for the crossover operators, where we observe that the PMX operator is the best. The results for four and six machines per stage are similar.

Figure 16 shows plots for the crossover probability. It can be seen that the best crossover probability is 0.9. Figure 17 presents plots for the mutation probability, where the probability of 0.2 is statistically more significant.


Figure 14. Means and 95% LSD confidence intervals of mutation operators.


Figure 15. Means and 95% LSD confidence intervals of crossover operators—two machines per stage.


Figure 16. Means and 95% LSD confidence intervals of crossover probability—four machines per stage.




Figure 17. Means and 95% LSD confidence intervals of mutation probability—six machines per stage.

6.4. Pareto Front Calibration Analysis

Figures 18–20 show the solution space obtained by the calibration experiments. The horizontal axis represents the energy Eop consumed by the machines. The vertical axis represents the completion time Cmax of the jobs. To determine the best individuals of all experiments, the Pareto front is calculated for each case. Each point represents a combination of parameters from the 162 experiments explained in Section 6.1. Each front consists of numbered points, from lowest to highest according to their DI. The closer to 1, the better the place in the enumeration is (see Section 3.3).


Figure 18. Pareto front for two machines per stage.


Figure 19. Pareto front for four machines per stage.



Figure 20. Pareto front for six machines per stage.

Tables 13–15 show the ordering of the front points regarding a combination of Eop, Cmax, and the Desirability Index (DI) for each Pareto front. We see, for example, that the parameter combination 156 corresponds to the highest DI, followed by 129, etc.

Table 13. Optimum parameter combination selection for the bi-objective genetic algorithm (GA) for two machines per stage.

No.   Comb.   Eop       Cmax    DI
1     156     6993.41   97.03   0.843
2     129     6968.70   97.15   0.816
3     128     7014.08   97.01   0.782
4     120     7046.87   96.94   0.677

Table 14. Optimum parameter combination selection for the bi-objective GA for four machines per stage.

No.   Comb.   Eop         Cmax    DI
1     156     12,381.55   44.38   0.926
2     129     12,370.82   44.45   0.906
3     147     12,442.75   44.32   0.878
4     128     12,362.17   44.67   0.775
5     120     12,355.79   44.74   0.730

Table 15. Optimum parameter combination selection for the bi-objective GA for six machines per stage.

No.   Comb.   Eop         Cmax    DI
1     156     17,018.73   26.07   0.891
2     129     16,961.46   26.12   0.883
3     120     17,108.33   26.05   0.847
4     147     16,953.30   26.19   0.823
5     138     17,189.52   26.01   0.816
6     75      16,925.85   26.23   0.796

Table 16 shows the parameters of these combinations. The crossover PMX has higher values of DI. The mutation operator Displacement is used in each of the top three combinations. The crossover probability is maintained at 0.9 in the two most significant combinations. The mutation probability remains at the highest value, 0.2, in all three, as does the population of 50 individuals. The most influential parameters correspond to combination 156, which has the greatest DI in all three cases (see Tables 13–15). Our analysis shows that the crossover PMX and the mutation operator Displacement are the best among those tested. The best crossover probability is 0.9 and the best mutation probability is 0.2. The population of 50 individuals is statistically more significant.


Table 16. Optimum parameter combination for the bi-objective GA.

No.   Comb.   Cr    Mu             Pcr   Pmu   Pob
1     156     PMX   Displacement   0.9   0.2   50    Selected
2     129     OX    Displacement   0.9   0.2   50
3     120     OX    Displacement   0.7   0.2   50

7. Experimental Analysis

7.1. Experimental Setup

In multiobjective optimization, there is usually no single best solution, but a set of solutions that are equally good. Our objective is to obtain a good approximation of the Pareto front regarding two objective functions [40]. Our bi-objective genetic algorithm is tuned up using the following parameters obtained during the calibration step: crossover, PMX; mutation, Displacement; crossover probability, 0.9; mutation probability, 0.2; population size, 50. We consider five jobs in each of the 30 workloads, obtaining the number of batches for each job based on a uniform random distribution (Table 4).

7.2. Bi-Objective Analysis

Figures 21–23 show the solution sets and the Pareto fronts. Each front includes 1500 solutions, being 50 (individuals) × 30 (workloads).


Figure 21. Pareto front for two machines per stage.

Figure 22. Pareto front for four machines per stage.

Figure 23. Pareto front for six machines per stage.
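The fronts plotted in Figures 21–23 are the non-dominated subsets of the 1500 solutions per instance. A minimal sketch of such a filter for two minimization objectives (our illustration, not the paper's implementation):

```python
def pareto_front(solutions):
    """Return the non-dominated subset for two minimization objectives.

    Each solution is a tuple (cmax, eop). A solution t dominates s if t is
    no worse in both objectives and differs from s, i.e., is strictly
    better in at least one objective.
    """
    front = []
    for s in solutions:
        dominated = any(
            t[0] <= s[0] and t[1] <= s[1] and t != s
            for t in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

This naive filter is O(n^2), which is perfectly adequate for 1500 points; inside NSGA-II itself the equivalent step is performed by the fast non-dominated sorting procedure.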

Figure 21 shows the solution space (two objectives) for two machines per stage. This two-dimensional space represents the feasible set of solutions that satisfy the problem constraints. The Pareto front covers a wide range of Eop, from 6856 to 7178 kW, while Cmax lies in the range of 96.64 to 97.48 h. Figure 22 shows the solution space for four machines per stage; the Pareto front covers Eop from 12,242 to 12,559 kW, while Cmax lies in the range of 43.98 to 44.8 h. Finally, Figure 23 shows the solution space for six machines per stage; Eop ranges from 16,899 to 17,221 kW, while Cmax lies in the range of 25.66 to 26.66 h.

Although the Pareto fronts are of good quality, many of the generated solutions lie quite far from them, so a single execution of the algorithm can produce significantly different results. Which solution of the Pareto front is selected depends on the preference of the decision maker.

7.3. Branch and Bound Analysis

A Branch and Bound (B&B) algorithm is an exact method for finding an optimal solution to an NP-hard problem. It is an enumerative technique applicable to a wide class of combinatorial optimization problems. The solution search process is represented by a branching tree.
Each node in the tree corresponds to a subproblem defined by a subsequence of tasks placed at the beginning of the complete sequence. This subsequence is called the partial sequence (PS); the set of tasks not included in PS is called NPS. When a node is branched, one or more child nodes are generated by appending a task from NPS to the partial sequence associated with the node being branched. To avoid full enumeration of all task permutations, a lower bound on the value of the objective functions is calculated at each step for each partial schedule, and nodes whose lower bound cannot improve upon the best known solution are discarded. In the case of ties, the algorithm selects the node with the lowest lower bound. The B&B algorithm was encoded in the R programming language and run on a PC with an Intel Core i3 CPU and 4 GB of RAM; the run lasted approximately 12 days and 3 h. The B&B solution is shown in Figure 21 as the lowest left point relative to the Pareto front. The B&B result is considered the best result that can be obtained, i.e., the global optimum. Table 17 shows the relative degradation of our bi-objective algorithm with respect to the B&B best solutions; results are averaged over 30 experiments. The results for each objective are not more than 3.44% worse for Cmax and 2.94% worse for Eop. Table 18 shows the computation times for obtaining the Pareto front solutions. According to this comparison, we conclude that the proposed bi-objective algorithm produces satisfactory results close to the global optimum in a short computational time.
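The PS/NPS branching scheme described above can be sketched on a deliberately simplified instance: a two-machine flow shop with a single makespan objective (the paper's actual B&B is bi-objective, multi-stage, and written in R; the job data below are illustrative):

```python
def makespan(seq, p):
    """Makespan of permutation `seq` on a two-machine flow shop.
    p[j] = (processing time on machine 1, processing time on machine 2)."""
    t1 = t2 = 0
    for j in seq:
        t1 += p[j][0]
        t2 = max(t2, t1) + p[j][1]
    return t2

def lower_bound(ps, nps, p):
    """Lower bound on the makespan of any completion of partial sequence `ps`."""
    t1 = t2 = 0
    for j in ps:
        t1 += p[j][0]
        t2 = max(t2, t1) + p[j][1]
    if not nps:
        return t2
    # Machine-1 bound: finish all remaining jobs on machine 1, then the
    # last job still needs at least the smallest remaining machine-2 time.
    lb1 = t1 + sum(p[j][0] for j in nps) + min(p[j][1] for j in nps)
    # Machine-2 bound: machine 2 still has to process every remaining job.
    lb2 = t2 + sum(p[j][1] for j in nps)
    return max(lb1, lb2)

def branch_and_bound(p):
    """Depth-first B&B over partial sequences (PS); NPS holds unscheduled jobs."""
    n = len(p)
    best_seq, best_val = None, float("inf")
    stack = [((), frozenset(range(n)))]
    while stack:
        ps, nps = stack.pop()
        if not nps:
            val = makespan(ps, p)
            if val < best_val:
                best_seq, best_val = ps, val
            continue
        for j in nps:
            child, rest = ps + (j,), nps - {j}
            if lower_bound(child, rest, p) < best_val:  # otherwise prune
                stack.append((child, rest))
    return best_seq, best_val
```

The exponential growth of the branching tree explains the roughly 12-day runtime reported above: pruning helps, but B&B remains an enumerative method.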

Table 17. RIBS (relative degradation, %) of the Pareto front solutions with respect to the Branch and Bound (B&B) solution.

| 30 × 2: Eop | 30 × 2: Cmax | 30 × 4: Eop | 30 × 4: Cmax | 30 × 6: Eop | 30 × 6: Cmax |
|------|------|------|------|------|------|
| 2.94 | 0.12 | 1.72 | 0.46 | 0.50 | 1.8 |
| 1.55 | 0.17 | 0.87 | 0.55 | 1.02 | 0.35 |
| 0.77 | 0.33 | 0.41 | 0.96 | 0.42 | 3.44 |
| 1.37 | 0.23 | 0.33 | 1.55 | 0.46 | 2.27 |
| 1.24 | 0.30 | 0.58 | 0.89 | 0.62 | 0.47 |
| 0.58 | 0.45 | 0.77 | 0.75 | 0.60 | 1.1 |
| 0.45 | 0.59 | 0.35 | 1.23 | 0.55 | 1.45 |
| 0.42 | 0.73 | 0.29 | 1.92 | 0.43 | 2.78 |
| 0.37 | 0.86 | 2.27 | 0.37 | | |

Table 18. Computational time for obtaining Pareto front solutions.

| 2 machines per stage, time (min) | 4 machines per stage, time (min) | 6 machines per stage, time (min) |
|------|------|------|
| 2.85 | 3.77 | 7.55 |
| 3.79 | 5.81 | 8.6 |
| 1.74 | 6.78 | 6.7 |
| 2.76 | 5.67 | 6.6 |
| 3.73 | 3.85 | 7.62 |
| 3.86 | 4.82 | 7.66 |
| 2.85 | 4.85 | 8.61 |
| 1.77 | 4.94 | 6.76 |
| 2.85 | 5.8 | |

7.4. Comparison with Industry

Figures 24 and 25 show Eop and Cmax of the industry and of the B&B algorithm. The term "Algorithm" in these graphs represents the results of the B&B algorithm; the term "Industry" represents the actual data of the production company. The industry results shown in Table 19 are obtained considering the average monthly production load in kilograms of tortillas per year. The results of the B&B algorithm are obtained for different numbers of machines per stage. Table 20 shows the same results as Table 19 in percentages.

Algorithm

186

Industry

𝐶𝑚𝑎𝑥 (h)

160

120 96.52

93

80

62 43.82

40

25.57

0 2

4

6

Machines per stage

Figure 24. 24. C max of Figure 𝐶𝑚𝑎𝑥 ofB&B B&Balgorithm algorithmand andindustry. industry.

50,000

Algorithm

Industry 39,153

𝐸op (kW)

40,000 30,000

26,102 16,829

20,000 13,051

12,206

0 2

4

6

Machines per stage

Figure 24. 𝐶𝑚𝑎𝑥 of B&B algorithm and industry.

Figure 25. Eop of B&B algorithm and industry.

Table 19. Results of the industry and the B&B algorithm.

| Machines per Stage | Industry: Eop (kW) | Industry: Cmax (h) | B&B: Eop (kW) | B&B: Cmax (h) |
|---|---|---|---|---|
| 2 | 13,050.90 | 186 | 6,831.40 | 96.52 |
| 4 | 26,101.80 | 93 | 12,206.42 | 43.82 |
| 6 | 39,152.70 | 62 | 16,828.72 | 25.57 |

Table 20. The industry and bi-objective GA degradation over the B&B algorithm (%).

| Machines per Stage | Industry: % Eop | Industry: % Cmax | Bi-Objective GA: % Eop | Bi-Objective GA: % Cmax |
|---|---|---|---|---|
| 2 | 47.65 | 48.11 | 1.53 | 0.17 |
| 4 | 53.24 | 52.88 | 0.87 | 0.54 |
| 6 | 57.02 | 58.75 | 0.62 | 0.47 |

Figure 24 shows Cmax in working hours. The results of the algorithm are significantly better than those of the industry. For two machines per stage, Cmax is almost halved, from 186 to 96.52 h. For four machines, it remains below half (93 versus 43.82 h). For six machines, Cmax is better by almost a factor of three (62 versus 25.57 h). Figure 25 shows Eop according to the processing times of the machines (Table 3). The B&B algorithm again significantly improves upon the industry results: for two machines per stage, Eop is reduced to almost half, and for four and six machines, it is reduced to less than half.

Table 20 shows the percentage degradation of the industry results and of the solutions selected from the Pareto front of the bi-objective GA, both compared with the B&B algorithm. The degradation of Eop and Cmax observed in industry is close to or higher than 50%. Comparing B&B with our bi-objective GA, our results are at most 1.53% worse for Eop and 0.54% worse for Cmax. Since the B&B algorithm finds the global optimum, this demonstrates the quality of our algorithm.

8. Conclusions

Our main contributions are multifold:

(1) We formulated the complex problem of the real-life industry environment of tortilla production considering two optimization criteria: total completion time and energy consumption;
(2) We proposed a bi-objective solution to solve a hybrid flow shop with unrelated machines, setup time, and work in progress buffers, based on the known NSGA-II genetic algorithm;
(3) We calibrated our algorithm using a statistical analysis of multifactorial variance (ANOVA) to understand the impact of different factors on solution quality, and used a branch and bound algorithm to assert the obtained performance;
(4) We provided a comprehensive experimental analysis of the proposed solution based on data from a tortilla production company in Mexico;
(5) We demonstrated that our solutions are not more than 3.5 percent off the global optimum for both criteria, and that they save more than 48 percent of production time and 47 percent of energy consumption compared with the actual production plan followed by the company.

Author Contributions: All authors contributed to the analysis of the problem, designing algorithms, performing the experiments, analysis of data, and writing the paper.

Funding: This work is partially supported by the Russian Foundation for Basic Research (RFBR), project No. 18-07-01224-a.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Linn, R.; Zhang, W. Hybrid flow shop scheduling: A survey. Comput. Ind. Eng. 1999, 37, 57–61.
2. Quadt, D.; Kuhn, H. A taxonomy of flexible flow line scheduling procedures. Eur. J. Oper. Res. 2007, 178, 686–698.
3. Yaurima, V.; Burtseva, L.; Tchernykh, A. Hybrid flowshop with unrelated machines, sequence-dependent setup time, availability constraints and limited buffers. Comput. Ind. Eng. 2009, 56, 1452–1463.
4. Ruiz, R.; Vázquez-Rodríguez, J.A. The hybrid flow shop scheduling problem. Eur. J. Oper. Res. 2010, 205, 1–18.
5. Pei-Wei, T.; Jeng-Shyang, P.; Shyi-Ming, C.; Bin-Yih, L. Enhanced parallel cat swarm optimization based on the Taguchi method. Expert Syst. Appl. 2012, 39, 6309–6319.
6. Javanmardi, S.; Shojafar, M.; Amendola, D.; Cordeschi, N.; Liu, H.; Abraham, A. Hybrid job scheduling algorithm for cloud computing environment. In Proceedings of the Fifth International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2014); Komer, P., Abraham, A., Snasel, V., Eds.; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2014; Volume 303, pp. 43–52.
7. Ribas, I.; Leisten, R.; Framiñan, J.M. Review and classification of hybrid flow shop scheduling problems from a production system and a solutions procedure perspective. Comput. Oper. Res. 2010, 37, 1439–1454.
8. Luo, H.; Du, B.; Huang, G.; Chen, H.; Li, X. Hybrid flow shop scheduling considering machine electricity consumption cost. Int. J. Prod. Econ. 2013, 146, 423–439.
9. Mori, M.; Fujishima, M.; Inamasu, Y.; Oda, Y. A study on energy efficiency improvement for machine tools. CIRP Ann. Manuf. Technol. 2011, 60, 145–148.
10. Blum, C.; Chiong, R.; Clerc, M.; De Jong, K.; Michalewicz, Z.; Neri, F.; Weise, T. Evolutionary optimization. In Variants of Evolutionary Algorithms for Real-World Applications; Chiong, R., Weise, T., Michalewicz, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–29.
11. Liu, Y.; Dong, H.; Lohse, N.; Petrovic, S. Reducing environmental impact of production during a rolling blackout policy—A multi-objective schedule optimisation approach. J. Clean. Prod. 2015, 102, 418–427.
12. Nilakantan, J.; Huang, G.; Ponnambalam, S. An investigation on minimizing cycle time and total energy consumption in robotic assembly line systems. J. Clean. Prod. 2015, 90, 311–325.
13. Wang, S.; Lu, X.; Li, X.; Li, W. A systematic approach of process planning and scheduling optimization for sustainable machining. J. Clean. Prod. 2015, 87, 914–929.
14. Liu, C.; Huang, D. Reduction of power consumption and carbon footprints by applying multi-objective optimisation via genetic algorithms. Int. J. Prod. Res. 2014, 52, 337–352.
15. May, G.; Stahl, B.; Taisch, M.; Prabhu, V. Multi-objective genetic algorithm for energy-efficient job shop scheduling. Int. J. Prod. Res. 2015, 53, 7071–7089.
16. Zhang, R.; Chiong, R. Solving the energy-efficient job shop scheduling problem: A multi-objective genetic algorithm with enhanced local search for minimizing the total weighted tardiness and total energy consumption. J. Clean. Prod. 2016, 112, 3361–3375.
17. Mouzon, G.; Yildirim, M.B.; Twomey, J. Operational methods for minimization of energy consumption of manufacturing equipment. Int. J. Prod. Res. 2007, 45, 4247–4271.
18. Dai, M.; Tang, D.; Giret, A.; Salido, M.A.; Li, W.D. Energy-efficient scheduling for a flexible flow shop using an improved genetic-simulated annealing algorithm. Robot. Comput. Integr. Manuf. 2013, 29, 418–429.
19. Mansouri, S.A.; Aktas, E.; Besikci, U. Green scheduling of a two-machine flowshop: Trade-off between makespan and energy consumption. Eur. J. Oper. Res. 2016, 248, 772–788.
20. Hecker, F.T.; Hussein, W.B.; Paquet-Durand, O.; Hussein, M.A.; Becker, T. A case study on using evolutionary algorithms to optimize bakery production planning. Expert Syst. Appl. 2013, 40, 6837–6847.
21. Hecker, F.T.; Stanke, M.; Becker, T.; Hitzmann, B. Application of a modified GA, ACO and a random search procedure to solve the production scheduling of a case study bakery. Expert Syst. Appl. 2014, 41, 5882–5891.
22. Fonseca, C.M.; Fleming, P.J. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 1995, 3, 1–16.
23. Hosseinabadi, A.A.R.; Siar, H.; Shamshirband, S.; Shojafar, M.; Nasir, M.H.N.M. Using the gravitational emulation local search algorithm to solve the multi-objective flexible dynamic job shop scheduling problem in Small and Medium Enterprises. Ann. Oper. Res. 2015, 229, 451–474.
24. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
25. Jaimes, A.L.; Coello, C.A.C. Interactive approaches applied to multiobjective evolutionary algorithms. In Multicriteria Decision Aid and Artificial Intelligence; Doumpos, M., Grigoroudis, E., Eds.; John Wiley & Sons, Ltd.: Chichester, UK, 2013.
26. Lu, L.; Anderson-Cook, C.; Lin, D. Optimal designed experiments using a Pareto front search for focused preference of multiple objectives. Comput. Stat. Data Anal. 2014, 71, 1178–1192.
27. Wagner, T.; Trautmann, H. Integration of preferences in hypervolume-based multiobjective evolutionary algorithms by means of desirability functions. IEEE Trans. Evol. Comput. 2010, 14, 688–701.
28. Baker, K.R. Introduction to Sequencing and Scheduling; John Wiley & Sons, Ltd.: New York, NY, USA, 1974.
29. Pinedo, M. Scheduling: Theory, Algorithms, and Systems; Prentice Hall: Upper Saddle River, NJ, USA, 2002; 586p.
30. Abyaneh, S.H.; Zandieh, M. Bi-objective hybrid flow shop scheduling with sequence-dependent setup times and limited buffers. Int. J. Adv. Manuf. Technol. 2012, 58, 309–325.
31. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Kan, A.H.G.R. Optimization and approximation in deterministic sequencing and scheduling: A survey. Ann. Discret. Math. 1979, 5, 287–326.
32. Larrañaga, P.; Kuijpers, C.M.H.; Murga, R.H.; Inza, I.; Dizdarevic, S. Genetic algorithms for the travelling salesman problem: A review of representations and operators. Artif. Intell. Rev. 1999, 13, 129–170.
33. Tan, K.C.; Lee, L.H.; Zhu, Q.L.; Ou, K. Heuristic methods for vehicle routing problem with time windows. Artif. Intell. Eng. 2001, 15, 281–295.
34. Gog, A.; Chira, C. Comparative Analysis of Recombination Operators in Genetic Algorithms for the Travelling Salesman Problem; Springer: Berlin/Heidelberg, Germany, 2011; pp. 10–17.
35. Hwang, H. An improved model for vehicle routing problem with time constraint based on genetic algorithm. Comput. Ind. Eng. 2002, 42, 361–369.
36. Prins, C. A simple and effective evolutionary algorithm for the vehicle routing problem. Comput. Oper. Res. 2004, 31, 1985–2002.
37. Starkweather, T.; McDaniel, S.; Mathias, K.E.; Whitley, C. A comparison of genetic sequencing operators. In Proceedings of the 4th International Conference on Genetic Algorithms; Belew, R.K., Booker, L.B., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1991; pp. 69–76.
38. The R Project for Statistical Computing. Available online: https://www.r-project.org/ (accessed on 4 March 2017).
39. Montgomery, D.C.; Runger, G.C. Applied Statistics and Probability for Engineers; John Wiley & Sons, Inc.: New York, NY, USA, 1994.
40. Rahimi-Vahed, A.; Dangchi, M.; Rafiei, H.; Salimi, E. A novel hybrid multi-objective shuffled frog-leaping algorithm for a bi-criteria permutation flow shop scheduling problem. Int. J. Adv. Manuf. Technol. 2009, 41, 1227–1239.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).