J Heuristics (2010) 16: 511–535 DOI 10.1007/s10732-009-9108-4
Multi-objective redundancy allocation optimization using a variable neighborhood search algorithm Yun-Chia Liang · Min-Hua Lo
Received: 24 November 2007 / Revised: 17 May 2009 / Accepted: 15 June 2009 / Published online: 27 June 2009 © Springer Science+Business Media, LLC 2009
Abstract A variable neighborhood search (VNS) algorithm has been developed to solve the multi-objective redundancy allocation problem (MORAP). The single-objective RAP selects the proper combination and redundancy levels of components to meet system-level constraints while optimizing a specified objective function. In practice, however, managers and designers increasingly need to consider two or more conflicting objectives simultaneously. Among all system-level objectives, maximizing system reliability is the most studied and most important one, while minimizing system weight or system cost are two other popular objectives. VNS has previously solved the single-objective RAP successfully (Liang and Chen, Reliab. Eng. Syst. Saf. 92:323–331, 2007; Liang et al., IMA J. Manag. Math. 18:135–155, 2007). This study therefore extends the single-objective VNS algorithm to a multi-objective version for solving multi-objective redundancy allocation problems. A new base-solution selection strategy that balances the intensity and diversity of the approximated Pareto front is introduced. The performance of the proposed multi-objective VNS algorithm (MOVNS) is verified on three sets of complex instances with 5, 14 and 14 subsystems, respectively. Compared with the leading heuristics in the literature, MOVNS generates more non-dominated solutions very efficiently and performs competitively in all performance-measure categories. In other words, the computational results reveal the advantages and benefits of VNS for solving the multi-objective RAP.

Keywords Variable neighborhood search · Redundancy allocation problem · Multi-objective

Y.-C. Liang () · M.-H. Lo
Department of Industrial Engineering and Management, Yuan Ze University, No 135 Yuan-Tung Road, Chung-Li, Taoyuan County, 320 Taiwan, ROC
e-mail:
[email protected]
1 Introduction

A series system of s independent k-out-of-n:G subsystems is the most studied configuration among the variations of the redundancy allocation problem (RAP). The single-objective RAP is NP-hard (Chern 1992) and has been solved using various mathematical programming and other optimization approaches. Exact methods for the RAP include dynamic programming (Fyffe et al. 1968; Nakagawa and Miyazaki 1981; Li 1996; Onishi et al. 2007), integer programming (Ghare and Taylor 1969; Bulfin and Liu 1985; Misra and Sharma 1991; Coit and Liu 2000), and mixed-integer and nonlinear programming (Tillman et al. 1977). Because the computational expense of exact methods grows exponentially with problem size, meta-heuristics have become popular methods for solving the RAP in the past decade. Heuristic methods for the single-objective RAP include Ant Colony Optimization (ACO) (Liang 2001; Huang et al. 2002; Shelokar et al. 2002; Huang 2003; Liang and Smith 2004; Zhao et al. 2007), Genetic Algorithms (GA) (Coit and Smith 1996a, 1996b), Simulated Annealing (SA) (Ravi et al. 1997), Tabu Search (TS) (Huang et al. 2002; Kulturel-Konak et al. 2003), a hybrid Neural Network (NN) and GA (Coit and Smith 1996c), a hybrid of ACO with TS (Huang 2003), ACO with Degraded Ceiling (Nahas et al. 2007), the Great Deluge Algorithm (GDA) (Ravi 2004), and Variable Neighborhood Search (VNS) and its variations (Chen 2005; Liang and Wu 2005; Liang and Chen 2007; Liang et al. 2007). The single-objective RAP selects the proper combination and redundancy levels of components to meet system-level constraints while optimizing a specified objective function. Most single-objective RAP studies focus on maximizing system reliability or minimizing system cost. In practice, however, a decision maker or a system designer has to consider multiple objectives at the same time.
In addition to reliability and cost, weight and volume are also important characteristics considered at the system level. Multiple objective functions have therefore become an essential aspect of the reliability design of engineering systems. Nevertheless, studies on the multi-objective redundancy allocation problem are still limited. For example, Elegbede and Adjallah (2003) developed a GA for a repairable system; the authors transformed the multi-objective formulation into a single-objective optimization problem using an exterior penalty function technique. Sasaki and Gen (2003) proposed a hybridized genetic algorithm with a new chromosome representation for a fuzzy multi-objective problem. Taboada et al. (2007) proposed a data-mining technique to truncate the set of non-dominated solutions. Salazar et al. (2006) developed an NSGA-II to solve three types of multi-objective reliability optimization problems. Kulturel-Konak et al. (2008) transformed a single-objective problem into a multi-objective formulation and solved it with a tabu search algorithm.

Variable Neighborhood Search (VNS) is one of the more recent meta-heuristics; it was originally proposed by Mladenović (1995). Hansen and Mladenović (2003) provide a comprehensive survey of the state-of-the-art development of VNS and its variations. The single-objective VNS algorithm systematically employs a set of neighborhood structures, iteratively finding the local optimum in each neighborhood, in the hope that the search eventually moves toward the global optimum. Single-objective VNS and its variations have been successfully applied to diverse combinatorial optimization problems, e.g. graph coloring (Avanthay et al. 2003), the multi-source problem (Brimberg et al. 2000), the p-median problem (Hansen and Mladenović 1997), clustering (Hansen and Mladenović 2002), the minimum spanning tree problem (Ribeiro and Souza 2002), the vehicle routing problem (Kytöjoki et al. 2007), and the redundancy allocation problem (Liang and Wu 2005; Liang and Chen 2007; Liang et al. 2007). Recently, the successful experience of VNS on single-objective optimization problems has been carried over to multi-objective problems. For example, Geiger (2004) and Gagné et al. (2005) proposed a randomized VNS algorithm and a hybrid tabu-VNS algorithm to solve scheduling problems, respectively, and Stummer and Sun (2005) developed a VNS algorithm to solve capital investment planning problems. Therefore, in this study, the authors build on their successful experience with VNS on the single-objective RAP to propose a Multi-Objective VNS (MOVNS) algorithm for the multi-objective RAP. Three types of multi-objective RAP are considered.

The remainder of the paper is organized as follows: the problem definition is described in Sect. 2, and Sect. 3 introduces the proposed MOVNS algorithm. Performance measures, benchmark problems and computational results are discussed in Sect. 4. Finally, Sect. 5 provides concluding remarks.

2 Problem definition

As mentioned in Sect. 1, system reliability maximization is the most studied objective in the single-objective RAP. Therefore, three problems, each optimizing system reliability together with another popular objective (system cost or system weight), are discussed in this section. The mathematical models, including the objective functions and constraints, and the original sources of these three multi-objective redundancy allocation problems (MORAP) are provided as follows.

Problem I

Maximize Rs =
    ∏_{i=1}^{s} [1 − (1 − Ri)^{xi}]                              (1)

Minimize Cs = Σ_{i=1}^{s} Ci [xi + exp(qi xi)]                   (2)

subject to the constraints

    Σ_{i=1}^{s} pi xi² ≤ P,                                      (3)

    Σ_{i=1}^{s} Wi xi exp(qi xi) ≤ W.                            (4)
The first problem was proposed by Tillman et al. (1985). In Problem I, the objective is to determine the number of redundancies (xi ) at each stage i while there is only one component option at each stage i. Therefore, the system configuration can
be considered a series-parallel system with a single component alternative in each subsystem. Two system objectives are optimized in this formulation: system reliability (Rs) and system cost (Cs). Equation (3) represents a combined constraint (P) on the weight and volume of components, where pi denotes the product of weight and volume per component. Equation (4) is the weight constraint (W). Note that (2) and (4) include the additional cost term Ci exp(qi xi) and the multiplicative factor exp(qi xi), respectively, which account for the interconnection of parallel components.

Problem II

Maximize Rs =
    ∏_{i=1}^{s} Ri(yi | ki)                                      (5)

Minimize Cs = Σ_{i=1}^{s} Ci(yi)                                 (6)

subject to the constraints

    Σ_{i=1}^{s} Wi(yi) ≤ W,                                      (7)

    ki ≤ Σ_{j=1}^{ai} xij ≤ nmax,   ∀i = 1, 2, . . . , s.        (8)
Problem II considers a series system that consists of s k-out-of-n:G subsystems. The goal of Problem II is to determine the optimal arrangement and redundancy levels of components that satisfy the system weight constraint (W) while simultaneously optimizing the system reliability (Rs) and system cost (Cs) objectives. Both system cost and system weight are linear combinations of component cost and component weight, respectively, i.e., no interconnection effect is considered in this model. Unlike Problem I, each subsystem has multiple component choices, and mixing component types within a subsystem is allowed in Problem II. Here yi = (xi1, . . . , xiai) denotes an ordered set in which xij represents the quantity of component type j used in subsystem i, and ai is the number of available component options in subsystem i. Equation (8) indicates that the total number of components used in subsystem i, i.e., Σ_{j=1}^{ai} xij, has to lie between ki and nmax, where ki denotes the minimum number of components in parallel required for subsystem i to function and nmax is the pre-determined maximum number of components allowed in parallel.

Problem III

Maximize Rs =
    ∏_{i=1}^{s} Ri(yi | ki)                                      (9)

Minimize Ws = Σ_{i=1}^{s} Wi(yi)                                 (10)

subject to the constraints

    Σ_{i=1}^{s} Ci(yi) ≤ C,                                      (11)

    ki ≤ Σ_{j=1}^{ai} xij ≤ nmax,   ∀i = 1, 2, . . . , s.        (12)
Problem III is an intuitive extension of Problem II obtained by exchanging the cost objective and the weight constraint. That is, Problem III optimizes system reliability and system weight (Ws) simultaneously while satisfying the cost constraint. In addition, (12), equivalent to (8), limits the redundancy level in each subsystem.
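To make the three models concrete, their objective functions and feasibility checks can be sketched in Python as follows. This is a minimal illustration: the function names and argument layout are choices of this sketch, and only the ki = 1 case used by the benchmark instances is handled for Problems II and III.

```python
import math

def evaluate_problem1(x, R, C, W, p, q, P_limit, W_limit):
    """Objectives and feasibility for Problem I (Tillman et al. 1985).

    x[i] = number of redundant components at stage i (one option per stage).
    Returns (Rs, Cs, feasible) following (1)-(4)."""
    s = len(x)
    Rs = math.prod(1 - (1 - R[i]) ** x[i] for i in range(s))            # (1)
    Cs = sum(C[i] * (x[i] + math.exp(q[i] * x[i])) for i in range(s))   # (2)
    feasible = (sum(p[i] * x[i] ** 2 for i in range(s)) <= P_limit and  # (3)
                sum(W[i] * x[i] * math.exp(q[i] * x[i])
                    for i in range(s)) <= W_limit)                      # (4)
    return Rs, Cs, feasible

def subsystem_reliability(y, r, k=1):
    """Ri(yi|ki) for Problems II and III with mixed component types.

    y[j] counts copies of component type j with reliability r[j].  The
    benchmark sets ki = 1, so the subsystem fails only if every component
    fails; general k-out-of-n would require enumerating failure states."""
    if k != 1:
        raise NotImplementedError("only the 1-out-of-n case is sketched here")
    return 1.0 - math.prod((1 - r[j]) ** y[j] for j in range(len(y)))
```

For Problems II and III the system reliability is then the product of `subsystem_reliability` over all subsystems, and cost and weight are plain sums over the selected components.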
3 Multi-objective variable neighborhood search algorithm (MOVNS)

Multi-Objective Variable Neighborhood Search (MOVNS) extends the single-objective VNS algorithm to multi-objective optimization problems. MOVNS differs from its single-objective variants mainly in the random selection of neighborhoods, a new selection strategy for the base solution, and the use of all neighboring solutions. The pseudo code of MOVNS is illustrated in Fig. 1. The MOVNS algorithm starts by randomly generating a feasible initial solution, which is also the first and only member of the starting approximated Pareto front. A set of neighborhoods then has to be defined properly. The search loop begins with the selection of a base solution from the current approximated Pareto front. A neighborhood is then selected randomly, with equal probability, from the pool of pre-defined neighborhood structures. Two search operators, shaking and neighborhood search, are performed consecutively thereafter. Unlike the single-objective VNS algorithm, in which only the best neighboring solution is considered to update the base solution, MOVNS uses all neighboring solutions to update the approximated Pareto front. The search procedure continues until the pre-specified stopping criterion is reached. The following sections discuss the steps of the proposed multi-objective variable neighborhood search algorithm in more detail, starting with initial solution generation.

Fig. 1 Procedure of the multi-objective variable neighborhood search (MOVNS) algorithm

3.1 Initial solution generation

Random generation is the most common approach to constructing an initial solution in meta-heuristics. When dealing with constrained optimization problems such as the RAP, generating a proper initial solution plays an important role in algorithmic efficiency. In this study, only a feasible initial solution is allowed. For all three types of problems, the number of initial components (ni) is generated from a uniform distribution over the range [LB, UB] for each subsystem i. In the formulation of Problem I, there is merely one component option in each subsystem (stage), so ni units of the identical component are chosen at each stage. For Problems II and III, multiple component choices are available in each subsystem, so the ni components are selected randomly from all possible component alternatives, i.e., each component type has an equal opportunity to be picked.

3.2 Update of the approximated Pareto front

The idea of optimality in single-objective optimization problems is not directly applicable to multi-objective optimization problems, since no single global optimum can be found when all objectives are considered simultaneously. For example, in Problem I, if a solution y1 has a larger system reliability but also a larger system cost than another solution y2, then neither solution is superior to the other, i.e., y1 and y2 are non-dominated with respect to each other.
On the other hand, a solution y1 dominates y2 if y1 is better than y2 in one of the objectives, say system reliability, and performs equally well or better in the other objective, say system cost. Solutions that are not dominated by any other solution found in the search space are called non-dominated solutions. Therefore, the optimal solutions of a multi-objective optimization problem usually consist of a set of non-dominated solutions, i.e., an approximated Pareto front. The formal definition of Pareto optimality and related terms can be found in Salazar et al. (2006). In MOVNS, the approximated Pareto front starts with a single member, the initial solution, and is then updated continually during the neighborhood search. All neighboring solutions are checked while updating the front: solutions dominated by a new candidate are deleted from the approximated Pareto front, and candidates not dominated by any member of the front are admitted into the non-dominated set. The final approximated Pareto front is reported at the end of the search procedure, just as the final solution is recorded in the single-objective case.
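The front-update rule described above can be sketched as follows for two objectives, maximize f1 (reliability) and minimize f2 (cost or weight). The tuple representation and function names are assumptions of this sketch, not part of the original implementation.

```python
def dominates(a, b):
    """True if a dominates b: a = (f1, f2) with f1 maximized, f2 minimized."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def update_front(front, candidate):
    """Insert candidate into the approximated Pareto front.

    The candidate is rejected if any member dominates it; otherwise it is
    added and every member it dominates is removed."""
    if any(dominates(m, candidate) for m in front):
        return front
    return [m for m in front if not dominates(candidate, m)] + [candidate]
```

In MOVNS every neighboring solution produced by the neighborhood search would be passed through `update_front` in turn.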
Fig. 2 An example of type 1 neighborhood
3.3 Neighborhood structures

The neighborhood structures in VNS and its variations determine how the search space is explored. For complex problems such as the RAP studied here, the number of practical neighborhood structures is usually limited; therefore, two types of neighborhoods are employed, and, to save computational expense, only one randomly chosen neighborhood is applied at each iteration of MOVNS. In the design of the neighborhood structure for the RAP, the idea of an "empty" component is adopted: an existing component can be replaced by an "empty" component, or one extra "empty" slot is considered for changing to another available component option. For Problem I, neighborhood type 1 increases or reduces the number of redundant components by one at each stage; for Problems II and III, the first type of neighborhood replaces one existing component with a different component option in the same subsystem. Figure 2 illustrates an example of neighborhood type 1 for Problems II or III. Assume that in the subsystem shown, three component types (I, II and III) are available for selection, and the subsystem currently has components I and II in parallel, as shown in Fig. 2(a). The "shaded" component in Figs. 2(b) to 2(j) indicates the newly changed component. For instance, component type I is replaced by an "empty" component in Fig. 2(b), while an "empty" position is replaced by component type III in Fig. 2(j). A total of nine possible neighboring solutions are illustrated in Figs. 2(b) to 2(j). With this neighborhood design, the number of components can be increased or reduced by one, or kept at the same level, and all available component options are considered. The second type of neighborhood structure changes two components at the same time.
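Under these rules, the type-1 moves for a single subsystem can be enumerated as below. This is a sketch under assumptions of this illustration: a subsystem is encoded as a tuple of counts per component type, the "empty" component is represented implicitly by decrements and increments of those counts, and the function name and default bounds are invented here.

```python
def type1_neighbors(y, k=1, n_max=8):
    """Enumerate type-1 moves for one subsystem.

    y[j] = number of copies of component type j.  Each move changes exactly
    one position: remove a component (replace it by "empty"), swap it for a
    different type, or fill an "empty" slot, keeping the total in [k, n_max]."""
    a = len(y)
    out = []
    for j in range(a):
        if y[j] == 0:
            continue
        if sum(y) > k:  # replace one copy of type j by the "empty" component
            out.append(tuple(y[m] - (m == j) for m in range(a)))
        for j2 in range(a):  # swap one copy of type j for type j2
            if j2 != j:
                out.append(tuple(y[m] - (m == j) + (m == j2) for m in range(a)))
    if sum(y) < n_max:  # fill an extra "empty" slot with any available type
        for j2 in range(a):
            out.append(tuple(y[m] + (m == j2) for m in range(a)))
    return out
```

For the subsystem of Fig. 2(a), encoded as `(1, 1, 0)` with three available types, this enumeration yields exactly the nine neighboring configurations of Figs. 2(b) to 2(j).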
For Problem I, the type-2 neighbourhood is employed only when the number of components in a subsystem is greater than or equal to 3, since this neighborhood may reduce the number of components by two. For Problems II and III, two existing components are simultaneously replaced by two different component options in the same subsystem. An example of neighborhood type 2 is illustrated in Fig. 3. Assume the subsystem presently has three components, one each of types I, II and III, as shown in Fig. 3(a). All possible neighboring solutions considering changes in the first two positions (marked by the "shaded" components) are shown in Figs. 3(b) to 3(j). For instance, Fig. 3(c) shows that component type I in the first position is replaced by an "empty" component, i.e. component "0", and component type II in the second position is replaced by component type I at the same time.

Fig. 3 An example of type 2 neighborhood

Note that only feasible neighboring solutions are considered in the neighborhood, i.e., the solutions have to satisfy the ki and nmax requirements.

3.4 Selection strategy of the base solution

In a single-objective VNS algorithm, the base solution, starting from the initial solution, is updated throughout the search process. Selecting a base solution at each iteration of the proposed MOVNS algorithm is less straightforward than in the single-objective case, yet it plays an equally key role in guiding the search direction. A special selection strategy is therefore developed here. The base solution in MOVNS is selected from the set of non-dominated solutions, i.e., the approximated Pareto front. To balance two important characteristics of the approximated Pareto front, diversity and intensity, the selection strategy leads the search in three different directions. In the first direction, all non-dominated solutions are ranked in descending order of objective 1, system reliability; starting from the top of the list, the first solution that has not been explored by any neighborhood is selected as the base solution for the following search steps. Similarly, in the second direction, all non-dominated solutions are ranked in ascending order of objective 2, either system cost or system weight, and the first unvisited non-dominated solution at the top of the list is chosen. Since the first two directions focus on the two extreme regions, the best of objective 1 and the best of objective 2 respectively, the third direction picks an unexplored non-dominated solution from the set at random. Whenever a base solution needs to be determined, one of these three search directions is chosen randomly with uniform probability, and a base solution is selected accordingly.
In addition, once a non-dominated solution has been explored by the neighborhood search and still remains in the approximated Pareto front thereafter, it is marked as a visited solution. Visited solutions are thus excluded from selection as the base solution. If all members of the approximated Pareto front are marked as visited before the stopping criterion of MOVNS is reached, all marks are reset and the selection procedure starts over. Figure 4 illustrates the three search directions on the approximated Pareto front.
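The three-direction selection strategy can be sketched as follows, assuming front members are represented by their objective vectors (f1 = system reliability, maximized; f2 = cost or weight, minimized) and that visitation is tracked in a set. Names and representation are illustrative, not taken from the paper.

```python
import random

def select_base(front, visited):
    """Pick the base solution from the approximated Pareto front.

    One of three directions is chosen uniformly at random: the unvisited
    member with the highest f1, the unvisited member with the lowest f2,
    or a random unvisited member.  When every member has been visited,
    the marks are reset and selection starts over."""
    unvisited = [m for m in front if m not in visited]
    if not unvisited:
        visited.clear()
        unvisited = list(front)
    direction = random.randrange(3)
    if direction == 0:
        return max(unvisited, key=lambda m: m[0])  # best reliability
    if direction == 1:
        return min(unvisited, key=lambda m: m[1])  # best cost/weight
    return random.choice(unvisited)                # random diversification
```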
Fig. 4 Illustration of the selection strategy in MOVNS
3.5 Shaking

The main purpose of the shaking operation is to provide a stochastic perturbation of the selected base solution and thus a better opportunity to escape from local optima. Shaking is performed after a base solution is selected and one of the neighbourhood structures defined in Sect. 3.3 is randomly chosen with equal probability. It randomly generates a neighbouring solution of the current base solution using the selected neighbourhood, and this new solution is employed for the following neighbourhood search. For example, as shown in Fig. 2(a), the base solution has two components, one each of types I and II, in parallel. If the type-1 neighbourhood is employed, the outcome of the shaking operation will be one of the nine neighbouring solutions illustrated in Figs. 2(b) to 2(j).
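The shaking step can be sketched as follows, assuming a solution is a list of per-subsystem configurations and a caller-supplied function that enumerates the neighboring configurations of one subsystem. Picking the perturbed subsystem uniformly at random is an assumption of this sketch, not stated in the paper.

```python
import random

def shake(solution, neighbors):
    """Shaking: perturb one randomly chosen subsystem of the base solution.

    `neighbors(y)` must return the list of neighboring configurations of a
    single subsystem y under the currently selected neighborhood structure.
    A fresh list is returned; the base solution is left untouched."""
    i = random.randrange(len(solution))
    shaken = list(solution)
    shaken[i] = random.choice(neighbors(solution[i]))
    return shaken
```

The shaken solution then serves as the starting point of the subsequent neighborhood search.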
4 Test problems and results

The proposed MOVNS algorithm is coded in Borland C++ Builder 6.0 and run on an Intel Pentium IV 3.0 GHz PC with 1 GB RAM. All computations use real floating-point precision without rounding or truncating values. The system reliabilities of the final solutions are rounded to six decimal places in order to compare with results in the literature; the number of decimal places used by other methods follows each reference. Additionally, the number of initial components in each subsystem is randomly chosen within the range [1, 2]; that is, when generating a feasible initial solution, no more than two components are used in each subsystem. The stopping criteria of MOVNS and the competing algorithms are described in the corresponding sections. Meanwhile, the stopping criterion for the reference (approximated) Pareto front (generated for comparison purposes) is ten times the number of evaluations in MOVNS, and the reference front is collected as the set of non-dominated points over ten runs.
4.1 Performance measures

As mentioned in Sect. 3, the concept of optimality in single-objective optimization does not apply directly to multi-objective optimization. Therefore, four measures are selected to evaluate the performance of the multi-objective algorithms. The first two, the hit ratio and the accuracy ratio, both calculate the percentage of reference non-dominated solutions found but use different denominators: the hit ratio divides by the number of points in the approximated Pareto front obtained by the competing algorithm, while the accuracy ratio divides by the number of non-dominated points in the reference Pareto front. The other two measures, GD and D1R, evaluate the distance between the approximated Pareto front and the reference Pareto front: GD measures the distance from the viewpoint of the approximated Pareto front, while D1R measures it from the reference Pareto front. Hence, larger hit-ratio and accuracy-ratio values are better, and smaller GD and D1R values are better. The formal definitions of the four measures are as follows:

• Hit Ratio (Zitzler et al. 2003)

    Hit Ratio = ( Σ_{i=1}^{|Yknow|} ei ) / |Yknow|,  where ei = 1 if point i of Yknow ∈ Ytrue, and 0 otherwise          (13)

• Accuracy Ratio (Zitzler et al. 2003)

    Accuracy Ratio = ( Σ_{i=1}^{|Yknow|} ei ) / |Ytrue|                                                                 (14)

• GD (Zitzler et al. 2003)

    GD = (1 / |Yknow|) [ Σ_{i=1}^{|Yknow|} di² ]^{1/2}                                                                  (15)

    di = min_{y* ∈ Ytrue} [ (f1(y*) − f1(y))² + · · · + (fN(y*) − fN(y))² ]^{1/2}                                       (16)

• D1R (Ishibuchi et al. 2003)

    D1R = (1 / |Ytrue|) Σ_{i=1}^{|Ytrue|} di                                                                            (17)

    di = min_{y ∈ Yknow} [ (f1(y*) − f1(y))² + · · · + (fN(y*) − fN(y))² ]^{1/2}                                        (18)

where ei is a binary variable denoting whether a solution belongs to the reference Pareto front, Ytrue represents the reference Pareto front, Yknow denotes the approximated Pareto front, |Y| is the number of solutions (points) in the set Y, y* denotes a solution vector from the reference Pareto front, y is a solution vector from the approximated Pareto front, fi(·) represents the ith normalized objective function, and N is the number of objectives considered.
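A direct transcription of (13)-(18) for fronts given as lists of normalized objective tuples might look like the following; the function names and the exact-match test for membership in the reference front are assumptions of this sketch.

```python
import math

def _dist(a, b):
    """Euclidean distance between two normalized objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def measures(known, true):
    """Hit ratio, accuracy ratio, GD and D1R between an approximated front
    `known` (Yknow) and a reference front `true` (Ytrue)."""
    hits = sum(1 for y in known if y in true)
    hit_ratio = hits / len(known)                              # (13)
    accuracy_ratio = hits / len(true)                          # (14)
    gd = math.sqrt(sum(min(_dist(y, t) for t in true) ** 2     # (15)-(16)
                       for y in known)) / len(known)
    d1r = sum(min(_dist(t, y) for y in known)                  # (17)-(18)
              for t in true) / len(true)
    return hit_ratio, accuracy_ratio, gd, d1r
```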
Table 1 Data for Problem I

i    Ri     pi   Ci   Wi
1    0.80   1    7    7
2    0.85   2    7    8
3    0.90   3    5    8
4    0.65   4    9    6
5    0.75   2    4    9
Fig. 5 (R, C) approximated Pareto front for Problem I
4.2 Problem I

The first test problem was initially published by Tillman et al. (1985) in single-objective form, and Salazar et al. (2006) extended it to a multi-objective problem and proposed an NSGA-II to solve it. The product-of-weight-and-volume constraint P is set to 110, and the weight constraint W is 200. The multiplicative factor qi in the exponential function is set to 0.25 for all subsystems. The number of subsystems is five, and the component data are listed in Table 1. The (R, C) approximated Pareto fronts of both MOVNS and NSGA-II are shown in Fig. 5, and the details of all 21 non-dominated solutions are provided in Appendix A. MOVNS is able to find all the non-dominated solutions provided by NSGA-II. Note that the approximated Pareto fronts obtained by both MOVNS and NSGA-II are exactly equivalent to the reference Pareto front; therefore, the hit ratio and the accuracy ratio both equal one, while GD and D1R both equal zero, for both MOVNS and NSGA-II. The stopping criterion of MOVNS is reached when the total number of evaluations equals 9,000. The CPU time of MOVNS on Problem I is very close to zero seconds, i.e., MOVNS finds the superior approximated Pareto front almost instantaneously.
Table 2 Component data of Problems II and III

Subsystem   Component 1        Component 2        Component 3        Component 4
i           Ri    Ci   Wi      Ri    Ci   Wi      Ri    Ci   Wi      Ri    Ci   Wi
1           0.90  1    3       0.93  1    4       0.91  2    2       0.95  2    5
2           0.95  2    8       0.94  1    10      0.93  1    9       –     –    –
3           0.85  2    7       0.90  3    5       0.87  1    6       0.92  4    4
4           0.83  3    5       0.87  4    6       0.85  5    4       –     –    –
5           0.94  2    4       0.93  2    3       0.95  3    5       –     –    –
6           0.99  3    5       0.98  3    4       0.97  2    5       0.96  2    4
7           0.91  4    7       0.92  4    8       0.94  5    9       –     –    –
8           0.81  3    4       0.90  5    7       0.91  6    6       –     –    –
9           0.97  2    8       0.99  3    9       0.96  4    7       0.91  3    8
10          0.83  4    6       0.85  4    5       0.90  5    6       –     –    –
11          0.94  3    5       0.95  4    6       0.96  5    6       –     –    –
12          0.79  2    4       0.82  3    5       0.85  4    6       0.90  5    7
13          0.98  2    5       0.99  3    5       0.97  2    6       –     –    –
14          0.90  4    6       0.92  4    7       0.95  5    6       0.99  6    9
4.3 Problem II

Problem II was originally proposed by Fyffe et al. (1968) in single-objective form, and Nakagawa and Miyazaki (1981) extended it to a set of 33 instances with different weight constraints. Salazar et al. (2006) further extended it to a multi-objective problem with the objectives of maximizing system reliability and minimizing system cost, given the weight constraint W = 191, for a series-parallel system with 14 subsystems. Each subsystem has three or four component alternatives. ki is set to one and nmax is equal to 8 for all subsystems. The component data, including cost, weight and reliability, are shown in Table 2. Note that Problem III shares the same component data with Problem II, and the reference Pareto front has 102 non-dominated solutions. Salazar et al. (2006) proposed this test problem and used a Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve it. The number of evaluations in NSGA-II is approximately 5,000,000 and the archive size is set to 50. The number of non-dominated solutions in the final approximated Pareto front of NSGA-II is 49, as reported in Salazar et al. (2006), and 10 of them fall on the reference Pareto front. To conduct a fair comparison with NSGA-II, MOVNS was executed with two archive-size settings, 50 and unbounded, both using the same number of evaluations (5,000,000) as NSGA-II. The performance measures of the bounded-archive and unbounded-archive MOVNS and of NSGA-II are listed in Table 3. The front found by MOVNS with the bounded archive consists of 50 points, 41 of which hit the reference Pareto front; therefore, the hit ratio and the accuracy ratio of MOVNS with an archive size of 50 are much higher than those of NSGA-II. Not surprisingly, MOVNS with an archive size of 50 also outperforms NSGA-II in the GD measure, since more of its points fall on the reference Pareto front. The only measure in which NSGA-II performs competitively with MOVNS is D1R. As illustrated
Table 3 Performance comparison of MOVNS, RVNS (Geiger 2004) and NSGA-II (Salazar et al. 2006)

Algorithm   Archive size              Hit ratio   Accuracy ratio   GD         D1R
MOVNS       Archive size = 50         0.820000    0.401961         0.000018   0.032704
MOVNS       Unbounded archive size    0.990196    0.990196         1.2E-07    1.2E-07
RVNS        Unbounded archive size    0.000000    0.000000         0.013475   0.119204
NSGA-II     Archive size = 50         0.204080    0.098039         0.000729   0.014970
Fig. 6 (R, C) Approximated Pareto front for Problem II, archive size = 50 (part A—for non-dominated points with Rs ≥ 0.945 and Cs ≥ 80)
in Figs. 6 and 7, the approximated Pareto front obtained by NSGA-II spreads more evenly than the one obtained by MOVNS; thus, the D1R value of NSGA-II is slightly better than that of MOVNS with an archive size of 50. When the archive size of the MOVNS algorithm is relaxed to unbounded, i.e., the front collects as many non-dominated points as possible, the improvement in the measures is significant, as shown in Table 3. MOVNS in this case finds 102 points, 101 of which lie on the reference Pareto front, which explains the high hit-ratio and accuracy-ratio values; the GD and D1R values are also both close to zero, as expected. The detailed solutions of MOVNS with both bounded and unbounded archives are provided in Appendix B for future reference. In addition, the CPU time of MOVNS with an archive size of 50 is 62.766 seconds, while MOVNS with an unbounded archive needs 121.421 seconds; the higher CPU time is attributable to updating the larger approximated Pareto front. Additionally, to show the competitive edge of the proposed MOVNS algorithm over a traditional multi-objective VNS algorithm, the randomized variable neighborhood search algorithm
Fig. 7 (R, C) Approximated Pareto front for Problem II, archive size = 50 (part B—for non-dominated points with Rs < 0.945 and Cs < 80)
(Geiger 2004), RVNS for short, is coded using the same environment, language and stopping criterion as MOVNS. The main differences between MOVNS and RVNS concern the use of the shaking operation and the selection strategy for the base solution: MOVNS applies a shaking operation before each neighborhood search, while RVNS uses no perturbation mechanism, and RVNS chooses an "unvisited" solution randomly from the archive, while MOVNS employs the more evenly distributed selection strategy described in Sect. 3.4. As Figs. 8 and 9 show, the approximated Pareto front obtained by RVNS clearly lags behind the one obtained by MOVNS, and MOVNS outperforms RVNS in all four measures. This verifies that the newly developed selection strategy improves the efficiency and effectiveness of VNS algorithms.

4.4 Problem III

One of the most popular single-objective RAP benchmarks is the set of 33 system reliability maximization instances proposed by Fyffe et al. (1968) and extended by Nakagawa and Miyazaki (1981). In this benchmark set, the system cost constraint is set to 130, and the system weight constraint ranges from 159 to 191. Accordingly, the objectives of Problem III are to maximize system reliability and minimize system weight given the system cost constraint of 130. Other problem settings, such as the component data, ki and nmax, are as described in Sect. 4.3. The stopping criterion of MOVNS is reached when the number of evaluations equals 7,000,000, and the number of non-dominated solutions in the reference Pareto front is 228. The number of non-dominated solutions obtained by the MOVNS algorithm is also 228, and the detailed results are shown in Appendix D. The CPU time of MOVNS is 190.969 seconds. Comparing with the
Fig. 8 (R, C) Approximated Pareto front for Problem II (part A—for non-dominated points with Rs ≥ 0.945 and Cs ≥ 80)
Fig. 9 (R, C) Approximated Pareto front for Problem II (part B—for non-dominated points with Rs < 0.945 and Cs < 80)
reference Pareto front, the performance measures are as follows: D1R = 0.000007, GD = 0.000002, and Hit Ratio = Accuracy Ratio = 0.820175. To verify the performance of MOVNS, the optima of the 33 single-objective instances provided by Onishi et al. (2007) are compared along with the results of several well-known algorithms in the literature, such as GA (Coit and Smith 1996b), TS (Kulturel-
Fig. 10 (R, W ) Approximated Pareto front for Problem III by MOVNS and the optimal solutions of 38 single objective instances
Konak et al. 2003), ACO (Liang and Smith 2004), IA (Chen and You 2005), Y&C (You and Chen 2005), VND (Liang and Wu 2005), VNS_RAP (Liang and Chen 2007), and ACO/DC (Nahas et al. 2007). The authors find that the optima of all 33 single-objective instances are included in the non-dominated solutions of MOVNS, as illustrated in Fig. 10. To give a better view of the overlapped region between the 33 single-objective optima and the non-dominated solutions, a zoom window is illustrated in Fig. 11. Again, MOVNS is able to provide a variety of solution alternatives. Table 4 summarizes the best performance of all competing algorithms on the 33 single-objective instances. The "shaded" region indicates that the optimum is found. Note that the best solution in each instance for VND, GA, ACO, TS and VNS_RAP is obtained over 10 runs, for Y&C from 20 trials, for ACO/DC from 5 trials, and for IA and MOVNS from a single run. When compared with those algorithms in the literature, MOVNS is the only algorithm able to find all 33 optimal solutions. In addition, Onishi et al. (2007) extended the system weight constraint to 198, 201, 202, 204 and 205. As shown in Table 5, the non-dominated solutions of MOVNS also reach all the optimal solutions in the five extended instances. When comparing the computational expense, the total number of evaluations may provide a rough idea of how efficiently an algorithm performs. Since many papers do not report the number of evaluations, an approximate number is obtained by calculating the product of the number of ants and the number of iterations in ACO, or the population size and the number of generations in GA, etc. Note that the number of evaluations spent in local search is neglected when not explicitly reported. Therefore, the number of evaluations in a single run (trial) of each competing method is as follows: VND-49,000, GA-48,040, ACO-100,000, TS-350,000, VNS_RAP-120,000, Y&C-9,820,
Fig. 11 The zoom window of Fig. 10 (for 38 single objective optima and non-dominated points with 159 ≤ Ws ≤ 205 and 0.95 < Rs < 0.995)
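The inclusion result illustrated by Figs. 10 and 11, namely that every single-objective optimum appears among the non-dominated solutions, reduces to a Pareto-dominance membership check. A minimal sketch follows (illustrative only, not the authors' implementation; the sample numbers are invented for demonstration):

```python
# Illustrative sketch (not the authors' code) of Pareto dominance for a
# bi-objective RAP where reliability R is maximized and weight W is minimized.

def dominates(a, b):
    """True if point a = (R, W) dominates b: no worse in both, better in one."""
    (r1, w1), (r2, w2) = a, b
    return r1 >= r2 and w1 <= w2 and (r1 > r2 or w1 < w2)

def non_dominated(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy data: a single-objective optimum (highest R under a weight limit of 180)
# survives the filter, i.e., it lies on the approximated Pareto front.
front = non_dominated([(0.95, 180), (0.93, 180), (0.90, 160)])
print(front)  # the dominated point (0.93, 180) is removed
```

A single-objective optimum is "included" in the multi-objective result exactly when it passes this filter against the whole approximated front.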
Table 4 Comparison of computational results of 33 single-objective RAP instances
Table 5 Comparison of computational results of five extended single-objective RAP instances
IA-360,120, and ACO/DC-150,000. The total number of evaluations over all 33 instances is then calculated by multiplying the number of evaluations per run by the number of trials and the number of instances. Thus, the total number of evaluations of each method is VND-16,170,000, GA-15,853,200, ACO-33,000,000, TS-115,500,000, VNS_RAP-39,600,000, Y&C-6,481,200, IA-11,883,960 and ACO/DC-24,750,000, while MOVNS needs only 1,013,158 to find all 33 optima. The total number of evaluations of MOVNS is clearly the smallest among all competing algorithms. Note that the CPU time per evaluation may differ across the methods in the literature. For example, GA (Coit and Smith 1996b) had to recalculate the related system information for the entire system, while TS (Kulturel-Konak et al. 2003) first proposed an approach that only recalculates the information of the subsystem being changed. For a complex system, this approach can save considerable computational effort when evaluating the objective value. Therefore, VNS_RAP (Liang and Chen 2007) and MOVNS in this study employ the same approach as suggested by TS. Overall, MOVNS is capable of finding many solution alternatives, including the single-objective optima, with very reasonable computational effort.
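The four performance measures used throughout Sect. 4 can be computed from the two fronts alone. Below is a hedged sketch under common definitions (cf. Zitzler et al. 2003): GD as the mean distance from each obtained point to its nearest reference point, D1R as the mean distance from each reference point to its nearest obtained point, and hit/accuracy ratios as the matched fractions of the reference and obtained fronts; the paper's exact normalizations may differ.

```python
# Sketch of the performance measures; definitions follow common usage and
# may differ in normalization from the paper's exact formulas.
import math

def gd(approx, ref):
    """Generational distance: mean distance from obtained points to the reference front."""
    return sum(min(math.dist(a, r) for r in ref) for a in approx) / len(approx)

def d1r(approx, ref):
    """D1R: mean distance from each reference point to its nearest obtained point."""
    return sum(min(math.dist(a, r) for a in approx) for r in ref) / len(ref)

def hit_ratio(approx, ref):
    """Fraction of the reference front found by the algorithm."""
    return sum(1 for r in ref if r in approx) / len(ref)

def accuracy_ratio(approx, ref):
    """Fraction of the obtained points that lie on the reference front."""
    return sum(1 for a in approx if a in ref) / len(approx)
```

Under this reading, 101 of 102 obtained points lying on the reference front gives an accuracy ratio of 101/102, consistent with the "close to one" values reported for the unbounded-archive run.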
5 Conclusions

This paper proposes an MOVNS algorithm to solve the series-parallel system multi-objective redundancy allocation problem. VNS and its variations have been successfully applied to the single-objective RAP; however, to the authors' knowledge, this study is the first application of VNS to the multi-objective RAP. Two neighborhood structures are employed to explore the search space. Most importantly, a new selection strategy is proposed to determine the base solution and properly guide the search direction. Three problems with the objectives of system reliability maximization and system weight/cost minimization are defined. In the first problem, the approximated Pareto front generated by MOVNS is identical to the one generated by the competing algorithm NSGA-II and to the reference Pareto front. For the second problem, with a bounded archive, MOVNS outperforms NSGA-II in the hit ratio, accuracy ratio, and GD value, and performs competitively in D1R. When the archive size is relaxed to unlimited, MOVNS is superior to NSGA-II in all measures and finds almost all non-dominated points in the reference Pareto front. For the last problem, the computational results of MOVNS contain all the optimal solutions of the 38 single-objective RAP instances, and the total number of evaluations MOVNS needs is less than those of the other competing algorithms in the literature. In general, this study has successfully proposed an MOVNS algorithm that is effective and efficient in solving
the multi-objective RAP. Because of its simple structure, it is believed that MOVNS can easily be applied to other multi-objective combinatorial optimization problems; the authors have already successfully employed the MOVNS algorithm to solve the multi-objective project portfolio optimization problem. In addition, the idea behind the Pareto iterated local search metaheuristic (Geiger 2006) could be another interesting research direction for further improving the current multi-objective VNS algorithm and its variations.

Acknowledgements The authors would like to thank Dr. Daniel Salazar, Dr. Claudio M. Rocco and Dr. Blas J. Galván for sharing their results and providing valuable discussions. In addition, the authors also appreciate the three reviewers and the guest editors for their precious comments and suggestions.
Appendix A
Table 6 MOVNS results of Problem I

      Reliability  Cost      Weight          Reliability  Cost      Weight
 1    0.904467     146.1250  192.4810    12  0.634367     106.2560  120.5700
 2    0.883248     143.1880  195.5350    13  0.604159     100.3830   93.0881
 3    0.875291     135.8470  171.1060    14  0.553812      97.6533   95.1016
 4    0.833610     129.9740  143.6240    15  0.528639      96.7030  106.4760
 5    0.812027     129.0370  193.4840    16  0.503466      90.8299   78.9942
 6    0.802474     122.6320  152.7850    17  0.447525      88.1005   81.0076
 7    0.764261     116.7590  125.3030    18  0.410231      85.3711   83.0211
 8    0.729522     115.8090  136.6770    19  0.391584      84.4207   94.3957
 9    0.697803     113.0790  136.6770    20  0.372938      78.5476   66.9137
10    0.694783     109.9360  109.1950    21  0.298350      73.0888   48.7930
11    0.664575     107.2060  109.1950
Appendix B
Table 7 MOVNS results of Problem II (archive size = 50)

      Reliability  Cost  Weight       Reliability  Cost  Weight       Reliability  Cost  Weight
 1    0.983349     118   191      18  0.969547      93   191      35  0.726312      57   130
 2    0.982375     115   191      19  0.929760      76   175      36  0.713747      56   126
 3    0.982042     114   191      20  0.907348      73   170      37  0.699328      55   125
 4    0.981649     113   191      21  0.900439      72   169      38  0.685200      54   125
 5    0.981205     112   191      22  0.892238      71   164      39  0.670621      53   124
 6    0.980861     111   191      23  0.884111      70   157      40  0.646415      52   121
 7    0.979916     109   191      24  0.875621      69   152      41  0.632662      51   120
 8    0.979481     108   191      25  0.866776      68   152      42  0.617452      50   120
 9    0.978600     106   191      26  0.849840      66   148      43  0.595165      49   117
10    0.977417     104   191      27  0.841255      65   148      44  0.582502      48   116
11    0.976683     103   191      28  0.828859      64   142      45  0.558966      47   113
12    0.976195     102   191      29  0.812115      63   141      46  0.511242      45   113
13    0.975523     101   191      30  0.799713      62   138      47  0.484959      44   110
14    0.974883     100   191      31  0.784146      61   135      48  0.462662      43   109
15    0.974184      99   191      32  0.768305      60   135      49  0.436959      42   108
16    0.973696      98   191      33  0.756572      59   131      50  0.412225      41    98
17    0.971690      95   191      34  0.741288      58   130
Appendix C
Table 8 MOVNS results of Problem II (unbounded archive size)

      Reliability  Cost  Weight       Reliability  Cost  Weight       Reliability  Cost  Weight
 1    0.987503     135   191      36  0.975036     100   191      71  0.841255      65   148
 2    0.987404     134   191      37  0.974294      99   191      72  0.828859      64   142
 3    0.987206     133   191      38  0.973807      98   191      73  0.812115      63   142
 4    0.987107     132   191      39  0.973027      97   191      74  0.799713      62   138
 5    0.987008     131   191      40  0.972469      96   191      75  0.784146      61   135
 6    0.986811     130   191      41  0.971690      95   191      76  0.768305      60   134
 7    0.986514     129   191      42  0.970909      94   191      77  0.756572      59   131
 8    0.986349     128   191      43  0.969639      93   191      78  0.741288      58   130
 9    0.986152     127   191      44  0.968474      92   191      79  0.726312      57   130
10    0.985954     126   191      45  0.966873      91   191      80  0.713747      56   126
11    0.985689     125   191      46  0.965706      90   191      81  0.699328      55   126
12    0.985492     124   191      47  0.964443      89   191      82  0.685200      54   125
13    0.985393     123   191      48  0.963182      88   191      83  0.670621      53   124
14    0.985196     122   191      49  0.961590      87   191      84  0.646415      52   121
15    0.984802     121   191      50  0.960332      86   191      85  0.632662      51   120
16    0.984461     120   191      51  0.957790      85   191      86  0.617452      50   120
17    0.984071     119   191      52  0.955342      84   191      87  0.595165      49   117
18    0.983677     118   191      53  0.954505      83   191      88  0.582502      48   116
19    0.983377     117   191      54  0.952020      82   187      89  0.558966      47   113
20    0.983180     116   191      55  0.948783      81   191      90  0.541315      46   114
21    0.982787     115   191      56  0.945692      80   184      91  0.511242      45   113
22    0.982295     114   191      57  0.941948      79   190      92  0.484959      44   110
23    0.981649     113   191      58  0.938759      78   180      93  0.462662      43   109
24    0.981205     112   191      59  0.934480      77   176      94  0.436959      42   108
25    0.980861     111   191      60  0.929760      76   175      95  0.412225      41    98
26    0.980389     110   191      61  0.922681      75   174      96  0.388792      40   105
27    0.979916     109   191      62  0.914277      74   169      97  0.367192      39   104
28    0.979481     108   191      63  0.907348      73   170      98  0.346408      38    94
29    0.979089     107   191      64  0.900439      72   169      99  0.323746      37    90
30    0.978600     106   191      65  0.892238      71   164     100  0.286501      36    84
31    0.977906     105   191      66  0.884111      70   157     101  0.267558      35    86
32    0.977417     104   191      67  0.875621      69   152     102  0.236777      34    80
33    0.976865     103   191      68  0.866776      68   152
34    0.976224     102   191      69  0.858081      67   153
35    0.975514     101   191      70  0.849840      66   148
Appendix D
Table 9 MOVNS results of Problem III

      Reliability  Cost  Weight       Reliability  Cost  Weight       Reliability  Cost  Weight
 1    0.998064     130   295      41  0.997258     130   255      81  0.993481     130   215
 2    0.998062     130   294      42  0.997218     130   254      82  0.993382     130   214
 3    0.998059     130   293      43  0.997176     130   253      83  0.993183     130   213
 4    0.998056     130   292      44  0.997128     130   252      84  0.992968     130   212
 5    0.998053     130   291      45  0.997079     130   251      85  0.992821     130   211
 6    0.998042     130   290      46  0.997026     130   250      86  0.992622     130   210
 7    0.998028     130   289      47  0.996977     130   249      87  0.992410     130   209
 8    0.998007     130   288      48  0.996923     130   248      88  0.992112     130   208
 9    0.997986     130   287      49  0.996842     130   247      89  0.991616     130   207
10    0.997983     130   286      50  0.996771     130   246      90  0.991394     130   206
11    0.997980     130   285      51  0.996690     130   245      91  0.991182     130   205
12    0.997970     130   284      52  0.996590     130   244      92  0.990961     130   204
13    0.997955     130   283      53  0.996527     130   243      93  0.990710     130   203
14    0.997934     130   282      54  0.996448     130   242      94  0.990548     130   202
15    0.997904     130   281      55  0.996376     130   241      95  0.990350     130   201
16    0.997900     130   280      56  0.996295     130   240      96  0.990129     130   200
17    0.997890     130   279      57  0.996217     130   239      97  0.989831     130   199
18    0.997876     130   278      58  0.996125     130   238      98  0.989336     130   198
19    0.997855     130   277      59  0.996044     130   237      99  0.989106     130   197
20    0.997825     130   276      60  0.995945     130   236     100  0.988904     130   196
21    0.997809     130   275      61  0.995839     130   235     101  0.988674     130   195
22    0.997798     130   274      62  0.995745     130   234     102  0.988377     130   194
23    0.997784     130   273      63  0.995653     130   233     103  0.987882     130   193
24    0.997763     130   272      64  0.995589     130   232     104  0.987338     130   192
25    0.997733     130   271      65  0.995518     130   231     105  0.986811     130   191
26    0.997707     130   270      66  0.995437     130   230     106  0.986416     130   190
27    0.997686     130   269      67  0.995337     130   229     107  0.985922     130   189
28    0.997656     130   268      68  0.995238     130   228     108  0.985378     130   188
29    0.997627     130   267      69  0.995166     130   227     109  0.984688     130   187
30    0.997592     130   266      70  0.995085     130   226     110  0.984176     129   186
31    0.997572     130   265      71  0.994986     130   225     111  0.983505     130   185
32    0.997551     130   264      72  0.994796     130   224     112  0.982994     130   184
33    0.997521     130   263      73  0.994621     130   223     113  0.982256     129   183
34    0.997491     130   262      74  0.994521     130   222     114  0.981518     130   182
35    0.997468     130   261      75  0.994432     130   221     115  0.981027     129   181
36    0.997444     130   260      76  0.994317     130   220     116  0.980290     128   180
37    0.997414     130   259      77  0.994126     130   219     117  0.979505     126   179
38    0.997378     130   258      78  0.993950     130   218     118  0.978400     125   178
39    0.997336     130   257      79  0.993772     130   217     119  0.977596     126   177
40    0.997288     130   256      80  0.993673     130   216     120  0.976690     124   176
Table 9 (Continued)

      Reliability  Cost  Weight       Reliability  Cost  Weight       Reliability  Cost  Weight
121   0.975708     125   175     161  0.903764      90   135     201  0.634040      70    95
122   0.974926     123   174     162  0.901651      89   134     202  0.620831      68    94
123   0.973827     122   173     163  0.897098      88   133     203  0.606428      67    93
124   0.973027     123   172     164  0.892354      89   132     204  0.593794      65    92
125   0.971929     122   171     165  0.887847      88   131     205  0.587074      66    91
126   0.970760     120   170     166  0.884349      84   130     206  0.574843      64    90
127   0.969291     121   169     167  0.879882      83   129     207  0.558348      64    89
128   0.968125     119   168     168  0.875230      84   128     208  0.548667      64    88
129   0.966335     118   167     169  0.870809      83   127     209  0.540528      63    87
130   0.965042     116   166     170  0.864179      82   126     210  0.529267      61    86
131   0.963712     117   165     171  0.855450      82   125     211  0.510949      60    85
132   0.962422     115   164     172  0.849861      87   124     212  0.505167      61    84
133   0.960642     114   163     173  0.845568      86   123     213  0.494642      59    83
134   0.959188     115   162     174  0.842237      82   122     214  0.476918      61    82
135   0.958035     113   161     175  0.837983      81   121     215  0.466983      59    81
136   0.955714     112   160     176  0.833552      82   120     216  0.441039      58    80
137   0.954565     110   159     177  0.829342      81   119     217  0.430124      54    79
138   0.952560     109   158     178  0.823027      80   118     218  0.414712      56    78
139   0.951118     110   157     179  0.814714      80   117     219  0.406072      54    77
140   0.949974     108   156     180  0.807118      77   116     220  0.383512      53    76
141   0.947674     107   155     181  0.800179      79   115     221  0.372543      52    75
142   0.946534     105   154     182  0.794086      78   114     222  0.351846      51    74
143   0.944270     107   153     183  0.786065      78   113     223  0.335597      52    73
144   0.943010     106   152     184  0.778736      75   112     224  0.319579      51    72
145   0.940850     104   151     185  0.770947      78   111     225  0.307887      50    71
146   0.939595     103   150     186  0.765077      77   110     226  0.290782      49    70
147   0.937162     102   149     187  0.757349      77   109     227  0.274053      47    69
148   0.934921     104   148     188  0.750287      74   108     228  0.258828      46    68
149   0.933674     103   147     189  0.742709      74   107
150   0.931534     101   146     190  0.730608      75   106
151   0.930292     100   145     191  0.725045      74   105
152   0.927467     101   144     192  0.717721      74   104
153   0.925342      99   143     193  0.711030      71   103
154   0.924108      98   142     194  0.703847      71   102
155   0.920494      97   141     195  0.689628      70   101
156   0.918385      95   140     196  0.682519      72   100
157   0.917160      94   139     197  0.675409      70    99
158   0.912528      93   138     198  0.661765      69    98
159   0.910395      92   137     199  0.654942      71    97
160   0.908352      91   136     200  0.641298      69    96
References

Avanthay, C., Hertz, A., Zufferey, N.: A variable neighborhood search for graph coloring. Eur. J. Oper. Res. 151, 379–388 (2003)
Brimberg, J., Hansen, P., Mladenović, N., Taillard, É.: Improvements and comparison of heuristics for solving the multisource Weber problem. Oper. Res. 48, 444–460 (2000)
Bulfin, R.L., Liu, C.Y.: Optimal allocation of redundant components for large systems. IEEE Trans. Reliab. 34, 241–247 (1985)
Chen, T.-C., You, P.-S.: Immune algorithms-based approach for redundant reliability problems with multiple component choices. Comput. Ind. 56, 195–205 (2005)
Chen, Y.-C.: Redundancy allocation of series-parallel systems using variable neighbourhood search algorithms. Master Thesis, Yuan Ze University, Taiwan, ROC (in Chinese) (2005)
Chern, M.S.: On the computational complexity of reliability redundancy allocation in a series system. Oper. Res. Lett. 11, 309–315 (1992)
Coit, D.W., Liu, J.: System reliability optimization with k-out-of-n subsystems. Int. J. Reliab. Qual. Saf. Eng. 7, 129–143 (2000)
Coit, D.W., Smith, A.E.: Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng. 30, 895–904 (1996a)
Coit, D.W., Smith, A.E.: Reliability optimization of series-parallel systems using a genetic algorithm. IEEE Trans. Reliab. 45, 254–260 (1996b)
Coit, D.W., Smith, A.E.: Solving the redundancy allocation problem using a combined neural network/genetic algorithm approach. Comput. Oper. Res. 23, 515–526 (1996c)
Elegbede, C., Adjallah, K.: Availability allocation to repairable systems with genetic algorithms: a multi-objective formulation. Reliab. Eng. Syst. Saf. 82, 319–330 (2003)
Fyffe, D.E., Hines, W.W., Lee, N.K.: System reliability allocation and a computational algorithm. IEEE Trans. Reliab. 17, 74–79 (1968)
Gagné, C., Gravel, M., Price, W.L.: Using metaheuristic compromise programming for the solution of multiple objective scheduling problems. J. Oper. Res. Soc. 56, 687–698 (2005)
Geiger, M.J.: Randomized variable neighborhood search for multi objective optimization. In: Proceedings of the 4th EU/ME Workshop: Design and Evaluation of Advanced Hybrid Meta-Heuristics, pp. 34–42. Nottingham, United Kingdom (2004)
Geiger, M.J.: Foundations of the Pareto iterated local search metaheuristic. In: Proceedings of the 18th International Conference on Multiple Criteria Decision Making, Chania, Greece (2006)
Ghare, P.M., Taylor, R.E.: Optimal redundancy for reliability in series systems. Oper. Res. 17, 838–847 (1969)
Hansen, P., Mladenović, N.: Variable neighborhood search for the p-median. Locat. Sci. 5, 207–226 (1997)
Hansen, P., Mladenović, N.: Variable neighborhood search. In: Handbook of Applied Optimization, pp. 221–234. Oxford University Press, New York (2002)
Hansen, P., Mladenović, N.: Variable neighborhood search. In: Handbook of Metaheuristics, pp. 145–184. Kluwer Academic, Amsterdam (2003)
Huang, Y.-C.: Optimization of the series-parallel system with the redundancy allocation problem using a hybrid ant colony algorithm. Master Thesis, Yuan Ze University, Taiwan, ROC (in Chinese) (2003)
Huang, Y.-C., Her, Z.-S., Liang, Y.-C.: Redundancy allocation using meta-heuristics. In: Proceedings of the 4th Asia-Pacific Conference on Industrial Engineering and Management Systems (APIEMS 2002), pp. 1758–1761. Taipei, Taiwan, ROC (2002)
Ishibuchi, H., Yoshida, T., Murata, T.: Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Trans. Evol. Comput. 7, 204–223 (2003)
Kulturel-Konak, S., Coit, D.W., Smith, A.E.: Efficiently solving the redundancy allocation problem using tabu search. IIE Trans. 35, 515–526 (2003)
Kulturel-Konak, S., Coit, D.W., Baheranwala, F.: Pruned Pareto-optimal sets for the system redundancy allocation problem based on multiple prioritized objectives. J. Heuristics 14, 335–357 (2008)
Kytöjoki, J., Nuortio, T., Bräysy, O., Gendreau, M.: An efficient variable neighborhood search heuristic for very large scale vehicle routing problems. Comput. Oper. Res. 34, 2743–2757 (2007)
Li, J.: A bound dynamic programming for solving reliability redundancy optimization. Microelectron. Reliab. 36, 1515–1520 (1996)
Liang, Y.-C.: Ant colony optimization approach to combinatorial problems. Ph.D. Dissertation, Auburn University, USA (2001)
Liang, Y.-C., Chen, Y.-C.: Redundancy allocation of series-parallel systems using a variable neighborhood search algorithm. Reliab. Eng. Syst. Saf. 92, 323–331 (2007)
Liang, Y.-C., Smith, A.E.: Ant colony optimization algorithm for the redundancy allocation problem (RAP). IEEE Trans. Reliab. 53, 417–423 (2004)
Liang, Y.-C., Wu, C.-C.: A variable neighbourhood descent algorithm for the redundancy allocation problem. Ind. Eng. Manag. Syst. 4, 109–116 (2005)
Liang, Y.-C., Lo, M.-H., Chen, Y.-C.: Variable neighborhood search for redundancy allocation problems. IMA J. Manag. Math. 18, 135–155 (2007)
Misra, K.B., Sharma, U.: An efficient algorithm to solve integer-programming problems arising in system-reliability design. IEEE Trans. Reliab. 40, 81–91 (1991)
Mladenović, N.: A variable neighborhood algorithm: a new metaheuristic for combinatorial optimization. In: Abstracts of Papers Presented at Optimization Days, Montréal (1995)
Nahas, N., Nourelfath, M., Ait-Kadi, D.: Coupling ant colony and the degraded ceiling algorithm for the redundancy allocation problem of series-parallel system. Reliab. Eng. Syst. Saf. 97, 211–222 (2007)
Nakagawa, Y., Miyazaki, S.: Surrogate constraints algorithm for reliability optimization problems with two constraints. IEEE Trans. Reliab. 30, 175–180 (1981)
Onishi, J., Kimura, S., James, R.J.W., Nakagawa, Y.: Solving the redundancy allocation problem with a mix of components using the improved surrogate constraint method. IEEE Trans. Reliab. 56, 94–101 (2007)
Ravi, V.: Optimization of complex system reliability by a modified great deluge algorithm. Asia-Pac. J. Oper. Res. 21, 487–497 (2004)
Ravi, V., Murty, B.S.N., Reddy, P.J.: Nonequilibrium simulated annealing algorithm applied to reliability optimization of complex system. IEEE Trans. Reliab. 46, 233–239 (1997)
Ribeiro, C.C., Souza, M.C.: Variable neighborhood search for the degree-constrained minimum spanning tree problem. Discrete Appl. Math. 118, 43–54 (2002)
Salazar, D., Rocco, C.M., Galván, B.J.: Optimization of constrained multiple-objective reliability problems using evolutionary algorithms. Reliab. Eng. Syst. Saf. 91, 1057–1070 (2006)
Sasaki, M., Gen, M.: A method of fuzzy multi-objective nonlinear programming with GUB structure by hybrid genetic algorithm. Int. J. Smart Eng. Syst. Des. 5, 281–288 (2003)
Shelokar, P.S., Jayaraman, V.K., Kulkarni, B.D.: Ant algorithm for single and multiobjective reliability optimization problems. Qual. Reliab. Eng. Int. 18, 497–514 (2002)
Stummer, C., Sun, M.: New multiobjective metaheuristic solution procedures for capital investment planning. J. Heuristics 11, 183–199 (2005)
Taboada, H.A., Baheranwala, F., Coit, D.W., Wattanapongsakorn, N.: Practical solutions for multi-objective optimization: an application to system reliability design problems. Reliab. Eng. Syst. Saf. 92(3), 314–322 (2007)
Tillman, F.A., Hwang, C.L., Kuo, W.: Determining component reliability and redundancy for optimum system reliability. IEEE Trans. Reliab. 26, 162–165 (1977)
Tillman, F.A., Hwang, C.L., Kuo, W.: Optimization of System Reliability. Marcel Dekker, New York (1985)
You, P.-S., Chen, T.-C.: An efficient heuristic for series-parallel redundant reliability problems. Comput. Oper. Res. 32, 2117–2127 (2005)
Zhao, J.-H., Liu, Z., Dao, M.-T.: Reliability optimization using multiobjective ant colony system approaches. Reliab. Eng. Syst. Saf. 92, 109–120 (2007)
Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., da Fonseca, V.G.: Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans. Evol. Comput. 7(2), 117–132 (2003)