Optim Eng DOI 10.1007/s11081-009-9081-7

Adaptive range particle swarm optimization

Satoshi Kitayama · Koetsu Yamazaki · Masao Arakawa

Received: 15 July 2008 / Accepted: 20 February 2009
© Springer Science+Business Media, LLC 2009

S. Kitayama (corresponding author) · K. Yamazaki
Kanazawa University, Kakuma-machi, Kanazawa 920-1192, Japan
e-mail: [email protected]

K. Yamazaki
e-mail: [email protected]

M. Arakawa
Kagawa University, Hayashi-cho, Takamatsu, Kagawa 761-0396, Japan
e-mail: [email protected]

Abstract This paper proposes a new technique for particle swarm optimization called adaptive range particle swarm optimization (ARPSO). In this technique, an active search domain range is determined by utilizing the mean and standard deviation of each design variable. In the initial search stage, the search domain is explored widely; the search domain is then gradually shrunk to a small domain as the search continues. To achieve these search processes, new parameters that determine the active search domain range are introduced. These parameters gradually increase as the search continues, which makes it possible to shrink the active search domain range. With the proposed method, an optimum solution is attained with high accuracy and a small number of function evaluations. Through numerical examples, the effectiveness and validity of ARPSO are examined.

Keywords Global optimization · Particle swarm optimization · Active search domain range

1 Introduction

In general, global optimization techniques may be classified into two categories: deterministic and stochastic search techniques. The tunneling algorithm (Levy and Montalvo 1985), for example, is a deterministic search technique (Nemhauser and Rinnooy Kan 1989). Many optimization techniques, also termed meta-heuristics, may be classified as stochastic search techniques. One important characteristic of stochastic search techniques is the explicit use of randomness in the algorithm. The genetic algorithm (GA) is one of the most famous population-based optimization techniques, and it may be classified as a stochastic search technique. The GA is naturally suitable for discrete design variables or combinatorial problems, because it essentially expresses the design variables in binary code (0s and 1s). On the other hand, particle swarm optimization (PSO), developed by Kennedy and Eberhart (2001), is naturally suitable for finding a global minimum of a continuous and non-convex function. The basic concept of PSO is similar to that of the GA. The details of PSO are well summarized in Parsopoulos and Vrahatis (2002), and many applications and validations of PSO are widely known (Venter and Sobieski 2003, 2004). PSO is also classified as a stochastic search technique (Parsopoulos and Vrahatis 2004; Baumgartner et al. 2004). Some attractive characteristics of PSO as an optimization technique are as follows: (1) PSO includes a search direction vector and a neighbourhood (Kennedy and Eberhart 2001); in other words, PSO has structures similar to those of gradient-based optimization techniques. As a result, PSO may be expected to find the global minimum with high accuracy (Kitayama et al. 2005). (2) PSO is, indeed, a dynamical system (Brandstatter and Baumgartner 2002; Clerc and Kennedy 2002), so that the relationship between the two parameters that generate new search points (c1 and c2) can be obtained by a stability analysis of the dynamical system, in which an eigenvalue problem is solved (Clerc and Kennedy 2002; Iwasaki et al. 2004). Two random parameters (r1 and r2) are introduced in the PSO algorithm; these determine the stochastic step-size. The range of the inertia term coefficient, w, is also determined by stability analysis of the dynamical system (Iwasaki et al. 2004).

One important characteristic of population-based optimization techniques is that many particles (or search points) flock around the single particle at which the lowest objective is attained at the present iteration, for minimization problems (Kennedy and Eberhart 2001; Parsopoulos and Vrahatis 2004). Ultimately, many particles find the same global minimum through this centralization over many search iterations. However, it may be difficult to obtain an optimum with high accuracy, because many population-based optimization techniques generally do not utilize the gradients of functions. In addition, the search domain defined in the initial search stage remains fixed through all the search processes. This implies that wasted search domain, where the possibility of the existence of a global optimum is very low, remains throughout the search iterations. It is desirable to determine an active search domain range for the next search iteration by utilizing some information, and it is also desirable that the active search domain range be updated gradually and adaptively through the search processes to obtain the optimum with high accuracy.

Therefore, some subjects and motivations of population-based optimization techniques are summarized as follows:

(P1) How to determine the active search domain range by considering the variation of each design variable during the search processes.


(P2) How to preserve the best particle in the active search domain range, and how to handle the side constraints.

(P3) How to update the active search domain range by considering the search iteration, so that it is possible to find the optimum with high accuracy through the centralization of all particles.

In this paper, we focus on overcoming problems (P1), (P2), and (P3). To do this, we introduce an active search domain range defined by the mean and standard deviation of each design variable, which are statistical quantities. By setting the active search domain range, all particles are included in it. As a result, it is expected that the global optimum can be obtained with high accuracy. We call this PSO, in which statistical elements are included, adaptive range particle swarm optimization (ARPSO). Another ARPSO (attractive and repulsive PSO) has also been proposed and is described in Riget and Vesterstom (2002). In that algorithm, the diversity of the swarm is calculated in the design variable space, and the velocity is updated by using the diversity. However, the ARPSO described in Riget and Vesterstom (2002) cannot shrink the search domain; its diversity does not affect the search domain itself. On the other hand, the ARPSO proposed in this study shrinks the search domain directly. Through numerical examples, the validity and efficiency of ARPSO are examined.

2 Particle swarm optimization

2.1 Optimization problem

In this paper, the following optimization problem is considered:

f(x) → min, (1)

subject to

x_i^L ≤ x_i ≤ x_i^U,  i = 1, 2, ..., ndv, (2)

g_j(x) ≤ 0,  j = 1, 2, ..., ncon, (3)

where x denotes the vector of continuous design variables, f(x) is the objective function to be minimized, and g_j(x) are the behavior constraints. ncon represents the number of behavior constraints, and ndv the number of continuous design variables. x_i^L and x_i^U denote the lower and upper bounds of the i-th continuous design variable, respectively. The optimization problem defined by (1) and (2) is often called an unconstrained optimization problem, whereas the problem defined by (1) to (3) is called an inequality constrained optimization problem. PSO is naturally suitable for unconstrained optimization problems with continuous design variables. In population-based optimization techniques, a penalty function is often utilized to handle the behavior constraints of inequality constrained optimization problems (Deb 2001).


2.2 g-best model and best position keeping model

Several models of PSO have been proposed. The most popular among them is the g-best model (Kennedy and Eberhart 2001). The position and velocity of particle d are represented by x_d^k and v_d^k, respectively, where k represents the iteration step. The position and velocity of particle d at iteration k + 1 are calculated by the following equations:

v_d^{k+1} = w v_d^k + c1 r1 (p_d^k − x_d^k) + c2 r2 (p_g^k − x_d^k), (4)

x_d^{k+1} = x_d^k + v_d^{k+1}. (5)

The coefficient w in (4) is called the inertia term, and it linearly decreases as the search proceeds (Venter and Sobieski 2003, 2004; Kitayama et al. 2005; Fourie and Groenwold 2002; Schutte and Groenwold 2003; Yasuda and Ishigame 2006). Parameters r1 and r2 denote random numbers in [0, 1]. The two random numbers r1 and r2 are generated separately, and they are applied to each design variable; rigorously speaking, a subscript i representing the i-th design variable is therefore required. However, (4) is often written this way for simplicity in the PSO literature. The detailed procedure for (4), in FORTRAN77 form, is presented in Table 1. The weighting coefficients c1 and c2 are recommended to maintain the following relationship:

c1 + c2 ≤ 4. (6)

c1 = c2 = 2 is used in this paper, according to Fourie and Groenwold (2002, 2003).

Table 1 FORTRAN77 code form of the velocity update (4)

      do 200 ia = 1, agent
         do 210 i = 1, ndv
            call random(x0, r)
            tmp1 = c1*r*(pbest(ia,i) - xp(ia,i))
            call random(x0, r)
            tmp2 = c2*r*(gbest(1,i) - xp(ia,i))
            vv(ia,i) = w*vv(ia,i) + tmp1 + tmp2
  210    continue
  200 continue

agent: number of agents
ndv: number of design variables
x0: seed of the random number generator
r: random number
pbest(ia,i): the p-best of the ia-th particle; i indexes the design variable
gbest(1,i): the g-best; i indexes the design variable
xp(ia,i): the position of the ia-th particle at the current iteration
w: the inertia term
vv(ia,i): the velocity of the ia-th particle


The position vector p_d^k, called the p-best, represents the best position of particle d until the k-th iteration, and p_g^k, called the g-best, represents the best position in the swarm at the k-th iteration. To distinguish it from the g-best model, the best position keeping model can be defined as follows:

v_d^{k+1} = w v_d^k + c1 r1 (p_d^k − x_d^k) + c2 r2 (p_g − x_d^k), (7)

where p_g represents the best position among the p_d^k thus far; that is, p_g is selected from among the p_d^k.

2.3 Basic algorithm of PSO (g-best model)

The basic algorithm of PSO, which is called the g-best model, is described briefly below.

(STEP1) Define the search domain in advance, and fix it through the whole search iteration. Determine the swarm population size and the maximum search iteration number, kmax. Initialize the iteration counter k as k = 1. Randomly generate the initial position and velocity of each particle in the search domain.
(STEP2) Calculate the objective function of each particle.
(STEP3) Select the p-best and the g-best.
(STEP4) Update the velocity and position of each particle by (4) and (5).
(STEP5) Update the inertia term by the following equation:

w = wmax − ((wmax − wmin)/kmax) × k, (8)

where wmax = 0.9 and wmin = 0.4 are used, in general (Fukuyama 2000).
(STEP6) If k is less than kmax, increase the iteration counter as k = k + 1 and return to STEP2. Otherwise, terminate the algorithm.

The search domain given in STEP1 does not change in the original PSO. The inertia term w linearly decreases by (8) in order to obtain diversity in the earlier search stages and swarm centralization in the final search stages (Iwasaki et al. 2004; Yasuda and Ishigame 2006), but this linear decrease does not provide a change in the search domain itself.
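For concreteness, STEP1 to STEP6 can be sketched in a few lines of Python. This transcription and its helper names are ours, not the authors' code; the clipping of positions to the search domain and the sphere test function in the usage line are illustrative assumptions:

import numpy as np

def pso_gbest(f, lb, ub, agents=20, kmax=100,
              c1=2.0, c2=2.0, wmax=0.9, wmin=0.4, seed=0):
    """Minimal sketch of the g-best PSO model, eqs. (4), (5) and (8)."""
    rng = np.random.default_rng(seed)
    ndv = len(lb)
    x = rng.uniform(lb, ub, (agents, ndv))               # STEP1: positions
    v = rng.uniform(-(ub - lb), ub - lb, (agents, ndv))  # STEP1: velocities
    fx = np.array([f(xi) for xi in x])                   # STEP2
    pbest, fp = x.copy(), fx.copy()                      # p-best of each particle
    for k in range(1, kmax + 1):
        g = x[fx.argmin()]        # STEP3: g-best = best position in the swarm at k
        w = wmax - (wmax - wmin) * k / kmax              # STEP5, eq. (8)
        r1, r2 = rng.random((2, agents, ndv))            # stochastic step sizes
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # eq. (4)
        x = np.clip(x + v, lb, ub)                       # eq. (5), kept in bounds
        fx = np.array([f(xi) for xi in x])               # STEP2
        improved = fx < fp                               # STEP3: update p-best
        pbest[improved], fp[improved] = x[improved], fx[improved]
    i = fp.argmin()
    return pbest[i], fp[i]

# usage: minimize the sphere function on [-5, 5]^2
xg, fg = pso_gbest(lambda z: float(z @ z), np.full(2, -5.0), np.full(2, 5.0))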

3 Adaptive range particle swarm optimization

3.1 Setting the active search domain

Let us focus on unconstrained optimization problems first; the penalty function that handles the behavior constraints is discussed in Sect. 3.6. In the initial search iteration (k = 1), the original PSO described in Sect. 2.3 is performed, because no information is available to reduce the search domain. By performing the original PSO, the mean μ_i and standard deviation σ_i of the i-th design variable x_i are obtained. Then the active search domain range is defined by (10). To obtain the active search domain range by (10), the normal distribution function (9) is assumed for the design variables:

N(x_i) = exp(−(x_i − μ_i)^2 / (2σ_i^2)), (9)

μ_i − √(−2(σ_i^L)^2 log a) ≤ x_i ≤ μ_i + √(−2(σ_i^R)^2 log a). (10)

Fig. 1 Active search domain range of the i-th design variable

In (10), σ_i^L and σ_i^R represent the standard deviations of the i-th design variable on the left- and right-hand sides, which are given separately. a in (10) is a system parameter that represents a value on the vertical axis of N(x_i), as shown in Fig. 1. All particles are included in the active search domain range by selecting an appropriate parameter a; the way to determine a is described in Sect. 3.4. An illustrative example of the active search domain range is shown in Fig. 1. By using the active search domain range, the unconstrained optimization problem in the proposed ARPSO can be defined by (1) and (10). The active search domain range is controlled by σ_i^L and σ_i^R. The estimation of distribution particle swarm optimization (EDPSO) (Iqbal and Montes de Oca 2006) is a method similar to the proposed ARPSO. EDPSO manipulates weights to determine the search domain, which is then determined by a mixture of weighted Gaussian functions; it is thus possible to determine a complex search domain in EDPSO. On the other hand, the proposed ARPSO does not require the weight calculation: the active search domain range is controlled only by σ_i^L, σ_i^R, and a. The search domain in the proposed ARPSO is therefore very simple compared to that in EDPSO. Normally, σ_i^L and σ_i^R are supposed to be the same. However, σ_i^L or σ_i^R will be updated in the following cases to preserve the best position.

3.2 Preservation of the best position

In ARPSO, the active search domain range of the i-th design variable is defined by the mean and standard deviation of the swarm in the previous iteration. As a result, the active search domain range is updated in every iteration, as shown in Fig. 2.
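As a small illustration of (9) and (10), the active search domain range can be computed from the swarm statistics as follows. This is a Python sketch under our own naming; the lower bound sigma_min anticipates (17) and (25)-(26):

import numpy as np

def active_range(x, a, sigma_min):
    """Active search domain range, eq. (10), with sigma_L = sigma_R.

    x         : (agents, ndv) array of current particle positions
    a         : system parameter, 0 < a < 1 (see Sect. 3.4)
    sigma_min : (ndv,) lower limit on the standard deviation, eq. (17)
    """
    mu = x.mean(axis=0)                           # mean of each design variable
    sigma = np.maximum(x.std(axis=0), sigma_min)  # std. dev., bounded below
    half = np.sqrt(-2.0 * sigma**2 * np.log(a))   # half-width where N(x_i) = a
    return mu - half, mu + half                   # note: log a < 0 for a < 1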

Fig. 2 Variation of the active search domain range

Fig. 3 New search domain range by changing the standard deviation

In Fig. 2, μ_i^k represents the mean value of the i-th design variable in the k-th iteration, and it is the centre of the active search domain range. As a result, the best position, which gives the best value of the objective function among all particles until the k-th iteration, may fall outside the active search domain range. The following technique is therefore adopted to preserve the best position in the active search domain range. In the following description, x_i^best (i = 1, 2, ..., ndv) represents the i-th component of p_g.

(CASE1) The case of μ_i + √(−2(σ_i^R)^2 log a) < x_i^best.

As shown in Fig. 3, x_i^best is located on the right-hand side of the active search domain range; the solid line in Fig. 3 shows the original active search domain range. In this case, a new standard deviation σ_{i,new}^R is calculated from the following equation:

a = exp(−(x_i^best − μ_i)^2 / (2(σ_{i,new}^R)^2)). (11)

The new standard deviation σ_{i,new}^R is obtained from (11):

σ_{i,new}^R = √(−(x_i^best − μ_i)^2 / (2 log a)). (12)

Finally, the new active search domain range is defined as follows:

μ_i − √(−2(σ_i^L)^2 log a) ≤ x_i ≤ μ_i + √(−2(σ_{i,new}^R)^2 log a). (13)

The dotted line in Fig. 3 shows the newly updated active search domain range.

(CASE2) The case of x_i^best < μ_i − √(−2(σ_i^L)^2 log a).

This is the reverse of CASE1. The same procedure is performed to obtain the new standard deviation σ_{i,new}^L. Finally, the newly updated active search domain range is defined in the same way as in CASE1:

μ_i − √(−2(σ_{i,new}^L)^2 log a) ≤ x_i ≤ μ_i + √(−2(σ_i^R)^2 log a). (14)

3.3 Handling the side constraints

As shown in Fig. 2, the centre μ_i of the active search domain range may move at every iteration. As a result, the side constraints may also be violated, as shown in Fig. 4, which illustrates the case where the upper bound is violated.

Fig. 4 New active search domain range considering the side constraint

In cases such as those shown in Fig. 4, the new standard deviation σ_{i,new}^R is calculated as follows:

σ_{i,new}^R = √(−(x_i^U − μ_i)^2 / (2 log a)). (15)

The newly updated active search domain range is obtained by substituting (15) into (13). On the other hand, the new standard deviation σ_{i,new}^L is calculated as follows when the lower bound is violated:

σ_{i,new}^L = √(−(x_i^L − μ_i)^2 / (2 log a)). (16)

The newly updated active search domain range is also obtained by substituting (16) into (14). When some particles violate the side constraints of the active search domain range, those side constraints become active; in other words, the particles that violate the side constraints of the active search domain range remain on the boundary of the active search domain range.
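Substituting (12) into (13) shows that the widened bound lands exactly on x_i^best, and likewise substituting (15) into (13) or (16) into (14) places the bound exactly on the violated side constraint; the updates of Sects. 3.2 and 3.3 therefore reduce to simple min/max operations. A Python sketch under our own naming:

import numpy as np

def adjust_range(lo, hi, x_best, xl, xu):
    """Preserve p_g in the active range (CASE1/CASE2) and enforce the side
    constraints; lo/hi are the bounds from eq. (10), xl/xu the side
    constraints, x_best the components of p_g."""
    lo = np.minimum(lo, x_best)   # CASE2: (12) substituted into (14)
    hi = np.maximum(hi, x_best)   # CASE1: (12) substituted into (13)
    lo = np.maximum(lo, xl)       # lower side constraint, eq. (16)
    hi = np.minimum(hi, xu)       # upper side constraint, eq. (15)
    return lo, hi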

3.4 Setting system parameter a

The active search domain range is defined by (1) the mean value μ_i and (2) the standard deviation σ_i of the i-th design variable, together with (3) the system parameter a, which is a value on the vertical axis of (9). The minimum value amin of a is easy to determine: amin may be selected as a small positive value near 0; based on numerical experience, the authors recommend amin = 1.0 × 10^−5. However, it is difficult to determine the maximum value amax of a: if a is selected too close to 1, the active search domain range in the final search stage will be close to zero. We therefore consider how to set amax in the following discussion.

To ensure a nonzero active search domain range in the final search stage, the minimum standard deviation σ_{i,min} of the i-th design variable is defined. Suppose σ_{i,min} is defined by using the side constraints as follows:

σ_{i,min} = ε1 (x_i^U − x_i^L). (17)

In the final search stage, it is expected that many particles flock around the g-best (Nemhauser and Rinnooy Kan 1989). The width of the active search domain range in the final stage is then prescribed as

ε2 (x_i^U − x_i^L), (18)

where ε1 and ε2 in (17) and (18) are small positive values set in advance. The following equation is obtained by utilizing (10) and (18):

2√(−2σ_{i,min}^2 log amax) = ε2 (x_i^U − x_i^L). (19)

To obtain (19), we set σ_{i,min} = σ_i^L = σ_i^R. By substituting (17) into (19), amax is expressed by the following equation:

amax = exp(−(1/8)(ε2/ε1)^2). (20)

The above equation implies that amax is determined by the ratio of ε2 to ε1. Through many numerical experiments, the authors recommend 0.5 ≤ ε2/ε1 ≤ 1; the corresponding range of amax is 0.883 ≤ amax ≤ 0.969.
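The quoted range follows directly from (20) and can be checked with a one-line sketch:

import math

def a_max(eps2_over_eps1):
    """Eq. (20): a_max = exp(-(1/8)(eps2/eps1)^2)."""
    return math.exp(-(eps2_over_eps1 ** 2) / 8.0)

print(a_max(0.5), a_max(1.0))   # 0.9692..., 0.8824..., i.e. 0.883 <= a_max <= 0.969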


3.5 Setting the active search domain range during the iteration

In this section, we consider how to update the system parameter a. In the original PSO, the many particles that are randomly allocated in the design domain flock around the g-best as the search proceeds, so it is natural to expect the active search domain range to shrink. The system parameter a is therefore updated by (21), which takes the iteration step into account:

a = amin + ((amax − amin)/kmax) × k. (21)

By using (21), it is possible to define the active search domain range widely in the initial search stage and to shrink it as the search proceeds.

3.6 Handling the behavior constraints

Various penalty functions are employed to handle the behavior constraints in population-based optimization techniques (Deb 2001). In this paper, the following penalty function is adopted, and the augmented objective function to be minimized is constructed:

F(x) = f(x) + r × penalty → min, (22)

r = (1 + |f(x)|)^q, (23)

penalty = Σ_{j=1}^{ncon} p_j,  where p_j = exp(1 + g_j(x)) if g_j(x) > 0, and p_j = 0 if g_j(x) ≤ 0. (24)

q in (23) is a real number greater than 1; q is set to 2 in this paper. The penalty parameter r in (22) is determined automatically by this penalty function approach.
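In code, the augmented objective (22)-(24) can be written as follows (a sketch with our own naming; q = 2 as in the paper):

import math

def augmented_objective(f, gs, x, q=2.0):
    """Augmented objective F(x) of eqs. (22)-(24).

    f  : objective function
    gs : list of behavior-constraint functions, each g_j(x) <= 0 when feasible
    """
    fx = f(x)
    gvals = [g(x) for g in gs]
    penalty = sum(math.exp(1.0 + gj) for gj in gvals if gj > 0.0)  # eq. (24)
    r = (1.0 + abs(fx)) ** q                                       # eq. (23)
    return fx + r * penalty                                        # eq. (22)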

3.7 The algorithm of ARPSO

In this section, the basic algorithm of ARPSO is described. In the initial iteration (k = 1), the original PSO described in Sect. 2.3 is performed; in other words, STEP1 to STEP5 of the proposed algorithm are the same as those of the original PSO.

(STEP6) Set the iteration counter as k = k + 1. Calculate the mean μ_i and the standard deviation σ_i of the i-th design variable. In this step, the standard deviation is set as σ_i = σ_i^L = σ_i^R.
(STEP7) Check the validity of the standard deviation by comparison with its lower limit:

σ_i^L < σ_{i,min} → σ_i^L = σ_{i,min}, (25)

σ_i^R < σ_{i,min} → σ_i^R = σ_{i,min}. (26)


(STEP8) Update the system parameter a by (21).
(STEP9) Set the active search domain range by using (10).
(STEP10) If p_g is not located in the active search domain range, change the active search domain range by using (12) and (13) (or (14)).
(STEP11) If the side constraints are violated, calculate the standard deviation by (15) (or (16)), and set the newly updated active search domain range.
(STEP12) Update the velocity and position of each particle by (4) and (5).
(STEP13) Update the inertia term by (8).
(STEP14) If k is less than kmax, return to STEP6. Otherwise, terminate the algorithm.
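Putting STEP1 to STEP14 together, a compact Python sketch of ARPSO for the unconstrained case reads as follows. This is our transcription under stated assumptions, not the authors' code: xl and xu are NumPy arrays, and the helper names are ours:

import numpy as np

def arpso(f, xl, xu, agents=30, kmax=500, c1=2.0, c2=2.0,
          wmax=0.9, wmin=0.4, eps1=0.01, eps2=0.01, a_min=1.0e-5, seed=0):
    """Minimal sketch of ARPSO (Sect. 3.7) for unconstrained problems."""
    rng = np.random.default_rng(seed)
    ndv = len(xl)
    sigma_min = eps1 * (xu - xl)                         # eq. (17)
    a_max = np.exp(-((eps2 / eps1) ** 2) / 8.0)          # eq. (20)
    lo, hi = xl.astype(float), xu.astype(float)          # initial search domain
    x = rng.uniform(lo, hi, (agents, ndv))               # STEP1
    v = rng.uniform(-(hi - lo), hi - lo, (agents, ndv))
    fx = np.array([f(xi) for xi in x])                   # STEP2
    pbest, fp = x.copy(), fx.copy()
    for k in range(1, kmax + 1):
        if k > 1:                                        # k = 1 is plain PSO
            mu = x.mean(axis=0)                          # STEP6
            sig = np.maximum(x.std(axis=0), sigma_min)   # STEP7, (25)/(26)
            a = a_min + (a_max - a_min) * k / kmax       # STEP8, eq. (21)
            half = np.sqrt(-2.0 * sig ** 2 * np.log(a))  # STEP9, eq. (10)
            lo, hi = mu - half, mu + half
            pg = pbest[fp.argmin()]
            lo, hi = np.minimum(lo, pg), np.maximum(hi, pg)  # STEP10
            lo, hi = np.maximum(lo, xl), np.minimum(hi, xu)  # STEP11
        g = pbest[fp.argmin()]
        w = wmax - (wmax - wmin) * k / kmax                  # STEP13, eq. (8)
        r1, r2 = rng.random((2, agents, ndv))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # STEP12, eq. (4)
        x = np.clip(x + v, lo, hi)     # violating particles stay on the boundary
        fx = np.array([f(xi) for xi in x])
        improved = fx < fp
        pbest[improved], fp[improved] = x[improved], fx[improved]
    i = fp.argmin()
    return pbest[i], fp[i]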

4 Numerical examples

The validity of ARPSO is examined through some numerical examples. To visualize the variation of the active search domain range, a two-dimensional problem is treated first. ε1 in (17) is set to 0.01, and the ratio ε2/ε1 is set to 1. amin in (21) is set to 1.0 × 10^−5. Through many numerical examples, it has been reported that a swarm population size between 20 and 30 may be a reasonable compromise between cost and reliability (Schutte and Groenwold 2005); therefore, a swarm population size between 20 and 30 is adopted in the following examples. In population-based optimization techniques, the number of function calls can be calculated simply by multiplying the total number of iterations by the population size when no convergence criterion is assigned. The number of function calls is also an important aspect of optimization. In this paper, the following convergence criterion is used to evaluate the function calls: the objective function of p_g does not change over 20 iterations.

4.1 Schwefel function

The problem is defined as follows:

f(x) = 418.9829 × ndv + Σ_{i=1}^{ndv} {−x_i sin(√|x_i|)} → min, (27)

−500 ≤ x_i ≤ 500,  i = 1, 2, ..., ndv. (28)
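In code, the Schwefel function of (27) is simply the following (our transcription):

import numpy as np

def schwefel(x):
    """Schwefel function, eq. (27); the global minimum is at x_i = 420.9687."""
    x = np.asarray(x, dtype=float)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

print(schwefel(np.full(2, 420.9687)))   # approximately 0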

The global minimum is x_{i,G} = 420.9687 (i = 1, 2, ..., ndv), where the objective function is approximately 0. The behavior and contour of the objective function are shown in Fig. 5. This problem has many local minima, and many function evaluations are required to find the global minimum. The swarm population size is set to 20, and the maximum search iteration is set to 100. The following domain is set as the initial search domain range; the rectangle in Fig. 5 shows this initial search domain:

−100 ≤ x_i ≤ 100,  i = 1, 2, ..., ndv. (29)

An example of the change in the active search domain range is shown by the rectangles in Fig. 6, in which the black circle (•) marks the best position.


Fig. 5 The behavior and contour of the objective function

Additionally, the histories of the objective function at p_g for the g-best model, the best position keeping model, and ARPSO are shown in Fig. 7. Finally, the sum of the standard deviations of each design variable is shown in Figs. 8 and 9. The characteristics of these models are discussed below.

In Fig. 6, the active search domain range does not always shrink monotonically as the search proceeds, because the active search domain range is determined by the standard deviation of each design variable. Fig. 6 shows that, in the initial search stage, the effect of the parameter a on shrinking the active search domain range is smaller than that of the standard deviation of each design variable.

In the g-best model, p_g may be updated at every iteration, so the standard deviation of each design variable increases in the initial search stage, as shown in Figs. 8 and 9. From Fig. 7, p_g attains the global minimum given sufficient search iterations. However, it is apparent from Figs. 8 and 9 that the standard deviation increases throughout the whole search. In general, it is preferable for the standard deviation of each design variable to decrease progressively as the search proceeds, because all particles should flock around the single particle at which the lowest objective is attained. Here, however, the standard deviation of each design variable increases beyond that of the other models; this result shows that many particles do not flock around the solution point.

In the best position keeping model, p_g finds the global minimum in the early search stage, and the standard deviation of each design variable decreases in comparison with the g-best model, which means that many particles flock around p_g. However, the search domain given by (28) remains fixed.

On the other hand, ARPSO finds the global minimum faster than the best position keeping model, and the standard deviation of each design variable is also smaller than in the best position keeping model. In other words, the diversity of ARPSO is smaller than that of the other models.


Fig. 6 Variation of active search domain range through search processes

Fig. 7 Convergence of the objective function

Fig. 8 History of standard deviation of design variable x1

Fig. 9 History of standard deviation of design variable x2

However, the global minimum can be attained with high accuracy, because all particles are included in the active search domain. The results obtained through 10 trials are tabulated in Table 2. From this table, it is apparent that ARPSO finds the global minimum with a smaller number of function evaluations than the other models. The 'Achievement ratio' row in Table 2 indicates how often the g-best reached the global minimum.

4.2 Results of some numerical test problems

ARPSO was applied to eight numerical test problems, which are listed in Table 3. Fifty trials were performed for each test problem. The swarm population size is set to 30, and the maximum number of search iterations is set to 500. All particles are distributed randomly between x_i^L and x_i^U at k = 1. The results are shown in Table 4; they show that ARPSO finds the global minimum with high accuracy.

Table 2 Comparison of results of Schwefel function

                                   ARPSO       Best position keeping   g-best model
Achievement ratio (%)              100         100                     10
Best objective                     2.545E–05   2.792E–05               1.596
Worst objective                    2.758E–05   4.407E–03               N/A
Mean value of objective            2.593E–05   9.504E–04               N/A
Standard deviation of objective    7.844E–07   1.282E–03               N/A
Average function call              1488        1894                    N/A

4.3 Inequality constrained optimization problem

The following inequality constrained optimization problem is considered (Floudas and Pardalos 1990):

f(x) = 37.293239x1 + 0.8356891x1x5 + 5.3578547x3^2 − 40792.141 → min, (30)

g1(x) = 0.0022053x3x5 − 0.0056858x2x5 + 0.0006262x1x4 − 6.665593 ≤ 0, (31)

g2(x) = 0.0022053x3x5 − 0.0056858x2x5 − 0.0006262x1x4 − 85.334407 ≤ 0, (32)

g3(x) = 0.0071317x2x5 + 0.0021813x3^2 + 0.0029955x1x2 − 29.48751 ≤ 0, (33)

g4(x) = −0.0071317x2x5 − 0.0021813x3^2 − 0.0029955x1x2 + 9.48751 ≤ 0, (34)

g5(x) = 0.0047026x3x5 + 0.0019085x3x4 + 0.0012547x1x3 − 15.699039 ≤ 0, (35)

g6(x) = −0.0047026x3x5 − 0.0019085x3x4 − 0.0012547x1x3 + 10.699039 ≤ 0, (36)

78 ≤ x1 ≤ 102, (37)

33 ≤ x2 ≤ 45, (38)

27 ≤ x3, x4, x5 ≤ 45. (39)

It has been reported in Floudas and Pardalos (1990) that the global minimum is xG = (78, 33, 29.9953, 45, 36.7758)^T and that the objective function at the global minimum is f(xG) = −30665.5387. The behavior constraints g1(x) and g6(x) are active at the global minimum, and the accuracy of the design variables there directly affects the objective function. To find the global minimum by ARPSO, the swarm population size is set to 30, and the maximum search iteration is set to 500. The result obtained by ARPSO is as follows:

xG = (78, 33, 29.9953, 45, 36.7756)^T, (40)

f(xG) = −30665.5272. (41)

The best position keeping model is used for comparison with the result of ARPSO. The exterior penalty function is adopted to handle the behavior constraints, and the penalty parameter is set as r = 1.0 × 10^8.
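For reference, the problem (30)-(39) together with the penalty treatment of Sect. 3.6 can be transcribed as follows (a sketch; the function names are ours):

import numpy as np

def f_objective(x):
    x1, x2, x3, x4, x5 = x
    return 37.293239*x1 + 0.8356891*x1*x5 + 5.3578547*x3**2 - 40792.141   # (30)

def g_constraints(x):
    """Behavior constraints (31)-(36), each required to be <= 0."""
    x1, x2, x3, x4, x5 = x
    return np.array([
        0.0022053*x3*x5 - 0.0056858*x2*x5 + 0.0006262*x1*x4 - 6.665593,    # g1
        0.0022053*x3*x5 - 0.0056858*x2*x5 - 0.0006262*x1*x4 - 85.334407,   # g2
        0.0071317*x2*x5 + 0.0021813*x3**2 + 0.0029955*x1*x2 - 29.48751,    # g3
        -0.0071317*x2*x5 - 0.0021813*x3**2 - 0.0029955*x1*x2 + 9.48751,    # g4
        0.0047026*x3*x5 + 0.0019085*x3*x4 + 0.0012547*x1*x3 - 15.699039,   # g5
        -0.0047026*x3*x5 - 0.0019085*x3*x4 - 0.0012547*x1*x3 + 10.699039,  # g6
    ])

def F_augmented(x, q=2.0):
    """Augmented objective of eqs. (22)-(24) for this problem."""
    fx = f_objective(x)
    g = g_constraints(x)
    return fx + (1.0 + abs(fx))**q * np.sum(np.exp(1.0 + g[g > 0.0]))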

Table 3 Numerical test problems

No.  Name                  Number of design variables   Objective                                                                                                   Side constraints    Objective at global minimum
1    2n minima             10                           f(x) = (1/2) Σ_{i=1}^{ndv} (x_i^4 − 16x_i^2 + 5x_i) → min                                                   −5 ≤ x ≤ 5          f(xG) = −391.661
2    Griewank              10                           f(x) = 1 + (1/400) Σ_{i=1}^{ndv} x_i^2 − Π_{i=1}^{ndv} cos(x_i/√i) → min                                    −10 ≤ x ≤ 10        f(xG) = 0
3    Ackley                10                           f(x) = 22.71828 − 20 exp[−0.2 √((1/ndv) Σ_{i=1}^{ndv} x_i^2)] − exp[(1/ndv) Σ_{i=1}^{ndv} cos(2πx_i)] → min  −30 ≤ x ≤ 30        f(xG) = 0
4    Rastrigin             6                            f(x) = Σ_{i=1}^{ndv} [x_i^2 − 10 cos(2πx_i) + 10] → min                                                     −5.12 ≤ x ≤ 5.12    f(xG) = 0
5    Michalewics           5                            f(x) = −Σ_{i=1}^{ndv} sin(x_i) {sin(i x_i^2/π)}^20 → min                                                    0 ≤ x ≤ π           f(xG) = −4.687658
6    Branin                2                            f(x) = [x2 − (5.1/(4π^2)) x1^2 + (5/π) x1 − 6]^2 + 10(1 − 1/(8π)) cos x1 + 10 → min                         −10 ≤ x ≤ 10        f(xG) = 0.397887
7    Six-Hump Camel-Back   2                            f(x) = x1^2 (4 − 2.1x1^2 + (1/3)x1^4) + x1x2 + 4x2^4 − 4x2^2 → min                                          −3 ≤ x ≤ 3          f(xG) = −1.031628
8    Shubert               2                            f(x) = {Σ_{i=1}^{5} i cos[(i+1)x1 + i]} {Σ_{i=1}^{5} i cos[(i+1)x2 + i]} → min                              −10 ≤ x ≤ 10        f(xG) = −186.730909

Table 4 Results of numerical test problems

No.  Name                  Best objective   Worst objective   Mean value of objective   Standard deviation of objective   Average function call
1    2n minima             −391.661657      −390.988150       −391.661652               1.78497E–05                       7551
2    Griewank              3.50048E–09      7.300E–02         1.55511E–07               5.58598E–07                       7317
3    Ackley                6.50626E–04      1.57140E–03       1.19470E–03               3.13025E–04                       12249
4    Rastrigin             2.26054E–09      3.34356E–07       9.39378E–01               1.38914E–07                       11886
5    Michalewics           −4.687658        −4.687737         −4.468769                 −7.94728E–08                      8697
6    Branin                0.397887         0.397776          0.397800                  4.96710E–09                       5184
7    Six-Hump Camel-Back   −1.03163         −1.03162          −1.03163                  1.98682E–08                       5784
8    Shubert               −186.73091       −186.72910        −186.73000                2.54313E–06                       6360

Table 5 Relative errors of each design variable and objective function

         x1    x2    x3          x4    x5          f(x)
ARPSO    0%    0%    0%          0%    0.000544%   0.000038%
PSO      0%    0%    0.001000%   0%    0.002719%   0.000196%

The result obtained by the best position keeping model is as follows:

xG = (78, 33, 29.9956, 45, 36.7748)^T, (42)

f(xG) = −30665.4782. (43)

A better result was obtained by ARPSO than by the best position keeping model, because of the higher accuracy of x3 and x5. The relative errors of each design variable and of the objective function are shown in Table 5.

4.4 Optimum design of tension/compression spring

One of the most famous test problems, proposed by Arora (1989), is considered here. Many researchers have used it as a benchmark problem in structural optimization (Ray and Saini 2001; Coello 2000; Hu et al. 2003). The design variables are (1) the wire diameter d (= x1), (2) the mean coil diameter D (= x2), and (3) the number of active coils N (= x3). The problem can be stated as follows:

f(x) = (2 + x3) x1^2 x2 → min, (44)

g1(x) = 1 − x2^3 x3 / (71785 x1^4) ≤ 0, (45)

g2(x) = (4x2^2 − x1x2) / (12566(x2 x1^3 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0, (46)

g3(x) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0, (47)

g4(x) = (x1 + x2)/1.5 − 1 ≤ 0, (48)

0.05 ≤ x1 ≤ 2.00, (49)

0.25 ≤ x2 ≤ 1.30, (50)

2.00 ≤ x3 ≤ 15.0. (51)
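The spring problem (44)-(48) can likewise be transcribed directly (a sketch; names are ours, and the check values in the usage line are rounded from Table 6):

def spring_objective(x):
    """Spring volume, eq. (44); x = (d, D, N) = (x1, x2, x3)."""
    x1, x2, x3 = x
    return (2.0 + x3) * x1**2 * x2

def spring_constraints(x):
    """Behavior constraints (45)-(48), each required to be <= 0."""
    x1, x2, x3 = x
    return [1.0 - x2**3 * x3 / (71785.0 * x1**4),                            # (45)
            (4.0*x2**2 - x1*x2) / (12566.0 * (x2*x1**3 - x1**4))
                + 1.0 / (5108.0 * x1**2) - 1.0,                              # (46)
            1.0 - 140.45 * x1 / (x2**2 * x3),                                # (47)
            (x1 + x2) / 1.5 - 1.0]                                           # (48)

print(spring_objective((0.051679, 0.356477, 11.299395)))   # ~0.01266, cf. Table 6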

The swarm population size is set to 20, and the maximum number of search iterations is set to 500. Eleven trials were performed for comparison with previous studies. The best result obtained through the 11 ARPSO trials is listed in the last column of Table 6; the best overall result was obtained by ARPSO. The results of all 11 trials are shown in Table 7.

The numerical test problems presented in this paper were also solved with Mathematica (version 6.0.3). Four global optimization techniques (the Nelder-Mead method, simulated annealing, random search, and differential evolution) are included in Mathematica, and readers can use these techniques through the NMinimize command. However, the number of function calls made by Mathematica is not reported, so it is impossible to compare the efficiency exactly. The test problems in Table 3 with five or more design variables have been solved in this way, as has the optimum design of the tension/compression spring; the results are shown in Table 8.

Table 6 Comparison of some results of optimum design of tension/compression spring

Design variables               Best solutions found
                               Arora (1989)   Coello (2000)   Ray and Saini (2001)   Hu et al. (2003)   This study
x1 (d)                         0.053396       0.05148         0.050417               0.051466           0.051679
x2 (D)                         0.39918        0.351661        0.321532               0.351384           0.356477
x3 (N)                         9.1854         11.632201       13.979915              11.608659          11.299395
g1(x)                          0.000019       −0.00208        −0.001926              −0.003336          −0.000037
g2(x)                          −0.000018      −0.00011        −0.012944              −0.00011           −0.000008
g3(x)                          −4.123832      −4.026318       −3.89943               −4.026318          −4.054976
g4(x)                          −0.698283      −0.731239       −0.752034              −0.731324          −0.727895
f(x)                           0.01273        0.012705        0.01306                0.012667           0.012661
Function call                  N/A            900000          1291                   N/A                5804
Mean of f(x)                   N/A            0.012769        0.013436               0.012719           0.012675
Worst of f(x)                  N/A            0.012822        0.01358                N/A                0.012696
Standard deviation of f(x)     N/A            3.9390E–05      N/A                    6.4660E–05         1.1740E–05

Table 7 Results through 11 trials of optimum design of tension/compression spring

Trial No.   x1          x2          x3           g1(x)        g2(x)        g3(x)        g4(x)        obj.
1           0.0516792   0.3564771   11.2993956   −0.0000374   −0.0000086   −4.0549763   −0.7278958   0.0126618
2           0.0519954   0.3641189   10.8647657   −0.0000462   −0.0000295   −4.0696722   −0.7225905   0.0126641
3           0.0520007   0.3642496   10.8581928   −0.0001074   −0.0000276   −4.0696214   −0.7224998   0.0126648
4           0.0511883   0.3447864   12.0202057   −0.0000233   −0.0000064   −4.0312974   −0.7360169   0.0126662
5           0.0522840   0.3711950   10.4843893   −0.0000062   −0.0000126   −4.0832736   −0.7176806   0.0126680
6           0.0509491   0.3391693   12.3939067   −0.0000926   −0.0000170   −4.0190010   −0.7399210   0.0126727
7           0.0508524   0.3369097   12.5485456   −0.0000393   −0.0000287   −4.0143310   −0.7414919   0.0126752
8           0.0527666   0.3831984   9.8872390    −0.0000902   −0.0000002   −4.1045623   −0.7093567   0.0126830
9           0.0528525   0.3853432   9.7858349    −0.0000198   −0.0000307   −4.1085073   −0.7078696   0.0126864
10          0.0529021   0.3865957   9.7274183    −0.0000172   −0.0000247   −4.1107290   −0.7070015   0.0126883
11          0.0503256   0.3247856   13.4354849   −0.0000406   −0.0000184   −3.9872798   −0.7499258   0.0126968

Table 8 Results on the test problems with five or more design variables in Table 3 and the optimum design of tension/compression spring

Name                    Methods                  Best objective   Worst objective   Mean value of objective   Standard deviation of objective
2n minima               Nelder-Mead Method       −391.662000      −306.841000       −347.231886               20.034286
                        Simulated Annealing      −377.525000      −335.115000       −357.733286               12.920243
                        Random Search            −391.662000      −349.251000       −374.697600               10.733611
                        Differential Evolution   −391.662000      −391.662000       −391.662000               1.7302E–13
Griewank                Nelder-Mead Method       0.000000         0.243823          0.040340                  0.062167
                        Simulated Annealing      0.000000         0.549409          0.084307                  0.143626
                        Random Search            0.000000         0.000000          4.51258E–34               2.65247E–33
                        Differential Evolution   1.56948E–32      4.95733E–19       4.57183E–20               1.13823E–19
Ackley                  Nelder-Mead Method       −1.82846E–06     13.010400         5.456291                  2.969282
                        Simulated Annealing      11.699000        17.867300         16.983683                 1.662878
                        Random Search            11.109400        18.831700         15.813129                 1.674436
                        Differential Evolution   −1.82846E–06     1.82846E–06       −1.61949E–06              8.6122E–07
Rastrigin               Nelder-Mead Method       2.984880         39.798200         13.247134                 9.063731
                        Simulated Annealing      2.984880         33.828500         12.337470                 6.484971
                        Random Search            2.984880         14.924400         9.125183                  3.241914
                        Differential Evolution   7.3326E–26       4.974800          2.558468                  1.305897
Michalewics             Nelder-Mead Method       −4.687660        −2.622520         −3.963637                 0.564961
                        Simulated Annealing      −4.687660        −2.693030         −3.962234                 0.593549
                        Random Search            −4.687660        −3.694590         −4.347061                 0.286999
                        Differential Evolution   −4.687660        −4.687660         −4.687660                 3.60458E–15
The optimum design of   Nelder-Mead Method       0.012665         0.020412          0.014716                  0.003249
tension/compression     Simulated Annealing      0.012665         0.012665          0.012665                  1.81939E–18
spring                  Random Search            0.012665         0.014384          0.013066                  0.000593
                        Differential Evolution   0.012665         0.012665          0.012665                  1.81939E–18

5 Conclusions

In this paper, adaptive range particle swarm optimization (ARPSO) has been proposed. The active search domain range of ARPSO is constructed from the mean and standard deviation of each design variable and is updated adaptively. Additionally, a parameter a that controls the active search domain range is introduced. It is therefore possible to define the active search domain range widely in the initial search stage and to shrink it as the search proceeds. Through numerical examples, the validity and efficiency of ARPSO have been confirmed.

Acknowledgements The authors would like to thank Prof. Yamakawa, H. (Waseda Univ.), Prof. Sugimoto, H. (Hokkai Gakuen Univ.), and Prof. Nakayama, H. (Konan Univ.), who provided useful and constructive comments.

References

Arora JS (1989) Introduction to optimum design. McGraw-Hill, New York
Baumgartner U, Magele Ch, Renhart W (2004) Pareto optimality and particle swarm optimization. IEEE Trans Magn 40(2):1172–1175
Brandstatter B, Baumgartner U (2002) Particle swarm optimization—mass-spring system analogon. IEEE Trans Magn 38(2):997–1000
Clerc M, Kennedy J (2002) The particle swarm—explosion, stability and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6:58–73
Coello CA (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41:113–127
Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, New York
Floudas CA, Pardalos PM (1990) A collection of test problems for constrained global optimization algorithms. In: Lecture notes in computer science. Springer, Berlin, p 23
Fourie PC, Groenwold AA (2002) The particle swarm optimization algorithm in size and shape optimization. Struct Multidisc Optim 23(4):259–267
Fukuyama Y (2000) A particle swarm optimization for reactive power and voltage control considering voltage security assessment. IEEE Trans Power Syst 15(4):1232–1239
Hu XH, Eberhart RC, Shi YH (2003) Engineering optimization with particle swarm. In: Proceedings of the IEEE swarm intelligence symposium, pp 53–57
Iqbal M, Montes de Oca MA (2006) An estimation of distribution particle swarm optimization algorithm. IRIDIA technical report series, technical report no TR/IRIDIA/2006-012
Iwasaki N, Yasuda K, Ide A (2004) Analysis of the dynamics of particle swarm optimization. In: Proceedings of IFAC workshop on adaptation and learning in control and IFAC workshop on periodic control systems, pp 741–746
Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann, San Mateo
Kitayama S, Arakawa M, Yamazaki K (2005) Penalty function approach for the mixed discrete non-linear problems by particle swarm optimization. Struct Multidisc Optim 32(3):191–202
Levy AV, Montalvo A (1985) The tunneling algorithm for the global minimization of functions. SIAM J Sci Stat Comput 1(6):15–29
Nemhauser GL, Rinnooy Kan AHG, Todd MJ (eds) (1989) Handbooks in operations research and management science, vol 1: optimization. Elsevier Science, Amsterdam
Parsopoulos KE, Vrahatis MN (2002) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1:235–306
Parsopoulos KE, Vrahatis MN (2004) On the computation of all global minimizers through particle swarm optimization. IEEE Trans Evol Comput 8(3):211–224
Ray T, Saini P (2001) Engineering design optimization using swarm with an intelligent information sharing among individuals. Eng Optim 33:735–748
Riget J, Vesterstom JS (2002) A diversity-guided particle swarm optimizer—the ARPSO. Technical report 2002-02, Department of Computer Science, University of Aarhus
Schutte JF, Groenwold AA (2003) Sizing design of truss structures using particle swarms. Struct Multidisc Optim 25:261–269
Schutte JF, Groenwold AA (2005) A study of global optimization using particle swarm. J Glob Optim 31:93–108
Venter G, Sobieski JS (2003) Particle swarm optimization. AIAA J 41(8):1583–1589
Venter G, Sobieski JS (2004) Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Struct Multidisc Optim 26(1–2):121–131
Yasuda K, Ishigame A (2006) Nonlinear programming algorithm—from the point of view of practical applications. Syst Control Inf Eng 50(9):1–7 (in Japanese)