Applied Mathematics and Computation 218 (2012) 6620–6626


A novel particle swarm optimization algorithm based on particle migration
Ma Gang, Zhou Wei *, Chang Xiaolin
State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072, China

Article info

Keywords: Particle swarm optimization; Global optimization; Time-varying acceleration coefficients; Migratory behavior

Abstract

Inspired by the migratory behavior observed in nature, a novel particle swarm optimization algorithm based on particle migration (MPSO) is proposed in this work. In the new algorithm, the population is randomly partitioned into several sub-swarms, each of which evolves according to particle swarm optimization with time-varying inertia weight and acceleration coefficients (LPSO-TVAC). At periodic stages in the evolution, some particles migrate from one sub-swarm to another to enhance the diversity of the population and avoid premature convergence, which further improves the balance between exploration and exploitation. Simulations on benchmark test functions illustrate that the proposed algorithm finds the global optima more reliably than the other variants considered and is an effective global optimization tool. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

Particle swarm optimization (PSO) is a population-based stochastic, heuristic optimization algorithm with inherent parallelism, first introduced in 1995 by Kennedy and Eberhart [1,2]. It belongs to the wide category of swarm intelligence methods and draws inspiration from simplified animal social behaviors such as bird flocking and fish schooling. In PSO, each individual is treated as a volume-less point, referred to as a particle, in the multidimensional search space. The population is called a swarm, and the trajectory of each particle in the search space is adjusted by dynamically altering its velocity. The particles fly through the problem space guided by two essential reasoning capabilities: their memory of their own best position and their knowledge of the global or neighborhood best. Since its introduction, PSO has attracted comprehensive attention in the global optimization research field due to its effectiveness and robustness, as well as its simplicity of implementation [3,4]. As with other swarm intelligence algorithms, researchers have noted a tendency for the swarm to converge prematurely on local optima [5], especially in complex multi-peak search problems. In view of this shortcoming of the standard PSO algorithm, a number of variants have been suggested through empirical simulations over the past decade; some have improved general performance, and some have improved performance on particular kinds of problems. These variants can be classified into several groups: parameter selection [6,7], integration of self-adaptation [8–15], evolution strategies [16–22], and hybridization with other intelligent optimization methods [23–26]. For a detailed review of particle swarm optimization and its variants, readers are referred to the review articles by Eberhart and Shi [3] and Poli et al. [5].
Inspired by migratory behavior in nature, we propose a novel particle swarm optimization algorithm based on particle migration. In the new algorithm, a population of particles is randomly sampled from the feasible space. The population is then randomly partitioned into several sub-swarms, each of which evolves according to the LPSO-TVAC scheme described in Section

* Corresponding author. E-mail address: [email protected] (Z. Wei).
0096-3003/$ - see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2011.12.032


2.3. At periodic stages in the evolution, some particles migrate from one sub-swarm to another to maintain the diversity of the whole population.

The rest of the paper is organized as follows: Section 2 provides a brief introduction to classical particle swarm optimization and some commonly accepted improved versions of PSO. Section 3 presents the new algorithm based on particle migration. Numerical examples illustrating the efficiency of the proposed algorithm are given in Section 4. Finally, Section 5 discusses the conclusions and considers possible future work.

2. Review of PSO and its variants

2.1. Classic PSO

In the PSO algorithm, each particle represents a potential solution to the specific problem, and the particle swarm is initialized with a population of random individuals in the feasible space. The algorithm searches for the optimal solution by updating the positions of the particles as the search proceeds. The position and velocity of the ith particle are represented by the n-dimensional vectors xi = (xi1, xi2, . . ., xin) and vi = (vi1, vi2, . . ., vin), respectively. The fitness of each particle is evaluated according to the objective function of the specific problem. The best position visited by the ith particle up to the current iteration is recorded as its individual best position pi = (pi1, pi2, . . ., pin); the global best position of the whole swarm obtained so far is recorded as pg = (pg1, pg2, . . ., pgn). At each iteration, the particle modifies its position using its current velocity and its distances from pbest and gbest. The velocity $v_i^{k+1}$ of each particle and its new position $x_i^{k+1}$ are updated according to the following two equations:

$v_i^{k+1} = \omega v_i^k + c_1 r_1 (p_i^k - x_i^k) + c_2 r_2 (p_g^k - x_i^k)$,  (1)

$x_i^{k+1} = x_i^k + v_i^{k+1}$.  (2)
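The update rules in Eqs. (1) and (2) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' code; the function name and array layout are our own choices.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per Eqs. (1)-(2).

    x, v, pbest: arrays of shape (num_particles, n); gbest: array of shape (n,).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # uniform random in (0, 1), drawn per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

Note that if a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is why diversity-preserving mechanisms such as the migration scheme of Section 3 matter.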

Here k is the iteration number and ω is the inertia weight; c1 and c2 are positive constants, known as acceleration coefficients, that determine the relative influence of the cognitive and social components; r1 and r2 are independent uniformly distributed random variables in the range (0, 1). The first part of Eq. (1) represents the impact of the particle's previous velocity on its current one. The second part represents its personal experience. The third part represents the collaborative effect of the particles; it always pulls the particle toward the best solution the swarm has found so far. At each generation, the velocity of each particle is calculated according to Eq. (1), and the position is updated by Eq. (2). Any better position found is stored in memory, so each particle adjusts its position using both its own "flying" experience and the experience of its companions. This means that if a particle finds a promising new position, all the other particles will move closer to it. The process is repeated until a satisfactory solution is found or the predefined number of iterations is reached.

2.2. LPSO

In the PSO algorithm, proper control of global exploration and local exploitation is a crucial issue. Shi and Eberhart [27] introduced the concept of inertia weight into the original version of PSO to balance local and global search during the evolution process. In general, higher values of ω help explore the search space more thoroughly and benefit the global search, while lower values help the local search around the current region. The major aim of this modification is to avoid premature convergence in the early stage of the search and to enhance convergence to the global optimum during the latter stage. The concept of a linearly decreasing inertia weight was introduced in [28] and is given by:

$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{iter_{\max}} \, iter$,  (3)

where iter is the current iteration number and iter_max is the maximum number of allowable iterations. Usually the value of ω is varied between 0.9 and 0.4, so that the particle uses a larger inertia weight during the initial exploration and gradually reduces it as the search proceeds in further iterations.

2.3. LPSO-TVAC

Although the PSO algorithm with linearly decreasing inertia weight (LPSO) can locate a satisfactory solution at a markedly fast speed, its ability to fine-tune the optimum solution is limited, mainly due to the lack of diversity in the latter stage of the evolution process. In population-based optimization methods, the guideline is to encourage the individuals to roam through the entire search space during the early part of the search, without clustering around local optima, while during the later stage convergence towards the global optimum is encouraged [1]. With this in view, a strategy employing time-varying acceleration coefficients has been proposed in the literature [10,29]. The strategy changes the acceleration coefficients c1 and c2 in such a manner that the cognitive component is reduced while the social component is increased as the search proceeds. With a large cognitive component and a small social component in the early part of the process, particles are allowed to roam around the search space instead of clustering around some super particles. With a small cognitive component and a large social component, particles are allowed to converge to the global optimum in the latter part of the optimization process. The modification can be mathematically represented as follows [10,29]:

$c_1 = (c_{1f} - c_{1i}) \frac{iter}{iter_{\max}} + c_{1i}$,  (4)

$c_2 = (c_{2f} - c_{2i}) \frac{iter}{iter_{\max}} + c_{2i}$,  (5)

where c1i, c1f, c2i and c2f are the initial and final values of the cognitive and social acceleration coefficients, respectively; usually c1i = c2f = 2.5 and c1f = c2i = 0.5 [29].

3. Implementation of PSO with particle migration

It should be noted that two key factors affect the performance of the PSO algorithm. One is the proper balance between global exploration and local exploitation; by introducing the linearly decreasing inertia weight and time-varying acceleration coefficients into the original version of PSO, this balance has been remarkably improved. The other is maintaining the diversity of the population. Inspired by migratory behavior in nature, when one particle travels from one swarm to another, it enhances the diversity of the target swarm through the position it acquired in its previous swarm. Moreover, particle migration builds a bridge for information exchange. This communication is reflected in the fact that not only

Fig. 1. The migration scheme.

Fig. 2. The ﬂow chart of MPSO.


the current position but also the pbest of the migrating particle is integrated into the target swarm. The migratory scheme is given in Fig. 1. The MPSO strategy is illustrated in Fig. 2, and its implementation consists of the following steps:

Step 1: Initializing. Sample s points in the feasible space. The population size is s = pm, where p is the number of sub-swarms and m is the size of each sub-swarm. Compute the fitness value f_i of each particle.
Step 2: Grouping. Partition the s particles into p sub-swarms randomly, each containing m individuals. $S_k = \{x_1^k, x_2^k, \ldots, x_m^k\}$ denotes the kth sub-swarm.
Step 3: Evolving. Evolve each sub-swarm S_k using the particle swarm optimization algorithm with linearly decreasing inertia weight and time-varying acceleration coefficients (LPSO-TVAC).
Step 4: Migrating. Perform the migration process if the iteration number is exactly divisible by the designated migration interval.
Step 4.1: Randomly assign each sub-swarm a mutually exclusive index indicating its target sub-swarm.
Step 4.2: Select n migratory particles in each sub-swarm according to fitness value, where n = mr and r is the migration rate of the particle swarm. Tournament selection is employed, and the best particle is never selected.
Step 4.3: Move the migratory particles from their original sub-swarms to their target sub-swarms.
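Steps 4.1–4.3 above can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: we assume fitness is minimized, we realize the "mutually exclusive" target assignment as a random cyclic shift (one simple way to guarantee no sub-swarm targets itself), and we use binary tournaments; all names are our own.

```python
import numpy as np

def migrate(subswarms, fitness, rate=0.4, rng=None):
    """Exchange particles between sub-swarms (Steps 4.1-4.3).

    subswarms: list of p arrays, each of shape (m, n).
    fitness:   list of p arrays of shape (m,); lower is better (assumption).
    Returns a new list of sub-swarms, each still of size m.
    """
    rng = np.random.default_rng() if rng is None else rng
    p, m = len(subswarms), len(subswarms[0])
    n_mig = int(m * rate)
    # Step 4.1: mutually exclusive targets via a random cyclic shift,
    # which guarantees no sub-swarm is its own target.
    shift = int(rng.integers(1, p))
    targets = [(k + shift) % p for k in range(p)]
    # Step 4.2: binary tournament selection, excluding each sub-swarm's best.
    chosen = []
    for k in range(p):
        best = int(np.argmin(fitness[k]))
        candidates = [i for i in range(m) if i != best]
        picked = set()
        while len(picked) < n_mig:
            a, b = rng.choice(candidates, size=2, replace=False)
            picked.add(int(a) if fitness[k][a] <= fitness[k][b] else int(b))
        chosen.append(sorted(picked))
    # Step 4.3: move migrants from their original to their target sub-swarms.
    new = [[subswarms[k][i] for i in range(m) if i not in chosen[k]]
           for k in range(p)]
    for k in range(p):
        for i in chosen[k]:
            new[targets[k]].append(subswarms[k][i])
    return [np.array(s) for s in new]
```

Because every sub-swarm sends and receives the same number n = mr of particles under the permutation of targets, all sub-swarm sizes are preserved across a migration step.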

Table 1
Benchmark test functions.

| Name | Formula | Range | Optimal value |
| Sphere | $f(x) = \sum_{i=1}^{n} x_i^2$ | $x_i \in [-5.12, 5.12]$ | 0 |
| Weighted sphere | $f(x) = \sum_{i=1}^{n} i x_i^2$ | $x_i \in [-5.12, 5.12]$ | 0 |
| Schwefel's | $f(x) = \sum_{i=1}^{n} \sum_{j=1}^{i} x_j^2$ | $x_i \in [-65.536, 65.536]$ | 0 |
| Rosenbrock | $f(x) = \sum_{i=1}^{n-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $x_i \in [-2.048, 2.048]$ | 0 |
| Rastrigin | $f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) \right]$ | $x_i \in [-5.12, 5.12]$ | 0 |
| Griewank | $f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $x_i \in [-600, 600]$ | 0 |
| Schwefel | $f(x) = 418.9829n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{|x_i|}\right)$ | $x_i \in [-500, 500]$ | 0 |
| Ackley | $f(x) = 20 + e - 20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right)$ | $x_i \in [-32.786, 32.786]$ | 0 |
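Several of the benchmark functions in Table 1 have their global optimum value 0 at the origin, which makes them easy to spot-check in code. The following is an illustrative NumPy sketch of four of them (the Schwefel function is omitted because its optimum is not at the origin); function names are our own.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rastrigin(x):
    n = x.size
    return 10 * n + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def griewank(x):
    i = np.arange(1, x.size + 1)          # 1-based dimension indices
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def ackley(x):
    n = x.size
    return (20 + np.e
            - 20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n))
```

Evaluating each of these at the zero vector returns 0 (up to floating-point rounding), matching the "Optimal value" column of Table 1.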

Table 2
The mean fitness values for benchmark functions of PSO, LPSO, LPSO-TVAC and MPSO.

| Function | Dimension | Generation | PSO | LPSO | LPSO-TVAC | MPSO |
| Sphere | 10 | 1000 | 1.2250 | 0 | 0 | 0 |
| Sphere | 20 | 1500 | 10.4144 | 0 | 0 | 0 |
| Sphere | 30 | 2000 | 27.3661 | 0 | 0 | 0 |
| Weighted sphere | 10 | 1000 | 5.5397 | 0 | 0 | 0 |
| Weighted sphere | 20 | 1500 | 100.3120 | 4.5613 | 0.6816 | 0 |
| Weighted sphere | 30 | 2000 | 402.4097 | 30.6184 | 4.8759 | 0.0524 |
| Schwefel's | 10 | 1000 | 535.3144 | 0 | 0 | 0 |
| Schwefel's | 20 | 1500 | 13411.3546 | 661.4250 | 25.7698 | 0 |
| Schwefel's | 30 | 2000 | 58920.6005 | 4363.6868 | 884.7634 | 8.5899 |
| Rosenbrock | 10 | 1000 | 25.7328 | 4.0469 | 1.7815 | 1.0194 |
| Rosenbrock | 20 | 1500 | 207.8258 | 20.5068 | 13.7679 | 6.228 |
| Rosenbrock | 30 | 2000 | 590.3556 | 39.4433 | 32.9951 | 11.6365 |
| Rastrigin | 10 | 1000 | 38.8794 | 3.0346 | 2.7636 | 1.84065 |
| Rastrigin | 20 | 1500 | 137.0605 | 16.6929 | 14.63304 | 7.9915 |
| Rastrigin | 30 | 2000 | 255.0153 | 46.5757 | 37.1872 | 17.7272 |
| Griewank | 10 | 1000 | 5.0169 | 0.0854 | 0.0749 | 0.0504 |
| Griewank | 20 | 1500 | 36.5880 | 0.0307 | 0.0225 | 0.0098 |
| Griewank | 30 | 2000 | 94.8735 | 0.0149 | 0.0170 | 0.0061 |
| Schwefel | 10 | 1000 | 841.1588 | 633.7827 | 320.7916 | 105.6773 |
| Schwefel | 20 | 1500 | 2818.7608 | 1578.6844 | 1044.4540 | 417.1019 |
| Schwefel | 30 | 2000 | 5489.5560 | 2766.0077 | 1923.3555 | 857.0456 |
| Ackley | 10 | 1000 | 8.7200 | 0 | 0 | 0 |
| Ackley | 20 | 1500 | 13.6639 | 0 | 0 | 0 |
| Ackley | 30 | 2000 | 15.8166 | 0 | 0 | 0 |


Step 5: Updating. Each particle's position is updated according to Eqs. (1) and (2), and the evaluation function values are calculated for the updated positions. If a new value is better than the particle's previous pbest, the new position is set as pbest. Similarly, gbest is updated as the best of the pbest values.
Step 6: Stopping criterion. If the convergence criterion is satisfied, stop; otherwise, return to Step 3.

4. Numerical results and analysis

In this section, several well-known nonlinear benchmark functions commonly used in the literature are used to test the performance of the new particle swarm optimization algorithm. These functions are difficult for any search algorithm to optimize because of their many local minima, which can produce premature convergence; in all cases a single global optimum exists. The functions, the admissible ranges of the variables and the optimal values are summarized in Table 1. To evaluate the performance of the proposed algorithm, the classic PSO, PSO with linearly decreasing inertia weight (LPSO) and LPSO with time-varying acceleration coefficients (LPSO-TVAC) are used for comparison. In classic PSO, the inertia weight is 0.9 and both acceleration coefficients are 2.0. In LPSO, the inertia weight decreases linearly from 0.9 to 0.4, as recommended by Shi and Eberhart [27], and the acceleration coefficients are set to 2.0. In LPSO-TVAC, the cognitive acceleration coefficient decreases linearly from 2.5 to 0.5, while the social acceleration coefficient increases linearly from 0.5 to 2.5. In the new algorithm, the following default values are used: p = 5, where p is the number of sub-swarms; r = 0.4, where r is the migration rate of each sub-swarm; and the migration interval is set to 5.
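The LPSO-TVAC parameter schedules used above, i.e. the linearly decreasing inertia weight of Eq. (3) together with the linearly varying acceleration coefficients, can be computed per iteration as in the following sketch; the function name and defaults (taken from the settings quoted in this section) are our own packaging.

```python
def lpso_tvac_params(it, it_max,
                     w_max=0.9, w_min=0.4,
                     c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Inertia weight and acceleration coefficients at iteration `it`.

    Eq. (3): w decreases linearly from w_max to w_min.
    c1 decreases linearly from c1_i to c1_f (cognitive component);
    c2 increases linearly from c2_i to c2_f (social component).
    """
    frac = it / it_max
    w = w_max - (w_max - w_min) * frac
    c1 = (c1_f - c1_i) * frac + c1_i
    c2 = (c2_f - c2_i) * frac + c2_i
    return w, c1, c2
```

At the first iteration this yields (0.9, 2.5, 0.5) and at the last iteration (0.4, 0.5, 2.5), matching the ranges quoted for LPSO and LPSO-TVAC.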

Fig. 3. Evolution of logarithmic average ﬁtness of Rosenbrock function for PSO, LPSO, LPSO–TVAC and MPSO.

Fig. 4. Evolution of logarithmic average ﬁtness of Rastrigin function for PSO, LPSO, LPSO–TVAC and MPSO.

Fig. 5. Evolution of logarithmic average ﬁtness of Griewank function for PSO, LPSO, LPSO–TVAC and MPSO.

M. Gang et al. / Applied Mathematics and Computation 218 (2012) 6620–6626

6625

Fig. 6. Evolution of logarithmic average ﬁtness of Schwefel function for PSO, LPSO, LPSO–TVAC and MPSO.

Following the settings in [28], for each benchmark function three different dimension sizes are tested, namely 10, 20 and 30, and the corresponding maximum numbers of generations are set to 1000, 1500 and 2000. The population size for all tests is set to 40. The mean fitness values of the best particle found over 50 runs are listed in Table 2. It is easy to see that for the Sphere and Ackley functions, the LPSO, LPSO-TVAC and MPSO algorithms find the global optima rapidly and perform better than classic PSO. For the other benchmark functions, the proposed algorithm likewise shows better performance than classic PSO, LPSO and LPSO-TVAC. Figs. 3–6 show the evolution of the logarithmic average fitness of the 20-dimensional benchmark functions with 40 particles for PSO, LPSO, LPSO-TVAC and MPSO, respectively. The results are consistent with those in Table 2. From the shapes of the curves in the figures, it is easy to see that the classic PSO algorithm converges quickly in all cases and slows down when approaching an optimum, exhibiting significant prematurity. The proposed MPSO outperforms classic PSO, LPSO and LPSO-TVAC in the sense that it has a strong ability to avoid local optima: it effectively prevents premature convergence and significantly enhances the convergence rate and accuracy during the evolutionary process. Through the particle migration strategy, MPSO retains more global search ability at the end of the evolution, which is required to jump out of local optima in some cases. The MPSO algorithm thus has not only good local exploitation ability but also favorable global exploration ability.

5. Conclusion

In this paper, a novel particle swarm optimization algorithm which draws inspiration from migratory behavior in nature is presented, together with the general framework of this new variant of PSO.
This new method enhances diversity through information exchange. The proposed MPSO algorithm is then compared with PSO, LPSO and LPSO-TVAC through empirical simulations on well-known benchmark functions from the standard literature. The results show that MPSO is a promising method with good global convergence performance. Further work is underway on parameter selection in MPSO for high-dimensional problems under equality and inequality constraints.

Acknowledgements

This work was sponsored by the National Natural Science Foundation of China (Nos. 50839004 and 50979082).

References

[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
[2] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[3] R.C. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proceedings of the IEEE Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 81–86.
[4] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[5] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization, Swarm Intell. 1 (2007) 33–57.
[6] I.C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Inform. Process. Lett. 85 (2003) 317–325.
[7] K.E. Parsopoulos, M.N. Vrahatis, Parameter selection and adaptation in unified particle swarm optimization, Math. Comput. Model. 46 (2007) 198–213.
[8] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 101–106.
[9] Y. Shi, R.C. Eberhart, Particle swarm optimization with fuzzy adaptive inertia weights, in: Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, 2001, pp. 101–106.
[10] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (3) (2004) 240–255.
[11] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2) (2007) 1205–1213.
[12] K.E. Parsopoulos, M.N. Vrahatis, Parameter selection and adaptation in unified particle swarm optimization, Math. Comput. Model. 46 (1) (2007) 198–213.
[13] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (1) (2007) 89–99.
[14] D. Chen, C. Zhao, Particle swarm optimization with adaptive population size and its application, Appl. Soft Comput. 9 (1) (2009) 39–48.
[15] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Inform. Sci. 181 (20) (2011) 4515–4538.
[16] P.J. Angeline, Using selection to improve particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Anchorage, 1998, pp. 84–89.
[17] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (3) (2004) 225–239.
[18] J. Stefan, M. Martin, A hierarchical particle swarm optimizer and its adaptive variant, IEEE Trans. Syst. Man Cybern. 35 (6) (2005) 1272–1282.
[19] Y. Jiang, T. Hu, C.C. Huang, X. Wu, An improved particle swarm optimization algorithm, Appl. Math. Comput. 193 (1) (2007) 231–239.
[20] W. Gao, H. Zhao, J. Xu, et al., A dynamic mutation PSO algorithm and its application in the neural networks, in: Proceedings of the First International Conference on Intelligent Networks and Intelligent Systems, 2008, pp. 103–106.
[21] Q. Kang, L. Wang, Q. Wu, A novel ecological particle swarm optimization algorithm and its population dynamics analysis, Appl. Math. Comput. 205 (1) (2008) 61–72.
[22] Y. Zhao, W. Zu, H. Zeng, A modified particle swarm optimization via particle visual modeling analysis, Comput. Math. Appl. 57 (11) (2009) 2022–2029.
[23] X. Shi, Y. Lu, C. Zhou, H. Lee, W. Lin, Y. Liang, Hybrid evolutionary algorithms based on PSO and GA, in: Proceedings of the IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, pp. 2393–2399.
[24] X.H. Wang, J.J. Li, Hybrid particle swarm optimization with simulated annealing, in: Proceedings of the Third International Conference on Machine Learning and Cybernetics, Shanghai, 2004, pp. 2402–2405.
[25] A. Esmin, G. Lambert-Torres, A.C. Zambroni, A hybrid particle swarm optimization applied to loss power minimization, IEEE Trans. Power Syst. 20 (2) (2005) 859–866.
[26] X.H. Shi, Y.C. Liang, H.P. Lee, C. Lu, L.M. Wang, An improved GA and a novel PSO-GA-based hybrid algorithm, Inform. Process. Lett. 93 (5) (2005) 255–261.
[27] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Congress on Evolutionary Computation, Piscataway, NJ, 1998, pp. 69–73.
[28] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, Piscataway, NJ, 1999, pp. 1945–1950.
[29] P.K. Tripathi, S. Bandyopadhyay, S.K. Pal, Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients, Inform. Sci. 177 (22) (2007) 5033–5049.

Contents lists available at SciVerse ScienceDirect

Applied Mathematics and Computation journal homepage: www.elsevier.com/locate/amc

A novel particle swarm optimization algorithm based on particle migration Ma Gang, Zhou Wei ⇑, Chang Xiaolin State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072, China

a r t i c l e

i n f o

Keywords: Particle swarm optimization Global optimization Time varying acceleration coefﬁcients Migratory behavior

a b s t r a c t Inspired by the migratory behavior in the nature, a novel particle swarm optimization algorithm based on particle migration (MPSO) is proposed in this work. In this new algorithm, the population is randomly partitioned into several sub-swarms, each of which is made to evolve based on particle swarm optimization with time varying inertia weight and acceleration coefﬁcients (LPSO-TVAC). At periodic stage in the evolution, some particles migrate from one complex to another to enhance the diversity of the population and avoid premature convergence. It further improves the ability of exploration and exploitation. Simulations for benchmark test functions illustrate that the proposed algorithm possesses better ability to ﬁnd the global optima than other variants and is an effective global optimization tool. Ó 2011 Elsevier Inc. All rights reserved.

1. Introduction Particle swarm optimization (PSO) is a population-based stochastic, heuristic optimization algorithm with inherent parallelism, ﬁrstly introduced in 1995 by Kennedy and Eberhart [1,2]. It is a member of the wild category of swarm intelligence, and draw inspiration from the simpliﬁed animal social behaviors, such as bird ﬂocking, ﬁsh schooling, etc. In the PSO, each individual is treated as a volume-less point, which referred to as particle in the multidimensional search space. The population is called as swarm, and the trajectory of each particle in the search space is adjusted by dynamically altering its velocity. These particles ﬂy through problem space and have two essential reasoning capabilities: their memory of their own best position and knowledge of the global or their neighborhood’s best. Since the introduction of PSO, it has attracted comprehensive attention due to its effectiveness and robustness in the global optimization research ﬁeld, as well as simplicity of implementation [3,4]. As a swarm intelligence algorithm, some researchers have noted a tendency for the swarm to converge prematurely on local optima [5], especially in complex multi-peak-search problems. In view of the shortcoming of the standard PSO algorithm, a few variants of the algorithm have been suggested through empirical simulations over the past decade, some have resulted in improved general performance, and some have improved performance on particular kinds of problems. These variants can be classiﬁed into several groups as: parameter selecting [6,7], integration of its self-adaptation [8–15], evolution strategy [16–22] and integrating with other intelligent optimizing methods [23–26]. For a detailed review of the particle swarm optimization and its different variants, readers are encouraged to refer to the review articles written by Eberhart and Shi [3] and Poli et al. [5]. 
Inspired by migratory behavior in the nature, we propose a novel particle swarm optimization based on particle migration. In the new algorithm, a population of particles is randomly sampled from the feasible space. Then the population is randomly partitioned into several sub-swarms, each of which is made to evolve based on LPSO-TVAC described in Section

⇑ Corresponding author. E-mail address: [email protected] (Z. Wei). 0096-3003/$ - see front matter Ó 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2011.12.032

M. Gang et al. / Applied Mathematics and Computation 218 (2012) 6620–6626

6621

2.3. At periodic stages in the evolution, some particles immigrate from one complex to another to maintain the diversity of the whole population. The rest part of the paper is organized as follows: Section 2 provides a brief introduction of the classical particle swarm optimization and some commonly accepted improved version of PSO. Section 3 presents the new algorithm based on particle migration. Numerical examples used to illustrate the efﬁciency of the proposed algorithm are given in Section 4. Finally, Section 5 discuss the test conclusion and consider possible future work. 2. Review of PSO and its variants 2.1. Classic PSO In the PSO algorithm, each particle represents a potential solution for the speciﬁc issue, and the particle swarm is initialized with a population of random individuals in the feasible space. The algorithm searches the optimal solution by updating the position of the particle as search proceeds. The position and velocity of ith particle is represented by n-dimensional vector xi = (xi1, xi2, . . . , xin) and vi = (vi1, vi2, . . . , vin), respectively. The ﬁtness of each particle is evaluated according to the objective function of speciﬁc issues. The best preciously visited position of the ith particle achieved up to the current iteration is indicated as its individual best position pi = (pi1, pi2, . . . , pin). The global best position of the whole swarm obtained so far is indicated as pg = (pg1, pg2, . . . , pgn). At each iteration, the particle tries to modify its position using the current velocity and the distance from pbest and gbest. The velocity v ikþ1 of each particle and its new position xikþ1 manipulated according to the following two equations:

v kþ1 ¼ xv ki þ c1 r 1 ðpki xki Þ þ c2 r 2 ðpkg xki Þ; i ¼ xki þ v kþ1 : xkþ1 i i

ð1Þ ð2Þ

Here k is the iteration number; x is the inertia weight. c1, c2 are positive constants known as acceleration coefﬁcients, determine the relative inﬂuence of the cognition and social components; r1, r2 are independently uniformly distributed random variables in the range (0, 1). The ﬁrst part of Eq. (1) represents the impact of the previous velocity of particle on its current one. The second part represents the personal experience. The third part represents the collaborative effect of the particles and it always pulls the particle to the global best solution the swarm has found so far. At each generation, the velocity of each particle is calculated according to Eq. (1), and the position is updated by Eq. (2). Each time, any better position is stored in memory. Each particle adjusts its position by its own ‘‘ﬂying’’ experience and the experience of its companions. This means that if a particle gets a promising new position, all the other particles will move closer to it. This process is repeated until satisfactory solution is found or the predeﬁned number of iterations is met. 2.2. LPSO In the PSO algorithm, proper control of global exploration and local exploitation is a crucial issue. Shi and Eberhart [27] introduced the concept of inertia weight to the original version of PSO, to balance the local and global search during the evolution process. In general, the higher values of x help in exploring the search space more thoroughly in the process and beneﬁt the global search, while lower values help in the local search around the current search space. The major concern of this modiﬁcation is to avoid the premature convergence in the early stage of the search and to enhance convergence to the global optimum solution during the latter stage of the search. The concept of linearly decreasing inertia weight was introduced in [28] and is given by:

x ¼ xmax

xmax xmin itermax

iter;

ð3Þ

where iter is the current iteration number and itermax is the maximum allowable number of iterations. Usually the value of ω is varied between 0.9 and 0.4, so that the particle uses a larger inertia weight during the initial exploration and gradually reduces it as the search proceeds through further iterations.

2.3. LPSO-TVAC

Although the PSO algorithm with a linearly decreasing inertia weight (LPSO) can locate a satisfactory solution at a markedly fast speed, its ability to fine-tune the optimum solution is limited, mainly due to the lack of diversity in the latter stage of the evolution process. In population-based optimization methods, the guideline is to encourage the individuals to roam through the entire search space during the early part of the search, without clustering around local optima, while during the later stage convergence towards the global optima is encouraged [1]. With this in view, a strategy employing time-varying acceleration coefficients has been proposed in the literature [10,29]. It is implemented by changing the acceleration coefficients c1 and c2 in such a manner that the cognitive component is reduced while the social component is increased as the search proceeds. With a large cognitive component and a small social component in the early part of the process, particles are allowed to roam around the search space instead of clustering around some super particles. With a small cognitive component and a large social component, particles are allowed to converge to the global optima in the latter part of the optimization process. The modification can be mathematically represented as follows [10,29]:

c1 = (c1f - c1i) · iter/itermax + c1i,
c2 = (c2f - c2i) · iter/itermax + c2i,

where c1i, c1f, c2i and c2f are the initial and final values of the cognitive and social acceleration factors, respectively; usually c1i = c2f = 2.5 and c1f = c2i = 0.5 [29].

3. Implementation of PSO with particle migration

It should be noted that two key factors affect the performance of the PSO algorithm. One is the proper balance between global exploration and local exploitation. By introducing the linearly decreasing inertia weight and the time-varying acceleration coefficients into the original version of PSO, the performance of PSO has been remarkably improved. The other is maintaining the diversity of the population. Inspired by migratory behavior in nature, when a particle travels from one swarm to another it enhances the diversity of the target swarm through the position it acquired in its previous swarm. Moreover, particle migration builds a bridge for information exchange: not only the current position but also the pbest of the migrating particle are integrated into the target swarm. The migration scheme is given in Fig. 1.

Fig. 1. The migration scheme.

Fig. 2. The flow chart of MPSO.

The MPSO strategy is illustrated in Fig. 2, and its implementation consists of the following steps:

Step 1: Initializing. Sample s points in the feasible space, where the population size s = pm, p is the number of sub-swarms and m is the size of each sub-swarm. Compute the fitness value fi of each particle.
Step 2: Grouping. Partition the s particles into p sub-swarms randomly, each containing m individuals. Sk = {x1k, x2k, ..., xmk} denotes the kth sub-swarm.
Step 3: Evolving. Evolve each sub-swarm Sk using the particle swarm optimization algorithm with the linearly decreasing inertia weight and time-varying acceleration coefficients strategy (LPSO-TVAC).
Step 4: Migrating. Perform the migration process if the iteration number is exactly divisible by the designated migration interval.
Step 4.1: Randomly assign each sub-swarm a mutually exclusive index indicating its target sub-swarm.
Step 4.2: Select n migratory particles in each sub-swarm according to fitness, where n = mr and r is the migration rate of the sub-swarm. Tournament selection is employed, and the best particle must not be selected.
Step 4.3: Move the migratory particles from the original sub-swarm to the target sub-swarm.
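As a concrete illustration, Steps 2 and 4 above (grouping and migration) can be sketched in Python. This is a sketch of the described scheme, not the authors' implementation; the dict-based particle layout, the `fitness` field and the tournament size of 2 are assumptions made here for illustration, and minimization is assumed (as for the benchmark functions used later).

```python
import random

def migrate(subswarms, rate, tournament_size=2):
    """One migration step of MPSO (illustrative sketch; assumes p >= 2).

    subswarms: list of p sub-swarms; each particle is a dict carrying its
    position 'x', its personal best 'pbest' and a 'fitness' value. Both the
    position and the pbest travel with the migrating particle, as the text
    describes.
    """
    p = len(subswarms)
    m = len(subswarms[0])
    n = max(1, int(m * rate))  # number of migrants per sub-swarm, n = m*r

    # Step 4.1: mutually exclusive target indices (a permutation with no
    # sub-swarm mapped to itself).
    targets = list(range(p))
    while any(t == i for i, t in enumerate(targets)):
        random.shuffle(targets)

    # Step 4.2: tournament selection of migrants; the best particle of each
    # sub-swarm is never allowed to leave.
    migrants = []
    for swarm in subswarms:
        best = min(swarm, key=lambda q: q['fitness'])
        pool = [q for q in swarm if q is not best]
        chosen = []
        for _ in range(n):
            contenders = random.sample(pool, min(tournament_size, len(pool)))
            winner = min(contenders, key=lambda q: q['fitness'])
            pool.remove(winner)
            chosen.append(winner)
        migrants.append(chosen)

    # Step 4.3: move the migrants to their target sub-swarms. Because the
    # targets form a permutation, every sub-swarm loses and gains exactly
    # n particles, so the sub-swarm sizes are preserved.
    for i, chosen in enumerate(migrants):
        for q in chosen:
            subswarms[i].remove(q)
            subswarms[targets[i]].append(q)
    return subswarms
```

Note that the migration only reshuffles particles; Step 3 (LPSO-TVAC evolution) runs independently inside each sub-swarm between migration events.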

Table 1
Benchmark test functions.

Name              Formula                                                                  Range                      Optimal value
Sphere            f(x) = Σ_{i=1}^{n} x_i^2                                                 x_i ∈ [-5.12, 5.12]        0
Weighted sphere   f(x) = Σ_{i=1}^{n} i·x_i^2                                               x_i ∈ [-5.12, 5.12]        0
Schwefel's        f(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                                   x_i ∈ [-65.536, 65.536]    0
Rosenbrock        f(x) = Σ_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]              x_i ∈ [-2.048, 2.048]      0
Rastrigin         f(x) = 10n + Σ_{i=1}^{n} [x_i^2 - 10 cos(2πx_i)]                         x_i ∈ [-5.12, 5.12]        0
Griewank          f(x) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i/√i) + 1          x_i ∈ [-600, 600]          0
Schwefel          f(x) = 418.9829n - Σ_{i=1}^{n} x_i sin(√|x_i|)                           x_i ∈ [-500, 500]          0
Ackley            f(x) = 20 + e - 20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2πx_i))   x_i ∈ [-32.786, 32.786]   0
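For concreteness, two of the entries in Table 1 can be transcribed directly from their formulas; the minimal Python below (independent of any particular PSO code) shows the Rastrigin and Griewank functions, both of which attain their global minimum f(x*) = 0 at the origin.

```python
import math

def rastrigin(x):
    """Rastrigin: f(x) = 10n + sum(x_i^2 - 10*cos(2*pi*x_i)), x_i in [-5.12, 5.12]."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def griewank(x):
    """Griewank: f(x) = (1/4000)*sum(x_i^2) - prod(cos(x_i/sqrt(i))) + 1, x_i in [-600, 600]."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):  # indices start at 1 in the formula
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0
```

Both are highly multimodal: the cosine terms superimpose a regular grid of local minima on an underlying bowl, which is what makes them effective tests of premature convergence.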

Table 2
The mean fitness values for benchmark functions of PSO, LPSO, LPSO-TVAC and MPSO.

Function          Dimension   Generation   PSO           LPSO         LPSO-TVAC    MPSO
Sphere            10          1000         1.2250        0            0            0
                  20          1500         10.4144       0            0            0
                  30          2000         27.3661       0            0            0
Weighted sphere   10          1000         5.5397        0            0            0
                  20          1500         100.3120      4.5613       0.6816       0
                  30          2000         402.4097      30.6184      4.8759       0.0524
Schwefel's        10          1000         535.3144      0            0            0
                  20          1500         13411.3546    661.4250     25.7698      0
                  30          2000         58920.6005    4363.6868    884.7634     8.5899
Rosenbrock        10          1000         25.7328       4.0469       1.7815       1.0194
                  20          1500         207.8258      20.5068      13.7679      6.228
                  30          2000         590.3556      39.4433      32.9951      11.6365
Rastrigin         10          1000         38.8794       3.0346       2.7636       1.84065
                  20          1500         137.0605      16.6929      14.63304     7.9915
                  30          2000         255.0153      46.5757      37.1872      17.7272
Griewank          10          1000         5.0169        0.0854       0.0749       0.0504
                  20          1500         36.5880       0.0307       0.0225       0.0098
                  30          2000         94.8735       0.0149       0.0170       0.0061
Schwefel          10          1000         841.1588      633.7827     320.7916     105.6773
                  20          1500         2818.7608     1578.6844    1044.4540    417.1019
                  30          2000         5489.5560     2766.0077    1923.3555    857.0456
Ackley            10          1000         8.7200        0            0            0
                  20          1500         13.6639       0            0            0
                  30          2000         15.8166       0            0            0


Step 5: Updating. The particle velocities and positions are updated according to Eqs. (1) and (2), and the values of the evaluation function are calculated for the updated positions. If the new value is better than the previous pbest, the new position is set as pbest. Similarly, the gbest is updated as the best of the pbests.
Step 6: Stopping criterion. If the convergence criterion is satisfied, stop; otherwise, return to Step 3.

4. Numerical results and analysis

In this section, several well-known nonlinear benchmark functions that are commonly used in the literature are employed to test the performance of the new particle swarm optimization algorithm. These functions are difficult to optimize for any search algorithm because of their many local minima, which can produce premature convergence; in all cases, a single global optimum exists. The functions, the admissible ranges of the variables and the optimal values are summarized in Table 1. To evaluate the performance of the proposed algorithm, the classic PSO, the linearly decreasing inertia weight PSO (LPSO) and LPSO with time-varying acceleration coefficients (LPSO-TVAC) are used for comparison. In the classic PSO, the inertia weight is 0.9 and the acceleration coefficients are both 2.0. In LPSO, the inertia weight is as recommended by Shi and Eberhart [27], decreasing linearly from 0.9 to 0.4, and the acceleration coefficients are set to 2.0. In LPSO-TVAC, the cognitive acceleration coefficient decreases linearly from 2.5 to 0.5, while the social acceleration coefficient increases linearly from 0.5 to 2.5. In the new algorithm, the following default values are used: p = 5, where p is the number of sub-swarms; r = 0.4, where r is the migration rate of each sub-swarm; and the migration interval is set to 5.
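The coefficient schedules and the per-particle update used in these comparisons follow Eqs. (1)-(4) directly; the Python sketch below uses the default values quoted above. The function names and the dict-based particle layout are assumptions made here for illustration.

```python
import random

def schedules(it, it_max, w_max=0.9, w_min=0.4,
              c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Inertia weight of Eq. (3) and acceleration coefficients of Eq. (4)."""
    w = w_max - (w_max - w_min) / it_max * it
    c1 = (c1f - c1i) * it / it_max + c1i   # cognitive: 2.5 -> 0.5
    c2 = (c2f - c2i) * it / it_max + c2i   # social:    0.5 -> 2.5
    return w, c1, c2

def update(particle, gbest, w, c1, c2):
    """One velocity and position update per Eqs. (1) and (2)."""
    for d in range(len(particle['x'])):
        r1, r2 = random.random(), random.random()
        particle['v'][d] = (w * particle['v'][d]
                            + c1 * r1 * (particle['pbest'][d] - particle['x'][d])
                            + c2 * r2 * (gbest[d] - particle['x'][d]))
        particle['x'][d] += particle['v'][d]
```

At it = 0 this gives (w, c1, c2) = (0.9, 2.5, 0.5), and at it = it_max it gives (0.4, 0.5, 2.5), matching the exploration-to-exploitation transition described in Sections 2.2 and 2.3.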

Fig. 3. Evolution of logarithmic average ﬁtness of Rosenbrock function for PSO, LPSO, LPSO–TVAC and MPSO.

Fig. 4. Evolution of logarithmic average ﬁtness of Rastrigin function for PSO, LPSO, LPSO–TVAC and MPSO.

Fig. 5. Evolution of logarithmic average ﬁtness of Griewank function for PSO, LPSO, LPSO–TVAC and MPSO.


Fig. 6. Evolution of logarithmic average ﬁtness of Schwefel function for PSO, LPSO, LPSO–TVAC and MPSO.

Following the settings in [28], for each benchmark function three different dimension sizes are tested, namely 10, 20 and 30, and the corresponding maximum numbers of generations are set to 1000, 1500 and 2000, respectively. The population size for all tests is set to 40. The mean fitness values of the best particle found over 50 runs of the four algorithms are listed in Table 2. It is easy to see that for the Sphere and Ackley functions, the LPSO, LPSO-TVAC and MPSO algorithms find the global optima rapidly and perform better than the classic PSO. For the other benchmark functions, the proposed algorithm likewise shows better performance than the classic PSO, LPSO and LPSO-TVAC.

Figs. 3-6 show the evolution of the logarithmic average fitness of the 20-dimensional benchmark functions with 40 particles for PSO, LPSO, LPSO-TVAC and MPSO, respectively. The results are consistent with those shown in Table 2. From the shapes of the curves in the figures, it is easy to see that the classic PSO algorithm converges quickly in all cases and slows down when approaching an optimum, which exhibits significant premature convergence. The proposed MPSO is found to outperform the classic PSO, LPSO and LPSO-TVAC in the sense that it has a strong ability to avoid local optima; it can effectively prevent premature convergence and significantly enhance the convergence rate and accuracy in the evolutionary process. By using the particle migration strategy, MPSO retains more global search ability at the end of the evolution, which is required to jump out of local optima in some cases. The MPSO algorithm thus has not only good local exploitation ability but also favorable global exploration ability.

5. Conclusion

In this paper, a novel particle swarm optimization algorithm that draws inspiration from migratory behavior in nature is presented, together with the general framework of this new variant of PSO.
This new method enhances diversity through information exchange among sub-swarms. The new algorithm, MPSO, is then compared with PSO, LPSO and LPSO-TVAC through empirical simulations on well-known benchmark functions from the standard literature. The results show that MPSO is a promising method with good global convergence performance. Further work is underway on parameter selection in MPSO for high-dimensional problems under equality and inequality constraints.

Acknowledgements

This work was sponsored by the National Natural Science Foundation of China (Nos. 50839004 and 50979082).

References

[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942-1948.
[2] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, 1995, pp. 39-43.
[3] R.C. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proceedings of the IEEE Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 81-86.
[4] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58-73.
[5] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization, Swarm Intell. 1 (2007) 33-57.
[6] I.C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Inform. Process. Lett. 85 (2003) 317-325.
[7] K.E. Parsopoulos, M.N. Vrahatis, Parameter selection and adaptation in unified particle swarm optimization, Math. Comput. Model. 46 (2007) 198-213.
[8] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 101-106.
[9] Y. Shi, R.C. Eberhart, Particle swarm optimization with fuzzy adaptive inertia weights, in: Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, 2001, pp. 101-106.
[10] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (3) (2004) 240-255.
[11] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2) (2007) 1205-1213.
[12] K.E. Parsopoulos, M.N. Vrahatis, Parameter selection and adaptation in unified particle swarm optimization, Math. Comput. Model. 46 (1) (2007) 198-213.


[13] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (1) (2007) 89-99.
[14] D.B. Chen, C.X. Zhao, Particle swarm optimization with adaptive population size and its application, Appl. Soft Comput. 9 (1) (2009) 39-48.
[15] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Inform. Sci. 181 (20) (2011) 4515-4538.
[16] P.J. Angeline, Using selection to improve particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Anchorage, 1998, pp. 84-89.
[17] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (3) (2004) 225-239.
[18] S. Janson, M. Middendorf, A hierarchical particle swarm optimizer and its adaptive variant, IEEE Trans. Syst. Man Cybern. 35 (6) (2005) 1272-1282.
[19] Y. Jiang, T. Hu, C.C. Huang, X. Wu, An improved particle swarm optimization algorithm, Appl. Math. Comput. 193 (1) (2007) 231-239.
[20] W. Gao, H. Zhao, J. Xu, et al., A dynamic mutation PSO algorithm and its application in the neural networks, in: Proceedings of the First International Conference on Intelligent Networks and Intelligent Systems, 2008, pp. 103-106.
[21] Q. Kang, L. Wang, Q.D. Wu, A novel ecological particle swarm optimization algorithm and its population dynamics analysis, Appl. Math. Comput. 205 (1) (2008) 61-72.
[22] Y. Zhao, W. Zu, H. Zeng, A modified particle swarm optimization via particle visual modeling analysis, Comput. Math. Appl. 57 (11) (2009) 2022-2029.
[23] X. Shi, Y. Lu, C. Zhou, H. Lee, W. Lin, Y. Liang, Hybrid evolutionary algorithms based on PSO and GA, in: Proceedings of the IEEE Congress on Evolutionary Computation 2003, Canberra, Australia, 2003, pp. 2393-2399.
[24] X.H. Wang, J.J. Li, Hybrid particle swarm optimization with simulated annealing, in: Proceedings of the Third International Conference on Machine Learning and Cybernetics, Shanghai, 2004, pp. 2402-2405.
[25] A. Esmin, G. Lambert-Torres, A.C. Zambroni, A hybrid particle swarm optimization applied to loss power minimization, IEEE Trans. Power Syst. 20 (2) (2005) 859-866.
[26] X.H. Shi, Y.C. Liang, H.P. Lee, C. Lu, L.M. Wang, An improved GA and a novel PSO-GA-based hybrid algorithm, Inform. Process. Lett. 93 (5) (2005) 255-261.
[27] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Congress on Evolutionary Computation, Piscataway, NJ, 1998, pp. 69-73.
[28] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, Piscataway, NJ, 1999, pp. 1945-1950.
[29] P.K. Tripathi, S. Bandyopadhyay, S.K. Pal, Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients, Inform. Sci. 177 (22) (2007) 5033-5049.