2012 Third International Conference on Intelligent Control and Information Processing July 15-17, 2012 - Dalian, China

BE-ABC: Hybrid Artificial Bee Colony Algorithm with Balancing Evolution Strategy

Bai Li, Ya Li

Abstract—The artificial bee colony algorithm (ABC), a heuristic algorithm motivated by the foraging behavior of bee swarms, is competent to solve complex engineering problems yet imperfect in its exploitation process. For higher convergence speed and better searching accuracy, we propose a balancing evolution strategy as an amendment, which seeks to balance local exploitation and global exploration. A variable crossover & mutation method and a clustering selection approach mainly constitute this strategy. We elaborate on the mechanism of BE-ABC and present its convergence proof, adopting the finite covering theorem from mathematical analysis. Simulations on several benchmark functions preliminarily confirm that BE-ABC converges more efficiently than some relevant algorithms.

I. INTRODUCTION

For most engineering problems, obtaining the global optimum is enormously difficult. To avoid an inefficient enumerating process, several intelligence algorithms have been developed and investigated, such as the genetic algorithm (GA) [1], the differential evolution algorithm (DE) [2], particle swarm optimization (PSO) [3], ant colony optimization (ACO) [4], as well as the artificial bee colony algorithm (ABC) [5]. ABC, motivated by the foraging behavior of bees, conducts both local exploitation and global exploration in each iteration; it is expert in exploration but imperfect in its exploitation process. Following his pioneering work [5], Dervis Karaboga applied ABC to constrained optimization problems in 2007 [6] and then to the design of IIR filters [7]. To make up for the original deficiencies of ABC, modified or improved variants have been introduced in recent years. Anan Banharnsakun proposed best-so-far ABC for scheduling problems [8]. Haijun Ding adopted the Boltzmann selection method as a substitute for the original roulette strategy, forming B-ABC [9]. S. N. Omkar proposed vector evaluated ABC (VE-ABC) for multi-objective design optimization [10]. Guoqiang Li recently proposed I-ABC for numerical optimization and confirmed its superiority over gbest-guided ABC (G-ABC) as well as prediction & selection ABC (PS-ABC) [11]. Weibin Xu proposed a modified ABC (M-ABC) by abandoning the roulette strategy [12]. These modified ABCs have advantages over the original one in their respective fields of interest. However, there has not been a universal solution that remains highly effective for all optimization problems, and the strategies proposed to revise ABC so far have seldom probed deeply into its core defects. In this paper, a variable crossover & mutation method and a clustering selection approach constitute our balancing evolution strategy, which aims to balance local exploitation and global exploration as far as possible. We first analyze the defects of ABC and then elaborate on the mechanism of BE-ABC. Competitive experiments are conducted to verify the superiority of BE-ABC, and the simulation results are discussed in detail. Our results preliminarily confirm that BE-ABC achieves better searching accuracy in comparison with I-ABC and PS-ABC of [11].

II. PRINCIPLES OF ABC

In this heuristic algorithm, the bee colony consists of three categories: employed bees, onlookers and scouts [5]. Each food source in the searching space corresponds to a solution of the objective optimization problem, and the quality of each food source is evaluated by substituting that solution into the objective function. Initially, a randomly distributed population of ESN food sources is generated by Eq. (1):

X_i^j = X_min^j + rand(0,1) · (X_max^j − X_min^j)   (1)

Each solution X_i (i = 1, 2, ..., ESN) is a vector of D dimensions, i.e. j ∈ {1, 2, ..., D}, where D is the dimension of the objective problem and X_min, X_max are its boundaries. The swarm then repeats the iterating process up to MCN times for the searching course. In each iteration, employed bees refresh their positions by Eq. (2) and evaluate the corresponding function values afterwards:

V_i^j = X_i^j + [2 · rand(0,1) − 1] · (X_i^j − X_k^j)   (2)

In this equation, k ∈ {1, 2, ..., ESN} with k ≠ i at all times. For each employed bee, if the refreshed position is better than the previous one, it memorizes the new position; otherwise its prior memory is kept. This method, known as greedy selection, has been widely adopted in heuristic algorithms. Once all employed bees have finished their exploration, a probability P_i is calculated for each of them, and a second selection method, roulette selection, is applied as revealed in Eq. (3) and (4). The probability P_i is calculated as follows:
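As an illustration, the initialization of Eq. (1) and the employed-bee update with greedy selection of Eq. (2) can be sketched as follows (a minimal NumPy sketch of standard ABC; the function names and the maximizing fitness callback are our own illustrative choices):

```python
import numpy as np

def init_population(esn, dim, x_min, x_max, rng):
    # Eq. (1): ESN uniformly random food sources within the search bounds
    return x_min + rng.random((esn, dim)) * (x_max - x_min)

def employed_bee_phase(food, fitness_of, rng):
    # One employed-bee pass: perturb a single dimension relative to a
    # random partner k != i, then apply greedy selection (Eq. (2)).
    esn, dim = food.shape
    for i in range(esn):
        k = rng.choice([m for m in range(esn) if m != i])  # partner index
        j = rng.integers(dim)                              # dimension to change
        phi = 2.0 * rng.random() - 1.0                     # phi in [-1, 1]
        v = food[i].copy()
        v[j] = food[i, j] + phi * (food[i, j] - food[k, j])
        if fitness_of(v) > fitness_of(food[i]):            # greedy selection
            food[i] = v
    return food
```

Greedy selection guarantees that no individual's fitness ever decreases across a pass, which is the monotonicity the convergence proof later relies on.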

Manuscript received April 9, 2012. This work was supported in part by SRF for ROCS, SEM, and sponsored by the School of Advanced Engineering, Beihang University. Bai Li is with the School of Advanced Engineering, Beihang University, Beijing, China (Tel: +8615210848328; e-mail: [email protected]). Ya Li is with the School of Mathematics and Systems Science & LMIB, Beihang University, Beijing, China (e-mail: [email protected]). 978-1-4577-2143-4/12/$26.00 ©2012 IEEE


P_i = fitness_i / Σ_{j=1}^{ESN} fitness_j   (3)

fitness = 1 / (1 + obj),  if obj ≥ 0;  fitness = 1 + abs(obj),  if obj < 0   (4)

In Eq. (4), obj refers to the objective value of the solution, so fitness is an evaluation of the corresponding position. After all these procedures, ESN onlookers set out to follow each employed bee with probability P_i and then generate their own positions to exploit. Onlookers likewise adopt the greedy selection strategy for crossover and mutation. Moreover, any employed bee whose count of inefficient attempts exceeds a certain threshold abandons its current food source and becomes a scout, randomly reinitialized following Eq. (1). That threshold, called limit, is an essential control parameter of the algorithm; at most one scout is allowed to emerge in each iteration.

III. OUR PROPOSED BE-ABC

A. On the performance of ABC

Karaboga's design would arguably be more effective had it considered the interaction between exploration and exploitation more carefully. Several factors account for the imperfect converging performance of ABC; in this paper we address two of them.

Reviewing previous work, researchers have seldom focused their attention on Eq. (2). The first aspect concerns its fixed perturbation scale, which is mainly suitable for problems whose global optimum lies near the centre of the searching space; problems with biased global optima become enormously difficult to tackle. In fact, the farther an optimum lies from the centre, the worse the algorithm converges. Additionally, in early iterations a diversified population is critical to the convergence speed, yet changing only one vector component per onlooker contributes little to diversification, especially for high-dimensional problems.

The second aspect concerns roulette selection. Onlookers conduct the exploitation process following the roulette strategy, in which a food source with higher fitness enjoys a larger probability of being selected. Evidently, sources with relatively high fitness are retained while the others are easily eliminated. This contributes to a high convergence speed in early iterations, especially for unimodal problems. For multimodal problems, however, an outstanding source inevitably depresses the selection probabilities of the other sources and thus dominates the whole evolution process, eventually trapping the swarm in a local optimum; this is called premature convergence. Moreover, in later iterations the swarm is already highly converged, so roulette selection can no longer accelerate convergence even for unimodal problems and will, on the contrary, increase the probability of missing the global optimum.

For better performance, the roulette strategy is replaced by other approaches in [9]-[13], and these amendments have achieved good results on the authors' benchmarks or engineering problems of interest. Since the roulette strategy has little coupling with the other parts of ABC, such changes cause no side effects on the overall converging behaviour of the algorithm; the roulette strategy is even simply removed in [12]. We contend, however, that roulette selection is essential in that it serves as the only feedback component in ABC, without which converging states during iterations can neither be observed nor utilized. Clever as removal may seem on a few unimodal problems, it can be less than competent for practical engineering problems. The roulette strategy is modified in [13] by introducing the sensitivity of fitness values in Eq. (5), which provides a mild measurement of fitness and works out quite well:

nf(i) = (f(i) − f_min) / (f_max − f_min),  (f_max ≠ f_min)   (5)

In this paper, also aiming at a mild selection approach, we keep the roulette strategy in view of its partial advantage in accelerating convergence, but amend it for better interaction with the other parts of ABC. Taking all processes of ABC into consideration, we propose a new strategy that promotes interaction among the separate parts and balances exploration against exploitation by fully utilizing the evolutionary state information gathered during iterations.
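The fitness mapping of Eq. (4) and the roulette probabilities of Eq. (3) discussed above can be sketched as follows (a minimal NumPy sketch; the function names are our own illustrative choices):

```python
import numpy as np

def fitness(obj):
    # Eq. (4): map an objective value to a positive fitness value
    return 1.0 / (1.0 + obj) if obj >= 0 else 1.0 + abs(obj)

def roulette_probabilities(objs):
    # Eq. (3): selection probability proportional to fitness
    fits = np.array([fitness(o) for o in objs])
    return fits / fits.sum()
```

Note how a single very good source (large fitness) dominates the probability vector, which is exactly the premature-convergence risk discussed above.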

B. Mechanism of balancing evolution strategy

For onlookers, the fixed position-changing scale of Eq. (2) is revised here by introducing a convergence parameter a*, which remarkably enhances the initial capacity to search marginal positions, as in Eq. (6) and (7), where iter refers to the current iteration number:

V_m^j = X_m^j + [2 · rand(0,1) − 1] · (X_m^j − X_n^j) · a*   (6)

a* = 1 + sin(iter) · exp(param · (iter − 1))   (7)

Fig. 1 illustrates the effect of a* for param = −0.02. In this way, the initial capability to search near the margins is strikingly enhanced, which avoids inefficiency in later iterations as well. As for initial diversification, Eq. (8) changes more vector components early on and fewer as iter increases:

changer = fix( exp((ln 0.1) · iter / (0.1 · MCN)) · (D − 1) ) + 1   (8)

In this formula, fix denotes rounding toward zero. Fig. 2 presents the resulting curve for D = 50, MCN = 300.
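Under the parenthesization of Eqs. (7)-(8) as reconstructed above (the grouping in Eq. (8) is our reading of the source), the two schedules can be sketched as:

```python
import math

def convergence_param(it, param=-0.02):
    # Eq. (7): a* modulates the perturbation scale; the exponential
    # envelope damps the oscillation as iterations proceed.
    return 1.0 + math.sin(it) * math.exp(param * (it - 1))

def vectors_to_change(it, mcn, dim):
    # Eq. (8): number of dimensions crossed over per onlooker,
    # decaying from about D early on toward 1 in late iterations.
    return int(math.exp(math.log(0.1) * it / (0.1 * mcn)) * (dim - 1)) + 1
```

For D = 50 and MCN = 300 this changes nearly all 50 components in the first iterations and exactly 1 by the final iteration, matching the diversification-then-refinement intent described above.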



Fig. 1. Effect of convergence parameter a*.
Fig. 3. Effect of clustering selection parameter R.

These two ideas constitute our first modification, named the variable crossover & mutation method, in which the iteration state influences the crossover procedure.

C. Pseudo-code of BE-ABC

The pseudo-code of BE-ABC for constrained optimization problems is given below.

1. Initialize the solution population FOOD = {X_i^j}, (i = 1, ..., ESN; j = 1, ..., D)
2. Evaluate the fitness of the current population
3. Set iter = 1
Repeat
4. Generate food sources for employed bees, with the number of crossover & mutation vectors given by Eq. (8), and evaluate their fitness values
5. Apply the greedy method for selection
6. Generate exploiting positions for onlookers and evaluate their fitness values
7. Apply the greedy method for selection
8. Apply the clustering selection approach
9. Determine the solution to abandon and substitute a randomly renewed scout for it
10. Record the current attempt counts that exceed limit and redefine limit accordingly
11. Memorize the best solution obtained so far
12. iter = iter + 1
until iter = MCN
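The control flow above can be sketched in Python as follows. This is a deliberately simplified skeleton: employed-bee and onlooker phases are collapsed into one loop, plain ABC moves are used, and the a*/changer/clustering refinements and the limit-redefinition of step 10 are omitted, so it mirrors only the structure of steps 1-12, not full BE-ABC:

```python
import numpy as np

def abc_loop_sketch(obj, dim, x_min, x_max, esn=20, mcn=100, limit=50, seed=0):
    # Skeleton of the main loop (minimization of obj).
    rng = np.random.default_rng(seed)
    food = x_min + rng.random((esn, dim)) * (x_max - x_min)   # step 1
    trials = np.zeros(esn, dtype=int)
    best = min(food, key=obj).copy()                          # steps 2, 11

    def neighbor(i):
        k = (i + rng.integers(1, esn)) % esn                  # partner != i
        j = rng.integers(dim)
        v = food[i].copy()
        v[j] += (2 * rng.random() - 1) * (food[i, j] - food[k, j])
        return v

    for it in range(1, mcn + 1):                              # steps 3, 12
        for i in range(esn):                                  # steps 4-8
            v = neighbor(i)
            if obj(v) < obj(food[i]):                         # greedy selection
                food[i], trials[i] = v, 0
            else:
                trials[i] += 1
        worst = int(np.argmax(trials))                        # step 9: scout,
        if trials[worst] > limit:                             # at most one
            food[worst] = x_min + rng.random(dim) * (x_max - x_min)
            trials[worst] = 0
        cand = min(food, key=obj)                             # step 11
        if obj(cand) < obj(best):
            best = cand.copy()
    return best
```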

Fig. 2. Effect of the variable number of vectors for crossover & mutation.

In the original roulette selection strategy, an onlooker conducts exploitation only when P_i ≥ rand; otherwise it stays idle. Such selection is indeed too greedy for the initial iterations. For this part, we replace the greedy behaviour with a mild clustering approach for selection: {P_i} (i = 1, 2, ..., ESN) is divided into several categories of R food sources each, in descending order of fitness, and each onlooker randomly selects one employed bee within the same category to follow. In this way, onlookers always keep exploiting and never stay idle during iterations. R is defined in Eq. (9), where the clustering result depends on MCN and iter:

R = fix( exp(iter / MCN − 1) · ESN · 25% )   (9)

Fig. 3 illustrates its effect for MCN = 1000, SN = 40; the strategy evidently comes into play gradually. In early iterations we expect R to be 1, i.e. the original roulette strategy is adopted in consideration of its advantage in speed. In this way, a roulette strategy modified by clustering selection is designed as our second aspect.
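A sketch of the clustering selection approach of Eq. (9), under our reading of its parenthesization (R = fix(exp(iter/MCN − 1) · ESN · 25%)); the exact pairing of onlookers to categories is our illustrative interpretation of the description above:

```python
import math, random

def category_size(it, mcn, esn):
    # Eq. (9), parenthesization reconstructed; clamp to at least 1
    # so a category is never empty.
    return max(1, int(math.exp(it / mcn - 1) * esn * 0.25))

def clustering_select(fitness_values, it, mcn, rng=random):
    # Rank food sources by fitness (descending), split into categories
    # of R sources each, and let every onlooker follow a random source
    # from the category of the bee a roulette draw would give it.
    esn = len(fitness_values)
    r = category_size(it, mcn, esn)
    order = sorted(range(esn), key=lambda i: -fitness_values[i])
    followed = []
    for _ in range(esn):
        i = rng.choices(range(esn), weights=fitness_values)[0]  # roulette pick
        cat = order.index(i) // r                               # its category
        members = order[cat * r : (cat + 1) * r]
        followed.append(rng.choice(members))                    # mild re-pick
    return followed
```

With r = 1 each category is a single source and the scheme reduces to plain roulette; as r grows, selection pressure is smoothed out across each category.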

D. Convergence proof of BE-ABC

Definition 1. A square matrix P: n × n is called stochastic if Σ_{j=1}^{n} a_ij = 1 and a_ij ≥ 0 for all i ∈ {1, ..., n}.

Definition 2. A finite homogeneous Markov chain is a probabilistic trajectory over a finite state space S with |S| = n, in which the probability p_ij of transitioning from state i to state j at step t is independent of t.

Definition 3. The state space of BE-ABC is defined as

STA(iter) = ( x_11    ...  x_1D    limit_1
              x_21    ...  x_2D    limit_2
              ...
              x_ESN,1 ...  x_ESN,D limit_ESN )   (10)


It should be noted that, for a swarm of ESN employed bees, STA is defined as an ESN × (D + 1) matrix, since the trial counter limit is taken into consideration.

Definition 4. In BE-ABC, the crossover & mutation strategy, the greedy selection strategy, the clustering selection approach and the policy for scout bees cause the state transitions during the iteration process; we respectively denote their transition matrices by C, G, A and P.

Theorem 1. Let P: n × n be a reducible stochastic matrix; then P^∞ = lim_{k→∞} P^k is a reducible stochastic matrix [13].

Theorem 2. Let P: n × n be a reducible stochastic matrix whose block form contains a primitive stochastic matrix L: m × m, with R ≠ 0 and T ≠ 0. Then

P^∞ = lim_{k→∞} ( L^k, 0 ; Σ_{i=0}^{k−1} T^i R L^{k−i}, T^k ) = ( L^∞, 0 ; R^∞, 0 )   (11)

Lemma 1. (C·G), (A·C·G) and P are all stochastic matrices.

Proof. In BE-ABC, several steps combine to transit the state. The crossover method is always followed by the greedy selection approach, hence (C·G) represents the complete transition process of the crossover & mutation strategy; similarly, (A·C·G) represents a complete pass of the clustering selection approach. For constrained problems, it is evident in BE-ABC that every transition probability p_ij from state i to state j is nonnegative, and

Σ_{j=1}^{n} P_ij = 1,  ∀i ∈ {1, 2, ..., ESN}.

It is thus confirmed that (C·G) is a compound stochastic matrix, as are (A·C·G) and P. □

Lemma 2. BE-ABC conducts a homogeneous finite Markov process.

Proof. For constrained problems, the state space STA is finite, hence the Markov chain of BE-ABC is finite. The state transition from STA(iter) to STA(iter + 1) is described by

STA(iter + 1) = (C·G)(A·C·G)P(STA(iter))   (12)

For any specific iter, the transition probability from x_a to x_b (∀x_a, x_b ∈ STA) is independent of time, because time does not enter Eq. (1), (3), (4), (6) or (8). In addition, all the transition matrices mentioned above have been proved stochastic. Consequently, the whole searching process conducted by BE-ABC is a homogeneous finite Markov process. □

Theorem 3. Given sufficient iterations, the solution sets of BE-ABC converge to the set containing the global optimum.

Proof. Divide the state space of BE-ABC into q subsets of equal measure; by the finite covering theorem of mathematical analysis, q can be taken to be finite. Each of these q subsets contains several solutions and is thus a solution set. To evaluate their fitness, we integrate the objective over each region S_i:

∫_{S_i} obj(x) dx = f(S_i)   (13)

All f(S_i) can be ranked in descending order: f(X_1) ≥ f(X_2) ≥ ... ≥ f(X_n). Let the set containing the global optimum be S_best; then ∀ε > 0, ∃N_1 ∈ N*, s.t. ∀n ≥ N_1 the ranking takes the form f(X_best) ≥ f(X_p) ≥ ... ≥ f(X_k). Let p(X_i, X_j) be the probability for X_i to transit to X_j. Since STA forms a finite homogeneous Markov chain, its state transition probability matrix is

p = ( p(X_1, X_1)  p(X_1, X_2)  ...  p(X_1, X_n)
      p(X_2, X_1)  p(X_2, X_2)  ...
      ...
      p(X_n, X_1)  ...          p(X_n, X_n) )

Considering the monotonic process of greedy selection, ∀ε > 0, ∃N_2 ∈ N*, s.t. ∀n ≥ N_2, p(X_1, X_j) < ε for j ≠ 1. Consequently, the probability transition matrix becomes

p = ( 1            0            ...  0
      p(X_2, X_1)  p(X_2, X_2)  ...
      ...
      p(X_n, X_1)  ...          p(X_n, X_n) )

Partitioning this matrix as in Theorem 2 yields

L = (1),  R = ( p(X_2, X_1); ...; p(X_n, X_1) ),  T = ( p(X_2, X_2) ... ; ... ; ... p(X_n, X_n) ).

By Theorem 1, P^∞ is a stochastic matrix, and by Theorem 2

L^∞ = (1),  R^∞ = ( lim_{k→∞} p(X_2, X_1); ...; lim_{k→∞} p(X_n, X_1) ) = (1; ...; 1).

That is, all solution sets eventually converge to the optimal solution set given sufficiently many iterations. □

IV. EXPERIMENTS AND DISCUSSIONS

The algorithms are applied to 3 classical benchmark functions, shown in Table I [14]. Rosenbrock, whose global optimum lies along a narrow "valley" of the searching space, is extremely difficult to converge on and is well suited to evaluating an algorithm's capability to overcome premature convergence. Schwefel is generally regarded as deceptive: its global minimum is geometrically distant, over the parameter space, from the next-best local minima, so search algorithms tend to converge in the wrong direction. Ackley is a widely used multimodal test function.


TABLE I. HIGH-DIMENSIONAL BENCHMARK FUNCTIONS

Test Function | Characteristics | Formulation
Rosenbrock | unimodal, non-separable | f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]
Schwefel | multimodal, separable | f(x) = −Σ_{i=1}^{n} x_i sin(√|x_i|)
Ackley | multimodal, non-separable | f(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e
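The three benchmarks of Table I can be written in NumPy as follows (standard textbook forms; the square roots lost in typesetting are restored):

```python
import numpy as np

def rosenbrock(x):
    # Global minimum f = 0 at x = (1, ..., 1)
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))

def schwefel(x):
    # Global minimum f = -418.9829 n at x_i = 420.9687
    x = np.asarray(x, dtype=float)
    return float(-np.sum(x * np.sin(np.sqrt(np.abs(x)))))

def ackley(x):
    # Global minimum f = 0 at x = (0, ..., 0)
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
                 + 20.0 + np.e)
```

The Schwefel minimum of −418.9829 n per dimension is what makes the 20D target of about −8379.66 in Table II.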

The performance of BE-ABC is compared with those of the original ABC, PS-ABC and I-ABC. To make the comparison fair, the parameters are set exactly as in [13] for all 4 algorithms: MCN = 1000, SN = 40, limit = 200, and the dimension is set to 20, 30 and 50 in turn for each benchmark function. Additionally, we set param = −0.05 in Eq. (7). Each experiment is repeated 30 times with randomly initialized conditions; mean and standard deviation values are given in Table II.

As seen from Table II, BE-ABC achieves solutions closer to the optima than ABC in all cases, and finds better solutions than PS-ABC and I-ABC in most cases. Considering the standard deviation results, BE-ABC enjoys better searching precision than the other algorithms; considering the mean values, its correctness is also better. In short, the searching accuracy of BE-ABC is evidently better than that of the other algorithms mentioned in this paper. On Ackley, however, BE-ABC is no match for I-ABC and PS-ABC, both of which locate the optimum with extremely high accuracy; we selected Ackley as a benchmark precisely to show that there is still room to improve BE-ABC. Standard deviations computed by intelligence algorithms seldom turn out to be exactly 0, so we also report this comparison for interested researchers' future work.

In addition, Figs. 4-6 show that BE-ABC converges faster than the original ABC under all dimension settings and carries the converging process on steadily through the iterations. In Fig. 4 especially, it converges more slowly than ABC at the beginning, but maintains an appropriate converging speed to the end, achieving a better solution in later iterations. This steady converging curve illustrates the essence of our balancing evolution strategy: to take both convergence speed and convergence accuracy into account and balance them during the search. We conclude that BE-ABC is effective and competent.

Fig 4 Comparison of performance for minimization of Rosenbrock(50D)

Fig. 5. Comparison of performance for minimization of Schwefel (50D)

V. CONCLUSION

In our experiments, a preliminary investigation has been made into the effect of BE-ABC, and the results confirm that it achieves a higher convergence speed as well as better convergence performance on the benchmarks. Meanwhile, it should be noted that our study concentrates on only a few benchmark functions, and further effort is needed to verify the algorithm's capability intrinsically. Assimilating strengths from other algorithms or strategies for further improvement will be part of our future work.

Fig 6 Comparison of performance for minimization of Ackley(20D)


TABLE II. THE MEAN AND THE STANDARD DEVIATIONS OF THE FUNCTION VALUES

Test Function | Dim | ABC Mean | ABC S.D. | I-ABC Mean | I-ABC S.D. | PS-ABC Mean | PS-ABC S.D. | BE-ABC Mean | BE-ABC S.D.
Rosenbrock | 20 | 1.1114 | 1.7952 | 15.7165 | 1.4013 | 0.5190 | 1.0764 | 1.629×10^-2 | 1.1697×10^-2
Rosenbrock | 30 | 4.5509 | 4.8776 | 26.4282 | 1.3956 | 1.5922 | 4.4066 | 0.14850 | 1.2208
Rosenbrock | 50 | 48.0307 | 46.6576 | 47.0287 | 0.8601 | 34.4913 | 30.3412 | 2.3067 | 2.5893
Schwefel | 20 | -8327.49 | 60.3307 | -8323.77 | 74.0475 | -8379.66 | 4.72×10^-12 | -8379.66 | 0
Schwefel | 30 | -12130.31 | 59.1470 | -12251.03 | 166.741 | -12564.23 | 22.5354 | -12569.50 | 0
Schwefel | 50 | -19326.50 | 230.858 | -19313.49 | 277.438 | -20887.98 | 80.3524 | -20931.50 | 37.9857
Ackley | 20 | 2.8343×10^-9 | 2.5816×10^-9 | 8.8817×10^-16 | 0 | 8.8817×10^-16 | 0 | 6.549×10^-13 | 3.53×10^-13
Ackley | 30 | 2.7591×10^-5 | 2.1221×10^-5 | 8.8817×10^-16 | 0 | 8.8817×10^-16 | 0 | 5.3319×10^-9 | 6.0201×10^-9
Ackley | 50 | 4.7018×10^-2 | 3.3957×10^-2 | 8.8817×10^-16 | 0 | 8.8817×10^-16 | 0 | 1.0138×10^-5 | 1.3643×10^-5

ACKNOWLEDGMENT

Bai Li would like to thank Professor H. B. Duan for his initial inspiration to carry on the research.

REFERENCES

[1] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[2] R. Storn and K. Price, "Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous space," Berkeley: International Computer Science Institute, 1995.
[3] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
[4] M. Dorigo, V. Maniezzo, and A. Colorni, "The ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996.
[5] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[6] D. Karaboga and B. Basturk, "Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems," Applied Soft Computing, vol. 11, pp. 3021–3031, April 2011.
[7] D. Karaboga, "A new design method based on artificial bee colony algorithm for digital IIR filters," Journal of the Franklin Institute, vol. 346, no. 4, pp. 328–348, 2009.
[8] A. Banharnsakun, B. Sirinaovakul, and T. Achalakul, "Job shop scheduling with the best-so-far ABC," Engineering Applications of Artificial Intelligence, vol. 25, pp. 583–593, 2012.
[9] H. Ding and Q. Feng, "Artificial bee colony algorithm based on Boltzmann selection policy," Computer Engineering and Applications, vol. 45, no. 31, pp. 53–55, 2009.
[10] S. N. Omkar, J. Senthilnath, R. Khandelwal, G. N. Naik, and S. Gopalakrishnan, "Artificial bee colony (ABC) for multi-objective design optimization of composite structures," Applied Soft Computing, vol. 11, pp. 489–499, 2011.
[11] G. Li, P. Niu, and X. Xiao, "Development and investigation of efficient artificial bee colony algorithm for numerical function optimization," Applied Soft Computing, vol. 12, pp. 320–332, 2012.
[12] W. Xu, "A modified artificial bee colony algorithm without selection strategy," Journal of Taiyuan University of Science and Technology, vol. 32, no. 5, Oct. 2011.
[13] X. Bi and Y. Wang, "A modified artificial bee colony algorithm and its application," Journal of Harbin Engineering University, vol. 33, no. 1, Jan. 2012.
[14] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Transactions on Neural Networks, vol. 5, no. 1, pp. 96–101, 1994.

