GROUPING-BASED EVOLUTIONARY ALGORITHM IMPROVES THE PERFORMANCE OF DYNAMIC PENALTY METHOD FOR CONSTRAINED OPTIMIZATION PROBLEMS

Ming Yuchi and Jong-Hwan Kim

Dept. of Electrical Engineering and Computer Science, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
Email: {ycm, johkim}@rit.kaist.ac.kr

ABSTRACT

Infeasible individuals are often underrated when evolutionary algorithms are used to solve constrained optimization problems. This paper proposes a new approach to balancing feasible and infeasible individuals. The population is divided into two groups, a feasible group and an infeasible group, and the evaluation and ranking of the two groups are performed separately. Parents for reproduction are selected from the two groups by a novel parent selection method. The objective function and the bubble sort method are chosen as the fitness function and ranking method for the feasible group. One existing evolutionary algorithm, the dynamic penalty method, is modified to evaluate and rank the infeasible group. The new method is tested with a (µ, λ)-ES on 13 benchmark problems. Our results show that the proposed method is capable of improving the performance of the dynamic penalty method for constrained optimization problems.

1. INTRODUCTION

The general constrained optimization problem (P) is to find x so as to

$$\min_{\vec{x}} f(\vec{x}), \qquad \vec{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n \qquad (1)$$

where $\vec{x} \in F \subseteq S$. The objective function f(x) is defined on the search space $S \subseteq \mathbb{R}^n$, and the set $F \subseteq S$ defines the feasible region. Usually, the search space S is an n-dimensional rectangle in $\mathbb{R}^n$ (the domains of the variables are given by their lower and upper bounds):

$$l(j) \le x_j \le u(j), \qquad j = 1, \ldots, n \qquad (2)$$

where the feasible region $F \subseteq S$ is defined by a set of m additional constraints (m ≥ 0):

$$g_k(\vec{x}) \le 0, \qquad k = 1, \ldots, l, \qquad (3)$$

$$h_k(\vec{x}) = 0, \qquad k = l + 1, \ldots, m. \qquad (4)$$

The inequalities $g_k(\vec{x}) \le 0$ are called inequality constraints, and the corresponding functions $g_k(\vec{x})$ are called the inequality constraint functions. The equations $h_k(\vec{x}) = 0$ are called equality constraints, and the corresponding functions $h_k(\vec{x})$ are the equality constraint functions.

(This work was supported by the Ministry of Information & Communications, Korea, under the Information Technology Research Center (ITRC) Support Program.)
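To make the formulation concrete, the following minimal Python sketch checks the feasibility of a candidate solution against the bounds (2) and the constraints (3)-(4). The toy objective, constraints, and bounds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 2-D instance of problem (P); f, ineq, eq and the bounds
# are made-up examples, only the structure mirrors (1)-(4).
def f(x):                                    # objective f(x) to minimize
    return x[0] ** 2 + x[1] ** 2

ineq = [lambda x: x[0] + x[1] - 1.0]         # g_k(x) <= 0,  k = 1, ..., l
eq   = [lambda x: x[0] - x[1] ** 2]          # h_k(x) = 0,   k = l+1, ..., m
lower = np.array([-5.0, -5.0])               # l(j) in (2)
upper = np.array([ 5.0,  5.0])               # u(j) in (2)

def is_feasible(x, tol=1e-4):
    """A point is feasible iff it lies in the box S of (2) and violates
    no constraint; equalities are checked to a small tolerance tol."""
    in_box = np.all(lower <= x) and np.all(x <= upper)
    g_ok = all(g(x) <= 0.0 for g in ineq)
    h_ok = all(abs(h(x)) <= tol for h in eq)
    return in_box and g_ok and h_ok
```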

The constrained optimization problem (P) is, in general, intractable: if the objective and constraint functions are arbitrary, there is little choice apart from methods based on exhaustive search. Evolutionary algorithms are global methods that can cope with complex objective functions (e.g., non-differentiable or discontinuous ones), and they have received a lot of attention regarding their potential for solving constrained optimization problems. In the survey paper by Michalewicz and Schoenauer [1], the existing evolutionary algorithms developed for constrained optimization problems are classified into four categories: 1) methods based on preserving the feasibility of solutions, 2) methods based on penalty functions, 3) methods which make a clear distinction between feasible and infeasible solutions, and 4) other hybrid methods. Eleven different types of constrained optimization problems were also provided in [1], and this set has become a commonly used test suite for comparing the effectiveness of different evolutionary algorithms [2], [3], [4], [5], [6], [7].

When evolutionary algorithms are used to solve constrained optimization problems, the relation between feasible and infeasible individuals is a critical issue. Intuitively, one would say that feasible individuals are fitter than infeasible individuals, because the goal is to find solutions that minimize the objective function and satisfy all the constraints; the feasible individuals have already satisfied all the constraints, and the work left seems to be a pure minimization problem. This view, however, ignores one important fact: an evolutionary algorithm is a probabilistic method, not a deterministic one.

Considering the complexity and variety of constrained optimization problems, infeasible individuals may well be fitter than feasible ones at some generations during the evolution. How to handle the relation between feasible and infeasible individuals, and how to process the infeasible individuals, therefore become very important when solving constrained optimization problems with evolutionary techniques. Most existing evolutionary algorithms for constrained optimization problems, more or less, underrate the importance of infeasible individuals. For example, nearly all penalty methods add 'penalties' to the fitness of infeasible individuals [9], [3], [10], and some other methods [11], [12] directly assume that feasible individuals are always fitter than infeasible ones. The evolutionary algorithm proposed in this paper is based on the idea that infeasible individuals should be treated more fairly. To this end, the whole population is divided into two groups, whose evaluation and ranking are performed separately, and the selection of parents for reproduction from the feasible and infeasible groups is adjusted by one tuning parameter Sp. Initial studies on GEA can be found in [7], [17].

The rest of this paper is organized as follows. Section 2 investigates the relation between feasible and infeasible individuals more deeply and then introduces the grouping-based evolutionary algorithm (GEA), putting forward two important aspects of the method: the ranking of the infeasible group and the selection of parents from the feasible and infeasible groups. Section 3 tests GEA with one ranking method for the infeasible group, dynamic penalty ranking. Finally, Section 4 concludes with some remarks and future research directions.

2. GROUPING-BASED EVOLUTIONARY ALGORITHM

2.1. Motivation

For constrained optimization problems, the aim is to find x satisfying all constraints (a feasible solution) while minimizing f(x). Intuitively, feasible individuals could be thought to have better fitness values than infeasible ones (here, 'better' means having more chance to survive and reproduce): since all the constraints are already satisfied for the feasible individuals, the only aim left is to find x minimizing f(x). Most current evolutionary algorithms for constrained optimization problems follow this idea, and the fitness values of infeasible individuals are intentionally degraded by various strategies. However, this is not always a good idea. Consider an example from [2]. Figure 1 shows a search space with feasible and infeasible subspaces, F and U, respectively.

Figure 1: A search space and its feasible and infeasible parts (search space S, feasible search space F, infeasible search space U).

Figure 2: A population of 12 individuals, a-l; the global optimum solution is marked with 'X'.

These subspaces need not be convex or connected; in this example, the feasible part F of the search space consists of four disjoint subsets. During the search process, we have to deal with various feasible and infeasible individuals. For example, at a certain stage of the evolution, a population may contain some feasible individuals (c, d, f, h, k, l) and some infeasible ones (a, b, e, g, i, j), while the global optimum solution is marked with 'X' (Figure 2). Apparently, the infeasible individual 'j' is the closest to the optimal solution 'X', and its offspring have the best chance of reaching 'X'. It is not reasonable to give the feasible individual 'd' a better fitness value than the infeasible individual 'j'.

2.2. GEA

In order to treat infeasible individuals as fairly as feasible individuals and, at the same time, to evolve successfully (i.e., to find a feasible solution that minimizes f(x)), this paper proposes a grouping-based EA (GEA) for constrained optimization problems. GEA focuses on building the relationship between the feasible and infeasible solutions so as to avoid the case illustrated in Figure 2, where a feasible individual such as 'd' receives a better fitness value than the infeasible individual 'j'. The main idea of GEA is to divide the population into two groups, a feasible group and an infeasible group, whose evaluation and ranking are performed separately and in parallel. In order to exchange useful information between the two groups, the best feasible and infeasible individuals are combined into a parent group and reproduce together through a predefined selection strategy. The offspring of the parent group are then separated into a feasible group and an infeasible group again.

Figure 3: Flow chart of GEA (initial population → check feasibility → feasible group / infeasible group → evaluate and rank each group separately → select parents → parent group → reproduce → offspring population).

Figure 3 shows the flowchart of GEA. GEA provides an open structure for the designer: different types of EAs or GAs can be adopted directly to drive the evolution process, and the user can also design different evaluation, ranking, and parent selection strategies. A sketch of one generation of this loop is given below.
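The following minimal Python sketch outlines one GEA generation following Figure 3. It is our illustration, not the authors' code: is_feasible is the helper from the earlier sketch, split_parent_counts implements equations (5)-(7) (sketched in Section 2.2.2), and the fitness and reproduction operators are passed in as assumed callables.

```python
from typing import Callable, List

def gea_step(population: List, num_parents: int, Sp: int,
             feasible_fitness: Callable, infeasible_fitness: Callable,
             reproduce: Callable) -> List:
    """One generation of GEA (sketch of Figure 3)."""
    # 1. Check feasibility and split the population into two groups.
    feasible = [x for x in population if is_feasible(x)]
    infeasible = [x for x in population if not is_feasible(x)]

    # 2. Evaluate and rank each group separately, best (smallest) first.
    feasible.sort(key=feasible_fitness)      # e.g., the raw objective f
    infeasible.sort(key=infeasible_fitness)  # e.g., dynamic penalty (Sec. 3)

    # 3. Select parents from both groups per equations (5)-(7); slicing
    #    simply takes fewer parents if a group is smaller than its quota.
    n_fea, n_infea = split_parent_counts(len(feasible), num_parents, Sp)
    parents = feasible[:n_fea] + infeasible[:n_infea]

    # 4. Reproduce (e.g., ES recombination and mutation) for offspring.
    return reproduce(parents)
```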

2.2.1. Evaluation and Ranking

Consider the evaluation and ranking part first. For the feasible group, the objective function can be directly selected as the fitness function, and 'bubble sort' ranking (from the smallest fitness value to the largest) is adopted in this paper. For the infeasible group, evaluation and ranking are a little more complicated. One simple way is to adopt an existing method. Here, we use the dynamic penalty method [9], which has been investigated by many researchers: originally employed to evaluate and rank the whole population, it is modified to apply only to the infeasible group in GEA. The next section introduces the dynamic penalty method in detail and gives the simulation results of GEA with this method.

2.2.2. Parent Selection

Another important part of GEA is how to select the parents from the two separated groups. Normally, the number of parents and the population size of an evolutionary algorithm are predefined. Since the feasible and infeasible individuals in GEA are separated, we need a strategy to decide the numbers of feasible and infeasible parents for reproduction. Here, a novel parent selection method is proposed, in which the number of feasible parents is set proportional to the number of feasible individuals through a positive integer parameter Sp. The numbers of feasible and infeasible parents are calculated as follows:

i) numFeaPar = round(numFeaInd / Sp).  (5)
   If numFeaPar > numPar, numFeaPar = numPar.  (6)
ii) numInfeaPar = numPar − numFeaPar.  (7)

where numFeaPar denotes the number of feasible parents, numFeaInd the number of feasible individuals, numPar the number of parents, and numInfeaPar the number of infeasible parents. Equation (6) restricts numFeaPar not to exceed the predefined number of parents, numPar. Once the numbers of feasible and infeasible parents are decided, the corresponding numbers of best individuals are selected from the feasible and infeasible groups, respectively, according to their rankings. To explain this more clearly, assume that a (µ, λ)-ES is used, where µ is the parent size and λ the population size, with (µ, λ) = (30, 200). If Sp = 5 and numFeaInd = 20, then (5) gives numFeaPar = 4 and (7) gives numInfeaPar = 30 − 4 = 26. The best 4 individuals from the feasible group and the best 26 individuals from the infeasible group are then selected to form the parent group for reproduction.
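Equations (5)-(7) translate directly into code. A minimal sketch follows; the helper name split_parent_counts is ours, for illustration:

```python
def split_parent_counts(num_fea_ind: int, num_par: int, Sp: int):
    """Numbers of feasible and infeasible parents per equations (5)-(7)."""
    num_fea_par = round(num_fea_ind / Sp)       # (5)
    num_fea_par = min(num_fea_par, num_par)     # (6): cap at numPar
    num_infea_par = num_par - num_fea_par       # (7)
    return num_fea_par, num_infea_par

# Worked example from the text: a (30, 200)-ES with Sp = 5 and 20
# feasible individuals yields 4 feasible and 26 infeasible parents.
assert split_parent_counts(20, 30, 5) == (4, 26)
```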

3. GEA WITH DYNAMIC PENALTY METHOD

As the population of GEA is divided into a feasible group and an infeasible group, it is necessary to design the evaluation and ranking of the two groups separately. Since the individuals in the feasible group satisfy all the constraints, their case is relatively simple: in this paper, we directly choose the objective function (1) as the fitness function and bubble ranking as the ranking method for the feasible group. For the infeasible group, the dynamic penalty method is applied. One common approach to dealing with constrained optimization problems is to introduce a penalty term into the objective function to penalize constraint violations; the penalty term is expected to transform a constrained optimization problem (P) into an unconstrained one. Different kinds of penalty methods have been studied by many researchers, such as the static penalty method, the annealing penalty method, the death penalty method, the adaptive penalty method, and the dynamic penalty method. The dynamic penalty method [9] is the one discussed here. Its basic idea is to evaluate the individuals with the following formula:

$$\psi(\vec{x}) = f(\vec{x}) + (g/2)^2\, \phi\bigl(g_k(\vec{x}), h_k(\vec{x});\ k = 1, \ldots, m\bigr) \qquad (8)$$

where g is the generation number and φ > 0 is a real-valued function, called the penalty function, defined as follows:

$$\phi\bigl(g_k(\vec{x}), h_k(\vec{x})\bigr) = \sum_{k=1}^{l} \max\{0, g_k(\vec{x})\}^2 + \sum_{k=l+1}^{m} |h_k(\vec{x})|^2. \qquad (9)$$

(There are several coefficients in the dynamic penalty method of [9]; the values recommended by its authors are used directly in (8).)
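A minimal Python sketch of (8)-(9) follows, reusing the illustrative objective f and constraint lists ineq and eq from the sketch in Section 1; it is an illustration under those assumptions, not the authors' implementation.

```python
def penalty(x):
    """phi in (9): summed squared violations of all constraints."""
    p_ineq = sum(max(0.0, g(x)) ** 2 for g in ineq)   # inequality part
    p_eq = sum(abs(h(x)) ** 2 for h in eq)            # equality part
    return p_ineq + p_eq

def dynamic_penalty_fitness(x, generation):
    """psi in (8): objective plus a penalty weighted by (g/2)^2, so
    constraint violations are punished more heavily as evolution runs."""
    return f(x) + (generation / 2.0) ** 2 * penalty(x)
```

In GEA this fitness would rank only the infeasible group, e.g. infeasible.sort(key=lambda x: dynamic_penalty_fitness(x, g)).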

ψ(x) is set as the fitness function, and the bubble ranking method as the ranking method, for the infeasible group of GEA. In the remainder of this paper, GEA with the dynamic penalty method is called 'GEA D'.

Thirteen benchmark functions were tested; see [7], [17] for details of these functions. Problems G2, G3, G8 and G12 are maximization problems; they were transformed into minimization problems using −f(x). Problems G3, G5, G11 and G13 include one or several equality constraints, all of which were converted into inequality constraints |h(x)| − δ ≤ 0 using the degree of violation δ = 0.0001. A (µ, λ)-ES [13] was employed for recombination and mutation. For an impartial comparison, all parameters of the ES used here are the same as those of [5]. For each of the benchmark problems, 30 independent runs were performed using a (30, 200)-ES, and all runs were terminated after 2000 generations.

Table 1 (data from [5]) shows the results obtained with the dynamic penalty method, and Table 2 summarizes the experimental results we obtained with GEA D (Sp = 5). The median number of generations for finding the best solution in each run is indicated by gm in the tables. The tables also show the known 'optimal' solution for each problem and statistics over the 30 independent runs: the best, median, worst and mean objective values found, and the standard deviation.

Comparing Table 1 and Table 2, GEA D performed better than the dynamic penalty method on most of the problems. For problems G1, G8 and G11, both algorithms performed well and found the optimal solutions in all 30 runs. For problem G10, both algorithms failed to find the optimal solution. For problem G2, GEA D provides results similar to those of the dynamic penalty method: it performed better on the mean and median, but worse on the best and worst. For the remaining problems, GEA D outperformed the dynamic penalty method, except that a few statistics were slightly worse: the worst of G6, the mean and worst of G9, and the median of G13.

For problem G3, the best result of the dynamic penalty method is −0.583, while the best of GEA D reaches the optimal value −1.000. For problems G4 and G6, GEA D performed significantly better in terms of all four criteria: best, median, worst and mean. It should also be noted that for problem G5, GEA D found a feasible solution in 7 out of 30 runs, while the dynamic penalty method failed to find any. In summary, the numerical experiments on the benchmark problems suggest that GEA provides an efficient structure for improving the performance of the dynamic penalty method for constrained optimization problems.

4. CONCLUSION AND FUTURE RESEARCH DIRECTIONS

In this paper, a new constraint handling technique, the grouping-based evolutionary algorithm (GEA), has been proposed. This method divides the population into a feasible group and an infeasible group according to the feasibility of the individuals, and the evaluation and ranking of the two groups are performed separately and in parallel. Parent selection from the two groups is tuned by one parameter, Sp, which determines the numbers of feasible and infeasible parents for reproduction. In addition, one existing evolutionary algorithm, the dynamic penalty method, was modified as GEA D to evaluate and rank the infeasible group. GEA D was tested on a set of 13 benchmark problems. Future research includes the tuning of Sp and the extension of GEA to multi-objective constrained optimization problems.

Function | optimal     | best        | median      | worst       | mean        | st. dev. | gm
G1       | −15.000     | −15.000     | −15.000     | −15.000     | −15.000     | 7.9E−005 | 217
G2       | −0.803619   | −0.803587   | −0.785907   | −0.751624   | −0.784868   | 1.5E−002 | 1235
G3       | −1.000      | −0.583      | −0.045      | −0.001      | −0.103      | 1.4E−001 | 996
G4       | −30665.539  | −30365.488  | −30060.607  | −29871.442  | −30072.458  | 1.2E+002 | 4
G5       | 5126.498    | −           | −           | −           | −           | −        | −
G6       | −6961.814   | −6911.247   | −6547.354   | −5868.028   | −6540.012   | 2.6E+002 | 13
G7       | 24.306      | 24.309      | 24.375      | 25.534      | 24.421      | 2.2E−001 | 180
G8       | −0.095825   | −0.095825   | −0.095825   | −0.095825   | −0.095825   | 2.8E−017 | 421
G9       | 680.630     | 680.632     | 680.648     | 680.775     | 680.659     | 3.2E−002 | 1739
G10      | 7049.331    | −           | −           | −           | −           | −        | −
G11      | 0.750       | 0.750       | 0.750       | 0.750       | 0.750       | 9.1E−006 | 61
G12      | −1.000000   | −1.000000   | −0.999818   | −0.999573   | −0.999838   | 1.3E−004 | 68
G13      | 0.053950    | 0.514152    | 0.996674    | 0.998156    | 0.965397    | 9.4E−002 | 2000

Table 1: Experimental results of the dynamic penalty method; "−" means no feasible solutions were found; data from [5].

Function | optimal     | best        | median      | worst       | mean        | st. dev. | gm
G1       | −15.000     | −15.000     | −15.000     | −15.000     | −15.000     | 2.6E−005 | 338
G2       | −0.803619   | −0.803542   | −0.792552   | −0.723099   | −0.786685   | 1.8E−002 | 1296
G3       | −1.000      | −1.000      | −0.064      | −0.009      | −0.134      | 1.9E−001 | 1998
G4       | −30665.539  | −30665.314  | −30618.671  | −30348.470  | −30569.466  | 1.0E+002 | 83
G5(7)    | 5126.498    | 5289.147    | 5504.010    | 5714.789    | 5509.922    | 1.5E+000 | 29
G6       | −6961.814   | −6955.392   | −6748.084   | −5282.978   | −6578.087   | 4.0E+002 | 13
G7       | 24.306      | 24.308      | 24.347      | 24.503      | 24.364      | 4.8E−002 | 310
G8       | −0.095825   | −0.095825   | −0.095825   | −0.095825   | −0.095825   | 0.0E+000 | 358
G9       | 680.630     | 680.631     | 680.645     | 680.869     | 680.662     | 4.5E−002 | 1867
G10      | 7049.331    | −           | −           | −           | −           | −        | −
G11      | 0.750       | 0.750       | 0.750       | 0.750       | 0.750       | 3.6E−007 | 527
G12      | −1.000000   | −1.000000   | −1.000000   | −0.999937   | −0.999993   | 1.3E−005 | 229
G13      | 0.053950    | 0.440485    | 0.997443    | 0.998079    | 0.950496    | 1.4E−001 | 2000

Table 2: Experimental results of GEA D (Sp = 5); the number in parentheses after a function name indicates the number of runs in which feasible solutions were found, when it was between 1 and 29 inclusive; "−" means no feasible solutions were found.

5. REFERENCES

[1] Z. Michalewicz and M. Schoenauer, "Evolutionary algorithms for constrained parameter optimization problems," Evol. Comput., vol. 4, no. 1, pp. 1-31, 1996.

[2] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., Springer, 1999, pp. 312-327.

[3] Z. Michalewicz and N. Attia, "Evolutionary optimization of constrained problems," in Proc. of the Third Annual Conference on Evolutionary Programming, World Scientific, pp. 98-108.

[4] M. J. Tahk and B. C. Sun, "Coevolutionary augmented Lagrangian methods for constrained optimization," IEEE Trans. Evol. Comput., vol. 4, no. 2, pp. 114-124, 2000.

[5] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Trans. Evol. Comput., vol. 4, no. 3, pp. 284-294, 2000.

[6] S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evol. Comput., vol. 7, no. 1, pp. 19-44, 1999.

[7] M. Yuchi and J.-H. Kim, "A grouping-based evolutionary algorithm for constrained optimization problem," in Proc. of the 2003 Congress on Evolutionary Computation, IEEE Press, 2003, pp. 1507-1512.

[8] R. Hinterding and Z. Michalewicz, "Your brains and my beauty: parent matching for constrained optimisation," in Proc. of the 5th IEEE Conference on Evolutionary Computation, IEEE Press, 1998, pp. 810-815.

[9] J. Joines and C. Houck, "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs," in Proc. of the First IEEE International Conference on Evolutionary Computation, Z. Michalewicz, J. D. Schaffer, H.-P. Schwefel, D. B. Fogel, and H. Kitano, Eds., IEEE Press, 1994, pp. 579-584.

[10] S. B. Hamida and M. Schoenauer, "ASCHEA: new results using adaptive segregational constraint handling," in Proc. of the 2002 Congress on Evolutionary Computation (CEC '02), vol. 1, pp. 884-889.

[11] S. B. Hamida and A. Petrowski, "The need for improving the exploration operators for constrained optimization problems," in Proc. of the 2000 Congress on Evolutionary Computation, vol. 2, pp. 1176-1183.

[12] C. A. C. Coello, "Constraint-handling using an evolutionary multiobjective optimization technique," Civil Engineering and Environmental Systems, vol. 17, pp. 319-346, 2000.

[13] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies - a comprehensive introduction," Natural Computing, vol. 1, pp. 3-52, 2002.

[14] D. B. Fogel, "An introduction to simulated evolutionary optimization," IEEE Trans. Neural Networks, vol. 5, pp. 3-14, Jan. 1994.

[15] T. Bäck and H.-P. Schwefel, "An overview of evolutionary algorithms for parameter optimization," Evol. Comput., vol. 1, no. 1, pp. 1-23, 1993.

[16] J.-H. Kim and H. Myung, "Evolutionary programming techniques for constrained optimization problems," IEEE Trans. Evol. Comput., vol. 1, no. 2, pp. 129-140, 1997.

[17] M. Yuchi and J.-H. Kim, "Grouping-based evolutionary algorithm: seeking balance between feasible and infeasible individuals of constrained optimization problems," in Proc. of the 2004 Congress on Evolutionary Computation, IEEE Press, 2004, vol. 1, pp. 280-287.