Evolutionary exploration of search spaces

A.E. Eiben
[email protected]
Leiden University, Department of Computer Science

Abstract. Exploration and exploitation are the two cornerstones of problem solving by search. Evolutionary Algorithms (EAs) are search algorithms that explore the search space by means of the genetic search operators, while exploitation is done by selection. During the history of EAs different operators have emerged, mimicking asexual and sexual reproduction in Nature. Here we give an overview of the variety of these operators, review results discussing the (dis)advantages of asexual and sexual mechanisms and touch on a new phenomenon: multi-parent reproduction.

1 Introduction

Generate-and-test search algorithms obtain their power from two sources: exploration and exploitation. Exploration means the discovery of new regions in the search space; exploitation amounts to using collected information in order to direct further search to promising regions. Evolutionary Algorithms (EAs) are stochastic generate-and-test search algorithms having a number of particular properties. The standard pseudo-code for an EA is given in Figure 1, after [18]. There are different types of EAs; the most common classification distinguishes Genetic Algorithms (GA), Evolution Strategies (ES) and Evolutionary Programming (EP), [3]. Genetic Programming (GP) has grown out of GAs and can be seen as a sub-class of them. Besides the different historical roots and philosophy there are also technical differences between the three main streams in evolutionary computation. These differences concern the applied representation and the corresponding genetic search operators, the selection mechanism and the role of self-adaptation. Here we will concentrate on the question of operators.

2 Evolutionary search operators

Search operators can be roughly classified by their arity and their type, the latter actually meaning the data type or representation they are defined for. In traditional GAs binary representation is used, that is, candidate solutions (individuals or chromosomes) are bit-strings with a fixed length L. The study of sequencing problems, such as routing and scheduling, has yielded order-based representation, where each individual is a permutation, [16, 32]. Parameter optimization problems with variables over continuous domains have led to real-valued, or floating point, representation.

EVOLUTIONARY ALGORITHM
  T := 0                        // start with an initial time
  INITPOPULATION P(T)           // initialize a usually random population of individuals
  EVALUATE P(T)                 // evaluate fitness of all initial individuals in population
  WHILE NOT DONE DO             // test for termination criterion (time, fitness, etc.)
    T := T + 1                  // increase the time counter
    P'(T) := SELECTPARENTS P(T) // select sub-population for offspring production
    RECOMBINE P'(T)             // recombine the "genes" of selected parents
    MUTATE P'(T)                // perturb the mated population stochastically
    EVALUATE P'(T)              // evaluate its new fitness
    P := SURVIVE P,P'(T)        // select the survivors based on actual fitness
  OD
END EVOLUTIONARY ALGORITHM

Fig. 1. Pseudo-code of an Evolutionary Algorithm
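To make the pseudo-code concrete, the following Python sketch instantiates the loop of Figure 1 for the simple OneMax problem (maximize the number of 1-bits). The operator choices (binary tournament selection, 1-point crossover, bit-flip mutation, generational replacement) and all parameter values are illustrative assumptions, not part of the generic scheme.

import random

L = 30            # chromosome length (illustrative)
POP = 20          # population size (illustrative)
PM = 1.0 / L      # per-bit mutation rate
PC = 0.7          # crossover rate

def evaluate(ind):
    return sum(ind)                              # EVALUATE: fitness = number of ones

def select_parent(pop, fit):
    a, b = random.sample(range(len(pop)), 2)     # SELECTPARENTS: binary tournament
    return pop[a] if fit[a] >= fit[b] else pop[b]

def crossover(p1, p2):
    if random.random() < PC:                     # RECOMBINE with probability pc
        point = random.randint(1, L - 1)         # 1-point crossover
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1[:], p2[:]

def mutate(ind):
    return [1 - b if random.random() < PM else b for b in ind]   # MUTATE: bit flips

population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]  # INITPOPULATION
fitness = [evaluate(ind) for ind in population]
for t in range(1, 101):                          # WHILE NOT DONE: fixed generation budget
    offspring = []
    while len(offspring) < POP:
        c1, c2 = crossover(select_parent(population, fitness),
                           select_parent(population, fitness))
        offspring += [mutate(c1), mutate(c2)]
    population = offspring                       # SURVIVE: generational replacement
    fitness = [evaluate(ind) for ind in population]
print(max(fitness))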

Genetic Programming uses individuals that are tree structures, [24]. Accordingly, the search operators are defined for trees instead of strings. Real-valued numerical optimization is the standard application area of Evolution Strategies [29]; therefore, real-valued representation is the standard in ES. The original Evolutionary Programming scheme was applied to finite state machines, using appropriately defined operators, [13]. More recently, combinatorial and numerical optimization problems have been treated by EP, thus involving real-valued representation in the EP paradigm too. As far as their arity is concerned, the operators used commonly in EAs are either unary or binary. Unary operators are based on analogies of asexual reproduction mechanisms in nature, while binary operators simulate sexual reproduction. The so-called global recombination in ES and the recently introduced multi-parent operators in GAs are n-ary.

2.1 Mutation

The most commonly used unary operator is mutation, which is meant to cause a random, unbiased change in an individual, thus resulting in one new individual. In GAs with bit representation a so-called mutation rate pm is given as an external control parameter. Mutation works by flipping each bit independently with probability pm in the individual it is applied to. The mutation rate is thus a parameter that is used during the execution of mutation in the case of bit representation. The probability that an individual is actually mutated is a derived measure:

    P[individual is mutated] = 1 - (1 - pm)^L

In the case of order-based representation the random perturbation can be caused by swapping or shifting randomly selected genes within an individual. In this case the mutation rate pm = P[individual is mutated] is used before mutating, to decide whether or not the mutation operator will be applied at all. For tree-structured individuals, replacing a sub-tree (possibly one single node) by a random tree is appropriate. The mutation rate in GP is interpreted as the frequency of performing mutation, similarly to order-based GAs. It is remarkable that the general advice concerning the mutation rate in GP is to set pm to zero, [24]. For the real-valued representation as used in ES, EP and lately in GAs, the basic unit of inheritance, i.e. one gene, is a floating point number x_i rather than a bit. In ES and EP a separate mutation mechanism is used for each gene, i.e. object variable. Mutating a certain value x_i means perturbing that value with a random number drawn from a Gaussian distribution N(0, σ_i) by

    x'_i = x_i + σ_i · N(0, 1).

That is, σ_i is the standard deviation, also called the mean step size, concerning x_i. It is essential that the σ_i's are not static external control parameters but are part of the individuals and undergo evolution. In particular, an individual contains not only the object variables x_i, but also the strategy parameters σ_i, resulting in 2·n genes for n object variables. (For this discussion we disregard the incorporation and adaptation of covariances.) In ES the mean step sizes are modified log-normally:

    σ'_i = σ_i · exp(τ' · N(0, 1) + τ · N_i(0, 1)),

where τ' · N(0, 1) and τ · N_i(0, 1) provide for global and individual control of the step sizes, respectively, [3]. In Evolutionary Programming the so-called meta-EP scheme is introduced for adapting the mean step sizes. It works by modifying the σ's normally:

    σ'_i = σ_i + α · σ_i · N(0, 1),

where α is a scaling constant, [13]. Summarizing, the main differences between the GA and ES/EP mutation mechanisms are that

- in a GA every gene (bit) of every individual is mutated with the same mutation rate pm, while ES/EP mutation is based on a different step size for every individual and every gene (object variable);
- pm is constant during the evolution in a GA, while the step sizes undergo adaptation in ES/EP.
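This contrast can be illustrated with a small Python sketch of the two mechanisms. The default settings for τ and τ' follow commonly recommended values of the form 1/sqrt(2·sqrt(n)) and 1/sqrt(2·n); these defaults are assumptions for illustration rather than part of the definitions above.

import math
import random

def ga_bitflip_mutation(bits, pm):
    """GA mutation: flip every bit independently with the same rate pm."""
    return [1 - b if random.random() < pm else b for b in bits]

def es_selfadaptive_mutation(x, sigma, tau=None, tau_prime=None):
    """ES mutation: log-normal update of the per-gene step sizes sigma,
    then Gaussian perturbation of each object variable."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = tau_prime if tau_prime is not None else 1.0 / math.sqrt(2.0 * n)
    global_step = random.gauss(0.0, 1.0)          # one draw shared by all genes
    new_sigma = [s * math.exp(tau_prime * global_step + tau * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma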

Let us note that in [12] exponentially decreasing mutation rates were successfully applied in a GA. Cross-fertilization of ideas from ES and GAs has led to using individual mutation rates that undergo self-adaptation in a GA with binary representation, [2]. The basic idea is to extend each individual with extra bits that represent the individual's own mutation rate. Mutation then happens by first decoding these extra bits to the individual's own mutation rate p, mutating the extra bits with probability p, again decoding the (now mutated) extra bits to obtain the mutation rate p', which is finally applied to the 'normal' bits of this individual. Recently, floating point representation is gaining recognition in GAs for numerical function optimization. Mutating a gene can then happen by uniform mutation, replacing x_i by a number x'_i drawn from its domain by a uniform distribution, or by non-uniform mutation, where

    x'_i = x_i + Δ(t, UB_i - x_i)   if a random digit is 0
    x'_i = x_i - Δ(t, x_i - LB_i)   if a random digit is 1

UB_i and LB_i being the upper and lower bound of the domain, respectively, and t the generation number (time counter), [25]. The function Δ(t, y) returns a value from [0, y] getting closer to 0 as t increases. This mechanism yields a dynamically changing mutation step size. This resembles ES and EP, but differs from them in an important aspect: while in ES and EP step sizes are adapted by the evolutionary process itself, here the changes follow a previously determined schedule. An interesting GA using bit representation combined with Gaussian mutation is presented in [20, 21]. The individuals are binary coded and undergo crossover acting on bits as usual. However, mutation is applied to the decoded values of the object variables x_i and not to their binary codes. Before mutation, random numbers drawn from a Poisson distribution determine the object variables that are to be mutated. Mutation itself works by decoding that part of the binary code that represents a selected variable x_i to a real value and perturbing it by Gaussian noise from N(0, σ_i), where σ_i is 0.1 times the maximum value of that gene. In the self-adapting version of this mechanism an extra gene, coded by a binary sequence, is added to the (front of the) individual. During mutation the extra bits coding the extra gene are decoded to a value σ, this value σ is perturbed by an N(0, δ) Gaussian noise (δ is a control parameter), resulting in σ', and finally the object variables are mutated according to an N(0, σ') distribution.
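A sketch of the non-uniform mutation described above, in Python. The particular shape of Δ(t, y) used here, y · (1 - r^((1 - t/T)^b)) with a uniform random r in [0, 1] and a shape parameter b, is one common choice and is assumed purely for illustration.

import random

def delta(t, y, T, b=5.0):
    """One common form of the decreasing step function Delta(t, y):
    returns a value in [0, y] that tends to 0 as t approaches T."""
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / float(T)) ** b))

def nonuniform_mutation(x, t, T, lower, upper):
    """Mutate each real-valued gene towards its upper or its lower bound,
    with a step size that shrinks as the generation counter t grows."""
    child = []
    for xi, lb, ub in zip(x, lower, upper):
        if random.random() < 0.5:                # "random digit is 0"
            child.append(xi + delta(t, ub - xi, T))
        else:                                     # "random digit is 1"
            child.append(xi - delta(t, xi - lb, T))
    return child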

2.2 Crossover

As for sexual reproduction, the binary operator in GAs is called crossover; it creates two children from two parents. The crossover rate pc prescribes the probability that a selected pair of would-be parents will actually mate, i.e. that crossover will be applied to them. It is thus used before crossing over. The interpretation of pm and pc is therefore different in GAs with bit representation. There are two basic crossover mechanisms: n-point crossover and uniform crossover.

The n-point crossover cuts the two parents of length L into n + 1 segments along the randomly chosen crossover points (the same points in both parents) and creates child1 by gluing the odd segments from the father and the even segments from the mother. In child2 odd segments come from the mother and even segments come from the father. In the majority of GA applications 1-point crossover or 2-point crossover is used; higher n's are applied seldom. Uniform crossover decides per bit whether to insert the bit value from the father or from the mother in child1; child2 is created by taking the opposite decisions. Order-based representation requires that each individual is a permutation. Applying n-point or uniform crossover to permutations may create children containing multiple occurrences of genes, i.e. children of permutations will not necessarily be permutations. This fact has led to the design of special order-based crossovers that do preserve the property of 'being a permutation', [16, 32]. One example is the order crossover OX, which resembles 2-point crossover. OX cuts the parents into three segments along two randomly chosen crossover points. Child1 inherits the middle segment from parent1 without modification. The remaining positions are filled with genes from parent2 in the order in which they appear, beginning with the first position after the second crossover point and skipping genes already present in child1. Child2 is created in the same way with the roles of the parents exchanged. Clearly, tree-based representation in GP requires special crossover operators too. The standard GP crossover works by randomly choosing one node in parent1 and one in parent2 independently. These nodes define a subtree in each parent; crossover exchanges these subtrees by cutting each subtree from one parent and inserting it at the crossover node of the other parent.

In Evolution Strategies there are different sexual recombination mechanisms, all creating one child from a number of parents. The usual form of recombination, [3], produces one new individual x' from two parents x and y by

    x'_i = x_i or y_i               (discrete recombination)
    x'_i = x_i + χ · (y_i - x_i)    (intermediate recombination)

Discrete recombination is pretty much like uniform crossover creating only one child. Intermediate recombination (where χ ∈ [0, 1] is a uniform random variable) makes a child that is the weighted average of the two parents. Just like mutation, recombination is applied to the object variables (the x_i's) as well as to the strategy parameters (the σ_i's). Empirically, discrete recombination on object variables and intermediate recombination on strategy parameters gives the best results.

Similar operators have also been introduced for GAs with real-valued representation applied to convex search spaces [25]. These operators create two children from two parents. The so-called simple crossover chooses a random crossover point k and crosses the parents after the k-th position, applying a contraction coefficient a to guarantee that the offspring fall inside the convex search space. In particular, if x and y are the two parents then child1 is

    ⟨x_1, ..., x_k, a · y_{k+1} + (1 - a) · x_{k+1}, ..., a · y_n + (1 - a) · x_n⟩

and child2 is obtained by interchanging x and y. Single arithmetical crossover affects only one gene in each parent. The children of x and y obtained by this mechanism are

    ⟨x_1, ..., a · y_k + (1 - a) · x_k, ..., x_n⟩   and   ⟨y_1, ..., a · x_k + (1 - a) · y_k, ..., y_n⟩,

where a is a dynamic parameter, drawn randomly from a domain calculated for each pair of parents in such a way that the children are in the convex space. Whole arithmetical crossover is somewhat similar to intermediate recombination in ES in that it creates the linear combinations a · x + (1 - a) · y and a · y + (1 - a) · x as children of x and y. Here again, the choice of a has to guarantee that the children are in the convex space.

A very particular feature within the EP paradigm is the total absence of sexual reproduction mechanisms. Mutation is the only search operator, and 'recombination in evolutionary programming is a nonissue because each solution is typically viewed as the analog of a species, and there is no sexual communication between species', cf. [13] p. 103. Indeed, this view justifies the omission of recombination; however, the view is itself justified only by the human operator's freedom to take any arbitrary perspective. Nevertheless, the fact that apparently all higher species apply sexual recombination indicates that it does have a competitive advantage in natural evolution. This ought to make one cautious about rejecting it a priori.
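To illustrate two of the operators described above, here is a small Python sketch of the order crossover OX and of whole arithmetical crossover. The cut points are drawn inside the functions for convenience, the parameter a is left to the caller, and no check for convexity of the search space is included; both are simplifications for illustration.

import random

def order_crossover(p1, p2):
    """Order crossover (OX): the child keeps the middle segment of p1 and
    fills the remaining positions with the genes of p2, scanned from the
    position after the second cut point and skipping duplicates."""
    n = len(p1)
    c1, c2 = sorted(random.sample(range(n + 1), 2))   # two cut points
    child = [None] * n
    child[c1:c2] = p1[c1:c2]                          # middle segment from p1
    taken = set(child[c1:c2])
    scan = [p2[(c2 + i) % n] for i in range(n)]       # p2, starting after the 2nd cut
    fill = [g for g in scan if g not in taken]
    positions = [(c2 + i) % n for i in range(n) if child[(c2 + i) % n] is None]
    for pos, gene in zip(positions, fill):
        child[pos] = gene
    return child

def whole_arithmetic_crossover(x, y, a):
    """Whole arithmetical crossover: the two children are the linear
    combinations a*x + (1-a)*y and a*y + (1-a)*x."""
    child1 = [a * xi + (1 - a) * yi for xi, yi in zip(x, y)]
    child2 = [a * yi + (1 - a) * xi for xi, yi in zip(x, y)]
    return child1, child2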

3 Sex or no sex: importance of crossover and mutation

The traditional views of the EA streams on genetic operators are summarized in Table 1, after [3].

                 ES                              EP              GA
  mutation       main operator                   only operator   background operator
  recombination  important for self-adaptation   none            main operator

Table 1. Role of mutation and recombination in different EA paradigms

Triggered by the provoking success of EP without recombination, as well as by 'introspective' motives in GA and ES, there is more and more research devoted to the usefulness of mutation and recombination. Although neither GAs nor EP were originally developed as function optimizers, [5, 13], such investigations are usually performed on function optimization problems. Classical GA investigations tried to obtain a robust parameter setting concerning the mutation rate and the crossover rate, in combination with the pool size [4, 17, 27]. The generally acknowledged good heuristic values for the mutation rate and the crossover rate are 1/(chromosome length) and 0.7-0.8, respectively. Recent studies, for instance [2, 12, 19, 23], provide theoretical and experimental evidence that decreasing the mutation rate along the evolution is optimal. This suggests that mutation is important at the beginning of the search but becomes 'risky' at the end. Meanwhile, the GA 'public opinion' lately acknowledges that mutation has a more important role than merely (re)introducing absent or lost genes into the population. As far as recombination is concerned, the main area of interest is traditionally the crossover's ability to combine and/or disrupt pieces of information, called schemata. Formally, a schema can be seen as a partial instantiation of the variables, i.e. a string over the ternary alphabet {0, 1, #}, where # means undefined or don't care. Investigations in, for instance, [6, 10, 30, 31] accumulated substantial knowledge on the disruptiveness of n-point and uniform crossover. The most important observation from [6] concerns the relationship between disruptiveness and the power to recombine schemata: '... if one operator is better than another for survival, [i.e. less disruptive] it is worse for recombination (and vice versa). This ... suggests very strongly that one cannot increase the recombination power without a corresponding increase in disruption.' Another important feature of crossover operators is their exploration power, that is, the variety of different regions in the search space that can be sampled; in other words, the number of different chromosomes that can be created by applying the operator. Clearly, 'exploration in the form of new recombinations comes at the cost of disruption', [10]. Recent research has thus shown that disruption is not necessarily disadvantageous. Nevertheless, no general conclusions on the relationship between the disruptiveness of genetic operators and the quality of the GA they are applied in could be established so far. Most probably this relation is (to some extent) problem dependent.

Besides comparisons of crossover operators, the relative importance, i.e. search power, of sexual recombination and asexual, unary operators is investigated. The experiments reported in [28] show that mutation and selection are more powerful than previously believed. Nevertheless, it is observed that a GA with highly disruptive crossover outperforms a GA with mutation alone on problems with a low level of interaction between genes. It is also remarked that the power of an operator is strongly related to the selection mechanism. In [11] 'crossover's niche' is sought, i.e. problems where pair-wise mating has competitive advantages. This niche turns out to be non-empty; at the same time the authors suspect that sexual recombination in GAs might be less powerful than generally believed. The usefulness of recombination is investigated in [22] on NK landscapes, which allow gradual tuning of the ruggedness of the landscape. The conclusion is that 'recombination is useless on uncorrelated landscapes, but useful when high peaks are near one another and hence carry mutual information about their joint locations in the fitness landscape'. The view presented in [30] relativizes the mutation-or-crossover battle by noting that the two operators serve different purposes: 'Mutation serves to create random diversity in the population, while crossover serves as an accelerator that promotes emergent behavior from components.' Additionally, crossover is useful for maximizing the accumulated payoff, while it can be harmful if optimality is sought. Based on a comparison of EP and GA in [14] it is argued that crossover does not have any competitive advantage over mutation. Critiques of this investigation, for instance in [28], initiated more extensive comparisons of GAs and EP on function optimization problems. The results in [15] show a clear advantage of EP and make the authors conclude that 'no consistent advantage accrues from representing real-valued parameters as binary strings and allowing crossover to do within-parameter search.' Summarizing, at the moment much effort is being spent on establishing the (dis)advantages of sexual and asexual reproduction in simulated evolution. So far there is no generally valid judgement, and it is possible that preferences for either type of reproduction are context and goal dependent.
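To make the schema notation used in the discussion above concrete, here is a minimal illustrative helper that tests whether a bit string is an instance of a schema over the alphabet {0, 1, #}; the function name and example strings are chosen only for illustration.

def matches(schema, bits):
    """True if the bit string is an instance of the schema,
    where '#' marks a don't-care position."""
    return len(schema) == len(bits) and all(
        s == '#' or s == b for s, b in zip(schema, bits))

# For example, the schema '1#0#' is matched by '1000', '1001', '1100' and '1101'.
print(matches('1#0#', '1101'))   # True
print(matches('1#0#', '1110'))   # False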

4 More sex: n-ary operators

N-ary recombination operators were first introduced in ES. The global form of recombination produces one new individual x' by

    x'_i = x_i or y_i                 (global discrete recombination)
    x'_i = x_i + χ_i · (y_i - x_i)    (global intermediate recombination)

where x and y are two parent individuals selected at random from the whole population anew for each i, and drawing χ_i ∈ [0, 1] also happens for every i independently, [3]. Thus, although for each component x_i only two mating partners are consulted, the resampling mechanism has the effect that the child may inherit genes from more than two parents. This results in a higher mixing of the genetic information than in the 2-parent versions, [1].

In GAs there are three general multi-parent crossover mechanisms. Scanning crossover and diagonal crossover were introduced in [7], where the performance of scanning was studied. In [9] diagonal crossover was investigated and compared to scanning and the classical 2-parent n-point crossover. An extensive overview of experiments is given in [8]. Scanning crossover generalizes uniform crossover, although creating only one child, by choosing one of the i-th genes of the n parents to be the i-th gene of the child. Formally,

    x_i = C(x_i^1, ..., x_i^n)

where C is a choice mechanism choosing a value C(u_1, ..., u_n) from u_1, ..., u_n. The choice can be random, based on a uniform distribution (uniform scanning), or biased by the fitness of the parents (fitness based scanning). It can also be deterministic, based on the number of occurrences of the genes (occurrence based scanning). Diagonal crossover generalizes n-point crossover by selecting (n - 1) crossover points and composing n children by taking the resulting n chromosome segments from the parents 'along the diagonals'. Figure 2 illustrates this idea.

Fig. 2. Diagonal crossover with three parents

The Gene Pool Recombination (GPR) in GAs was introduced in [26]. It was further studied and extended with a fuzzy mechanism in [33].

In GPR, first a set of parents has to be selected; these form the so-called gene pool. Then the two parent alleles of an offspring are chosen randomly for each position, with replacement, from the gene pool (as opposed to ES, where they are chosen from the whole population). The offspring's allele is computed using any of the standard recombination schemes that work on two parents. GPR allows for a theoretical analysis of GA convergence for infinite populations with binomial fitness distribution. As for the performance of multi-parent recombination, at the moment there are not many accessible results within ES. In [29] p. 146 the following is stated: 'A crude test yielded only a slight further increase in the rate of progress in changing from the bisexual to the multisexual scheme, whereas appreciable acceleration was achieved by introducing the bisexual in place of the asexual scheme, which allowed no recombination.' Scanning crossover and diagonal crossover show increased performance when using more parents on several numerical functions [7, 9, 8], and ongoing research shows the same on NK landscapes. GPR was reported to converge 25% faster than two-parent recombination (TPR) on the ONEMAX problem [26], and the fuzzified version of GPR proved to outperform fuzzified TPR in speed as well as in realized heritability on the spherical function [33].
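As an illustration, here is a minimal Python sketch of uniform scanning crossover and diagonal crossover as described above. All parents are assumed to be lists of equal length, and the chromosome must be at least as long as the number of parents so that (n - 1) distinct crossover points exist; these are simplifying assumptions for the sketch.

import random

def uniform_scanning(parents):
    """Uniform scanning crossover: one child, each gene chosen uniformly
    at random from the corresponding genes of the n parents."""
    return [random.choice(genes) for genes in zip(*parents)]

def diagonal_crossover(parents):
    """Diagonal crossover: with n parents, choose n-1 crossover points and
    build n children from the resulting segments 'along the diagonals'."""
    n = len(parents)
    length = len(parents[0])
    points = sorted(random.sample(range(1, length), n - 1))
    cuts = [0] + points + [length]
    children = []
    for c in range(n):
        child = []
        for s in range(n):                        # segment index
            donor = parents[(c + s) % n]          # shift the donor parent per segment
            child.extend(donor[cuts[s]:cuts[s + 1]])
        children.append(child)
    return children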

5 Conclusions

Evolutionary Algorithms perform search by creating genetic diversity by means of genetic operators and reducing this diversity by selection. There exists a great variety of genetic operators, of which we gave an overview in this paper. Forced by space constraints, this overview was necessarily limited; there are numerous other operators developed for special purposes. Grouping genetic operators by arity, we discussed asexual reproduction (mutation) and sexual recombination between two parents. The importance of sex in Evolution, more technically the relative power of unary and binary operators, is a hot issue in recent EA research. We briefly reviewed some of the related results, noting that at the moment none of them can be given a clear preference. At the end we touched on a new phenomenon: sexual recombination between more than two parents. (This is multi-parent, rather than multi-sexual, recombination, as there are no genders in EAs.) The first results on multi-parent operators in GAs are very promising, showing that they lead to enhanced search. In other words, sexual recombination is more powerful than previously believed; it just takes more sex, i.e. more parents. Although a lot of work is still to be done, this might give new impetus to the sex-or-no-sex debate.

References

1. T. Bäck, F. Hoffmeister, and H.-P. Schwefel. A survey of evolution strategies. In Fourth International Conference on Genetic Algorithms, pages 2-9. Morgan Kaufmann, 1991.
2. T. Bäck. The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm. In Parallel Problem Solving from Nature - 2, pages 85-94. North-Holland, 1992.
3. T. Bäck and H.-P. Schwefel. An overview of evolutionary algorithms for parameter optimization. Journal of Evolutionary Computation, 1:1-23, 1993.
4. K.A. De Jong. An analysis of the behavior of a class of genetic adaptive systems. Doctoral dissertation, University of Michigan, 1975.
5. K.A. De Jong. Are genetic algorithms function optimizers? In R. Männer and B. Manderick, editors, Parallel Problem Solving from Nature - 2, pages 3-13. North-Holland, 1992.
6. K.A. De Jong and W.M. Spears. A formal analysis of the role of multi-point crossover in genetic algorithms. Annals of Mathematics and Artificial Intelligence, 5:1-26, 1992.
7. A.E. Eiben, P-E. Raue, and Zs. Ruttkay. Genetic algorithms with multi-parent recombination. In Parallel Problem Solving from Nature - 3, LNCS 866, pages 78-87. Springer-Verlag, 1994.
8. A.E. Eiben and C.H.M. van Kemenade. Performance of multi-parent crossover operators on numerical function optimization problems. Technical Report TR-9533, Leiden University, 1995.
9. A.E. Eiben, C.H.M. van Kemenade, and J.N. Kok. Orgy in the computer: Multi-parent reproduction in genetic algorithms. In Third European Conference on Artificial Life, LNAI 929, pages 934-945. Springer-Verlag, 1995.
10. L.J. Eshelman, R.A. Caruana, and J.D. Schaffer. Biases in the crossover landscape. In Third International Conference on Genetic Algorithms, pages 10-19, 1989.
11. L.J. Eshelman and J.D. Schaffer. Crossover's niche. In Fifth International Conference on Genetic Algorithms, pages 9-14, 1993.
12. T. Fogarty. Varying the probability of mutation in the genetic algorithm. In Third International Conference on Genetic Algorithms, pages 104-109, 1989.
13. D.B. Fogel. Evolutionary Computation. IEEE Press, 1995.
14. D.B. Fogel and J.W. Atmar. Comparing genetic operators with Gaussian mutations in simulated evolutionary processes using linear systems. Biological Cybernetics, 63:111-114, 1990.
15. D.B. Fogel and L.C. Stayton. On the effectiveness of crossover in simulated evolutionary optimization. BioSystems, 32(3):171-182, 1994.
16. B.R. Fox and M.B. McMahon. Genetic operators for sequencing problems. In Foundations of Genetic Algorithms - 1, pages 284-300, 1991.
17. D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
18. J. Heitkötter and D. Beasley. The Hitch-Hiker's Guide to Evolutionary Computation. FAQ for comp.ai.genetic, 1996.
19. Hesser and Männer. Towards an optimal mutation probability for genetic algorithms. In Parallel Problem Solving from Nature - 1, LNCS 496, pages 23-32. Springer-Verlag, 1990.
20. R. Hinterding. Gaussian mutation and self-adaptation for numeric genetic algorithms. In Second IEEE Conference on Evolutionary Computation, pages 384-389, 1995.
21. R. Hinterding, H. Gielewski, and T.C. Peachey. The nature of mutation in genetic algorithms. In Sixth International Conference on Genetic Algorithms, pages 65-72, 1995.
22. W. Hordijk and B. Manderick. The usefulness of recombination. In Third European Conference on Artificial Life, LNAI 929, pages 908-919. Springer-Verlag, 1995.
23. B.A. Julstrom. What have you done for me lately? Adapting operator probabilities in a steady-state genetic algorithm. In Sixth International Conference on Genetic Algorithms, pages 81-87, 1995.
24. J.R. Koza. Genetic Programming. MIT Press, 1992.
25. Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, second edition, 1994.
26. H. Mühlenbein and H.-M. Voigt. Gene pool recombination in genetic algorithms. In Proc. of the Metaheuristics Conference. Kluwer Academic Publishers, 1995.
27. J.D. Schaffer, R.A. Caruana, L.J. Eshelman, and R. Das. A study of control parameters affecting the online performance of genetic algorithms. In Third International Conference on Genetic Algorithms, pages 51-60, 1989.
28. J.D. Schaffer and L.J. Eshelman. On crossover as an evolutionarily viable strategy. In Fourth International Conference on Genetic Algorithms, pages 61-68, 1991.
29. H.-P. Schwefel. Evolution and Optimum Seeking. Sixth-Generation Computer Technology Series. Wiley, New York, 1995.
30. W.M. Spears. Crossover or mutation? In Foundations of Genetic Algorithms - 2, pages 221-238, 1993.
31. G. Syswerda. Uniform crossover in genetic algorithms. In Third International Conference on Genetic Algorithms, pages 2-9, 1989.
32. T. Starkweather, S. McDaniel, K. Mathias, D. Whitley, and C. Whitley. A comparison of genetic sequencing operators. In Fourth International Conference on Genetic Algorithms, pages 69-76, 1991.
33. H.-M. Voigt and H. Mühlenbein. Gene pool recombination and utilization of covariances for the Breeder Genetic Algorithm. In Second IEEE Conference on Evolutionary Computation, pages 172-177, 1995.
