
Electromagnetic device optimization by hybrid evolution strategy approaches

Leandro dos Santos Coelho


Automation and Systems Laboratory, Pontifical Catholic University of Paraná, Paraná, Brazil, and

Piergiorgio Alotto
Dipartimento di Ingegneria Elettrica, Università di Padova, Padova, Italy

Abstract

Purpose – This paper aims to show, on a widely used benchmark problem, that chaotic sequences can improve the search ability of evolution strategies (ES).

Design/methodology/approach – The Lozi map is used to generate new individuals in the framework of ES algorithms. A quasi-Newton (QN) method is also used within the iterative loop to improve the solution's quality locally.

Findings – It is shown that the combined use of chaotic sequences and QN methods can provide high-quality solutions with small standard deviation on the selected benchmark problem.

Research limitations/implications – Although the benchmark is considered to be representative of typical electromagnetic problems, different test cases may give less satisfactory results.

Practical implications – The proposed approach appears to be an efficient general-purpose optimizer for electromagnetic design problems.

Originality/value – This paper introduces the use of chaotic sequences in the area of electromagnetic design optimization.

Keywords: Electromagnetic fields, Optimization techniques, Newton method, Evolution, Chaos theory

Paper type: Research paper

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 26 No. 2, 2007, pp. 269-279. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/03321640710727638

1. Introduction

Conventional numerical optimization methods can remain trapped in local optima due to their inherent hill-climbing features, and several problems arise when trying to apply analytical methods to constrained optimization problems where the objective function is not differentiable, e.g. when it is extracted from a numerical simulation of a device. Furthermore, the objective functions handled by these numerical methods must not be too "ill-behaved."

Evolutionary algorithms (EAs) are one of the possible techniques to solve such complex problems. EAs are computer-based problem-solving systems built on some principles of evolution theory. Interest in EAs has been growing very fast due to their robust and powerful adaptive search mechanisms, and they have been used successfully in many engineering areas for solving difficult multidimensional and multimodal problems. A variety of EAs have been proposed, such as genetic algorithms, evolution strategies (ES), evolutionary programming, and genetic programming. They share a common conceptual base of simulating the evolution of individual structures via selection and reproduction procedures. The basic idea is to maintain a population of candidate solutions that evolves under selective pressure favoring better solutions. EAs usually do not require deep mathematical knowledge of the underlying problem and do not guarantee the optimal solution in finite time. However, they are useful for large-scale optimization problems, dealing efficiently with huge and irregular search spaces. Summarizing, EAs are robust and offer exceptional adaptive capabilities to handle nonlinear, high-dimensional and complex engineering problems.

In the last decade, EAs also emerged as a powerful method to achieve goals in optimization problems of electromagnetic devices (Magele et al., 1993; Alotto et al., 1996b, 1998; Mohammed and Üler, 1997; Vasconcelos et al., 1997; Pahner and Hameyer, 2000; Sareni et al., 2000; Farina and Sykulski, 2001). The literature contains references to several optimization algorithms based on artificial intelligence and nonlinear optimization approaches, mainly evolutionary ones, for solving TEAM problem 22. Examples are neuro-fuzzy methods (Rashid et al., 2001), particle swarm optimization (Baumgartner et al., 2004), neural networks (Ebner et al., 1998), ellipsoidal optimization (Saldanha et al., 1999), simulated annealing (Alotto et al., 1996a), ES (Magele et al., 1993), and genetic algorithms (Vasconcelos et al., 2001).

There are many variants of EAs, but the main differences lie in how individuals are represented, in the genetic operators that modify individuals (especially mutation and crossover) and in the selection procedure. ES, a variant of EAs, were first applied by students at the Technical University of Berlin (Rechenberg, 1965; Schwefel, 1965; Rechenberg, 1973): they operate directly on floating-point vectors, while classical genetic algorithms operate on binary strings. Genetic algorithms rely mainly on recombination to explore the search space, while ES use mutation as the main operator.
However, some variants of ES also utilize recombination as a search operator. ES have been found not only efficient in solving a wide variety of optimization problems (Bäck, 1996; Bäck et al., 1997; Oduguwa et al., 2005), but also have a strong theoretical background (Beyer and Schwefel, 2002; Beyer, 1996; Schwefel, 1995; Ostermeier et al., 1995). An advantage of using ES is that the objective function does not have to be differentiable. However, ES suffer from a certain inefficiency, characterized by slow convergence and a lack of accuracy when a high-quality solution is sought. In contrast, deterministic search methods focus on obtaining an adequate local solution. The dichotomy between global and local searches is a recurring theme in computational models of evolution and biology. In computational contexts, the hybridization of global and local search is known to produce more efficient optimization algorithms. This paper contributes by presenting a hybrid approach of ES combined with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton (QN) method for local search and with chaotic sequence generation to increase the diversity of the ES population. Chaos describes the complex behavior of a nonlinear deterministic system (Strogatz, 2000; Kantz and Schreiber, 1997; Peitgen et al., 2004). The application of chaotic sequences (Wang and He, 2003; Fujita et al., 1997; Zou et al., 2004) instead of random sequences in ES is a powerful strategy to diversify the population and improve convergence, preventing premature convergence to local minima. Simulation results demonstrate the excellent performance of the hybrid ES compared with the classical ES.

The performance of the algorithm is tested on TEAM workshop benchmark problem 22.

2. Evolution strategies

2.1 Classical evolution strategies

Rechenberg (1973) developed a theory of convergence speed for the so-called (1+1)-ES, a simple mutation-selection mechanism in which one individual creates one offspring per generation by means of Gaussian mutation, and he proposed a theoretically confirmed rule for changing the standard deviation of mutations exogenously (the 1/5-success rule) (Bäck, 1996). He also proposed the first multimembered ES, a (μ+1)-ES in which μ ≥ 1 individuals recombine to form one offspring, which eventually replaces the worst parent individual (Bäck and Schwefel, 1993), in analogy with the simplex method of Nelder and Mead (1965). Schwefel (1965) introduced recombination and populations with more than one individual, and provided a comparative study of ES with traditional optimization techniques. The motivation to extend the (1+1)-ES and (μ+1)-ES to the (μ+λ)-ES and the (μ, λ)-ES has two aspects of essential importance: the use of parallel computers, and enabling self-adaptation of strategy parameters such as the standard deviations of the mutations. The nomenclature (μ+λ)-ES indicates that μ parents produce λ offspring and the whole population is reduced again to the μ parents in the next generation; in other words, selection operates on the joint set of parents and offspring. Thus, parents survive until they are superseded by better offspring. In the (μ, λ)-ES only the offspring undergo selection, whereas the ancestors are forgotten. If μ > 1, the principle of recombination can be introduced (Davidor and Schwefel, 1992; Bäck and Schwefel, 1993), giving rise to the so-called (μ/ρ, λ)-ES. In contrast to some other types of EAs, ES are based on a real-valued representation of the individuals. This representation allows a precise adjustment of the control parameters to gain a specific adaptation towards the goal function.
The control parameters are part of each individual and are changed during the optimization process. The basic implementation of the (μ, λ)-ES in an N-dimensional parameter space consists in the following steps:

(1) set the generation number, k = 1;

(2) initialize the population P(k) of μ individuals (x_i, σ_i), ∀i ∈ {1, ..., μ}, where the x_i are the object variables (problem solution) and the σ_i are the strategy variables (self-adaptation parameters);

(3) evaluate the fitness value, f(x_i), of each individual of the population;

(4) parents (chosen randomly or according to a roulette-wheel criterion with areas proportional to fitness) create offspring according to:

x'_i(j) = x_i(j) + σ_i(j) · N_j(0, 1),   j = 1, ..., N   (1)

where N_j(0, 1) is a zero-mean random Gaussian number with unitary standard deviation;

(5) evaluate the fitness value, f(x'_i), of each offspring;

(6) sort parents and offspring (μ+λ) or only offspring (μ, λ) in non-descending order according to their fitness values, and apply the (μ+λ) or (μ, λ) selection operator to choose the parents of the next generation;


(7) increment the generation number, k = k + 1;

(8) tune the strategy variables; and

(9) while a stop criterion is not satisfied, return to step (4).

ES have strong analogies with evolutionary programming: they use mutation as the main operator, work directly with floating-point vectors, and allow self-adaptation of strategy parameters through standard deviations and covariances. The selection operator in ES is completely deterministic. Several recombination mechanisms can be used in ES (Bäck, 1996; Bäck et al., 1997), producing a new individual from two randomly selected parent individuals. Furthermore, the recombination operator can be applied to the strategy parameters (standard deviations) as well as to the object variables. In this work, intermediate recombination of the strategy parameters is used. Several self-adaptation strategies have been proposed for ES (Bäck and Schwefel, 1993; Bäck, 1996). The procedure used in classical ES (CES) for the mutation step is a lognormal self-adaptation strategy given by:

σ'_i(j) = σ_i(j) · exp[τ' · N'_j(0, 1) + τ · N_j(0, 1)]   (2)

where N(0, 1) and N'(0, 1) represent vectors of independent random zero-mean Gaussian numbers with unitary standard deviations. The global factor τ' allows for an overall change of the mutability, whereas the local factor τ allows for individual changes of the step sizes. The factors τ and τ' are usually set to 1/√(2√N) and 1/√(2N), respectively, where N is the number of parameters to be optimized (Bäck and Schwefel, 1993).

2.2 Evolution strategies with chaotic sequences

Optimization algorithms based on chaos theory are stochastic search methodologies that differ from the existing EAs: EAs are optimization approaches with bio-inspired concepts from genetics and natural evolution, whereas chaotic optimization approaches are based on ergodicity, stochastic properties and irregularity (Coelho and Mariani, 2006). These approaches are somewhat similar to stochastic optimization algorithms that escape from local minima by accepting bad solutions according to a certain probability (Shengsong et al., 2003). Furthermore, chaotic optimization approaches can escape from local minima more easily than other stochastic optimization algorithms (Li and Jiang, 1998). In the ES context, the concepts of chaotic optimization methods can be useful to improve the global convergence rate and the exploration-versus-exploitation behavior of ES algorithms. To this end, equation (1) is modified by introducing one of the simplest dynamic systems exhibiting chaotic behavior, the Lozi map. The Lozi map (Alligood et al., 1996; Chen et al., 1997) is a non-differentiable time series described by:

y(t) = −P · |y(t − 1)| + Q · y(t − 2) + 1   (3)

where t is the time step and y is the output of the Lozi map. The Lozi map presents a chaotic attractor when P = 1.8 and Q = 0.4; these values are used in this work. Designing strategies to improve the convergence of ES is a challenging task in evolutionary computation.
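As a concrete illustration of equation (3), the following short sketch (a hypothetical helper written for this discussion, not code from the paper, with arbitrarily chosen seed values y0 and y1) iterates the Lozi map with P = 1.8 and Q = 0.4:

```python
def lozi_sequence(n, p=1.8, q=0.4, y0=0.1, y1=0.2):
    """Return n successive values of the Lozi map, equation (3):
    y(t) = -p*|y(t-1)| + q*y(t-2) + 1, chaotic for p=1.8, q=0.4.
    The initial values y0 and y1 are arbitrary small seeds."""
    seq = [y0, y1]
    for _ in range(n):
        seq.append(-p * abs(seq[-1]) + q * seq[-2] + 1.0)
    return seq[2:]  # drop the seeds, keep the generated values

ys = lozi_sequence(100)
# first generated value: -1.8*|0.2| + 0.4*0.1 + 1 = 0.68
```

Unlike the Gaussian numbers N_j(0, 1) of equation (1), consecutive values of such a sequence are fully deterministic yet non-repeating, which is the source of the diversification discussed above.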
This paper proposes a new ES in which equation (1) is combined with a Lozi-map chaotic sequence (LES), giving rise to the following equation:

x'_i(j) = x_i(j) + σ_i(j) · y_j(t)   (4)

where y_j(t) is a chaotic mutation factor (the output of the Lozi map) indexed by the generation t and by the object variable j.

2.3 Combining ES or LES with the quasi-Newton method

QN, or variable metric, methods build an approximation of the inverse of the Hessian using only information on the first derivatives of the error function over a number of steps. The two most commonly used update formulae are the Davidon-Fletcher-Powell (DFP) and the BFGS procedures. The BFGS routine used in this paper is the one provided by the Matlab Optimization Toolbox (fminunc routine). Details of the BFGS procedure are presented in Bazaraa et al. (1979), Fletcher (1987) and Nocedal (1991). The QN method and ES have advantages that complement each other. The proposed combination of CES with QN, or of LES with QN, for local search is a form of sequential hybridization (Preux and Talbi, 1999). Basically, in this combined method, the CES (or LES) is applied to the optimization problem, and the best solution (or another chosen solution) obtained by the CES (or LES) is used as the starting point for the QN method. The rationale behind this approach is that the stochastic optimizer can identify the "valley" in which the global minimum lies (but does not provide fast improvements in "flat" valleys), while the deterministic one is capable of providing rapid improvements once such a valley has been found.

3. Optimization results

TEAM benchmark problem 22 was chosen to demonstrate the performance of the classical and hybrid ES in electromagnetics.

3.1 A brief description of TEAM workshop problem 22

TEAM workshop problem 22 consists in determining the optimal design of a superconducting magnetic energy storage (SMES) device (Alotto et al., 1996b; Magele, 1996) in order to store a significant amount of energy in magnetic fields with a fairly simple and economical coil arrangement which can rather easily be scaled up in size.
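To make the sequential hybridization concrete, the sketch below combines steps (1)-(9) with the chaotic mutation of equation (4) and a final local polish of the best individual. It is only an illustration under stated assumptions: all names are invented here, σ is kept fixed (the self-adaptation of equation (2) is omitted for brevity), and the BFGS/fminunc local search of the paper is replaced by a plain finite-difference gradient descent so that the example stays self-contained:

```python
import random

def lozi_stream(p=1.8, q=0.4, y0=0.1, y1=0.2):
    """Yield successive Lozi-map values, equation (3)."""
    a, b = y0, y1                  # y(t-2), y(t-1)
    while True:
        a, b = b, -p * abs(b) + q * a + 1.0
        if abs(b) > 10.0:          # numerical safeguard: re-seed if the
            a, b = y0, y1          # orbit ever leaves the attractor
        yield b

def local_polish(f, x, steps=20, lr=0.05, h=1e-6):
    """Stand-in for the paper's BFGS (fminunc) step: simple
    finite-difference gradient descent, deliberately minimal."""
    for _ in range(steps):
        g = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2.0 * h))
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def les_qn(f, dim, mu=3, lam=15, gens=50, sigma=0.5):
    """Chaotic (mu+lam)-ES followed by a local polish of the best point."""
    chaos = lozi_stream()
    pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            parent = random.choice(pop)                  # step (4): pick a parent
            # chaotic mutation, equation (4): x'(j) = x(j) + sigma * y_j(t)
            offspring.append([xj + sigma * next(chaos) for xj in parent])
        pop = sorted(pop + offspring, key=f)[:mu]        # (mu+lam) selection
    return local_polish(f, pop[0])                       # QN-style hand-off

random.seed(7)
sphere = lambda v: sum(t * t for t in v)
sol = les_qn(sphere, dim=2)
```

On a real device problem, f would wrap the field simulation and local_polish would be replaced by a true quasi-Newton routine; the hand-off of the best ES individual to the local optimizer is the part of the scheme the paper's results hinge on.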
However, such arrangements usually suffer from a remarkable stray field. A reduction of the stray field can be achieved if a second solenoid is placed outside the inner one, with current densities J1 and J2 flowing in opposite directions, as shown in Figure 1. A correct design of the system should then couple the required value of stored energy with a minimal stray field. These two conflicting goals indeed give rise to a multiobjective problem, which is reduced to a single-objective one in the benchmark definition. There are eight design variables, all related to the two coils: the radii of coils 1 and 2, R1 and R2 (i.e. the distances of the coils from the axis of rotation); their half-heights h1/2 and h2/2; their thicknesses d1 and d2; and the values of the current densities J1 and J2. Finding the optimal design is not an easy task because, besides the usual geometrical constraints, there is a material-related constraint: the given current density and the maximum magnetic flux density on the coil must not violate the superconducting quench condition, which is well represented by the linear relationship shown in Figure 2.

[Figure 1. Setup of the SMES device (TEAM workshop problem 22). The axisymmetric arrangement shows the two coils (R1, R2, d1, d2, h1, h2, J1, J2) and the stray-field measurement lines a and b (11 points each).]

Mathematically, this optimization problem has an objective function taking both the energy and the stray field requirements into account. The optimization problem to be solved is the following:

min OF = B²_stray / B²_normal + |Energy − E_ref| / E_ref   (5)

[Figure 2. Critical curve of the NbTi superconductor: current density J [A/mm²] versus magnetic flux density B [T].]

where the reference stored energy and stray field are E_ref = 180 MJ and B_normal = 200 mT. This form of the single objective function and the reference values for E_ref and B_normal are given in the benchmark definition. B²_stray is defined as:

B²_stray = ( Σ_{i=1}^{22} |B_stray,i|² ) / 22   (6)

where B_stray,i is evaluated at 22 equidistant points along line a and line b in Figure 1. Both the energy and the stray field are calculated using an integral formulation for the solution of the forward problem (Biot-Savart law) (Alotto et al., 1998). The bounds of the optimization parameters are shown in Table I. Further details and results regarding TEAM workshop problem 22 are discussed in Alotto et al. (1996a, b, 1998), Rashid et al. (2001), Baumgartner et al. (2004), Ebner et al. (1998), Saldanha et al. (1999), Magele et al. (1993) and Vasconcelos et al. (2001).
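Equations (5) and (6) are straightforward to evaluate once the stored energy and the 22 stray-field samples are known. The sketch below (function name and sample values are illustrative choices, not from the paper) shows the computation in SI units:

```python
def team22_objective(energy, bstray_samples, e_ref=180e6, b_normal=200e-3):
    """Objective of equations (5)-(6): normalized mean-square stray field
    plus the relative deviation of the stored energy from 180 MJ.
    energy in joules, bstray_samples = |B_stray| at the 22 points, in tesla."""
    b2_stray = sum(b * b for b in bstray_samples) / len(bstray_samples)
    return b2_stray / b_normal ** 2 + abs(energy - e_ref) / e_ref

# Example: exactly 180 MJ stored and a uniform 4 mT stray field at all
# 22 points gives OF = (4e-3 / 200e-3)**2 = 4e-4, the order of magnitude
# of the best hybrid results reported in Table II.
of = team22_objective(180e6, [4e-3] * 22)
```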

Table I. Limits of the optimization parameters for the SMES device (TEAM workshop problem 22)

Variable   R1 (m)  R2 (m)  h1/2 (m)  h2/2 (m)  d1 (m)  d2 (m)  J1 (A/mm²)  J2 (A/mm²)
Minimum    1.00    1.80    0.10      0.10      0.10    0.10    10.0        −30.0
Maximum    4.00    5.00    1.80      1.80      0.80    0.80    30.0        10.0

3.2 Optimization results

The ES approaches were implemented in MathWorks' Matlab® software. For each of the above-described ES approaches, a total of 30 independent runs were made, using the aforementioned parameters and a different random seed in each run. All runs were terminated after Gmax = 200 generations, for a total of 3,000 cost-function evaluations by each of the CES, LES, CES-QN, and LES-QN approaches in each run. (μ+λ)-ES approaches were selected, with μ = 3 and λ = 15. For the CES-QN and LES-QN approaches, the specific parameters, which were set empirically, were: evaluation of the best individual by the QN method (with five evaluations of the objective function in each call) after every five generations of the evolution strategy, and μ = 3 and λ = 14 (total: 3,000 cost-function evaluations); in this case, the best individual was replaced by a randomly chosen, QN-optimized individual if the best fitness of the population had not improved in the last ten generations.

Table II summarizes the experimental results obtained by applying the CES, LES, CES-QN, and LES-QN approaches. The table shows the statistics for the 30 independent runs, including the best, mean, and worst optimum found; the standard deviation is also reported. An analysis of Table II reveals that both CES and LES presented difficulties and did not provide a good solution in terms of mean objective function (over 30 runs), with best OF values of approximately 0.95; the best OF values for these methods are local minima. CES-QN and LES-QN found good results in terms of the best, mean, and worst optimum. On the other hand, the computational cost of the CES-QN and LES-QN approaches is considerably higher due to the cost of the local QN search. It should be noted that this drawback disappears for problems in which the computational time is dominated by the evaluation of the objective function, e.g. when the objective function value is determined from a FEM simulation. The best, best mean, and best worst optimal results were all obtained by LES-QN. Table III shows the best results of each tested approach (reported with statistical details in Table II).

4. Conclusions

A significant and widely adopted optimization benchmark problem in electromagnetics is TEAM workshop problem 22. In this work, TEAM workshop problem 22 was optimized using different ES approaches. The proposed novel algorithm using Lozi maps emerged as the best among those tested for this benchmark problem. The results of these simulations using local search based on the QN method are very encouraging and may represent an important contribution to improving the performance of ES optimizers for other electromagnetic problems as well. Therefore, in future work, more detailed studies and experiments using other ES methodologies will be carried out in order to solve multiobjective electromagnetic design problems and to draw more general conclusions on this class of algorithms.

Table II. Results (30 runs) for TEAM workshop problem 22 using ES approaches

Tested approach   Mean time (s)ᵃ   Minimum         Mean            Maximum   Standard deviation
CES               57.3             0.9679          1.0285          1.1770    5.4413 × 10⁻²
LES               60.6             0.9487          1.1733          1.9079    2.6870 × 10⁻¹
CES-QN            541.4            4.6998 × 10⁻⁴   0.1122          0.6168    2.3308 × 10⁻¹
LES-QN            483.7            4.4541 × 10⁻⁴   3.0551 × 10⁻³   0.0150    4.3489 × 10⁻³

Note: ᵃ mean time of each run on a PC with a Pentium 3.2 GHz processor and 2 GB RAM using Matlab 6.5.

Table III. Best results (30 runs) for TEAM workshop problem 22 using ES approaches

Variable      CES        LES        CES-QN          LES-QN
R1 (m)        1.0000     1.0000     1.0010          1.0000
R2 (m)        1.8000     1.8000     1.8000          1.8000
h1/2 (m)      0.1000     0.1000     0.4503          0.3959
h2/2 (m)      1.8000     1.8000     1.3398          1.2941
d1 (m)        0.8000     0.8000     0.6520          0.7953
d2 (m)        0.1000     0.1000     0.2344          0.1487
J1 (A/mm²)    28.6995    28.5996    29.4005         27.8757
J2 (A/mm²)    −4.0108    −4.0292    −8.7898         −14.8222
Energy (MJ)   181.6604   180.3336   180.0000        180.0000
Bstray (mT)   4.5917     3.8669     4.1687          4.0702
OF            0.9679     0.9487     4.6998 × 10⁻⁴   4.4541 × 10⁻⁴

References

Alligood, K.T., Sauer, T.D. and Yorke, J.A. (1996), Chaos: An Introduction to Dynamical Systems, Springer, London.

Alotto, P.G., Caiti, A., Molinari, G. and Repetto, M. (1996a), "A multiquadrics-based algorithm for the acceleration of simulated annealing optimization procedures", IEEE Transactions on Magnetics, Vol. 32 No. 3, pp. 1198-201.

Alotto, P.G., Kuntsevitch, A.V., Magele, Ch., Molinari, G., Paul, C., Preis, K., Repetto, M. and Richter, K.R. (1996b), "Multiobjective optimization in magnetostatics: a proposal for benchmark problems", IEEE Transactions on Magnetics, Vol. 32 No. 3, pp. 1238-41.

Alotto, P.G., Eranda, C., Brandstätter, B., Fürntratt, G., Magele, C., Molinari, G., Nervi, M., Repetto, M. and Richter, K.R. (1998), "Stochastic algorithms in electromagnetic optimization", IEEE Transactions on Magnetics, Vol. 34 No. 5, pp. 3674-84.

Bäck, T. (1996), Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, NY.

Bäck, T. and Schwefel, H-P. (1993), "An overview of evolutionary algorithms for parameter optimization", Evolutionary Computation, Vol. 1 No. 1, pp. 1-23.

Bäck, T., Fogel, D.B. and Michalewicz, Z. (Eds) (1997), Handbook of Evolutionary Computation, Oxford University Press, New York, NY.

Baumgartner, U., Magele, Ch. and Renhart, W. (2004), "Pareto optimality and particle swarm optimization", IEEE Transactions on Magnetics, Vol. 40 No. 2, pp. 1172-5.

Bazaraa, M.S., Sherali, H.D. and Shetty, C.M. (1979), Nonlinear Programming: Theory and Algorithms, 2nd ed., Wiley, New York, NY.

Beyer, H-G. (1996), "Toward a theory of evolution strategies: self-adaptation", Evolutionary Computation, Vol. 3 No. 3, pp. 311-47.

Beyer, H-G. and Schwefel, H-P. (2002), "Evolution strategies", Natural Computing, Vol. 1, pp. 3-52.

Chen, G., Chen, Y. and Ogmen, H. (1997), "Identifying chaotic systems via a Wiener-type cascade model", IEEE Control Systems, Vol. 17 No. 5, pp. 29-36.

Coelho, L.S. and Mariani, V.C. (2006), "Combining of chaotic differential evolution and quadratic programming for economic dispatch optimization with valve-point effect", IEEE Transactions on Power Systems, Vol. 21 No. 2.

Davidor, Y. and Schwefel, H-P. (1992), "An introduction to adaptive optimization algorithms based on principles of natural evolution", in Soucek, B. and The IRIS Group (Eds), Dynamic, Genetic, and Chaotic Programming: The 6th Generation, Wiley, New York, NY, pp. 183-202.

Ebner, T., Magele, C., Brandstätter, B.R. and Richter, K.R. (1998), "Utilizing feed forward neural networks for acceleration of global optimization procedures", IEEE Transactions on Magnetics, Vol. 34 No. 5, pp. 2928-31.

Farina, M. and Sykulski, J.K. (2001), "Comparative study of evolution strategies combined with approximation techniques for practical electromagnetic optimization problems", IEEE Transactions on Magnetics, Vol. 37 No. 5, pp. 3216-20.

Fletcher, R. (1987), Practical Methods of Optimization, 2nd ed., Wiley, New York, NY.

Fujita, T., Watanabe, T., Yasuda, K. and Yokoyama, R. (1997), "Global optimization method using intermittency chaos", Proceedings of the 36th Conference on Decision & Control, San Diego, CA, USA, pp. 1508-9.

Kantz, H. and Schreiber, T. (1997), Nonlinear Time Series Analysis, Cambridge University Press, Cambridge.


Li, B. and Jiang, W. (1998), "Optimizing complex functions by chaos search", Cybernetics and Systems, Vol. 29 No. 4, pp. 409-19.

Magele, C.A., Preis, K., Renhart, W., Dyczij-Edlinger, R. and Richter, K.R. (1993), "Higher order evolution strategies for the global optimization of electromagnetic devices", IEEE Transactions on Magnetics, Vol. 29 No. 2, pp. 1775-8.

Magele, Ch. (1996), TEAM Benchmark Problem 22, available at: www-igte.tu-graz.ac.at/team

Mohammed, O.A. and Üler, G.F. (1997), "A hybrid technique for the optimal design of electromagnetic devices using direct search and genetic algorithms", IEEE Transactions on Magnetics, Vol. 33 No. 2, pp. 1931-4.

Nelder, J.A. and Mead, R. (1965), "A simplex method for function minimisation", Computer Journal, Vol. 7, pp. 308-13.

Nocedal, J. (1991), "Theory of algorithms for unconstrained optimization", Acta Numerica, Vol. 1, pp. 199-242.

Oduguwa, V., Tiwari, A. and Roy, R. (2005), "Evolutionary computing in manufacturing industry: an overview of recent applications", Applied Soft Computing, Vol. 5, pp. 281-99.

Ostermeier, A., Gawelczyk, A. and Hansen, N. (1995), "A derandomized approach to self-adaptation of evolution strategies", Evolutionary Computation, Vol. 2 No. 4, pp. 369-80.

Pahner, U. and Hameyer, K. (2000), "Adaptive coupling of differential evolution and multiquadrics approximation for the tuning of the optimization process", IEEE Transactions on Magnetics, Vol. 36 No. 4, pp. 1047-51.

Peitgen, H-O., Jürgens, H. and Saupe, D. (2004), Chaos and Fractals: New Frontiers of Science, 2nd ed., Springer, New York, NY.

Preux, Ph. and Talbi, E-G. (1999), "Towards hybrid evolutionary algorithms", International Transactions in Operational Research, Vol. 6, pp. 557-70.

Rashid, K., Ramirez, J.A. and Freeman, E.M. (2001), "Optimization of electromagnetic devices using sensitivity information from clustered neuro-fuzzy models", IEEE Transactions on Magnetics, Vol. 37 No. 5, pp. 3575-8.

Rechenberg, I. (1965), "Cybernetic solution path of an experimental problem", Royal Aircraft Establishment, Farnborough, Library Translation No. 1122.

Rechenberg, I. (1973), "Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution", Dr.-Ing. thesis, Department of Process Engineering, Technical University of Berlin, Berlin.

Saldanha, R.R., Takahashi, R.H.C., Vasconcelos, J.A. and Ramirez, J.A. (1999), "Adaptive deep-cut method in ellipsoidal optimization for electromagnetic design", IEEE Transactions on Magnetics, Vol. 35 No. 3, pp. 1746-9.

Sareni, B., Krähenbühl, L. and Nicolas, A. (2000), "Efficient genetic algorithms for solving hard constrained optimization problems", IEEE Transactions on Magnetics, Vol. 36 No. 4, pp. 1027-30.

Schwefel, H-P. (1965), "Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik", master's thesis, Technical University of Berlin, Berlin.

Schwefel, H-P. (1995), Evolution and Optimum Seeking, Wiley, New York, NY.

Shengsong, L., Min, W. and Zhijian, H. (2003), "Hybrid algorithm of chaos optimisation and SLP for optimal power flow problems with multimodal characteristic", IEE Proceedings – Generation, Transmission and Distribution, Vol. 150 No. 5, pp. 543-7.

Strogatz, S.H. (2000), Nonlinear Dynamics and Chaos, Perseus Publishing, Cambridge, MA.

Vasconcelos, J.A., Ramirez, J.A., Takahashi, R.H.C. and Saldanha, R.R. (2001), "Improvements in genetic algorithms", IEEE Transactions on Magnetics, Vol. 37 No. 5, pp. 3414-7.

Vasconcelos, J.A., Saldanha, R.R., Krähenbühl, L. and Nicolas, A. (1997), "Genetic algorithm coupled with a deterministic method for optimization in electromagnetics", IEEE Transactions on Magnetics, Vol. 33 No. 2, pp. 1860-3.

Wang, G. and He, S. (2003), "A quantitative study on detection and estimation of weak signals by using chaotic Duffing oscillators", IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 50 No. 7, pp. 945-53.

Zou, X., Wang, M., Zhou, A. and Mckay, B. (2004), "Evolutionary computation based on chaotic sequence in dynamic environments", Proceedings of the IEEE International Conference on Networking, Sensing & Control, Taipei, Taiwan, pp. 1364-9.

About the authors

Leandro dos Santos Coelho was born in Santa Maria, Brazil, in 1968. He received a BS in Computer Science and a BS in Electrical Engineering from the Federal University of Santa Maria (UFSM), Brazil, in 1994 and 2000, respectively. He earned his MS and doctoral degrees in Computer Science and Electrical Engineering from the Federal University of Santa Catarina (UFSC), Brazil, in 1997 and 2000, respectively. He is currently an Associate Professor in the Department of Automation and Systems, Pontifical Catholic University of Paraná, Brazil. He has published several papers at the international level on computational intelligence approaches applied to engineering, and is co-author, with M. Jamshidi, R.A. Krohling and P. Fleming, of the book Robust Control Systems with Genetic Algorithms, CRC Press, Boca Raton, USA, 2002. He is interested in power systems, computational intelligence, nonlinear identification, electromagnetic problems, optimization methods, and advanced control systems.
Piergiorgio Alotto was born in Genova, Italy, in 1968, and graduated with honours in Electrical Engineering in 1992 from the University of Genova. From 1992 to 1994, he worked at Vector Fields Ltd, Oxford, UK. In 1997, he received the PhD in Electrical Engineering from the University of Genova, where he was an Assistant Professor in the Department of Electrical Engineering from December 1996 to September 2005. Since October 2005, he has been an Associate Professor in the Department of Electrical Engineering at the University of Padova. He is the author of over 70 papers at the international level on various topics concerning the numerical solution of electromagnetic problems. Piergiorgio Alotto is the corresponding author and can be contacted at: [email protected]
