SELECTED TOPICS in MATHEMATICAL METHODS and COMPUTATIONAL TECHNIQUES in ELECTRICAL ENGINEERING

Heuristic and Metaheuristic Optimization Techniques with Application to Power Systems

MIHAI GAVRILAS
Power System Department
"Gheorghe Asachi" Technical University of Iasi
21-23 D. Mangeron Blvd., 700050, Iasi
ROMANIA
[email protected]  http://www.ee.tuiasi.ro/~mgavril

Abstract: - The development of modern wide-area power systems, as well as recent trends towards the creation of sustainable energy systems, have given rise to complex studies addressing technical, but also economic and environmental, aspects of single- or multi-objective optimization problems. Recently, heuristic and metaheuristic approaches that apply combinations of different heuristics, with or without traditional search and optimization techniques, have been proposed to solve such problems. This paper provides basic knowledge about the most widely used (meta)heuristic optimization techniques and their application to optimization problems in power systems.

Key-Words: - Heuristic optimization, Metaheuristic optimization, Power systems, Efficiency.

1 Introduction
Optimization is a branch of mathematics and computational science that studies methods and techniques specially designed for finding the "best" solution of a given optimization problem. Such problems aim to minimize or maximize one or more objective functions that depend on one or more variables, which can take integer or real values. Optimization is widely applied in fields such as engineering, commerce, transportation, finance and medicine, and in decision-making processes in general.
Numerous conventional optimization techniques have been designed to solve a wide range of optimization problems, such as Linear Programming, Nonlinear Programming, Dynamic Programming or Combinatorial Optimization [1]. However, many conventional optimization techniques applied to real-world problems suffer from a marked sensitivity to issues such as: difficulty in escaping local optima; risk of divergence; difficulty in handling constraints; or numerical difficulties related to computing first- or second-order derivatives [1]. To overcome these problems, heuristic and metaheuristic techniques were proposed in the early 1970s [3]. Unlike exact methods, (meta)heuristic methods have a simple and compact theoretical support, often based on criteria of an empirical nature. For this reason there is no guarantee that they will identify the optimal solution.
Many optimization problems encountered in the field of Power Systems may be approached using heuristic and metaheuristic techniques. Moreover, due to the large diversity and complexity of the problems faced by power engineers, researchers in the field of Power Systems were pioneers in using heuristic and metaheuristic techniques for solving optimization problems [4].
This paper presents an overview of the most popular (meta)heuristic techniques used for solving typical optimization problems in the field of Power Systems. Sections 2 and 3 introduce generic heuristic methods and metaheuristics; Section 4 describes the most widely used metaheuristic methods; and Section 5 considers typical optimization problems in power engineering, with a brief description of each. The author undertook to present the recent advances and applications of (meta)heuristic-based optimization in power systems, acknowledging that the most numerous and valuable contributions belong to the worldwide scientific community.
Results of research undertaken by the author, together with colleagues, are added with modesty to the numerous other theoretical and practical results presented in this paper.

ISSN: 1792-5967

2 Heuristic optimization
Difficulties faced by exact, conventional optimization methods often prevent them from determining a solution to an optimization problem within a reasonable amount of time. To avoid such cases, alternative methods have been proposed that determine not perfectly accurate, but good-quality approximations of the exact solution. These methods, called heuristics, were initially based on experts' knowledge and experience, and aimed to explore the search space in a particularly convenient way.


ISBN: 978-960-474-238-7


Heuristics were first introduced by G. Polya in 1945 [2] and were developed later in the 1970s, when various heuristics were also introduced for special-purpose problems in different fields of science and technology, including electrical engineering [3]. Basically, a heuristic is designed to provide better computational performance than conventional optimization techniques, at the expense of lower accuracy. However, the "rules of thumb" underlying a heuristic are often very specific to the problem under consideration. Moreover, since heuristics are problem-solving techniques based on solvers' expertise, they use domain-specific representations, and hence general heuristics can be successfully defined only for fundamental problem-solving methods, such as search methods [6]. A wide range of heuristic search strategies can be cited from the literature, from uninformed approaches (basically non-heuristic) like Depth First Search (DeFS), Breadth First Search (BrFS) or Uniform Cost Search (UnCS), to informed approaches (mainly heuristic) like Best First Search (BeFS), Beam Search (BeS) or the A* Search (A*S) [5,6,7,8,10]. Uninformed or blind search strategies are applied with no information about the search space other than the ability to distinguish between an intermediate state and a goal-state. Informed search strategies use problem-specific knowledge. Usually, this knowledge is represented by an evaluation function that assesses either the quality of each state in the search space, or the cost of moving from the current state to a goal-state along the various possible paths. In the case of BeFS, among all possible states at one level, the algorithm chooses to expand the most "promising" one in terms of a specified rule [9]. BeS is an enhanced version of the BeFS algorithm whose main improvement is a reduction in memory requirements [11]. With this aim in view, BeS is defined based on BrFS, which is used to build the search tree.
At each level, all new states are generated and the heuristic function is computed for each state, which is then inserted into a list ordered by heuristic-function values. The list has a limited length, equal to the so-called "beam width". This limits the memory requirements, but at the risk of pruning away the path to the goal-state. The A* search algorithm uses a BeFS strategy and a heuristic function that combines two metrics: the cost from the origin to the current state (the cost-so-far) and an estimate of the cost from the current state to a goal-state (the cost-to-goal). The A* algorithm is considered very efficient [7]. One shortcoming of the search strategies presented above is the numerical inefficiency of the search process, especially for high-dimensional problems. Thus, significant efforts have been devoted to identifying new heuristics able to cope with these problems.
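To make the beam-search mechanics above concrete, here is a small self-contained Python sketch (not from the cited literature): it keeps only the `beam_width` best states per level while searching a toy puzzle in which the number 13 must be reached from 1 using +1 and ×2 moves. The puzzle, the heuristic and all parameter values are illustrative assumptions.

```python
import heapq

def beam_search(start, goal, successors, heuristic, beam_width=3, max_depth=20):
    """Generic beam search: BrFS-style level expansion, but only the
    beam_width states with the best heuristic values survive each level."""
    beam = [(heuristic(start), start, [start])]
    for _ in range(max_depth):
        candidates = []
        for _, state, path in beam:
            if state == goal:
                return path
            for nxt in successors(state):
                candidates.append((heuristic(nxt), nxt, path + [nxt]))
        if not candidates:
            return None
        # pruning step: the length-limited list is the "beam width" of the text
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[0])
    return None

# toy problem: reach 13 from 1 using the moves n+1 and n*2
path = beam_search(start=1, goal=13,
                   successors=lambda n: [n + 1, n * 2],
                   heuristic=lambda n: abs(13 - n))
```

With a very small beam width the path to the goal can be pruned away entirely, which is exactly the memory/completeness trade-off mentioned above.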


3 Metaheuristics
The new paradigms were called metaheuristics and were first introduced in the mid-1980s as a family of search algorithms able to approach and solve complex optimization problems using a set of several general heuristics. The term metaheuristic was proposed in [18] to denote a high-level heuristic used to guide other heuristics towards a better evolution in the search space. Although traditional stochastic search methods are mainly guided by chance (solutions change randomly from one step to another), they can be used in combination with metaheuristic algorithms to guide the search process and to accelerate convergence. Most metaheuristic algorithms are only approximation algorithms, because they cannot always find the global optimal solution. But the most attractive feature of a metaheuristic is that its application requires no special knowledge about the optimization problem to be solved; hence it can be used as a general problem-solving model for optimization problems or other related problems [4, 12]. Since their introduction in the mid-1980s, metaheuristic methods for solving optimization problems have been continuously developed, allowing a growing number of problems previously considered difficult or even impossible to be addressed and solved. These methods include simulated annealing, tabu search, evolutionary computation techniques, artificial immune systems, memetic algorithms, particle swarm optimization, ant colony optimization, differential evolution, harmony search, honey-bee colony optimization, etc.
The next section presents a brief review of the basic issues for the most commonly used metaheuristics cited above. Applications of these methods in the field of power systems are presented in Section 5.

4 Metaheuristic methods
4.1 Simulated Annealing
Studies on Simulated Annealing (SA) were developed in the 1980s based on the Metropolis algorithm [17], which was inspired by statistical thermodynamics, where the relationship between the probabilities of two states A and B, with energies EA and EB, at a common temperature T, shows that states with higher energies are less probable. Thus, if the system is in state A, with energy EA, a transition to a state B of lower energy (EB < EA) is always accepted. Conversely, a transition to a state B of higher energy (EB > EA) is not excluded, but is accepted only with probability exp(-(EB - EA)/T). The application of the Metropolis algorithm to search problems is known as SA and is based on the possibility of moving through the search space towards states with poorer


Table 1 – Pseudocode for Simulated Annealing
Data: initial approximation X0, initial temperature T, number of iterations for a given temperature nT.
Initialization: optimal solution Xbest ← X0; i = 0
WHILE {stopping criterion not met}
  n = 0
  WHILE (n < nT) DO
    choose a new approximation Y
    accept or reject the new approximation based on the Metropolis rule: Xi+1 = G(Xi, Y, T)
    update optimal solution: if Xi+1 is better than Xbest, then Xbest ← Xi+1
    next n-iteration (n ← n+1)
  update temperature T
  next T-iteration (i ← i+1)

Table 2 – Pseudocode for Tabu Search
Data: length of the Tabu list LT, number of intermediate solutions N.
Initialization: approximation X; Tabu list TABU = {X}; optimal solution Xbest ← X
WHILE {stopping criterion not met}
  prepare the Tabu list: if Length(TABU) = LT, then delete the oldest item from the list
  generate N new approximations in the neighborhood of X and select the best candidate-solution Y which is not in the Tabu list
  update the current approximation X ← Y and add it to the Tabu list: Add(TABU, X)
  update optimal solution: if X is better than Xbest, then Xbest ← X

values of the fitness function. Starting from a temperature T and an initial approximation Xi, with fitness Fitness(Xi), a perturbation is applied to Xi to generate a new approximation Xi+1, with fitness Fitness(Xi+1). If Xi+1 is a better solution than Xi, i.e. Fitness(Xi+1) > Fitness(Xi), the new approximation replaces the old one. Otherwise, when Fitness(Xi+1) < Fitness(Xi), the new approximation is accepted only with probability pi = exp(-[Fitness(Xi) - Fitness(Xi+1)] / T). These steps are repeated a given number of times at a constant temperature; the temperature is then decreased, and the iterative process continues until a stopping criterion is met. The pseudocode for SA is shown in Table 1.
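As a sketch only, the scheme of Table 1 with the Metropolis rule described above can be written in Python as follows; the concave test fitness, the geometric cooling schedule and all parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(fitness, x0, t0=1.0, cooling=0.95, n_per_temp=50,
                        t_min=1e-4, step=0.5, rng=random.Random(0)):
    """SA as in Table 1: Metropolis acceptance for a maximized fitness."""
    x, t, best = x0, t0, x0
    while t > t_min:                      # stopping criterion
        for _ in range(n_per_temp):       # nT iterations at constant T
            y = x + rng.gauss(0.0, step)  # new approximation Y
            delta = fitness(x) - fitness(y)
            # Metropolis rule G(X, Y, T): improvements are always accepted,
            # deteriorations only with probability exp(-delta / T)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x = y
            if fitness(x) > fitness(best):  # update optimal solution
                best = x
        t *= cooling                      # decrease the temperature
    return best

# illustrative fitness with its maximum at x = 2
best = simulated_annealing(lambda x: -(x - 2.0) ** 2, x0=10.0)
```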

Table 3 – Pseudocode for Evolution Strategy
Data: number of parents μ and offspring λ (λ = k·μ).
Initialization: create initial population P = {Pi}, i = 1…λ, and initialize the best solution Best ← void
WHILE {stopping criterion not met}
  evaluate P and update the best solution Best
  reproduction stage: select the μ fittest individuals from P and create the parent population R = {Rj}, j = 1…μ
  mutation stage: apply stochastic changes to the parents and create k = λ/μ offspring for each parent: [R = {Rj}, j = 1…μ] → [Q = {Qi}, i = 1…λ]
  replacement stage: replace the current population with the mutated one: [Pi = Qi, i = 1…λ]

common notation is ES(μ, λ). The main genetic operator controlling the evolution from one generation to another is mutation. In a general ES(μ, λ) model (where λ = k·μ), each generation starts with a population of λ individuals. The fitness of each individual is computed and the individuals are ranked in descending order of fitness. Among the current population, only the first μ fittest individuals are selected to create the parent population (this selection phase is sometimes called truncation). Next, each of the μ parents creates k = λ/μ offspring by repeated mutation. Eventually, the new, mutated population replaces the old one and the algorithm reiterates. The pseudocode for the ES(μ, λ) model is shown in Table 3.
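A minimal Python sketch of the ES(μ, λ) model just described, with truncation selection and Gaussian mutation; the negated-sphere test fitness, the fixed mutation step and the parameter values are illustrative assumptions.

```python
import random

def evolution_strategy(fitness, dim, mu=5, lam=20, sigma=0.3,
                       generations=200, rng=random.Random(1)):
    """(mu, lambda)-ES: truncation selection plus Gaussian mutation."""
    k = lam // mu                        # offspring created by each parent
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(lam)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # truncation: only the mu fittest individuals become parents
        parents = sorted(pop, key=fitness, reverse=True)[:mu]
        # mutation: each parent creates k offspring; they replace the population
        pop = [[x + rng.gauss(0.0, sigma) for x in p]
               for p in parents for _ in range(k)]
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best

# illustrative test: maximize the negated sphere function (optimum at 0)
best = evolution_strategy(lambda x: -sum(v * v for v in x), dim=3)
```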

4.2 Tabu Search
Tabu Search (TS) was introduced in [18] as a search strategy that avoids returning to solutions already visited by maintaining a Tabu list, which stores successive approximations. Since the Tabu list is finite in length, some solutions can be revisited after a number of steps. Adding a new solution to a full Tabu list is done by removing the oldest one from the list, based on a FIFO (First In - First Out) principle. New approximations can be generated in different ways. The pseudocode presented in Table 2 uses the following procedure: at each step a given number of new approximations are generated in the neighborhood of the current solution X, but only those not in the Tabu list are considered feasible. Among the new approximations, the best one replaces the current solution and is also introduced into the Tabu list.
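The Table 2 procedure can be sketched in Python as follows; the bit-string toy problem (maximize the number of 1-bits), the single-bit-flip neighborhood and the parameter values are illustrative assumptions.

```python
import random
from collections import deque

def tabu_search(fitness, x0, neighbor, tabu_len=10, n_candidates=8,
                iters=300, rng=random.Random(2)):
    """Tabu Search as in Table 2, with a FIFO tabu list of visited solutions."""
    x, best = x0, x0
    tabu = deque([x0], maxlen=tabu_len)  # a full deque drops its oldest item
    for _ in range(iters):
        # generate N candidates around x; only non-tabu ones are feasible
        cands = [neighbor(x, rng) for _ in range(n_candidates)]
        cands = [c for c in cands if c not in tabu]
        if not cands:
            continue
        x = max(cands, key=fitness)      # best feasible candidate
        tabu.append(x)                   # x is tabu for the next tabu_len moves
        if fitness(x) > fitness(best):
            best = x
    return best

def flip_one_bit(bits, rng):
    """Neighborhood move: flip one randomly chosen bit."""
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

# illustrative problem: maximize the number of 1-bits in a 16-bit string
best = tabu_search(lambda b: sum(b), (0,) * 16, flip_one_bit)
```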

4.4 Genetic Algorithms
The Genetic Algorithm (GA) is a step forward from ES in the general framework of Evolutionary Computation. GAs were designed and developed by Holland [15] and later by Goldberg [26] and De Jong [25]. GAs are search strategies based on specific mechanisms of genetics and natural selection, using three basic operators: selection, crossover and mutation. In each generation, selection is used to choose parent individuals based on their fitness. After

4.3 Evolution Strategy
Evolution Strategy (ES) was first proposed in [13] as a branch of evolutionary computation and was further developed after the 1970s. An ES may be described by two main parameters: the number of parents in a generation, μ, and the number of offspring created in a generation, λ. A


Table 4 – Pseudocode for Genetic Algorithm
Data: population size N, crossover rate ηc and mutation rate ηm.
Initialization: create initial population P = {Pi}, i = 1…N, and initialize the best solution Best ← void
WHILE {stopping criterion not met}
  evaluate P and update the best solution Best
  initialize offspring population: R ← void
  create offspring: FOR k = 1 TO N/2 DO
    selection stage: select parents Q1 and Q2 from P, based on fitness values
    crossover stage: use crossover rate ηc and parents (Q1, Q2) to create offspring (S1, S2)
    mutation stage: use mutation rate ηm to apply stochastic changes to S1 and S2 and create mutated offspring T1 and T2
    add T1 and T2 to the offspring population: R ← R ∪ {T1, T2}
  replace current population P with offspring population R: P ← R
  elitism: replace the poorest solution in P with the best solution stored in Best

Table 5 – Pseudocode for Differential Evolution
Data: population size N, weighting factors α, β.
Initialization: create initial population P = {Pi}, i = 1…N; evaluate P and store the fittest individual as Best
WHILE {stopping criterion not met}
  FOR i = 1 TO N DO
    select two different individuals Xr1 and Xr2, other than Xi
    apply mutation: Xi' = Xi + α·(Xr1 - Xr2)
    apply crossover: Xi'' = Xi + β·(Xi - Xi')
    evaluate Xi'' and replace Xi with Xi'' whenever Fitness(Xi'') > Fitness(Xi)
    update the best solution Best

solutions to which the mechanisms of DE are applied. Thus, for a reference individual Xi, two different individuals Xr1 and Xr2, other than Xi, are randomly selected and an arithmetic mutation is applied to Xi based on the difference between Xr1 and Xr2, producing a mutant Xi'. Then an arithmetic crossover, based on the difference between the current and mutated solutions, is applied to generate the new estimate Xi''. Xi'' replaces the reference solution, and the best solution, whenever its fitness is better. The pseudocode for the DE algorithm is shown in Table 5.
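The update rules of Table 5 (the paper's variant, rather than the classical DE/rand/1/bin scheme) can be sketched in Python as follows; the negated-sphere test fitness and the values of α and β are illustrative assumptions.

```python
import random

def differential_evolution(fitness, dim, pop_size=20, alpha=0.8, beta=0.5,
                           generations=300, rng=random.Random(4)):
    """DE as in Table 5: difference-based mutation, then crossover,
    then greedy replacement of the reference individual."""
    pop = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        for i in range(pop_size):
            # two distinct individuals, both different from the reference Xi
            r1, r2 = rng.sample([j for j in range(pop_size) if j != i], 2)
            xi, xr1, xr2 = pop[i], pop[r1], pop[r2]
            xm = [a + alpha * (b - c) for a, b, c in zip(xi, xr1, xr2)]  # Xi'
            xc = [a + beta * (a - m) for a, m in zip(xi, xm)]            # Xi''
            if fitness(xc) > fitness(xi):   # greedy replacement
                pop[i] = xc
                if fitness(xc) > fitness(best):
                    best = xc
    return best

# illustrative test: maximize the negated sphere function (optimum at 0)
best = differential_evolution(lambda x: -sum(v * v for v in x), dim=2)
```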

selecting a pair of parent chromosomes, they enter the crossover stage to generate two offspring. Crossover creates new individuals, or solutions, that inherit good characteristics from both parents. Newly created individuals are then altered by small-scale changes in their genes, by applying the mutation operator. Mutation ensures the introduction of "novelty" into the genetic material. Once complete, the offspring population replaces the parents of the previous generation, and the selection-crossover-mutation process is resumed for the next generation. To avoid losing the best solution due to the stochastic character of the search procedure described above, [25] proposed a special replacement procedure called "elitism", which copies the best individual of the current population unchanged into the next generation. The pseudocode for the GA is shown in Table 4.
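A compact Python sketch of the selection-crossover-mutation-elitism loop of Table 4; the binary tournament selection, one-point crossover, hidden-bit-string toy problem and all rates are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, pc=0.9, pm=0.02,
                      generations=100, rng=random.Random(3)):
    """GA as in Table 4: selection, 1-point crossover, bit-flip
    mutation and elitism."""
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits))
           for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def select():                        # binary tournament on fitness
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        for _ in range(pop_size // 2):
            p1, p2 = select(), select()
            if rng.random() < pc:        # 1-point crossover with rate pc
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bit-flip mutation with rate pm per gene
            p1 = tuple(b ^ (rng.random() < pm) for b in p1)
            p2 = tuple(b ^ (rng.random() < pm) for b in p2)
            offspring += [p1, p2]
        pop = offspring
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
        # elitism: the poorest offspring is replaced by the best-so-far
        pop[pop.index(min(pop, key=fitness))] = best
    return best

# illustrative problem: recover a hidden 20-bit target string
target = tuple(random.Random(7).randint(0, 1) for _ in range(20))
best = genetic_algorithm(lambda b: sum(x == t for x, t in zip(b, target)),
                         n_bits=20)
```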

4.6 Immune Algorithms
The Immune Algorithm (IA) was first proposed in [19] to simulate the learning and memory abilities of immune systems. The IA is a search strategy based on genetic algorithm principles and inspired by the protection mechanisms of living organisms against bacteria and viruses. Problem coding is similar for GA and IA, except that the chromosomes of a GA are called antibodies in an IA, while the problem formulation, i.e. the objective or fitness function, is coded as an antigen. The basic difference between GA and IA lies in the selection procedure. Instead of fitness functions, the IA computes affinities between antibodies and/or between antibodies and antigens. Based on the affinities between antibodies and antigens, a selection and reproduction pool (the proliferation pool) is created from the antibodies with the greatest affinities. The proliferation pool is created by clonal selection: the first M antibodies, with the highest affinities relative to the antigens, are cloned (i.e. copied unchanged) into the proliferation pool. Mutations are then applied to the clones, with a mutation rate inversely proportional to each antibody's affinity to the antigens. Next, the affinities of the new, mutated clones and the affinities between all clones are computed, and a limited number Nrep of clones (those with the lowest affinities) are replaced by randomly generated antibodies, to introduce diversity. Elitism can be applied to avoid losing the best solutions. The pseudocode of the IA is described in Table 6.
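A clonal-selection sketch in Python along the lines of Table 6; the real-valued antibody encoding, the rank-based mutation rate and all parameter values are illustrative assumptions.

```python
import random

def immune_algorithm(affinity, dim, pool_size=20, n_top=5, n_replace=3,
                     generations=200, rng=random.Random(5)):
    """Clonal selection as in Table 6: clone high-affinity antibodies,
    hypermutate the clones, replace the worst, keep the best (elitism)."""
    def new_antibody():
        return [rng.uniform(-5, 5) for _ in range(dim)]
    pop = [new_antibody() for _ in range(pool_size)]
    best = max(pop, key=affinity)
    for _ in range(generations):
        ranked = sorted(pop, key=affinity, reverse=True)
        clones = []
        for rank, ab in enumerate(ranked[:n_top]):
            sigma = 0.1 * (rank + 1)     # higher affinity -> smaller mutation
            for _ in range(pool_size // n_top):
                clones.append([x + rng.gauss(0.0, sigma) for x in ab])
        # diversity: replace the worst n_replace clones with random antibodies
        clones.sort(key=affinity, reverse=True)
        clones[-n_replace:] = [new_antibody() for _ in range(n_replace)]
        # elitism: never lose the best antibody found so far
        cand = max(clones, key=affinity)
        if affinity(cand) > affinity(best):
            best = cand
        clones[clones.index(min(clones, key=affinity))] = best
        pop = clones
    return best

# illustrative test: maximize the negated sphere function (optimum at 0)
best = immune_algorithm(lambda x: -sum(v * v for v in x), dim=3)
```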

4.5 Differential Evolution
Differential Evolution (DE) was developed mainly in [23] as a new evolutionary algorithm. Unlike other evolutionary algorithms, DE changes successive approximations of solutions, or individuals, based on the differences between randomly selected candidate solutions. This approach indirectly uses information about the topography of the search space in the neighborhood of the current solution. When candidate solutions are spread over a wide area, mutations have large amplitudes. Conversely, when candidate solutions are concentrated in a narrow area, mutations are small. In each generation, all current individuals describing possible solutions are considered as reference


Table 6 – Pseudocode for Immune Algorithm
Data: population size N, clonal size M, replacement size Nrep.
Initialization: create initial population P; evaluate affinities for the antibodies in P
WHILE {stopping criterion not met}
  clonal selection: clone the first M antibodies from P, with the highest affinities; the number of clones for an antibody is proportional to its affinity; the total number of clones in the proliferation pool Q is N
  mutation: apply stochastic changes to the clones from the proliferation pool Q, with a mutation rate inversely proportional to their affinity
  replacement: evaluate affinities for the mutated antibodies and replace the worst Nrep clones in Q with randomly generated antibodies
  elitism: evaluate the newly created antibodies and replace the worst antibody in Q with the best one from P
  next generation: replace the current population with the proliferation pool: P ← Q

Table 7 – Pseudocode for Particle Swarm Optimization
Data: population size N, personal-best weight α, local-best weight β, global-best weight γ, correction factor ε.
Initialization: create initial population P
WHILE {stopping criterion not met}
  select the best solution from the current population P: Best
  select the global-best solution for all particles: BG
  apply swarming: FOR i = 1 TO N DO
    select the personal-best solution for particle Xi: BiP
    select the local-best solution for particle Xi: BiL
    compute the velocity of particle Xi: FOR j = 1 TO DIM DO
      generate correction coefficients: a = α·rand(); b = β·rand(); c = γ·rand()
      update the velocity of particle Xi along dimension j: Vij = Vij + a·(BijP - xij) + b·(BijL - xij) + c·(BjG - xij)
    update the position of particle Xi: Xi = Xi + ε·Vi

4.7 Particle Swarm Optimization
Particle Swarm Optimization (PSO) was developed by Kennedy and Eberhart in the mid-1990s [21]. PSO is a stochastic optimization technique that emulates the "swarming" behavior of animals such as birds or insects. Basically, PSO evolves a population of particles that move in the search space through the cooperation and interaction of individual particles; it is essentially a form of directed mutation. Each particle i is described by two parts: its location Xi and its velocity Vi. At each step, the position of particle i is computed from its prior position Xi plus a correction term proportional to its velocity, ε·Vi. In turn, the velocity assigned to each particle is computed from four components: (i) the previous value of the velocity Vi; (ii) the best personal solution of particle i, XiP; (iii) the best local solution so far among the informants of particle i, XiL; and (iv) the best global solution so far for the entire swarm, XG. These components are weighted by three factors, denoted a, b and c. The pseudocode of PSO is presented in Table 7.
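A Python sketch of the velocity and position updates of Table 7; for brevity the local-best (informant) term is omitted, keeping only the personal-best and global-best attractions, and all parameter values are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=20, w=0.7, c1=1.5, c2=1.5,
        iters=200, rng=random.Random(6)):
    """Global-best PSO: velocity update with personal- and global-best pulls."""
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                # personal-best positions
    G = max(P, key=fitness)[:]           # global-best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                a, b = c1 * rng.random(), c2 * rng.random()
                V[i][j] = (w * V[i][j]
                           + a * (P[i][j] - X[i][j])   # pull to personal best
                           + b * (G[j] - X[i][j]))     # pull to global best
                X[i][j] += V[i][j]
            if fitness(X[i]) > fitness(P[i]):
                P[i] = X[i][:]
                if fitness(P[i]) > fitness(G):
                    G = P[i][:]
    return G

# illustrative test: maximize the negated sphere function (optimum at 0)
best = pso(lambda x: -sum(v * v for v in x), dim=3)
```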

Table 8 – Pseudocode for Ant Colony Optimization
Data: population size N, set of components C = {C1, …, Cn}, evaporation rate evap.
Initialization: amount of pheromone for each component PH = {PH1, …, PHn}; best solution Best ← void
WHILE {stopping criterion not met}
  initialize the current population: P ← void
  create the current population of virtual solutions P: FOR i = 1 TO N DO
    create a feasible solution S
    update the best solution Best if S is better
    add solution S to P: P ← P ∪ {S}
  apply evaporation: FOR j = 1 TO n DO PHj = PHj · (1 - evap)
  update pheromones for each component: FOR i = 1 TO N DO
    FOR j = 1 TO n DO
      if component Cj is part of solution Pi, then update the pheromone for this component: PHj = PHj + Fitness(Pi)

number of ants will follow the same path. A taboo list may be defined for each ant to memorize its path. An ant moves from component Ci to component Cj with probability Pij, defined by the pheromone density between the components, the visibility between them, and two weighting factors. After each ant completes its path, the pheromone density of every component is updated based on the fitness functions. The pseudocode of ACO is presented in Table 8.
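An Ant System-style sketch in Python for a small travelling-salesman instance; the transition probability combines pheromone (tau^alpha) and visibility ((1/d)^beta) as described above, while the circular-city instance and all parameter values are illustrative assumptions.

```python
import math
import random

def ant_colony_tsp(dist, n_ants=20, alpha=1.0, beta=2.0, evap=0.5,
                   iters=100, rng=random.Random(8)):
    """ACO for the TSP: pheromone-and-visibility-guided tour construction,
    evaporation, and quality-proportional pheromone deposit."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone on edges

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    best = None
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour, free = [0], set(range(1, n))     # taboo list = visited cities
            while free:
                i = tour[-1]
                choices = list(free)
                # P(i -> j) proportional to tau^alpha * (1/d)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in choices]
                j = rng.choices(choices, weights=w)[0]
                tour.append(j)
                free.remove(j)
            tours.append(tour)
            if best is None or tour_len(tour) < tour_len(best):
                best = tour
        # evaporation, then deposit proportional to tour quality
        tau = [[t * (1 - evap) for t in row] for row in tau]
        for tour in tours:
            dep = 1.0 / tour_len(tour)
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += dep
                tau[b][a] += dep
    return best, tour_len(best)

# illustrative instance: six cities on a unit circle; the optimal tour
# follows the circle, with total length 6 x 2 sin(pi/6) = 6
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, length = ant_colony_tsp(dist)
```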

4.8 Ant Colony Optimization
The classic Ant Colony Optimization (ACO) algorithm, proposed by Marco Dorigo in 1992 [22], is inspired by the natural behavior of ants, which are able to find their way using pheromone trails. Ants travel between two fixed points A and B on the shortest route, leaving behind a trail of pheromone that marks the chosen path. After the first ant reaches point B, it returns to A following its own pheromone trail, doubling the density of the pheromone layer. Subsequently, more ants will probabilistically prefer paths with higher pheromone density, and gradually an increasing


4.9 Honey Bee Colony Optimization
The Honey Bee Colony Optimization (HBCO) algorithm was first proposed in [24]; it is a search


Table 9 – Pseudocode for Honey Bee Colony Optimization
Data: population sizes: drones (ND), broods (NB) and genetic pool (NP); initial queen's speed Smax; crossover rate ηc and mutation rate ηm.
Initialization: create initial population Drones with ND individuals, and select the best drone as the Queen
WHILE {stopping criterion not met}
  create the genetic pool: from the Drones population select NP individuals using a Simulated Annealing-type acceptance rule, based on the queen's speed S, gradually reducing S
  crossover: apply arithmetic crossover between the Queen and successively selected drones from the genetic pool, until a population of NB broods (offspring) is created
  mutation: apply arithmetic mutation to randomly selected broods (offspring)
  update the Queen: if any brood is better than the Queen, update the Queen
  selection: from the broods, based on a selection criterion, create the new population of Drones

Paper [28] presents a new approach to this problem, based on the joint application of genetic programming (GP) and symbolic regression (SR). GP is a special form of GA that describes possible solutions using chromosomes with a tree-like structure [16]. SR is a numerical technique used to identify previously unknown analytical expressions that best fit relations among observed data. The GP&SR method was applied to a dataset comprising hourly load profiles. The daily peak load was estimated using different combinations of data. The optimal solution found by GP&SR estimates the daily peak load using as input data the peak loads from days d-7, d-6 and d-1, and the number of the reference day in the year.
One of the most widely used techniques to represent consumers is the Load Profiling (LP) method, which associates typical load profiles (TLPs) with simple readings in the network [29, 30, 31]. Paper [32] proposes a new LP technique that uses a fuzzy implementation of Self-Organizing Maps (SOMs) as the basic clustering approach, combined with a weighting procedure that computes distances between metered load profiles and TLPs around peak and valley load hours. The growing-SOM technique is then applied to determine the optimal architecture of the SOM, using a control strategy based on the assessment of affinity measures between metered LPs and TLPs.
Paper [33] presents a new approach to the LP problem based on the HBCO algorithm. The proposed method can easily be implemented as a clustering technique with high robustness, and the application of the HBCO algorithm is an original approach to this problem. A major advantage of the proposed method is its capability to produce qualitative classification results with fewer parameters requiring calibration than alternative clustering approaches.

procedure that mimics the mating process in honey-bee colonies, using selection, crossover and mutation. A honey-bee colony houses a queen-bee, drones and workers. The queen-bee is specialized in egg laying; the drones are the fathers of the colony and mate with the queen-bee. During the mating flight, the queen mates with drones to form a genetic pool. After the genetic pool has been filled with chromosomes, the genetic operators are applied. During the crossover stage, drones are randomly selected from the current population and mate with the queen using a Simulated Annealing-type acceptance rule based on the difference between the fitness functions of the selected drone and the queen. The final stage of the evolutionary process consists in raising the broods generated during the second stage and creating a new generation of NB broods, based on the mutation operator. A new generation of ND drones is then created based on a specific selection criterion. The pseudocode of the HBCO algorithm is presented in Table 9.
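A rough Python sketch of the mating-flight scheme of Table 9; the SA-type drone acceptance, the arithmetic crossover with random weights, the test fitness and all parameter values are illustrative assumptions, not the reference implementation of [24].

```python
import math
import random

def hbco(fitness, dim, n_drones=20, n_pool=10, n_broods=20,
         s_max=1.0, generations=150, rng=random.Random(9)):
    """HBCO sketch: SA-type drone acceptance into a genetic pool, arithmetic
    queen-drone crossover, brood mutation, queen update, drone selection."""
    def new_drone():
        return [rng.uniform(-5, 5) for _ in range(dim)]
    drones = [new_drone() for _ in range(n_drones)]
    queen = max(drones, key=fitness)
    for _ in range(generations):
        # fill the genetic pool with an annealing-type acceptance rule
        pool, s = [], s_max
        while len(pool) < n_pool:
            d = rng.choice(drones)
            delta = fitness(queen) - fitness(d)
            if delta <= 0 or rng.random() < math.exp(-delta / s):
                pool.append(d)
            s = max(s * 0.98, 1e-3)      # the queen's speed decreases
        # crossover: arithmetic recombination of the queen with pool drones
        broods = []
        for _ in range(n_broods):
            d = rng.choice(pool)
            g = rng.random()
            broods.append([g * q + (1 - g) * x for q, x in zip(queen, d)])
        # mutation on randomly selected broods
        for b in rng.sample(broods, n_broods // 4):
            b[rng.randrange(dim)] += rng.gauss(0.0, 0.2)
        # queen update and drone selection for the next generation
        cand = max(broods, key=fitness)
        if fitness(cand) > fitness(queen):
            queen = cand
        drones = sorted(broods, key=fitness, reverse=True)[:n_drones]
    return queen

# illustrative test: maximize the negated sphere function (optimum at 0)
queen = hbco(lambda x: -sum(v * v for v in x), dim=2)
```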

5.2 Network reconfiguration
The network reconfiguration problem arises mostly in distribution systems and aims to change the network topology by altering the status of sectionalizing switches. Generally speaking, this is a complex combinatorial problem, because in real systems the number of candidate solutions is huge. Paper [34] approaches the network reconfiguration problem using the ACO algorithm. Candidate solutions are represented by vectors of the line sections that must be opened to obtain a radial configuration of the network, taking into account the problem's constraints (no consumer may be disconnected, and the thermal currents and voltage drops must remain within admissible limits). A different approach, based on the PSO algorithm, was proposed in [35]. Here, a particle in the swarm encodes two m-tuples of indices, where the value of

5 Applications of (meta)heuristic methods in power systems
Many optimization problems in Power Systems may be approached using heuristic and metaheuristic techniques. A brief description of the most representative applications of (meta)heuristic methods in the field of power system optimization is presented below.

5.1 Load assessment and profiling
Knowledge of the load characteristics at system buses in general, and the estimation of peak loads in distribution systems in particular, are among the top requirements for making high-quality decisions in the optimal operation and planning of a distribution system.


parameter m equals the number of source nodes plus the number of independent loops in the system, minus one. The first m-tuple indicates the feeders in the system where the loops should be opened, and the second m-tuple shows the line section on each feeder where the switch must be turned off. An interesting approach to the network reconfiguration problem is depicted in [36]. It replaces the traditional approach, which computes the network losses for every switching configuration, with the following simple heuristic: start with the standard radial configuration and select the switch p with the maximum voltage difference across it. If this difference is too small to promise any loss reduction, stop the search. Otherwise, choose the bus across switch p with the minimum voltage, open the switch q adjacent to this bus and close switch p. Move to the next switch and repeat the above steps until the losses begin to increase. At that moment, stop the search and choose as the optimal solution the configuration with switch q open.

5.3 Reactive power planning
Reactive power control can be achieved using several approaches, such as generator voltage control, transformer tap control and fixed or controllable VAR sources. At the distribution level, the most efficient approach, as it also produces other positive effects, is Reactive Power Compensation (RPC) through power factor correction. RPC was approached in [34] and [35] in parallel with distribution system reconfiguration, using the ACO and IA algorithms; multiple runs of these algorithms eventually lead to the optimal solution of the RPC problem. A different approach was used in [37], where an enhanced PSO (E-PSO) algorithm was proposed. In the E-PSO algorithm, a particle changes its position based not only on its best position so far and the global best position, but also on other successful positions of itself and of other particles in the current population. Tests have proved that a second-order E-PSO model is enough to guarantee faster convergence. In [38] the reactive power planning problem is approached in a more complex way, considering the simultaneous optimization of the reactive output of generators, transformer tap positions and capacitor banks. The authors use a hybrid optimization method: the optimal settings for the reactive output of generators and the transformer tap positions are determined using metaheuristics (GA, PSO and DE), while the number of capacitor banks at the system buses is established using a heuristic approach based on an analysis of loss sensitivity at the buses.

5.4 System security analysis
System equivalents are a good solution to simplify the on-line analysis of present-day wide-area power systems. Paper [40] presents an original approach to the optimal design of REI equivalents based on the sensitivity of the complex bus voltages to a set of simulated contingencies. The optimal structure of the REI equivalent is determined using a GA. Basically, besides the standard and mixed REI equivalents with PQ- or PV-type buses, the proposed model allows defining any structure of the equivalent, with any number of virtual REI buses, which group real buses from any category (PQ- or PV-type). In [39] the authors propose a feeder reconfiguration approach to increase the voltage stability limits of a distribution system, based on a TS optimization procedure. The quality of a candidate solution is assessed using a voltage stability index recommended in [39]. FACTS (Flexible AC Transmission Systems) optimization is approached in [41], where a hybrid metaheuristic approach to the efficient allocation of UPFC (Unified Power Flow Controller) devices for maximizing transmission capability is presented. The model proposed in [41] is applied in two steps: the first determines the optimal location of the UPFC using a TS algorithm, while the second optimizes the outputs of the UPFC using an evolutionary PSO approach.
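The two-step hybrid scheme used in [41] (a discrete search for the device location, then a continuous search for the device setting) can be sketched as follows. The objective function here is a toy stand-in for a transfer-capability evaluation, and the tabu list length, PSO coefficients and all names are illustrative assumptions, not details taken from [41].

```python
import random

def objective(location, setting):
    # Toy objective: each candidate location has a different optimum setting.
    optima = {0: 0.2, 1: 0.5, 2: 0.8}
    return (setting - optima[location]) ** 2 + 0.1 * location

def tabu_location_search(locations, steps=10):
    """Step 1: discrete location search with a short tabu list,
    evaluated at a coarse fixed setting of 0.5."""
    tabu, current = [], random.choice(locations)
    best = current
    for _ in range(steps):
        candidates = [l for l in locations if l not in tabu] or locations
        current = min(candidates, key=lambda l: objective(l, 0.5))
        tabu = (tabu + [current])[-2:]          # keep a length-2 tabu list
        if objective(current, 0.5) < objective(best, 0.5):
            best = current
    return best

def pso_setting(location, n=10, iters=40):
    """Step 2: one-dimensional PSO over the continuous setting in [0, 1]
    at the location fixed by step 1."""
    pos = [random.random() for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pos, key=lambda x: objective(location, x))
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.6 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            if objective(location, pos[i]) < objective(location, pbest[i]):
                pbest[i] = pos[i]
            if objective(location, pos[i]) < objective(location, gbest):
                gbest = pos[i]
    return gbest

random.seed(0)
loc = tabu_location_search([0, 1, 2])
setting = pso_setting(loc)
```

With this toy objective, step 1 settles on location 0 and step 2 tunes the setting towards its optimum of 0.2.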

5.5 State estimation
The most popular State Estimation (SE) model is based on the Weighted Least Squares method, and uses measurements of bus voltages and of real and reactive power flows or injections. Recent works, such as [42], approached the problem of power system SE using a parallel implementation of the PSO algorithm based on PC cluster systems. The main advantages of parallel processing consist in reducing computing time and improving computation efficiency. The proposed PC cluster configuration considers sub-populations that evolve independently and exchange information only with predefined neighboring sub-populations. Recently, GPS-synchronized phasor measurement data were proposed as input data to the traditional SE model [43]. Phasor measurement units (PMUs) are placed at the buses of transmission substations to measure voltage and current phasors. From a financial standpoint, a hybrid solution which uses both traditional and synchronized phasor measurements is preferable in terms of costs. Starting from this observation, paper [44] approaches optimal PMU placement as an optimization problem which aims to expand an existing traditional measurement configuration through the placement of additional PMUs using a GA-based approach. The algorithm aims to find the optimal placement of a given set of PMUs at the buses of a power system with a known conventional measurement configuration.
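As a rough illustration of this kind of GA formulation, the sketch below encodes PMU placements as a 0/1 chromosome over the buses of a small hypothetical 6-bus network: a bus counts as observable if it or a neighbour hosts a PMU, or if it already carries a conventional measurement, and the fitness penalizes unobserved buses and the PMU count. The network, penalty weight and GA parameters are invented for the example and are not taken from [44].

```python
import random

# Hypothetical 6-bus network given as an adjacency list.
ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
CONVENTIONAL = {5}          # buses already observable via existing meters

def fitness(chrom):
    """Lower is better: heavy penalty per unobserved bus, plus PMU count."""
    observable = set(CONVENTIONAL)
    for bus, bit in enumerate(chrom):
        if bit:
            observable.add(bus)
            observable.update(ADJ[bus])
    unobserved = len(ADJ) - len(observable)
    return 10 * unobserved + sum(chrom)

def ga(pop_size=30, gens=60, pmut=0.1):
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in ADJ] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(ADJ))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < pmut) for bit in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

On this toy network, full observability is achievable with two PMUs (for instance at buses 0 and 5), so the GA should return a full-coverage placement with a small PMU count.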

5.6 Distributed generation
Present-day distribution systems are undergoing deep changes that transform the traditional design for passive operation into new concepts centered on distributed generation (DG) and a more active role of end-users. Two representative types of optimization problems for DG applications are briefly described below. An appropriate location of DG sources can reduce system losses, improve voltage profiles and simplify network protection schemes. Thus, a multi-objective approach to the optimal placement of DG, based on the HBCO algorithm, is presented in [45]. The fitness function used by the HBCO algorithm is a combination of two objective functions, which describe different features of the system: the system real power losses and a penalty function associated with bus voltage and line overloading violations for contingency analysis. The problem of optimal design of multiple energy hubs was approached in [46] for a trigeneration energy system using a GA-based approach. The hub uses electricity and natural gas or biomass as primary energy, and produces electricity, cooling and heating as outputs. The mathematical model is based on a compound objective function with two components: the primary energy function and an error term equal to the difference between the primary energy computed in two independent ways (starting from the heating or the cooling output energy). The optimization process consists in finding the optimal values of three conversion coefficients that uniquely describe the hub energy flow.

6 Conclusion
During the last 20 years numerous (meta)heuristic approaches have been devised and developed to solve complex optimization problems. Their success is largely due to their most important features, namely the need for only minimal additional knowledge about the optimization problem and the high numerical robustness of the algorithms. This paper provided basic knowledge of the most popular (meta)heuristic optimization techniques and of how they are applied to common optimization problems in power systems.

References:
[1] R. Fletcher, Practical Methods of Optimization, 2nd Edition, John Wiley & Sons, 2000.
[2] G. Polya, How to Solve It, Princeton University Press, 1945.
[3] F. Box, A Heuristic Technique for Assigning Frequencies to Mobile Radio Nets, IEEE Transactions on Vehicular Technology, 27, pp. 57-74, 1978.
[4] K.Y. Lee, M.A. El-Sharkawi (editors), Modern Heuristic Optimization Techniques with Applications to Power Systems, IEEE Press Series on Power Engineering, John Wiley & Sons, 2008.
[5] J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley, 1984.
[6] S.J. Russell, P. Norvig, Artificial Intelligence: A Modern Approach (3rd ed.), 2009.
[7] S. Koenig, M. Likhachev, Y. Liu, et al., Incremental Heuristic Search in AI, AI Magazine, 25(2), 2004.
[8] R. Zhou, E.A. Hansen, Breadth-First Heuristic Search, Proc. of the 14th International Conference on Automated Planning and Scheduling (ICAPS-04), Whistler, British Columbia, Canada, June 2004.
[9] R. Dechter, J. Pearl, Generalized Best-First Search Strategies and the Optimality of A*, Journal of the Association for Computing Machinery, 32, pp. 505-536, 1985.
[10] E. Burns, S. Lemons, R. Zhou, W. Ruml, Best-First Heuristic Search for Multi-Core Machines, Proc. of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09), Pasadena, California, pp. 449-455, 2009.
[11] R. Zhou, E.A. Hansen, Beam-Stack Search: Integrating Backtracking with Beam Search, Proc. of the 15th International Conference on Automated Planning and Scheduling, Monterey, June 2005.
[12] C. Blum, A. Roli, Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison, ACM Computing Surveys, 35(3), pp. 268-308, 2003.
[13] I. Rechenberg, Cybernetic Solution Path of an Experimental Problem, Royal Aircraft Establishment Library Translation, 1965.
[14] L. Fogel, A.J. Owens, M.J. Walsh, Artificial Intelligence through Simulated Evolution, Wiley, 1966.
[15] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
[16] J. Koza, Genetic Programming, MIT Press, 1992.
[17] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by Simulated Annealing, Science, 220(4598), pp. 671-680, 1983.
[18] F. Glover, Future Paths for Integer Programming and Links to Artificial Intelligence, Computers and Operations Research, 13(5), pp. 533-549, 1986.
[19] J.D. Farmer, N. Packard, A. Perelson, The Immune System, Adaptation and Machine Learning, Physica D, 22, pp. 187-204, 1986.
[20] P. Moscato, On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Technical Report C3P 826, 1989.
[21] J. Kennedy, R. Eberhart, Particle Swarm Optimization, Proc. of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.

[22] M. Dorigo, Optimization, Learning and Natural Algorithms (PhD Thesis), Politecnico di Milano, 1992.
[23] R. Storn, K. Price, Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of Global Optimization, 11, pp. 341-359, 1997.
[24] S. Nakrani, C. Tovey, On Honey Bees and Dynamic Server Allocation in Internet Hosting Centers, Adaptive Behaviour, 12, 2004.
[25] K.A. De Jong, Genetic Algorithms: A 10 Year Perspective, Proc. of the 1st International Conference on Genetic Algorithms and Their Applications, pp. 169-177, 1985.
[26] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[27] R. Dawkins, The Selfish Gene, Oxford University Press, 1989.
[28] M. Gavrilas, C.V. Sfintes, O. Ivanov, Comparison of Neural and Evolutionary Approaches to Peak Load Estimation in Distribution Systems, Proc. of the International Conference on "Computer as a Tool" (EUROCON 2005), Belgrade, Nov. 2005, pp. 1461-1464.
[29] G. Chicco, R. Napoli, F. Piglione, Comparisons among Clustering Techniques for Electricity Customer Classification, IEEE Trans. Power Syst., vol. 21, no. 2, pp. 933-940, May 2006.
[30] M. Gavrilas, O. Ivanov, G. Gavrilas, Load Profiling with Fuzzy Self-Organizing, Proc. of the 9th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2008), Belgrade, Sept. 2008, CD-ROM, ISBN: 978-1-4244-2903-5.
[31] G. Chicco, M.S. Ilie, Support Vector Clustering of Electrical Load Pattern Data, IEEE Trans. Power Syst., vol. 24, no. 3, 2009, pp. 1619-1628.
[32] M. Gavrilas, O. Ivanov, Load Profiling by Self Organization with Affinity Control Strategy, Rev. Roum. Sci. Techn. - Électrotechn. et Énerg., 53, 4, Bucarest, 2008, pp. 413-421.
[33] M. Gavrilas, G. Gavrilas, C.V. Sfintes, Application of Honey Bee Mating Optimization Algorithm to Load Profile Clustering, Proc. of the IEEE International Conference on Computational Intelligence for Measurement Systems and Applications, Taranto, Italy, Sept. 2010.
[34] O. Ivanov, M. Gavrilas, C.V. Sfintes, Loss Reduction in Distribution Systems using Ant Colony Algorithms, Proc. of the 2nd Regional Conference & Exhibition on Electricity Distribution, Serbia, Oct. 2006.
[35] M. Gavrilas, O. Ivanov, Optimization in Distribution Systems Using Particle Swarm Optimization and Immune Algorithm, Proc. of the First International Symposium on Electrical and Electronics Engineering, Galati, Romania, 2006.
[36] R. Srinivasa Rao, S.V.L. Narasimham, A New Heuristic Approach for Optimal Network Reconfiguration in Distribution Systems, International Journal of Applied Science, Engineering and Technology, 5:1, 2009, pp. 15-21.
[37] M. Gavrilas, O. Ivanov, C.V. Sfintes, Enhanced Particle Swarm Optimization Method for Power Loss Reduction in Distribution Systems, Proc. of the 19th International Conference on Electricity Distribution (CIRED), Vienna, May 2007.
[38] B. Bhattacharyya, S.K. Goswami, Combined Heuristic and Evolutionary Approach for Reactive Power Planning Problem, Journal of Electrical Systems, vol. 3, no. 4, 2007, pp. 203-212.
[39] M.A.N. Guimaraes, J.E.C. Lorenzeti, C.A. Castro, Reconfiguration of Distribution Systems for Voltage Stability Margin Enhancement Using Tabu Search, Proc. of the International Conference on Power System Technology (POWERCON 2004), Singapore, Nov. 2004, pp. 1556-1561.
[40] M. Gavrilas, O. Ivanov, G. Gavrilas, A New Static Network Reduction Technique Based on REI Equivalents and Genetic Optimization, Proc. of the 8th WSEAS International Conference on Power Systems (PS '08), Santander, Spain, Sept. 2008, pp. 106-111.
[41] H. Mori, Y. Maeda, A Hybrid Method of EPSO and TS for FACTS Optimal Allocation in Power Systems, Proc. of the 2006 IEEE International Conference on Systems, Man, and Cybernetics, 2006, pp. 1831-1836.
[42] H.M. Jeong, H.S. Lee, J.H. Park, Application of Parallel Particle Swarm Optimization on Power System State Estimation, Proc. of the IEEE Transmission & Distribution Asia Conference, 2009.
[43] A. Abur, A.G. Expósito, Power System State Estimation: Theory and Implementation, Marcel Dekker, Inc., 2004, ISBN: 0-8247-5570-7.
[44] M. Gavrilas, I. Rusu, G. Gavrilas, O. Ivanov, Synchronized Phasor Measurements for State Estimation, Rev. Roum. Sci. Techn. - Électrotechn. et Énerg., vol. 54, no. 4, 2009, pp. 335-344.
[45] S. Anantasate, C. Chokpanyasuwan, W. Pattaraprakor, P. Bhasaputra, Multi-objective Optimal Placement of DG Using Bee Colony Optimization, Proc. of the 3rd GMSARN International Conference, Nov. 2008, China.
[46] E. Hopulele, M. Gavrilas, C. Afanasov, Optimal Design of a Hybrid Trigeneration System with Stirling Engine, Proc. of the 6th International Conference on Electrical and Power Engineering, Iasi, Oct. 2010.
