Simulated Rebounding Algorithm: an evolutionary strategy to solve combinatorial optimizations in power systems

Alejandro Hoese(1)

Abstract-- An evolutionary strategy based on the physical kinetic process of rebounding is presented in this work. The novel algorithm, the Simulated Rebounding Algorithm (SRA), is compared in performance to the well-known evolutionary strategy Simulated Annealing, two implementations of Genetic Algorithms (GA) and a hybrid GA with some characteristics taken from SRA. All these evolutionary algorithms (EA) are applied to the optimal design of stand-alone generation systems using renewable and conventional energy sources. The main goal of the optimization is to find a set of optimal configurations for the optimal expansion planning of the power system. The optimal set is found by minimizing an objective function (OF), which includes investment, installation, operation and maintenance costs and the associated uncertainties. In the OF a penalization due to energy and power not supplied, originated in unit outages, is also considered. The OF is evaluated by means of stochastic correlated simulation, requiring considerable calculation time. The goal of this paper is to compare the behavior of the different EAs applied to this complex optimization problem, and to analyze the empirical performance of the novel algorithm in comparison to well-known EAs regarding accuracy and computation time.

Index Terms-- Evolutionary Algorithms, power system planning, renewable energies.

(1) Independent Consultant in Energy Projects. Former researcher at the Instituto de Energía Eléctrica, Universidad Nacional de San Juan - Av. Lib. Gral. San Martín 1109 (o) - 5400 - San Juan, Argentina - [email protected]

I. INTRODUCTION

Power generation systems must cover the given energy demand at any time, maximizing the quality and reliability of the electrical product and minimizing ecological and economic costs. For a given load scenario it is possible to propose different system configurations (i.e. installed power for each type of generation unit and storage unit). Each of these configurations has a certain reliability behavior, lifetime, investment and installation costs, operation and maintenance (O&M) costs and environmental impact.

The electrical power system must be expanded over time in order to meet increasing energy demand. The expansion plan can be projected in different ways, giving different expansion alternatives with different costs, reliability and environmental impact. The optimal expansion plan (OEP) is the alternative with minimum costs and maximum power quality and reliability over the planning horizon (typically 20-30 years). During the expansion process the system is conditioned by stochastic events leading to long- and short-term uncertainties (Figure 1). Because of the stochastic nature of the problem, stochastic correlated simulation (SCS) [1] is applied to calculate the operation with minimal O&M costs and to assess appropriate indices for the evaluation of each proposed configuration for each reference year in the expansion plan.

Fig. 1. Optimal Expansion Planning process considering uncertainties. (The figure depicts the annual energy demand and the configurations proposed for each reference year over the planning horizon (>20 years), the resulting optimal expansion alternative, the short-term uncertainties: load fluctuation, primary energy fluctuation, unit outages; and the long-term uncertainties: load growth, fuel costs, interest rates.)

To consider the influence of short-term uncertainties in the OEP, the planning horizon is partitioned into reference years. Each reference year represents a number of years in which the boundary conditions (i.e. demand scenario, interest rates, fuel costs) are to be considered unvarying [2,3,4]. For each reference year a set of technically appropriate configurations (TAC) of the power system should be found. The TACs are assumed to supply the load with minimal O&M costs fulfilling the expected reliability level and technical constraints.


A. Optimality hypothesis of the OEP

As the optimality of the expansion alternative to be found depends on the quality of the proposed configurations at each reference year, it is assumed that only a set with the best configurations is needed to find the OEP. This set is supposed to contain the configurations with the highest probability of belonging to the OEP. Thus, a reduction of the solution space should be done by evaluating and comparing configurations, choosing at each reference year a set of TACs with high probability of being part of the OEP. The optimal set is found by minimizing an objective function (OF), which includes investment, installation, and O&M costs. In the OF a penalization due to unserved energy and power (originated in unit outages) is also considered. The OF is evaluated through stochastic correlated simulation (SCS), which takes a sizable amount of calculation time.

B. Evolutionary Algorithms applied to the optimal selection of TACs

Selection of m TACs within the solution space should be made in such a way that the TACs with higher probability of being part of the OEP are identified without an exhaustive search. The selection methodology is of a combinatorial nature and can be implemented using Random Local Search and Artificial Intelligence techniques [5]. Taking into account that:

1. the assessment of qualifying indices for a certain configuration is performed using stochastic correlated simulations (SCS),
2. the stochastic simulations require high computational effort,
3. the selection of m TACs should be accomplished for each reference year of the planning horizon,

it is concluded that the most appropriate optimization heuristic is the one with the ability to identify the m configurations with a minimal inspection of the solution space. This paper describes an evolutionary strategy, the Simulated Rebounding Algorithm (SRA), and evaluates its empirical performance regarding computation time and relative error of the solution found by comparing it with well-known evolutionary algorithms:

1. Simulated Annealing Algorithm (SAA): random local search algorithm based on the physical process of annealing [6].
2. Binary Genetic Algorithm (BGA): GA in which the configurations are coded using a chromosome with binary genes [7].
3. Decimal Genetic Algorithm (DGA): GA in which the configurations are coded using a chromosome with positive integer numbers expressed in base 10.
4. Hybrid Genetic Algorithm (HGA): GA in which the configurations are coded using a chromosome with decimal positive integers, but the selection operator includes a particular heuristic taken from SRA.

These five evolutionary algorithms are compared when applied to the optimal selection of TACs for long-term expansion planning studies. The comparison is based on the analysis of the performance of each EA with respect to the relative error (distance from the chosen solution to the global optimum of the OF) and the computation time (number of inspected solutions needed to find the optimal set).

II. THE SIMULATED REBOUNDING ALGORITHM (SRA)

SRA was introduced in 1993 [8] as an evolutionary heuristic based on principles similar to those of the SAA, but drawing an analogy with the physical kinetic process of the inelastic shock of a body under the action of a gravitational field. The algorithm was originally designed for a specific kind of parametric combinatorial optimization problem, where the number of constituent parameters of the solution is not very high but the evaluation of the cost function requires an important computational effort (in such problems the cost value of each solution is obtained by means of simulative methods). In this kind of problem the speed of convergence is essential: the search over the solution space must be highly efficient, in order to reach a local optimal solution near the global optimum with a minimum number of proposed transitions.

A. The simulation of the physical process of inelastic rebounding

Suppose that a ball with an elasticity coefficient near 1 is dropped over a surface S. The ball will fall onto the surface due to the action of the gravitational field in which it is immersed. The shock produced by the fall will affect the kinetic energy of the ball if part of this energy is dissipated in permanent deformations (inelastic shock). These deformations will be proportional to the kinetic energy of the shock, which is equivalent to the difference in potential energy between the shock point and the point from which the ball falls. If the process is not stopped, the bounces will cease when the kinetic energy of the ball is zero. In this process of inelastic bounces, the ball hits different points of the surface, deflected by the bounces on a non-flat surface. If the surface has "valleys" and "hills", the points in a valley will have less potential energy than the hills. By the action of gravity, the ball "will try" to stop in a valley, and depending on the kinetic energy it has, it will be able to skip over hills to explore new valleys.

Fig. 2 - One-dimensional example of simulated rebounding (a staircase-shaped cost surface with starting points A, B and C at initial energy E0).

The analogy with a problem of combinatorial optimization is made by means of an equivalence between the surface of bounces and the space of solutions, and between the height of each point of the surface and the value of the cost function linked to that solution. SRA is therefore a multidimensional extrapolation from the physical process of bounces. Due to the gradual loss of potential and kinetic energy after each collision, at the end of the rebounding process the ball is expected to stop at the global minimum (the point of smallest height of the surface) or at some local minimum near the global one. Figure 2 shows a one-dimensional example: as shown by the trajectory lines, the ball can reach the global minimum (step 11) if it is sent from point A or C. Nevertheless, it is trapped in the local minimum of step 14 if it is sent from point B. That is to say, the global minimum is reached if:

- the initial potential energy (E0), and
- the percentage of loss of energy due to inelastic collisions (1 - α)

allow the ball "to jump" all local minima during the process of bounces. In SRA the deformations produced by the collisions can be of three types:

- transitory deformation: the ball does not lose kinetic energy because the height of the fall is small;
- slight deformation: the ball bounces on the "walls" of a valley because its potential energy is smaller than the height of the destination point, decreasing its potential energy by a small value;
- permanent deformation: the ball loses kinetic energy due to a collision caused by a considerable height of fall.

Each type of deformation acts in a particular way on the optimization process. The transitory deformation is the one that allows exploring the solution space; if only this type of deformation existed, the process would never stop. The slight deformation decreases the energy of the ball when it is trapped in a valley, and causes the halting of the process if no solution with potential energy smaller than that of the ball exists (deep valley). The permanent deformation accelerates the convergence towards the optimum, since the ball diminishes its potential energy very fast when it is too high compared with that of the rebounding point on the surface. For the k-th inelastic collision (permanent deformation), the energy of the ball is obtained as:

    E_k = α · E_(k-1) ;  0 < α ≤ 1        (1)

where α has its analogy in the physical process with the coefficient of elasticity of the ball. For a totally elastic material (α = 1), permanent deformations do not exist and therefore the energy of the ball does not diminish with the collisions.

B. Neighboring structure of the SRA

Transitions are made in a neighborhood structure Φ_s around the point of rebounding s. The neighborhood structure can vary from one problem to another. In the one-dimensional example of Figure 2, the ball can only bounce to neighboring steps (to the left or right). This is defined as the "minimal neighborhood structure", whose order is calculated for the d-dimensional case as:

  2 dim( )  2  d

(2)

The minimal neighborhood structure Φ_s defines which solutions are considered "close to" a given solution s. One possible definition for the solution space of d-dimensional integers is:

    z ∈ Φ_s  ⟺  x_j(z) = x_j(s) ± 1   (j = 1, ..., d)
                 x_i(z) = x_i(s)       (i ≠ j)            (3)

where z is a neighboring solution of s and x_j(s) is the j-th (integer) element of s. A generation mechanism proceeds to choose the transition solution. For example, every neighboring solution of (3) can be randomly selected by means of a uniformly distributed random number q between 1 and |Φ_s| = 2d, so that q represents one solution of Φ_s:

    q = random(1, 2d) ;   j = q ,     y = +1   if q ≤ d
                          j = q - d , y = -1   if q > d

    z = (s_1, s_2, ..., s_j + y, ..., s_d) = transition solution of s

The neighborhood structure and the generation mechanism should be adapted to the solution space of the given problem. For example, the generation mechanism given in (3) is not appropriate for solution spaces belonging to real numbers.



C. Acceptance criterion and stop criterion of SRA

The order of Φ defines the number of solutions belonging to the minimal neighborhood structure (2). The transition solution is randomly chosen from the neighborhood structure. If the energy of the transition solution is lower than the energy of the process, it is accepted as the new rebounding solution. In the opposite case, a new transition from the same neighborhood structure is chosen. The acceptance criterion of SRA is therefore:

Pr( j  next solution

k

1 if ) 0 if

f ( j)  E k f ( j)  E k

(4)

The process stops when its energy at the k-th iteration Ek is not enough to reach a new solution, after searching the whole neighboring structure of the current solution i. The stop criterion is therefore:

f ( x )  Ek

 x  Vi

(5)

In practice, the stop criterion is implemented by reducing the energy of the ball whenever a bounce does not reach a neighboring solution (i.e., whenever the chosen neighboring solution has a cost value greater than the energy of the ball). This reduction by "frustrated" rebounding, which in analogy with the physical process corresponds to the slight deformations, must nevertheless allow the complete exploration of the neighboring structure V_i. That is why the reduction of kinetic energy by frustrated rebounding δ is defined as:

    δ = ( E_k,0 - f(i) ) / |Φ|        (6)

and the reduction of the energy of the process with each frustrated rebound r will be:

E k r 1  E k r  

( r  0 , 1, ... ,   1)

(7)

with

    E_k,r = total energy of the process at the k-th iteration after r frustrated bounces,
    f(i)  = potential energy of the solution i at the k-th iteration, i.e. the cost function value for the current solution.

This reduction by frustrated bounces stops the process at the solution reached at the k-th iteration after a complete exploration of the neighboring space without finding a solution that can be reached, as stated in (5).

D. Practical implementation of SRA

For the implementation of SRA it is necessary to count on:

- an initial condition (E0 and the initial solution x0),
- the value of the parameter α (percentage of energy conservation),
- a neighborhood structure V, and
- a generation mechanism to select the next solution to be evaluated.

The algorithm is not robust, since it can be seen very intuitively that the quality of the solution found depends in general on the adopted initial solution. Nevertheless, it is a very efficient algorithm regarding calculation time (only a very reduced part of the solution space is inspected). Due to these characteristics, it is advisable to run the optimization with different initial solutions, with the following purposes:

- to reach the global optimum or at least a solution very close to the global optimum,
- to find a set of near local optima.

The set of near-optimal solutions is found by analyzing the set of inspected solutions until convergence. Therefore, to ensure a representative set of inspected solutions, a number of initial solutions corresponding to the vertices of the solution space should be taken:

    x0_1 = ( x_1,max , 0 , 0 , ... )
    x0_2 = ( 0 , x_2,max , 0 , ... )
    x0_k = ( 0 , 0 , ... , x_k,max , 0 , ... )        (8)
    x0_d = ( 0 , ... , 0 , x_d,max )

where x_k,max is the maximum value of the k-th parameter (1 ≤ k ≤ d). It is noted that the initial solutions given in (8) are not necessarily feasible regarding the objective function and constraints of the problem. The initial energy of the process E0 can be adopted in practice [3] as a function of the parameter α and the cost value of the initial solution f(x0):

    E0 = α^(-1) · f(x0)        (9)

with 0.7 < α < 0.99.

During the evolutionary process, the SRA could propose the same solution more than once. Due to the computation effort needed to evaluate each solution, and in order to avoid re-evaluation of the same solution, each one is identified with a cost value c(s) such that:

    c(s) = 0       for s not yet proposed
    c(s) < 0       for s not feasible         (10)
    c(s) = f(s)    for s feasible

assuming f(s) > 0 for all s.
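Putting sections II.A-II.D together, the rebounding loop might be sketched as follows. This is our own simplified reading of equations (1), (4), (5), (6), (7) and (9), not the author's implementation (the appendix gives the original pseudocode); the bounds handling, the iteration cap and the sampling-with-replacement neighborhood inspection are assumptions made for a self-contained example:

```python
import random

def sra(cost, x0, x_max, alpha=0.92, seed=0, max_iter=10000):
    """Simplified rebounding loop for non-negative integer vectors.

    cost  : objective function, assumed > 0 everywhere
    x0    : initial solution
    x_max : per-component upper bounds
    alpha : energy conservation parameter (0 < alpha < 1), eq. (1)
    """
    rng = random.Random(seed)
    d = len(x0)
    phi = 2 * d                        # order of the minimal neighborhood, eq. (2)
    x = tuple(x0)
    f_x = cost(x)
    best, f_best = x, f_x
    E = f_x / alpha                    # initial energy of the process, eq. (9)
    for _ in range(max_iter):
        if E <= f_x:                   # stop criterion, eq. (5)
            break
        delta = (E - f_x) / phi        # frustrated-bounce decrement, eq. (6)
        moved = False
        for _ in range(phi):           # sample the neighborhood (with replacement)
            q = rng.randint(1, phi)    # generation mechanism of (3)
            j, y = (q, 1) if q <= d else (q - d, -1)
            z = list(x)
            z[j - 1] += y
            if 0 <= z[j - 1] <= x_max[j - 1]:
                zt = tuple(z)
                f_z = cost(zt)
                if f_z < E:            # acceptance criterion, eq. (4)
                    if f_z < f_x:      # inelastic collision: lose energy, eq. (1)
                        E = alpha * E
                    x, f_x, moved = zt, f_z, True
                    if f_x < f_best:
                        best, f_best = x, f_x
                    break
            E -= delta                 # slight deformation, eq. (7)
            if E <= f_x:
                break
        if not moved:
            break                      # neighborhood exhausted, eq. (5)
    return best, f_best
```

Following the recommendation of (8), the function would be called once per vertex of the solution space and the union of the inspected solutions analyzed afterwards.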

III. EMPIRICAL PERFORMANCE ANALYSIS

A. Comparing SRA vs. SAA

The performance analysis is made by using the following quantitative criteria:

a- the computation time needed to find the set of near-optimal solutions, measured as the number of solution configurations evaluated through stochastic correlated simulation until convergence is achieved, normalized with respect to the maximum number of configurations in the solution space:

    t = (number of elementary transitions) / (module of the solution space)        (11)

where the number of elementary transitions denotes the non-repeated solutions proposed along the optimization process. In this way, the computation time is independent of the processor employed.

b- the quality of the solutions, quantified in an average case through the relative error of the solutions found in several runs of the algorithm for the same problem instance:

    ρ = ( E[f(v)] - f(op) ) / f(op)        (12)

where ρ is the average relative error, E[f(v)] the expected value of the cost function for the best solution found with the algorithm, and f(op) the cost of the global optimal solution found through exhaustive search.

The considered problem instance deals with the optimal design of a small power system with photovoltaic and wind generators. The SRA is launched independently from each initial solution, in order to evaluate its robustness (dependence of the solution found with respect to the initial solution).

Problem formulation: The optimization problem represents each possible configuration of the power system as an array with n components. Each component is coded as a positive integer representing a certain type and installed power for a given unit (photovoltaic panels, wind turbines, micro-hydro, diesel gensets, electrochemical batteries, water-pumping units, etc.) and system parameters (maximal state of discharge for batteries, orientation angles of the photovoltaic panels, etc.). Since each EA has certain parameters leading the search strategy, these parameters were tuned to the best values regarding performance (relative error and computation time) of the algorithm for the given problem instance.

    minimize CF(x)   for   x = ( n_b , n_pv , n_w , q_min ) ∈ Ω ,  d = 4

where CF(x) is a cost function which evaluates a given solution x with respect to investment, operation costs and penalization costs due to energy not supplied, and

    n_b   = number of (identical) batteries (n_b = 1, 2, ..., 9)
    n_pv  = number of (identical) PV panels (n_pv = 2, 4, 6, ..., 80)
    n_w   = number of (identical) wind generators (n_w = 0, 1, 2, 3, 4)
    q_min = minimum allowed battery discharge (q_min = 0, 10, ..., 90 [%])

Initial solutions:

    x0_1 = (1, 2, 0, 0)
    x0_2 = (3, 20, 1, 20)
    x0_3 = (5, 40, 2, 50)
    x0_4 = (7, 60, 3, 70)
    x0_5 = (9, 80, 4, 90)

Global optimal solution(1): x_opt = (3, 2, 3, 90)

Figures 3 and 4 show the results of the performance indices of both algorithms for 14 runs with every initial solution (white points) and 70 runs with the initial solution x0 (black points). Figure 3.a shows the average relative error and computation time as a function of the parameter α (SRA). Figure 3.b shows the same indices for SAA, as a function of the temperature control parameter (equivalent to α in SRA). Figure 4.a shows the performance of SRA for different values of E0, normalized with respect to the cost function value for x0 (f(x0)), fixing α at its optimal value obtained from Fig. 3.a. Figure 4.b shows the behavior of SAA with respect to the equivalent parameter c0 (the initial temperature of the annealing process), fixing the temperature control parameter at its optimal value obtained from Fig. 3.b.

B. Comparing SRA vs. Genetic Algorithms

In this section, SRA is compared with three kinds of GAs: Binary Genetic Algorithm (BGA), Decimal Genetic Algorithm (DGA) and a Hybrid Genetic Algorithm (HGA). Problem formulation:

    minimize CF(x)   for   x = ( n_t , n_pv , n_w , n_b ) ∈ Ω ,  d = 4

(1) Found through exhaustive search of the entire solution space Ω (|Ω| = 18000).
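The two comparison indices of (11) and (12) are simple ratios; as a sketch (function names are ours, not from the paper):

```python
def computation_time(elementary_transitions, solution_space_size):
    """Normalized computation time t of eq. (11), in percent:
    non-repeated proposed solutions over the module of the solution space."""
    return 100.0 * elementary_transitions / solution_space_size

def relative_error(mean_best_cost, optimal_cost):
    """Average relative error rho of eq. (12), in percent."""
    return 100.0 * (mean_best_cost - optimal_cost) / optimal_cost
```

For the instance of this section (|Ω| = 18000), inspecting 180 distinct configurations would give t = 1%.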

Fig. 3.a - Computation time t [%] and relative error ρ [%] vs. α, the energy conservation parameter in SRA, for E0 = 1.5 f(x0) (initial energy of the process).

Fig. 3.b - Computation time t [%] and relative error ρ [%] vs. the temperature control parameter in SAA, for c0 = 1000 (initial temperature).

Fig. 4.a - Computation time t [%] and relative error ρ [%] with respect to the initial energy E0, normalized to f(x0), for α_opt = 0.95.

where CF(x) is a cost function which evaluates a given solution x with respect to investment, operation costs and penalization costs due to energy not supplied, and

    n_t  = combination of diesel gensets (n_t = 1, 2, ..., 9)
    n_pv = combination of PV panels (n_pv = 1, 2, ..., 9)
    n_w  = combination of wind generators (n_w = 1, 2, ..., 9)
    n_b  = combination of batteries (n_b = 1, 2, ..., 9)

Fig. 4.b - Computation time t [%] and relative error ρ [%] with respect to the initial temperature c0, for the optimal value 0.90 of the temperature control parameter.

For this codification of the solution space, a value of nx=0 corresponds to no use of type x units and a value different from zero, a predefined feasible combination of one or several type x units.

1) Implementation of SRA: For this problem instance, the solution codification with positive integers is used. The SRA defines a minimal neighboring structure (MNS) in which the local search (next rebound) is performed. The MNS of a given solution v is given by the solutions differing in one component, with a value increased or decreased by one unit from the corresponding value of v. The control parameter is the elasticity coefficient, and its value is fixed at 0.92, which has proved to be optimal for this particular problem instance. The SRA depends on the initial solution, but it is extremely efficient in terms of computation time. Therefore, it is recommended to start the algorithm from all vertices of the solution space, in order to better assure the optimal quality of the solution chosen. In addition, SRA was initialized from three different initial solutions, corresponding to three vertices of the solution space (13). Each individual run was recorded as SRA1, SRA2 and SRA3, and the multiple runs(2) as SRA.

    x0_1 = (9, 0, 0, 0)
    x0_2 = (9, 9, 0, 0)        (13)
    x0_3 = (9, 0, 9, 0)

2) Implementation of Genetic Algorithms (BGA, DGA): Both BGA and DGA are implemented using the same parameter values. Their main difference is the codification of the solution space and the crossover operator applied. The former codifies each system configuration with a 30-element binary string, using 5 bits for each unit type, and a double-point crossover. The latter (DGA) directly uses the integer codification and one-point crossover. The results obtained with both algorithms are strongly different. The selection operator was implemented in the following way:

1- After the evaluation of all solutions of the population, individuals are ordered by increasing OF costs.
2- The best individual is copied to the set of parents, its cost is multiplied by a factor p > 1 and reordered into the population (the lower p, the more weight given to the solution).
3- Repeat 2- until the number of parents is reached.

After selection, crossover and mutation operations are applied with the following probabilities:

    Pr(crossover) = 0.99
    Pr(mutation) = 0.001

The convergence criterion is to stop the algorithm when the best solution found with the new population is not better than the best solution found in the last k populations (k is a given parameter).

3) Implementation of a Hybrid Genetic Algorithm: The Hybrid Genetic Algorithm (HGA) is basically a GA, but the selection operator includes specific knowledge of the problem instance (used also in SRA). In order to improve the search over the solution space, the following modifications are introduced:

1- the population number is fixed to 4 times the dimension of the solution vector;
2- the initial population is randomly chosen;
3- the best half of the population is copied into the new population (elitist selection);
4- the best solution of the population is mutated, so as to build the complete neighboring structure defined for SRA.

The fourth step is a mutation giving 2d children, where d is the dimension of the solution vector (number of elements). Each child is identical to the parent except for a single element, either one unit greater or one unit smaller. If a mutated element exceeds the maximum allowed value (or becomes negative), a random feasible value is used instead. Finally, the rest of the new population is completed by applying the same selection, recombination and mutation operators used in DGA.

4) Performance analysis: The performance analysis is made using two quantitative criteria as before (see Section III.A, equations (11) and (12)) and one qualitative criterion: the convergence of the algorithm, which can be evaluated through the trend of the cost values obtained along the optimization process [9]. For each algorithm the set of inspected solutions until convergence was stored for each run. Figure 6 shows a typical trend for BGA. The trend line was found using quadratic regression of the cost values. This trend represents the evolutionary tendency of the algorithm for a given set of control parameters.(3)

(2) Multiple runs means that a complete inspection (or optimization process) is considered after running the algorithm with all proposed initial solutions. Thus, the union of the inspected sets is used to find the set of inspected solutions.
(3) The sequence followed by the algorithm in the inspection of the solution space is called "order".
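Step 4 of the HGA selection (mutating the best individual into its complete SRA-style neighborhood) can be sketched as follows (illustrative Python; the names and the bounds convention are our assumptions):

```python
import random

def sra_style_children(parent, x_max, rng=random):
    """Mutate the best solution into its complete minimal neighborhood:
    2d children, each differing from the parent in one component by one unit."""
    d = len(parent)
    children = []
    for j in range(d):
        for y in (+1, -1):
            child = list(parent)
            child[j] += y
            # out-of-range elements are replaced by a random feasible value
            if not (0 <= child[j] <= x_max[j]):
                child[j] = rng.randint(0, x_max[j])
            children.append(tuple(child))
    return children
```

For a 4-component solution vector this yields 8 children, matching the 2d neighborhood order of (2).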

The results obtained are summarized in Table 1. Figure 7 shows the comparison indices obtained for each algorithm. Figure 8 shows the convergence represented with the corresponding trend line.

Fig. 7 - Algorithm performance comparison.

Figure 8 shows that the tested algorithms have a considerable convergence disparity. The curious shape for SRA is due to the multiple initializations with different initial solutions, so that when a new initial solution starts, a divergence of the cost values emerges. On the other hand, when single initializations (SRA1, SRA2 and SRA3) are used, the computation time is markedly reduced while the error remains of about the same order. Analyzing these single runs, it can be seen that convergence follows an exponential shape, similar to simulated annealing [6].

Fig. 8 - Evolution of convergence.

IV. CONCLUSIONS

In general it can be stated that:

Fig. 6 - Typical evolution of a Genetic Algorithm and polynomial regression trend line of the cost evolution.

Table 1 - Results obtained with the algorithms tested.

               BGA     DGA     HGA     SRA     SRA1    SRA2    SRA3
  Cop(1) [%]   74      38      80      93      75      49      59
  Cop(2) [%]   7       12      16      1       2       2       4
  Cop(3) [%]   5       -       -       4       12      13      10
  ρ(v)   [%]   2.26    7.87    1.58    0.57    2.47    11.88   5.48
  t(v)   [%]   11.3    6.12    5.44    9.86    3.91    4.08    3.40
  t_min  [%]   6.72    3.14    3.31    5.86    2.29    1.44    1.02
  t_max  [%]   18.8    11.90   8.08    15.3    6.37    6.72    6.63

  Cop(1) [%]: arrivals at the global optimum (GO).
  Cop(2) [%]: arrivals at the local optimum closest to the GO.
  Cop(3) [%]: arrivals at the second local optimum closest to the GO.
  ρ(v) [%]:   average relative error.
  t(v) [%]:   average computation time.
  t_min [%]:  minimal computation time.
  t_max [%]:  maximal computation time.

In general it can be stated that:

- Computation time, measured as the number of inspected solutions, is minimal when using evolutionary algorithms. The percentage of solutions inspected falls between 4% and 10%, leading to a reduction of the solution space greater than 90%.
- The tested evolutionary algorithms approximate the global optimal solution with a small expected relative error by inspecting only a reduced portion of the solution space.

From comparisons between the evolutionary algorithms:

- The deviation of the relative error is higher for SAA than for SRA.
- Considering the same relative error, the computation time is higher for SAA than for SRA.
- Considering the same computation time, the relative error is higher for SAA than for SRA.
- The expected relative error for SRA cannot be lower than a minimum, which is reached for an optimal range of the parameter α: 0.90 ≤ α ≤ 0.97.
- The SAA is able to reduce the expected error arbitrarily at the expense of increasing the computation time.
- The SRA is expected to find the global optimum with a probability of 93% (SRA).
- DGA shows linear convergence with shorter computation time than BGA.
- Although all GAs have good performance, for this implementation the error of DGA and BGA depends on the initial population, so that both show lower robustness than the others (SAA, SRA, HGA).
- Computation time for single runs of SRA is always shorter than for the other algorithms.

From the comparison it follows that, for complex combinatorial problem instances with few parameters to be optimized (analogous to the problem here presented), SRA and HGA have demonstrated better performance than SAA and traditional GAs. In particular, the SRA algorithm is shown to be extremely efficient regarding computation time, but strongly dependent on the initial solution. Thus, multiple initialization can be used to enhance the approximation to the global optimum without a considerable increment of computation time.

All tested evolutionary algorithms show to be efficient for optimization problems like the selection of generation alternatives for long-term planning. However, the efficiency of evolutionary algorithms is strongly dependent on the correct formulation of the operators, on a proper codification of the solution space, and on the use of heuristics tailored to the particular problem instance.

V. REFERENCES

[1] Hoese A. and Garcés F. "Stochastic correlated simulation: an extension of the cumulant method to include time-dependent energy sources". International Journal of Electrical Power & Energy Systems, Vol. 21, No. 1 (1999), pp. 13-22.
[2] Hoese A. "Consideration of uncertainties in the expansion planning of stand-alone systems including time-dependent sources". Proceedings of the VI Symposium of Specialists in Electrical Operational and Expansion Planning (SEPOPE), Salvador, Bahía, Brazil (1998).
[3] Working Group 37.10. "Methods for planning under uncertainty". Électra, No. 161 (1995), pp. 143-163.
[4] Mariani E. "Methodologies in medium/long-term operation planning". IEEE Transactions on Electrical Power and Energy Systems, Vol. 11, No. 3 (1989), pp. 176-188.
[5] Miranda V., Srinivasan D. and Proença L.M. "Evolutionary computation in power systems". International Journal of Electrical Power and Energy Systems, Vol. 20, No. 2 (1998), pp. 89-98.
[6] Aarts E. and Korst J. Simulated Annealing and Boltzmann Machines: a stochastic approach to combinatorial optimization and neural computing. John Wiley & Sons (1990).
[7] Goldberg D. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley (1989).
[8] Hoese A. Modeling, simulation and analysis of renewable electrical generation systems. M.Sc. Thesis, Univ. Kaiserslautern, Germany (1993).
[9] Sánchez G. and Hoese A. "Comparing evolutionary algorithms applied to the optimal design of generation power systems". Proceedings of the Argentinean Symposium on Artificial Intelligence (ASAI '99) - 28th International Conference of the Argentine Computer Science and Operational Research Society (SADIO), Buenos Aires, Argentina, September (1999).

VI. APPENDIX: C++ PSEUDOCODE OF SRA

    // Begin of code
    solution rebounding(solution x, float alpha, int dim) {
        solution xn;
        float delta, gamma, Eo, E1, F_x, F_xn;
        F_x = Evaluate_Cost_of(x);
        // initial energy of the ball
        Eo = F_x / alpha;
        // decrement due to frustrated bounces
        delta = (Eo - F_x) / 2.1 / dim;
        // decrement due to friction
        gamma = (9 + alpha) / 10;
        do {
            // next energy level of the process
            E1 = alpha * Eo;
            do {
                xn = neighbor_of(x);
                F_xn = Evaluate_Cost_of(xn);
                // slight deformation (frustrated bounce)
                if (gamma * Eo < F_xn) {
                    Eo = Eo - delta;
                }
                // transitory deformation (elastic bounce)
                else if (E1 < F_xn) {
                    x = xn;
                    F_x = F_xn;
                    delta = (Eo - F_x) / 2.1 / dim;
                }
                // permanent deformation (inelastic bounce)
                else {
                    Eo = E1;
                    x = xn;
                    F_x = F_xn;
                    delta = (Eo - F_x) / 2.1 / dim;
                    // recalculate new energy level:
                    break;
                }
            } while (F_x