Scientific Research and Essays Vol. 5(12), pp. 1506-1518, 18 June, 2010 Available online at http://www.academicjournals.org/SRE ISSN 1992-2248 ©2010 Academic Journals

Full Length Research Paper

Particle swarm optimization performance on special linear programming problems

Pakize Erdoğmuş

Department of Electronic and Computer Education, Faculty of Technical Education, Düzce University, Düzce, Türkiye. Email: [email protected]. Fax: +90 380 5421134.

Accepted 19 May, 2010

Linear programming (LP) is one of the best known optimization problems, generally solved with the Simplex Method. Many real-life problems have been modeled as LP. The solutions of some special LP problems exhibiting cycling have not been studied except with the classical methods. This study aims to solve some special LP problems that exhibit cycling with Particle Swarm Optimization (PSO) and to show the performance of PSO on these problems. To this end, some special problems taken from the literature have been solved with PSO. Results obtained with Genetic Algorithm (GA) and PSO have been compared with the reference solutions. The results show that the performance of PSO is generally better than that of GA in terms of optimality and solution time. It is also proposed that cycling problems be used for testing the performance of newly developed algorithms, in the same way as numerical benchmark functions.

Key words: LP, cycling, PSO.

INTRODUCTION

Optimization problems arise in a wide variety of scientific and engineering applications including signal processing, system identification, filter design, function approximation, regression analysis, and so on. In many engineering and scientific applications, the real-time solution of optimization problems is widely required. However, traditional algorithms for digital computers may not be efficient, since the computing time required for a solution depends greatly on the dimension and structure of the problem (Effati and Nazemi, 2006). Linear programming (LP) is perhaps the most widely applicable technique in Operational Research (Troutt et al., 2005). Many real-life problems, such as hydropower reservoir optimal operation (Cheng et al., 2008), water distribution system design (Milan, 2010), the calculation of chemical equilibrium in complex thermodynamic systems (Belov, 2010), and power allocation for coded orthogonal frequency-division multiplexing (OFDM) (Kenarsari and Lampe, 2009), have been modeled and solved using LP approximation.
An LP model aims to optimize a linear objective function subject to linear equality or inequality constraints. Different solution methods have been proposed for LP.

The Simplex Method, developed by Dantzig, is the best-known method. Revised Simplex methods were developed by G. B. Dantzig for computer-based solutions (Dantzig, 1998). Besides the classical methods, many different approaches for solving linear programming problems exist. Simplified neural networks (Oskoei and Mahdavi-Amiri, 2006), recurrent neural networks (Malek and Alipour, 2007) and PSO (Kuo, 2009) have been used for solving linear programming problems. Particle swarm optimization (PSO) is a population-based optimization technique developed by Kennedy and Eberhart (1995), inspired by the social behavior of bird flocking or fish schooling (Lazinica, 2009). Since PSO is a population-based method, convergence to an optimal solution is quite rapid. The solution of LP problems that exhibit cycling is difficult with the classical Simplex Method, so extra techniques such as the perturbation method are applied, even though cycling is a rare situation in LP models. Compared with iterative methods, metaheuristics are expected to be successful on cycling LP problems because of the nature of their algorithms. In this study, some special LP problems that exhibit cycling are selected for solution with PSO.


LINEAR PROGRAMMING MODELS

A general LP model can be formulated as the mathematical optimization problem in Equation (1):

f_opt = Σ_{i=1}^{n} a_i x_i

subject to

Σ_{i=1}^{n} b_i x_i < c_i,   Σ_{i=1}^{n} d_i x_i = e_i,   Σ_{i=1}^{n} f_i x_i > g_i,

x_i > 0,   i = 1, 2, …, n.                                    (1)
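As a concrete illustration of the form in Equation (1), the following sketch solves a small LP with SciPy's `linprog`. The coefficient values are made up for illustration and are not taken from the paper's test problems.

```python
# Minimal LP in the form of Equation (1), solved with SciPy.
# All coefficients below are illustrative, not from the paper.
from scipy.optimize import linprog

# Objective: minimize f = 1*x1 + 2*x2 (linprog minimizes by default)
a = [1, 2]

# Inequality constraint (sum b_i x_i <= c): x1 + x2 <= 4
A_ub = [[1, 1]]
c_ub = [4]

# Equality constraint (sum d_i x_i = e): x1 - x2 = 0
A_eq = [[1, -1]]
e_eq = [0]

# Non-negativity: x_i >= 0
bounds = [(0, None), (0, None)]

res = linprog(a, A_ub=A_ub, b_ub=c_ub, A_eq=A_eq, b_eq=e_eq, bounds=bounds)
print(res.x, res.fun)  # the optimum of this toy model is x1 = x2 = 0, f = 0
```

Note that `linprog` handles only non-strict inequalities, so the strict inequalities of Equation (1) are treated as <= and >= here, as is usual in practice.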

In a general LP problem, the solution is found by moving from one basic solution point to a better basic solution point, until the optimal point is reached. LP problems converge to an optimal solution under the non-degeneracy assumption (NDA) (Gass and Vinjamuri, 2004). When finding the leaving variable, the presence of more than one candidate for leaving the basis causes degeneracy. Basic solutions with one or more basic variables at zero are called degenerate, and simplex iterations that do not change the basic solution are also called degenerate. In some cases, after some degenerate iterations, the simplex method reaches a non-degenerate solution. Sometimes, however, the Simplex Method goes through an endless sequence of iterations without ever finding an optimal solution, repeating some iterations in a loop. The first example shown to cycle was constructed by Hoffman in 1953 (Chvatal, 1983; García and Palomo, 2003), and techniques for preventing cycling have been developed by Dantzig. Even though cycling is a rare situation in practical applications, it is guarded against in most computer implementations of the simplex method (Gass and Vinjamuri, 2004).
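To make degeneracy concrete, consider a two-variable LP in which three constraint lines pass through the same optimal vertex, so a basic variable is zero there. The toy model below is constructed for this sketch (it is not one of the paper's problems); it simply counts the constraints active at the optimum.

```python
# Toy illustration of a degenerate vertex (example constructed for this sketch).
# Maximize x + y subject to: x <= 1, y <= 1, x + y <= 2, x >= 0, y >= 0.
# The optimum (1, 1) is degenerate: three constraints are active there,
# although only two are needed to define a vertex in two dimensions.

# Constraints written as a*x + b*y <= c
constraints = [(1, 0, 1), (0, 1, 1), (1, 1, 2)]

def active_at(point, eps=1e-9):
    """Indices of constraints satisfied with equality at `point`."""
    x, y = point
    return [i for i, (a, b, c) in enumerate(constraints)
            if abs(a * x + b * y - c) < eps]

optimum = (1.0, 1.0)
print(len(active_at(optimum)))  # 3 active constraints at a 2-D vertex -> degenerate
```

In a simplex tableau for this model, one of the slack variables would be basic at value zero, which is exactly the situation the non-degeneracy assumption rules out.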

PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) was introduced by James Kennedy and Russell Eberhart in 1995. PSO is an evolutionary computation technique like genetic algorithms. Since PSO has many advantages, such as comparative simplicity, rapid convergence and few parameters to be adjusted, it has been used in many fields such as function optimization, neural network training, fuzzy system control and pattern identification (Li and Xiao, 2008). The particle swarm algorithm is an optimization technique inspired by the metaphor of social interaction observed among insects or animals. The kind of social interaction modeled within a PSO is used to guide a population of individuals (particles) toward the most promising area of the search space. In a PSO algorithm, each particle is a candidate solution, and each particle "flies" through the search space depending on two important factors: the best position the particle itself has found so far, and the global best position identified by the entire population. The rate of position change of a particle is given by its velocity (Clerc, 1999). With k the iteration number, particle velocities and positions are updated according to Equations (2), (3) and (4), using the pbest and gbest values:

v_i^{k+1} = K (v_i^k + ϕ1 rand()(pbest_i − x_i^k) + ϕ2 rand()(gbest − x_i^k))        (2)

x_i^{k+1} = x_i^k + v_i^{k+1}                                                        (3)

K = 2 / |2 − ϕ − √(ϕ² − 4ϕ)|,   ϕ = ϕ1 + ϕ2,   ϕ > 4                                 (4)
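For the commonly used setting ϕ1 = ϕ2 = 2.05 (so ϕ = 4.1), Equation (4) gives the well-known constriction value K ≈ 0.7298. The short check below is a sketch of that calculation, not code from the paper.

```python
import math

def constriction(phi1=2.05, phi2=2.05):
    """Clerc's constriction factor K from Equation (4); requires phi > 4."""
    phi = phi1 + phi2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(round(constriction(), 4))  # 0.7298
```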

The velocity and position update formulas with constriction factor (K), given in Equations (2), (3) and (4), were introduced by Maurice Clerc (Clerc, 1999). The constriction factor (K) produces a damping effect on the amplitude of an individual particle's oscillations, and as a result the particle converges over time. ϕ1 and ϕ2 represent the cognitive and social parameters, respectively, and rand() is a uniformly distributed random number (Parrot and Li, 2006). The PSO algorithm shares many similarities with evolutionary computation techniques such as GAs. The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, the PSO algorithm has no evolutionary operators such as crossover and mutation. In the PSO algorithm, the potential solutions, called particles, move through the problem space by following the current optimal particles (Kuo and Huang, 2009). PSO has been used for solving discrete and continuous problems. It has been applied to a wide range of applications such as function optimization (Seo et al., 2006), electrical power system applications (Valle et al., 2008), neural network training (Zhang et al., 2007), and task assignment and scheduling problems in Operations Research (Yoshida et al., 1999; Sevkli and Guner, 2006). PSO starts with an initial solution for each particle. The global best solution is selected among the initial solutions according to the fitness function, and the local best solution of each particle is saved. Velocities and positions are updated according to the formulas, and the global and local best solutions are updated at each iteration until the stopping criterion is met. The pseudo code of PSO is given as:

Generate initial swarm of P particles
Do

For i = 1:P
If fitness(Pi)
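The pseudo code above (truncated in this copy) can be sketched as a complete constriction-factor PSO implementing Equations (2), (3) and (4). This is a generic sketch, not the paper's code; the Sphere function used as the fitness here is a stand-in, not one of the paper's LP test problems.

```python
# Constriction-factor PSO per Equations (2)-(4); a generic sketch,
# minimizing the Sphere function as a stand-in fitness.
import math
import random

def pso(fitness, dim, bounds, particles=30, iters=200,
        phi1=2.05, phi2=2.05, seed=0):
    rng = random.Random(seed)
    phi = phi1 + phi2
    # Equation (4): constriction factor (phi > 4 required)
    K = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    v = [[0.0] * dim for _ in range(particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [fitness(xi) for xi in x]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Equation (2): constricted velocity update
                v[i][d] = K * (v[i][d]
                               + phi1 * rng.random() * (pbest[i][d] - x[i][d])
                               + phi2 * rng.random() * (gbest[d] - x[i][d]))
                # Equation (3): position update
                x[i][d] += v[i][d]
            val = fitness(x[i])
            if val < pbest_val[i]:           # update particle (local) best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:          # update global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

sphere = lambda xs: sum(xi * xi for xi in xs)   # stand-in fitness function
best, best_val = pso(sphere, dim=2, bounds=(-5.0, 5.0))
print(best_val)
```

Applying PSO to an LP would additionally require handling the constraints, for example with a penalty added to the fitness for constraint violations; that part is problem-specific and is not shown here.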