IEEE EUROCON 2017, 6–8 JULY 2017, OHRID, R. MACEDONIA

A Generic Model Order Reduction Technique Based on Particle Swarm Optimization (PSO) Algorithm

Khaled Salah
Mentor Graphics, Cairo, Egypt
[email protected]

Abstract— In this paper, a Particle Swarm Optimization (PSO) based solution is proposed for the problem of model order reduction of high-order transfer functions. The main advantages of our approach are that it is generic, requires relatively little runtime, and achieves reasonable accuracy. Moreover, it preserves both the time and frequency responses of the original transfer function.

Keywords— Model Order Reduction, Particle Swarm Optimization.

I. INTRODUCTION

With the increasing need to model very complex physical systems, more computational power is required in the field of simulation and CAD tools. Although we can keep increasing the hardware computation power of our computers, this process will eventually reach a limit as Moore's-law scaling slows down. Another approach to this problem is to use optimization algorithms to simplify these complex models and work with the simplified version at an acceptable error. Model order reduction (MOR) is the process of decreasing the order of a complex system modeled as a transfer function, producing another transfer function that has the same response as the original in both the time and frequency domains. Since the first contributions to model order reduction were published in [1]-[2], many methods have been proposed, such as truncated balanced realization [3], the Hankel-norm reduction method [4], and proper orthogonal decomposition [5]. Asymptotic waveform evaluation [6] was the first method related to Krylov subspaces. PRIMA [7] is another fundamental MOR method. Other methods based on artificial intelligence are proposed in [8]-[11].

In this paper, the Particle Swarm Optimization (PSO) algorithm is used to provide a generic solution to the model order reduction problem. PSO was invented by Russell Eberhart and James Kennedy in 1995 [12] to model and simulate the flocking behavior of birds and the schooling patterns of fish. The main idea is to consider many candidate solutions, model them as particles (birds), and let every particle find its own path based on its own previous best decision and on the best choice of the whole flock, which is a global variable.

This paper is organized as follows. Section II presents the needed background, the pseudo code, and the flow chart of the algorithm. Section III presents our approach to solving the MOR problem using PSO. In Section IV, we present the results for several example transfer functions, including their time and frequency responses, the runtime of the algorithm, and the accuracy achieved for each function. Section V concludes the paper.

II. BACKGROUND

Inspired by the flocking of birds and the schooling patterns of fish, PSO was invented by Russell Eberhart and James Kennedy in 1995 [12]. The two originally developed computer software simulations of birds flocking around food sources, and later realized how well their algorithms worked on optimization problems [13]-[14]. To understand the algorithm, consider the following scenario: a group of birds is randomly searching for food in an area that contains only one piece of food. None of the birds knows where the food is, but each bird knows how far away it is at every iteration. What is the best strategy to find the food? An effective one is to follow the bird nearest to the food: the bird closest to the food chirps the loudest, and the other birds swing around in its direction. If any of the other circling birds comes closer to the target than the first, it chirps louder and the others veer toward it. This tightening pattern continues until one of the birds happens upon the food. The resulting algorithm is simple and easy to implement [13].

PSO borrows from this scenario to solve optimization problems. In PSO, each candidate solution is a "bird" in the search space, called a "particle". Every particle has a fitness value, evaluated by the fitness function to be optimized, and a velocity that directs its flight. The particles fly through the problem space by following the current optimum particles [14]. PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) the particle itself has achieved so far, called Pbest. The second is the best value obtained so far by any particle in the population; this global best is called Gbest [14].

Each particle consists of: data representing a possible solution, a velocity value indicating how much the data can change, and a personal best (Pbest) value indicating the closest the particle's data has ever come to the target. The particle data could be anything. In the flocking-birds example above, the data would be the X, Y, Z coordinates of each bird, and the coordinates of each bird would move closer to the coordinates of the bird closest to the food (Gbest). If the data is a pattern or sequence, the individual pieces of data are manipulated until the pattern matches the target pattern.

978-1-5090-3843-5/17/$31.00 ©2017 IEEE



The velocity value is calculated according to how far an individual's data is from the target: the further it is, the larger the velocity value. In the birds example, the individuals furthest from the food make an effort to keep up with the others by flying faster toward the Gbest bird. If the data is a pattern or sequence, the velocity describes how different the pattern is from the target, and thus how much it needs to change to match the target. Each particle's Pbest value indicates only the closest its data has ever come to the target since the algorithm started. After the initialization of the two best values, each particle updates its velocity and position every iteration with the following equations:

v[i] = w*v[i] + c1*rand()*(Pbest[i] - present[i]) + c2*rand()*(Gbest[i] - present[i])   (1)

present[i+1] = present[i] + v[i]   (2)

where:
• v[i] is the velocity of the current particle at this iteration.
• present[i] is the current particle (solution).
• w is the inertia weight factor.
• c1, c2 are the cognitive and social acceleration factors, respectively.
• rand() returns random numbers uniformly distributed in the range (0, 1).
• Pbest is the best value obtained by the current particle so far.
• Gbest is the global best of the group.

Fig. 1 PSO flow-chart.
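Equations (1) and (2) translate directly into code. The following is a minimal illustration, not the authors' implementation (their experiments use MATLAB); the default inertia and acceleration values here are placeholder assumptions:

```python
import random

def pso_update(present, velocity, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Apply equations (1) and (2) once, per dimension, to one particle."""
    new_v, new_x = [], []
    for i in range(len(present)):
        v = (w * velocity[i]
             + c1 * random.random() * (pbest[i] - present[i])   # cognitive pull
             + c2 * random.random() * (gbest[i] - present[i]))  # social pull
        new_v.append(v)                                          # eq. (1)
        new_x.append(present[i] + v)                             # eq. (2)
    return new_x, new_v
```

Note that when a particle sits exactly at both its Pbest and the swarm's Gbest with zero velocity, both update terms vanish and the particle stays put, which is the intended fixed point of the rule.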

The pseudo code of the Particle Swarm Optimization algorithm is shown in Listing 1, and its flow chart is shown in Fig. 1.

Listing 1: Pseudo code of the Particle Swarm Optimization algorithm

    For each particle {
        Initialize particle
    }
    Do until maximum iterations or minimum targeted error {
        For each particle {
            Calculate data fitness value
            If the fitness value is better than Pbest {
                Set Pbest = current fitness value
            }
            If Pbest is better than Gbest {
                Set Gbest = Pbest
            }
            Calculate particle velocity according to equation (1)
            Update particle position according to equation (2)
        }
    }

III. THE PROPOSED MODEL

In model order reduction, we try to find the reduced-order model of an original higher-order model with minimum error and the best cost (time and computer resources). The SISO model in our work is the transfer function of the high-order system, and our goal is to find another transfer function that describes the same system with less data, i.e., lower order. In the next section, we show how PSO can be used for MOR.
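The pseudo code above can be fleshed out into a runnable sketch. This is an illustrative Python version, not the authors' MATLAB implementation; the default parameter values and the sphere-function example are assumptions for demonstration only:

```python
import random

def pso(fitness, dim, n_particles=20, max_iter=200, bounds=(-10.0, 10.0),
        w=0.7, c1=1.5, c2=1.5, target_error=1e-8):
    """Minimize `fitness` over `dim` variables, following Listing 1."""
    lo, hi = bounds
    # Initialize each particle with a random position and zero velocity.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [fitness(p) for p in pos]            # personal best fitness
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best

    it = 0
    while it < max_iter and gbest_val > target_error:
        for i in range(n_particles):
            f = fitness(pos[i])
            if f < pbest_val[i]:                     # better than Pbest?
                pbest_val[i], pbest[i] = f, pos[i][:]
            if pbest_val[i] < gbest_val:             # better than Gbest?
                gbest_val, gbest = pbest_val[i], pbest[i][:]
            for d in range(dim):                     # equations (1) and (2)
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
        it += 1
    return gbest, gbest_val

# Toy usage: minimize the sphere function, whose optimum is at the origin.
random.seed(1)
best, err = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the MOR flow described next, `fitness` would be the step-response MSE between the full and reduced models rather than a toy function.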

In the model order reduction problem, the solution that the PSO algorithm tries to find (the x and y of the particles in the previous explanation) is the set of coefficients of the transfer function. For example, given the following 8th-order transfer function:

(3)

we try to find another transfer function of 2nd order, so we consider the coefficients of the new TF as the particle data that the algorithm iterates to find. For example, we can consider the new system as:


Gr(s) = (x1·s + x2) / (s² + x3·s + x4)   (4)
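Assuming the 2nd-order reduced model maps the four particle elements x1..x4 onto a monic transfer function Gr(s) = (x1·s + x2)/(s² + x3·s + x4) — an assumed form, consistent with the two-coefficient numerators and monic three-coefficient denominators reported in the results — its step response can be simulated without any toolbox. The controllable-canonical state-space form and forward-Euler integrator below are illustrative assumptions (the paper's experiments use MATLAB):

```python
def step_response_2nd_order(x1, x2, x3, x4, t_end=10.0, dt=1e-3):
    """Unit-step response of G(s) = (x1*s + x2) / (s**2 + x3*s + x4),
    via controllable canonical form: A=[[0,1],[-x4,-x3]], B=[0,1]^T,
    C=[x2,x1], D=0, integrated with forward Euler."""
    z1 = z2 = 0.0          # state variables
    u = 1.0                # unit step input
    out = []
    for _ in range(int(round(t_end / dt))):
        out.append(x2 * z1 + x1 * z2)      # y = C·z
        dz1 = z2
        dz2 = -x4 * z1 - x3 * z2 + u
        z1 += dt * dz1
        z2 += dt * dz2
    return out

# Sanity check: a stable G settles near its DC gain G(0) = x2/x4.
y = step_response_2nd_order(1.0, 2.0, 3.0, 2.0)   # poles at -1 and -2
```

A response computed this way for each candidate particle is what the fitness function compares against the full-order model's response.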

So each particle is a vector of four elements x1, x2, x3, and x4, and the algorithm tries to find the local best of each particle and the global best of all particles, which are those coefficients.

The PSO algorithm works in the following steps:

1- Set the full-order system parameters: In this step, we must provide the algorithm with the original non-reduced model, so that it can compare this original model with the reduced model generated by the algorithm and decide when to stop.

2- Set appropriate step inputs to the system: To find a good match between the original and the reduced models, we must drive both with the same input and compare their responses. In this work, we apply a step input to both models.

3- Choose a suitable order for the reduced system: The user must choose the order of the reduced model, so that the algorithm can set the width of the data of each particle, simulate the model of this order together with the full-order model, and evaluate the fitness function.

4- Set the PSO parameters:
• The number of particles in the swarm.
• The maximum number of iterations.
• The randomization range of the particle data and velocity.
• The cognitive and social acceleration factors, c1 and c2.

5- Define a fitness function: This step is the link between the optimization algorithm and the physical problem at hand. A fitness function that represents the parameters well is crucial in the PSO algorithm. In this work, the fitness function is the mean squared error (MSE) between the step responses of the reduced and the full-order models. Considering y1 as the step response of the original system and y2 as the step response of the reduced-order system, the MSE is defined by:

MSE = mean((y2 − y1)²)   (5)

The goal of the algorithm is to minimize this fitness function, i.e., the mean squared error between the reduced and the full-order models. The main drawback of the PSO algorithm is parameter tuning, which differs for each new transfer function and requires an experienced user to get the most out of the algorithm. We could automate parameter tuning by looping over all possible combinations of parameters to find the values that yield the least error, but the time needed for this looping is very high compared to other algorithms.

IV. EXPERIMENTAL RESULTS

The PSO algorithm is applied to a set of transfer functions. Our model is implemented using MATLAB R2013a on a laptop with an Intel Core i5-3210M CPU @ 2.50 GHz and 6 GB of memory. We show some examples of the experimental transfer functions and the results of the reduction. Fig. 2 shows the step response and the frequency response for three transfer function examples.

Transfer Function #1:

Original Function:
Numerator = [18, 514, 5982, 36380, 122664, 222088, 185760, 40320]
Denominator = [1, 36, 546, 4536, 22449, 67284, 118124, 109584, 40320]

Reduced Function:
Numerator = [14.17 11.39]
Denominator = [1 6.115 10.18]

Original Order: 10
Reduced Order: 2
MSE: 0.025
Run Time: 40.72 seconds
Special Parameters: c1 = 1, c2 = 3, Particles Number = 13, Initialization Range = 8

Transfer Function #2:

Original Function:
Num = 1.0e+03 * 0.5 * [0.0010 0.0275 1.5722 4.0561 4.1137]
Den = 1.0e+03 * [0.0010 0.0241 0.8382 1.5052 1.0706 0.2559]
0.2977 0.2120

Reduced Function:
N = [-9.687 47.95]
D = [1 23.36 5.573]

Original Order: 6
Reduced Order: 2
MSE: 0.136116
Run Time: 50.8 seconds
Special Parameters: c1 = 3, c2 = 1, Particles Number = 19, Initialization Range = 10

Transfer Function #3:

Original Function:
Num = 10.*[1 46.0600507555559 937.682439545239 11111.8055141986 84562.9537475526 428059.740758828 1436351.07057188 3062103.98802177 3729333.06928649 1965178.55987176]
Den = [1 42.3650646860047 10370.6795371454 475158.201252174 5580586.10230344 11919153.5801979 854.129432908204 83705.1066688127 1940158.84360976 10617457.2420351 6115697.46062431]

Reduced Function:
N = [19.18 50.26]
D = [1 7.106 15.37]

Org/Red Order: 10/2
MSE: 0.030756
Run Time: 36.09 seconds
Special Parameters: c1 = 3, c2 = 1, Particles Number = 13, Init Range = 15
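The MSE fitness that drives these runs compares the step responses of the full and reduced models. The sketch below is a pure-Python illustration, not the authors' MATLAB code; the controllable-canonical simulator, forward-Euler integrator, step size, and time horizon are all illustrative assumptions:

```python
def simulate_step(num, den, t_end=10.0, dt=1e-3):
    """Unit-step response of num(s)/den(s), coefficients high-to-low,
    den monic, deg(num) < deg(den), via phase-variable canonical form
    and forward Euler integration."""
    n = len(den) - 1                        # system order
    a = den[1:]                             # [a_{n-1}, ..., a_0]
    b = [0.0] * (n - len(num)) + list(num)  # pad numerator to length n
    z = [0.0] * n                           # state vector [z1, ..., zn]
    y = []
    for _ in range(int(round(t_end / dt))):
        # y = b_0*z1 + b_1*z2 + ... + b_{n-1}*zn
        y.append(sum(bi * zi for bi, zi in zip(reversed(b), z)))
        # z_i' = z_{i+1};  z_n' = u - (a_0*z1 + ... + a_{n-1}*zn), u = 1
        dz = z[1:] + [1.0 - sum(ai * zi for ai, zi in zip(reversed(a), z))]
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
    return y

def mse_fitness(full_num, full_den, red_num, red_den):
    """Equation (5): MSE between the two step responses."""
    y1 = simulate_step(full_num, full_den)
    y2 = simulate_step(red_num, red_den)
    return sum((a - b) ** 2 for a, b in zip(y1, y2)) / len(y1)
```

In the complete flow, `mse_fitness` (with `red_num` and `red_den` assembled from each particle's four elements) would be the objective handed to the PSO optimizer of Listing 1.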


V. CONCLUSIONS

This paper presents a new generic approach, based on the Particle Swarm Optimization algorithm, for solving the problem of model order reduction of high-order transfer functions. The main advantages of our approach are that it is generic, requires relatively little runtime, and achieves reasonable accuracy. Although the algorithm works in the time domain, it preserves both the time and frequency responses of the original transfer function.

Fig. 2 Step response and frequency response of the original system versus the reduced one: (a) TF#1, (b) TF#2, (c) TF#3.

REFERENCES
[1] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators," J. Res. Natl. Bur. Stand., 45:225–280, 1950.
[2] C. Lanczos, "Solution of systems of linear equations by minimized iterations," J. Res. Natl. Bur. Stand., 49:33–53, 1952.
[3] B. C. Moore, "Principal component analysis in linear systems: controllability, observability, and model reduction," IEEE Trans. Automat. Control, 26:17–32, 1981.
[4] K. Glover, "Optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds," Int. J. Control, 39(6):1115–1193, 1984.
[5] L. Sirovich, "Turbulence and the dynamics of coherent structures," Quart. Appl. Math., 45(3):561–590, 1987.
[6] L. T. Pillage and R. A. Rohrer, "Asymptotic waveform evaluation for timing analysis," IEEE Trans. Computer-Aided Design, 9(4):352–366, April 1990.
[7] A. Odabasioglu, M. Celik, and L. T. Pileggi, "PRIMA: passive reduced-order interconnect macromodeling algorithm," in Proc. 34th DAC, pp. 58–65, 1997.
[8] K. Salah, "Model order reduction using fuzzy logic algorithm," ICM, 2016.
[9] K. Salah, "Model order reduction using artificial neural networks," ICECS, 2016.
[10] K. Salah, "Model order reduction using genetic algorithm," UEMCON, 2016.
[11] K. Salah, "A novel model order reduction technique based on simulated annealing," ICM, 2016.
[12] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Networks IV, Piscataway, NJ, 1995.
[13] http://mnemstudio.org/particle-swarm-introduction.htm
[14] http://www.swarmintelligence.org/tutorials.php
