JOURNAL OF COMPUTERS, VOL. 5, NO. 2, FEBRUARY 2010

A Novel Numerical Computation Method Based on Particle Swarm Optimization Algorithm

Yongquan Zhou
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, China
Email: [email protected]

Xingqiong Wei
College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, China

Abstract— In this paper, a novel numerical computation method based on Particle Swarm Optimization (PSO) is presented, covering numerical integration, eigenvalues and eigenvectors of a matrix, and interpolation polynomials. Simulation examples show that the algorithms are valid methods with high precision and powerful self-adaptation. These algorithms have value in numerical calculation and engineering practice.

Index Terms—PSO, Optimization, Self-adapting, Numerical Computation

I. INTRODUCTION

Particle swarm optimization (PSO) was originally proposed by the social psychologist James Kennedy and the electrical engineer Russell Eberhart in 1995 [1]. It is based on swarm intelligence and has been shown experimentally to perform well on many optimization problems. PSO is attractive because there are few parameters to adjust, the algorithm is easy to implement, and, compared with other methods, it often obtains good results faster and at lower cost. PSO has therefore been successfully applied in many engineering and scientific areas such as parameter tuning, pattern recognition, neural network training, and function optimization [2]. Recently, with the appearance of new computational-intelligence techniques, research on numerical computation methods suited to computational intelligence has become urgent and essential. On this basis, we use PSO to study questions traditionally handled by classical numerical methods. Aiming at the deficiencies of the traditional numerical methods, the primary task of this paper is to solve these problems using the self-adaptive search, global convergence, and robustness of PSO. The numerical methods proposed here based on PSO mainly include numerical integration, eigenvalues and eigenvectors of a matrix, and interpolation polynomials.

II. PARTICLE SWARM ALGORITHM

A. Standard PSO

*Corresponding author. Tel. +86 771 3260264. E-mail: [email protected].

© 2010 ACADEMY PUBLISHER doi:10.4304/jcp.5.2.226-233

The PSO approach simulates the social behavior of particles moving in a multidimensional search space, where each particle has a position and a velocity. Each particle is treated as a potential solution to the optimization problem. The position is represented as a vector X_i = (x_i1, x_i2, ..., x_iD) and the velocity as another vector V_i = (v_i1, v_i2, ..., v_iD), where i is the index of the particle and D is the dimensionality of the search space. At each time step, the particle compares its current position with the goal (global/personal best) positions and adjusts its velocity towards them, using its explicit memory of the best positions found so far both globally and individually. The most popular formulation of how a particle adjusts its velocity and position is shown in Equations (1) and (2):

v_id^(k+1) = w·v_id^k + c1·r1·(p_id^k − x_id^k) + c2·r2·(p_gd^k − x_id^k)   (1)

x_id^(k+1) = x_id^k + v_id^(k+1)   (2)

In this formulation, k is the time step, d is the index of the dimension in the search space, w is the inertia weight of the "flying" dynamic, c1 and c2 are the cognitive and social parameters of the algorithm respectively, and r1 and r2 are random numbers in the interval [0, 1].
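As a minimal sketch of the update rules (1) and (2), the following toy setup minimizes the sphere function; the objective, search range, and parameter values here are our own illustrative choices, not the paper's:

```python
import numpy as np

def pso_minimize(obj, dim, n_particles=25, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard PSO: velocity update per Eq. (1), position per Eq. (2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions X_i
    v = np.zeros((n_particles, dim))                 # velocities V_i
    pbest = x.copy()                                 # personal bests P_id
    pbest_val = np.array([obj(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best P_gd
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
        x = x + v                                                  # Eq. (2)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, val = pso_minimize(lambda p: float(np.sum(p * p)), dim=3)
```

With the sphere objective Σx², the swarm drives the best value toward 0 within a few hundred iterations.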

P_id is the personal best position recorded by particle i, while P_gd is the global best position obtained by any particle in the population.

B. Improved PSO

To counter premature convergence and to increase the convergence speed, the position update adopts the following strategy: 1) A particle whose fitness value does not equal fitness(P_g) adjusts its velocity and position according to Equations (1) and (2);


2) A particle whose fitness value equals fitness(P_g) adjusts its position according to Equations (1) and (3):

x_id^(k+1) = 1.5·x_id^k   (3)
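The conditional rule can be sketched in a few lines (a hypothetical helper of our own; the equality test and tolerance are our assumptions):

```python
import numpy as np

# Sketch of the improved update: a particle whose fitness already equals the
# global best's is perturbed by Eq. (3) instead of moving by Eq. (2).
def improved_position_update(x, v, fit, gbest_fit):
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    if np.isclose(fit, gbest_fit):   # stagnating particle
        return 1.5 * x               # Eq. (3): kick it outward to keep exploring
    return x + v                     # Eq. (2): standard move
```

For example, a stagnating particle at (1, 2) jumps to (1.5, 3), while a non-stagnating one simply adds its velocity.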

The aim of this strategy is to enhance the global exploration ability and to resolve the premature convergence caused by particles of the standard PSO falling into stagnation.

III. PSO FOR NUMERICAL COMPUTATION

A. PSO for Solving Numerical Integrals

Step 1: Determine the representation of the individual. The individual is composed of two parts, the particle position X and the velocity V, each represented by a D-dimensional vector, namely

(X, V) = ((x_1, x_2, ..., x_D), (v_1, v_2, ..., v_D))

where D is the number of nodal points on the integration interval; (x_1, ..., x_D) are the nodal points on the interval; (v_1, ..., v_D) are the velocities corresponding to the nodal points, used to adjust their positions.

Step 2: Initialize the population randomly in the search space. The initial population is composed of N individuals; each individual (X, V) includes D components x_i and v_i.

Step 3: Evaluate the fitness values. Place every individual between the left and right endpoints and arrange its components in ascending order. Together with the endpoints there are then D + 2 nodal points and D + 1 segments on each individual's integration interval. Calculate the distance d_j between adjacent nodal points, j = 1, 2, ..., D + 1, and evaluate the integrand at the D + 2 nodal points and at the midpoints of the D + 1 segments. Comparing the values at the left endpoint, right endpoint, and midpoint of each segment, write the minimum as w_j and the maximum as W_j, j = 1, 2, ..., D + 1. The fitness of the individual is defined as

f(i) = (1/2) Σ_{j=1}^{D+1} d_j (W_j − w_j)   (4)

The nearer the fitness value approaches 0, the better the individual. The termination condition is: choose an ε very close to 0; if the minimum fitness value is less than ε, stop.

Step 4: If the termination condition is met, stop and choose the optimum solution. Otherwise, continue.

Step 5: Update the population according to PSO: ① Update the i-th particle's velocity and position according to (1) and (2) for each dimension d, where c1 and c2 are nonnegative constants and the inertia weight w is a constant between 0.4 and 0.9. ② Calculate the fitness values of the new individuals.

Step 6: Execute Step 5 repeatedly until the termination condition is met; choose the optimum solution as the result.

Step 7: When the algorithm stops, the integral value is approximated by

I ≈ Σ_{j=1}^{D+1} m_j d_j

where (m_1, m_2, ..., m_{D+1}) are the integrand values at the midpoints of the segments determined by the optimum individual together with the interval endpoints, and (d_1, d_2, ..., d_{D+1}) are the lengths of those D + 1 segments.

B. PSO for Double Integrals

Step 1: Determine the representation of the individual. Individuals in the x-axis direction are composed of two parts, the position X and the velocity U, each with S components; individuals in the y-axis direction are composed of the position Y and the velocity V, each with T components, namely

(X, U) = ((x_1, x_2, ..., x_S), (u_1, u_2, ..., u_S))
(Y, V) = ((y_1, y_2, ..., y_T), (v_1, v_2, ..., v_T))

where S and T are the numbers of nodal points on the integration interval in the x- and y-axis directions; (x_1, ..., x_S) and (y_1, ..., y_T) are the nodal points in the two directions; (u_1, ..., u_S) and (v_1, ..., v_T) are the corresponding velocities, used to adjust the positions of the nodal points.

Step 2: Initialize the population randomly in both the x- and y-axis directions. The initial population is composed of N individuals; each individual (X, U) includes


S components x_i and u_i in the x-axis direction, and each individual (Y, V) includes T components y_j and v_j. In the two particle swarms, the personal best position P_id recorded by the i-th particle and the global best position P_gd obtained by any particle in the population are initialized as number sequences that divide the x-axis and y-axis uniformly.

Step 3: Evaluate the fitness values. Place every individual between the left and right endpoints in both directions and arrange the components in ascending order; the first individual in the x-axis direction corresponds to the first one in the y-axis direction, and so on. Thus there are S + 2 nodal points and S + 1 segments on the individual's integration interval in the x-axis direction; calculate the distance d_i between adjacent nodal points, i = 1, 2, ..., S + 1. In the y-axis direction there are T + 2 nodal points and T + 1 segments; calculate the distance d_j between adjacent nodal points, j = 1, 2, ..., T + 1. Substituting the individuals in both directions into the integrand gives the function values at the corresponding intersection points and at the midpoints of the small rectangles. Find the minimum and maximum function values (w_ij and W_ij) among the four corner points and the midpoint of each small rectangle; the fitness of the individual is then

f(n) = (1/2) Σ_{i=1}^{S+1} Σ_{j=1}^{T+1} area_ij (W_ij − w_ij)   (5)

in which n = 1, 2, ..., N. The nearer the fitness value approaches 0, the better the individuals in both directions. The termination condition is: choose an ε very close to 0; if the minimum fitness value is less than ε, stop.

Step 4: If the termination condition is met, stop and choose the optimum solution. Otherwise, continue.

Step 5: Update the populations in both directions according to PSO: ① Update the i-th particle's velocity and position in the x-axis direction according to (1) and (2) for d = 1, 2, ..., S, and in the y-axis direction according to (1) and (2) for d = 1, 2, ..., T, where c1 and c2 are nonnegative constants and the inertia weight w is a constant between 0.4 and 0.9. ② Calculate the fitness values of the new individuals.

Step 6: Execute Step 5 repeatedly until the termination condition is met; choose the optimum solution as the result.

Step 7: When the algorithm stops, the integral value is approximated by

I ≈ (1/5) Σ_{i=1}^{S+1} Σ_{j=1}^{T+1} (g_ij + g_{i+1,j} + g_{i,j+1} + g_{i+1,j+1} + g_mid,ij) · area_ij   (6)

where g_ij, g_{i+1,j}, g_{i,j+1}, g_{i+1,j+1}, and g_mid,ij are the function values at the vertices and the midpoint of each small rectangle.

C. PSO for Solving Matrix Eigenvalues and Eigenvectors

1) Method for Solving Matrix Eigenvalues

1.1) Determine the representation of the individual: The individual is composed of two parts, the particle position X and the velocity V, each with two components,
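The averaging rule of Equation (6) can be checked on a plain uniform grid (a sketch of our own: the grid size is fixed rather than PSO-optimized, and the test integrand e^(−(x²+y)) on [0,1]² with quoted value 0.4720828 is taken from reference [6]):

```python
import numpy as np

def rect_rule_eq6(f, a, b, c, d, S, T):
    """Average-of-five rule from Eq. (6) on a uniform S x T grid of rectangles
    (the PSO of Section III-B would instead place the grid lines adaptively)."""
    xs = np.linspace(a, b, S + 1)
    ys = np.linspace(c, d, T + 1)
    total = 0.0
    for i in range(S):
        for j in range(T):
            x0, x1 = xs[i], xs[i + 1]
            y0, y1 = ys[j], ys[j + 1]
            corners = f(x0, y0) + f(x1, y0) + f(x0, y1) + f(x1, y1)
            mid = f((x0 + x1) / 2, (y0 + y1) / 2)
            total += (corners + mid) / 5 * (x1 - x0) * (y1 - y0)
    return total

f = lambda x, y: np.exp(-(x * x + y))
I = rect_rule_eq6(f, 0.0, 1.0, 0.0, 1.0, 64, 64)   # known value ≈ 0.4720828
```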


namely,

(X, V) = ((x_1, x_2), (v_1, v_2))

where (x_1, x_2) are the real and imaginary parts of the eigenvalue, λ = x_1 + x_2·i, and (v_1, v_2) is the velocity corresponding to the eigenvalue.

1.2) Determine the fitness function: On the basis of the determined individual representation, substituting the individual into the characteristic equation

P(λ) = |A − λI| = 0

gives P(λ) = |A − (x_1 + x_2·i) I| = 0. Letting e = det(A − (x_1 + x_2·i) I), the problem of solving for an eigenvalue becomes a minimization problem, and the fitness function of PSO is defined as

min f = |det(A − (x_1 + x_2·i) I)|   (7)
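A sketch of the eigenvalue search behind Equation (7), using a plain PSO loop on (x_1, x_2) = (Re λ, Im λ); the 2×2 test matrix, search range, and parameters are our own illustrative choices:

```python
import numpy as np

def pso_eigenvalue(A, n_particles=30, iters=300, w=0.6, c1=1.5, c2=1.5, seed=1):
    """Minimize |det(A - (x1 + i*x2) I)| over (x1, x2), per Eq. (7)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    fit = lambda p: abs(np.linalg.det(A - (p[0] + 1j * p[1]) * np.eye(n)))
    x = rng.uniform(-5.0, 5.0, (n_particles, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([fit(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (1)
        x = x + v                                               # Eq. (2)
        vals = np.array([fit(p) for p in x])
        better = vals < pval
        pbest[better], pval[better] = x[better], vals[better]
        g = pbest[pval.argmin()].copy()
    return complex(g[0], g[1])

lam = pso_eigenvalue(np.array([[0.0, -1.0], [1.0, 0.0]]))  # eigenvalues are ±i
```

The rotation matrix [[0, −1], [1, 0]] has eigenvalues ±i, so the swarm converges to one of the two conjugate minima of |det|.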

2) Method for Solving Matrix Eigenvectors

2.1) Determine the representation of the individual: The individual is composed of two parts, the particle position X and the velocity V, each with n components, namely

(X, V) = ((x_1, x_2, ..., x_n), (v_1, v_2, ..., v_n))

in which n is the order of the matrix A, (x_1, ..., x_n) is the eigenvector corresponding to the eigenvalue, and (v_1, ..., v_n) is the velocity corresponding to the eigenvector.

2.2) Determine the fitness function: On the basis of the eigenvalue already obtained and the determined individual representation, substituting the individual into AZ = λZ gives

A (x_1, x_2, ..., x_n)^T = λ (x_1, x_2, ..., x_n)^T

Letting Y = (y_1, y_2, ..., y_n)^T = (A − λI) (x_1, x_2, ..., x_n)^T, the fitness function is defined as

min f = Σ_{i=1}^{n} y_i²   (8)
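One caveat worth noting: f in Equation (8) is also minimized by x = 0, so in practice the candidate must be kept on the unit sphere. On that sphere the exact minimizer is the right singular vector of (A − λI) for its smallest singular value, which gives a direct check (the 2×2 matrix and λ here are our own illustration, not from the paper):

```python
import numpy as np

# Eq. (8) minimizes ||(A - λI)x||²; with ||x|| = 1 fixed, the minimizer is the
# smallest-singular-value right singular vector of M = A - λI.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
lam = 2.0
M = A - lam * np.eye(2)
_, _, Vt = np.linalg.svd(M)
x = Vt[-1]            # right singular vector for the smallest singular value
# x is (up to sign) the eigenvector of A for λ = 2, i.e. (±1, 0)
```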

3) Algorithm for Solving Matrix Eigenvalues and Eigenvectors

Step 1: Initialize the population randomly: the initial population is composed of N randomly generated individuals.

Step 2: Calculate each individual's fitness value according to (7) or (8); the nearer the fitness value approaches 0, the better the individual. The termination condition is: choose an ε very close to 0; if the minimum fitness value is less than ε, stop.

Step 3: If the termination condition is met, stop and choose the optimum solution. Otherwise, continue.

Step 4: Update the population according to PSO: ① Update the i-th particle's velocity and position according to (1) and (2), in which D = 2 when solving the eigenvalue problem and D = n when solving the eigenvector problem, n being the order of the matrix A; c1 and c2 are nonnegative constants, and the inertia weight w is a constant between 0.4 and 0.9. ② Calculate the fitness values of the new individuals.

Step 5: Execute Step 4 repeatedly until the termination condition is met; choose the optimum solution as the result.

D. PSO for Interpolation Polynomials

The definition of the fitness function:

① Interpolation polynomials in which no derivatives are interpolated. Given the values y_i = f(x_i) at a set of distinct numbers x_i in the interval [a, b], i = 0, 1, ..., n, let

P_n(x) = a_0 + a_1 x + a_2 x² + ... + a_n x^n.

The interpolation condition P_n(x_i) = y_i leads to the system

a_0 + a_1 x_0 + a_2 x_0² + ... + a_n x_0^n = y_0
a_0 + a_1 x_1 + a_2 x_1² + ... + a_n x_1^n = y_1
...
a_0 + a_1 x_n + a_2 x_n² + ... + a_n x_n^n = y_n   (9)

With a = (a_0, a_1, ..., a_n), we define the fitness function of PSO as

min f(a) = Σ_{i=0}^{n} ( Σ_{j=0}^{n} a_j x_i^j − y_i )²   (10)

Here an a satisfying f(a) = 0, i.e. Σ_{i=0}^{n} ( Σ_{j=0}^{n} a_j x_i^j − y_i )² = 0, is the solution of Equation (9). Thus the search for the interpolation polynomial is changed into an unconstrained optimization problem.

② Interpolation polynomials in which derivatives are interpolated. The fitness function is defined as

min f(a) = Σ_{i=0}^{n} ( Σ_{j=0}^{2n+1} a_j x_i^j − y_i )² + Σ_{i=0}^{n} ( Σ_{j=1}^{2n+1} j a_j x_i^{j−1} − y_i' )²   (11)

where a = (a_0, a_1, ..., a_{2n+1}).

The procedure of the constructor method of interpolation polynomials based on PSO can be described as follows:

Step 1: Initialize the values of N, D, w. For each particle, initialize X, V, P_id, P_gd randomly.

Step 2: For each particle, update the velocity according to (1) and the position according to (2) or (3).

Step 3: Evaluate the fitness values of all particles according to (10) or (11). For each particle, compare its current fitness value with the fitness of its P_id; if the current value is better, update P_id and its value. Furthermore, determine the best particle of the current population; if its fitness value is better than the fitness of P_gd, update P_gd and its fitness value with the position and objective value of the current best particle.

Step 4: If the maximum number of iterations or any other predefined criterion is met, go to Step 5; otherwise go back to Step 2.

Step 5: Output P_gd and the interpolation polynomial; stop.

IV. NUMERICAL SIMULATION


To validate the feasibility and validity of the algorithms on numerical problems, several simulation examples follow.

A. Evaluation of Integrals

EXAMPLE 1. Find the singular integral ∫_0^3 f(x) dx with the piecewise integrand

f(x) = e^(−x) for 0 ≤ x ≤ 1;  e^(−x²) for 1 < x ≤ 2;  e^(−x³) for 2 < x ≤ 3.

Solution. The exact integral value is 1.546036. In reference [3], an artificial neural network gives 1.5467; here, using PSO with N = 25 and D = 60, we find 1.546032. Thus the algorithm performs very well on the singular integral problem. Figure 1 shows how the integration error changes in the process of estimating the Example 1 integral.
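The single-integral scheme of Section III-A can be sketched end to end as follows (our own minimal implementation and parameter choices, shown on the simpler test integral ∫₀¹ e^(−x) dx = 1 − 1/e rather than the paper's example):

```python
import numpy as np

def pso_integrate(f, a, b, D=20, N=25, iters=200, w=0.6, c1=1.5, c2=1.5, seed=0):
    """Sketch of Section III-A: PSO places D interior nodes to shrink the
    bracket-width fitness of Eq. (4); Step 7's midpoint sum gives the integral."""
    rng = np.random.default_rng(seed)

    def segments(x):
        nodes = np.sort(np.concatenate(([a], np.clip(x, a, b), [b])))
        d = np.diff(nodes)                     # segment lengths d_j
        mids = (nodes[:-1] + nodes[1:]) / 2.0  # segment midpoints
        return nodes, d, mids

    def fitness(x):                            # Eq. (4): ½ Σ d_j (W_j - w_j)
        nodes, d, mids = segments(x)
        vals = np.stack([f(nodes[:-1]), f(nodes[1:]), f(mids)])
        return 0.5 * float(np.sum(d * (vals.max(axis=0) - vals.min(axis=0))))

    X = rng.uniform(a, b, (N, D))
    V = np.zeros((N, D))
    pbest = X.copy()
    pfit = np.array([fitness(x) for x in X])
    g = pbest[pfit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((N, D)), rng.random((N, D))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)   # Eq. (1)
        X = X + V                                               # Eq. (2)
        fit = np.array([fitness(x) for x in X])
        better = fit < pfit
        pbest[better], pfit[better] = X[better], fit[better]
        g = pbest[pfit.argmin()].copy()
    _, d, mids = segments(g)
    return float(np.sum(f(mids) * d))          # Step 7: I ≈ Σ m_j d_j

I_est = pso_integrate(lambda x: np.exp(-x), 0.0, 1.0)  # exact: 1 - 1/e ≈ 0.632121
```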


Figure 2. The integration error change in the process of estimating example 2 integral

EXAMPLE 3. Find the numerical value of

∫_0^1 dx ∫_0^1 e^(−(x² + y)) dy = (1 − e^(−1)) ∫_0^1 e^(−x²) dx   [6]

Solution. The antiderivative of the integrand cannot be expressed in closed form; the given exact value is 0.4720828. Using PSO with N = 15 and k = 50, the values of the integral obtained with PSO and with the duplicated trapezoid formula (DTF) are shown in Table 1.
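The quoted exact value can be reproduced from the separable structure of the integrand: e^(−(x²+y)) = e^(−x²)·e^(−y), and ∫₀¹ e^(−x²) dx = (√π/2)·erf(1). A short stdlib check:

```python
import math

# Separable identity behind Example 3: the double integral factors, and the
# Gaussian piece is expressible via the error function.
inner = math.sqrt(math.pi) / 2 * math.erf(1.0)   # ∫₀¹ e^{-x²} dx ≈ 0.746824
value = (1.0 - math.exp(-1.0)) * inner
print(value)  # ≈ 0.472083, matching the quoted 0.4720828 to 6 decimals
```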


Figure 1. The integration error change in the process of estimating example 1 integral

EXAMPLE 2. Find the oscillatory integral

∫_0^(5π/8) sin(20x) dx

Solution. The exact value of this integral is 0.05. Using the duplicated trapezoid formula [4], at least 503 nodal points are needed to keep the error below 10^(-3), which is a heavy computational load. In reference [5], an artificial neural network gives 0.04992; here, using PSO with N = 25 and D = 100, we find 0.050038. Figure 2 shows how the integration error changes in the process of estimating the Example 2 integral.


Table 1. The integral results of the two methods

Segments (x) | Segments (y) | Method | Integral value | Error
      2      |      2       |  DTF   |   0.4719058    | 0.0003749
      2      |      2       |  PSO   |   0.4720343    | 0.0001028
      4      |      4       |  DTF   |   0.4720991    | 0.0000345
      4      |      4       |  PSO   |   0.4720882    | 0.0000114
      8      |      8       |  DTF   |   0.4720907    | 0.0000167
      8      |      8       |  PSO   |   0.4720841    | 0.0000270
     32      |     32       |  DTF   |   0.4720837    | 0.0000200
     32      |     32       |  PSO   |   0.4720832    | 0.0000008

B. Matrix Eigenvalue and Eigenvector Problems

EXAMPLE 4. Determine the eigenvalues and eigenvectors of the matrix

A = [ −3   1  −1
      −7   5  −1
      −6   6  −2 ]

Solution. In reference [8], a method of calculating the eigenvalue and eigenvector at the same time (CATST) is proposed. Using PSO gives the eigenvalues and eigenvectors of the matrix; there is a double eigenvalue in this example, and we see from Table 2 that PSO is effective in finding the multiple eigenvalue, with better results than the other methods. Table 2 shows the results of the three methods, and Figures 3 and 4 show how the fitness function values change in the process of determining the eigenvalues and eigenvectors.

 10  2.5 A 0  0

Fitness value

12

Eigenvalue 1 Eigenvalue 2 Eigenvalue 3

10

8

4

2

0

5

10

15

20

25

30

35

40 45 Generation

1 0

4 9  3 7 6 2  2 6

Solution In reference [8], a method of calculating the eigenvalue and eigenvector at the same time (CATST) is proposed. Using PSO gives us the eigenvalues and eigenvectors of the matrix, here is a double eigenvalue in this example, we know from table 3 that PSO is effective in finding multiplex eigenvalue, and the result is better, Table 3 shows the result using the three methods. Figure 5 and 6 show the fitness function value change in the process of determining eigenvalues and eigenvectors.

6

0

2.5 10

50

Figure 3. The fitness function value change of eigenvalues


Figure 4. The fitness function value change of eigenvector

Table 2. The eigenvalues and eigenvectors in complex region

Eigenvalues:
  Exact:      4, −2, −2
  Matlab [7]: 4, −2, −2
  CATST [8]:  4.00000000000000, −2.00000005285691, −1.99999994714309
  PSO:        4.00000000000000 − 0.00000000000000i,
              −2.00000000000013 + 0.00000000000002i,
              −1.99999999999992 − 0.00000000000006i

Eigenvectors:
  Exact:      (0, 1, 1), (1, 1, 0), (1, 1, 0)
  Matlab [7]: (0, 1, 1), (1, 1, 0), (1, 1, 0)
  CATST [8]:  (0, 1, 1), (1, 1, 0.00000005285691), (1, 1, −0.00000005285691)
  PSO:        (0.00000000003427, 1.00000000000000, 1.00000000035985),
              (1.00000000000000, 1.00000000000001, 0.00000000000012),
              (1.00000000000000, 1.00000000000001, −0.00000000000003)

Table 3. The eigenvalues and eigenvectors in complex region

Eigenvalues:
  Exact:                  13.32279447, 4.223060030,
                          7.227072748 + 0.8038188903i, 7.227072748 − 0.8038188903i
  Laguerre algorithm [8]: 13.322793, 4.223060,
                          7.227073 + 0.807819i, 7.227073 − 0.807819i
  Newton algorithm [8]:   13.322793, 4.223060, 4.223060, 4.223060
  PSO:                    13.3227944734401, 4.223060029945,
                          7.22707274830745 + 0.8038188902674i,
                          7.22707274830745 − 0.8038188902674i

Eigenvectors (for the two real eigenvalues):
  Exact: (0.716542469, 0.6895225072, 0.1017511866, 0.02779026158),
         (0.5969098319, 0.3306555246, 0.6974076342, 0.7849535101)
  PSO:   (−0.71654246900000, −0.68952250289186, −0.10175118644602, −0.02779026144051),
         (−0.59690983190000, −0.33065552216429, −0.69740763571435, 0.78495351274334)

Figure 5. The fitness function value change of the eigenvalues

Figure 6. The fitness function value change of the eigenvectors

C. Polynomial Interpolation

EXAMPLE 6. Given the table of data points

x_i:    0    1
y_i:    2    1
y_i':  −3    3
y_i'':  2

find the interpolation polynomial of y = f(x).

Solution. Let N = 50 and D = 5. Using the constructor method of interpolation polynomials based on the improved PSO, we find the interpolation polynomial P_2(x) = 0.05x² + 0.05x + 0.2, the same as that obtained by the Hermite interpolation method in reference [9]. Figure 7 shows how the error changes in the process of constructing the Example 6 interpolation polynomial.

Figure 7. The error change in the process of constructing the Example 6 interpolation polynomial
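Equation (10)'s reduction of interpolation to unconstrained minimization can be sanity-checked directly: with n + 1 coefficients and n + 1 conditions, the minimum of f(a) is exactly zero, attained at the solution of the Vandermonde system (9). A small sketch with invented data, where a linear solve plays the role the PSO search plays in the paper:

```python
import numpy as np

# Eq. (10): f(a) = Σᵢ (Σⱼ aⱼ xᵢʲ - yᵢ)².  The data below are our own invention,
# sampled from 1 + x + x², so the recovered coefficients should be (1, 1, 1).
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])
V = np.vander(xs, 3, increasing=True)   # rows [1, xᵢ, xᵢ²] of system (9)
a = np.linalg.solve(V, ys)              # minimizer of f, with f(a) = 0
residual = float(np.sum((V @ a - ys) ** 2))
```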

EXAMPLE 7. Given a table of data points based on the temperature function y = f(x):

x_i:  0.4   0.6   0.8   1.0
y_i:  1.5   1.8   2.2   2.8
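For this table, the values obtained from Newton's interpolation formula [4] can be reproduced with a standard divided-difference sketch (our own code, not the paper's):

```python
def newton_interp(xs, ys, x):
    """Evaluate the Newton-form interpolating polynomial at x."""
    n = len(xs)
    coef = list(ys)
    # in-place divided-difference table: coef[i] becomes f[x_0, ..., x_i]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Horner-style evaluation of the Newton form
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.4, 0.6, 0.8, 1.0]
ys = [1.5, 1.8, 2.2, 2.8]
print(newton_interp(xs, ys, 0.5))  # ≈ 1.64375
print(newton_interp(xs, ys, 0.9))  # ≈ 2.46875
```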


Find the approximate values of f(0.5) and f(0.9).

Solution. In reference [4], using Newton's forward and backward interpolation formulas, the approximate values are f(0.5) = N_3(0.5) = 1.64375 and f(0.9) = N_3(0.9) = 2.46875. Let N = 50 and D = 4; using the constructor method of interpolation polynomials based on the improved PSO, we find f(0.5) = 1.64374997 and f(0.9) = 2.46875. Figure 8 shows how the error changes in the process of constructing the Example 7 interpolation polynomial.

Figure 8. The error change in the process of constructing the Example 7 interpolation polynomial

V. CONCLUSIONS

Based on PSO, several approaches are proposed in this paper to solve numerical computation problems. The simulation examples show that the algorithms perform even better than the traditional numerical computation methods on these problems. These algorithms have value in numerical calculation and engineering practice.

ACKNOWLEDGMENT

This work is supported by Grant 60461001 from the NSF of China and by Grants 0832082 and 0991086 from the Guangxi Science Foundation.

REFERENCES

[1] Kennedy J, Eberhart R C. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, 1995, 4: 1942-1948.
[2] Eberhart R C, Kennedy J. A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995: 39-43.
[3] Wang Xiao-hua, He Yi-gang, Zeng Zhe-zhao. Numerical Integration Study Based on Triangle Basis Neural Network Algorithm. Journal of Electronics and Information Technology, 2004, 26(3): 394-399.
[4] Ding Li-juan, Cheng Qi-yuan. Numerical Computation Method. Beijing: Beijing University of Technology Press, 2005: 165-204.
[5] Xu Li-ying, Li Li-jun. Neural Network Algorithm for Solving Numerical Integration. Journal of System Simulation, 2008, 20(7): 1922-1924.
[6] Lu Jia-lin, Jiang Ze-qu. Unequilateral Dual Dividing in Numerical Computation of Double Integral. Journal of Sichuan Institute of Technology, 1993, 3(12): 145-164.
[7] Mathews J H, Fink K D. Numerical Methods Using MATLAB. 4th ed. Beijing: Publishing House of Electronics Industry, 2005.
[8] Yang Ting-jun. A New Method of Calculating the Eigenvalue and Eigenvector of a Matrix at the Same Time. Journal of Gansu Lianhe University, 2006, 20(3): 100-103.
[9] Chen Ji-ming. Numerical Computation Method. Shanghai: Shanghai University Press, 2007: 115.

Yongquan Zhou, Ph.D., Professor. He received the B.S. degree in mathematics from Xianyang Normal University, Xianyang, China, in 1983, the M.S. degree in computer science from Lanzhou University, Lanzhou, China, in 1993, and the Ph.D. in computational intelligence from Xidian University, Xi'an, China. His current research interests include computational intelligence and its applications.

Xingqiong Wei received the M.S. degree in computer science from Guangxi University for Nationalities, Nanning, China. His current research interests include computational intelligence and its applications.