Comparison of MLP and RBF Neural Networks Using Deviation Signals for On-Line Identification of a Synchronous Generator

Jung-Wook Park, Student Member, IEEE, R. G. Harley, Fellow, IEEE, and G. K. Venayagamoorthy, Member, IEEE

Abstract—This paper compares the performance of a multilayer perceptron network (MLPN) and a radial basis function network (RBFN) for the on-line identification of the nonlinear dynamics of a synchronous generator. Deviations of signals from their steady-state values are used. The computational complexity required to process the data for on-line training, generalization, and on-line global minimum testing is investigated by time-domain simulations. The simulation results show that, compared to the MLPN, the RBFN is simpler to implement, needs less computational memory, converges faster, and achieves global minimum convergence even when the operating conditions change.

Index Terms—Multilayer perceptron network (MLPN), nonlinear dynamic system, on-line identification, radial basis function network (RBFN), synchronous generator.

I. INTRODUCTION

A turbogenerator in a power system is a nonlinear, fast-acting, multiple-input multiple-output (MIMO) device [1], [2]. Conventional automatic voltage regulators and turbine governors are designed to control the turbogenerator, in some optimal fashion, around one operating point; at any other point the generator's performance is degraded. Adaptive controllers for turbogenerators are usually designed using linear models and traditional techniques of identification, analysis, and synthesis to achieve the desired performance. However, due to a synchronous generator's wide operating range, its complex dynamics [3]-[5], its transient performance, and its nonlinearities, it cannot be accurately modeled as a linear device. Artificial neural networks offer an alternative to conventional controllers. They are able to adaptively model or identify a non-stationary nonlinear MIMO process/plant on-line while the process is changing, and thereby yield information that can be used by another neural network to

This work is supported by a Research Grant from the National Science Foundation, USA. J.-W. Park and R. G. Harley are with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0250, USA (e-mail: [email protected] and [email protected]). G. K. Venayagamoorthy is with the Department of Electrical and Computer Engineering, University of Missouri-Rolla, Rolla, MO 65409-0249, USA (e-mail: [email protected]).

control the process [6], [7]. This paper concentrates on the type of neural network to use for the identification task.

Researchers have until now used two different types of neural networks (NNs), namely, a multilayer perceptron network (MLPN) [3]-[5], [8]-[12] or a radial basis function network (RBFN) [13]-[19], in both single- and multi-machine power system studies. Proponents of each type of NN have claimed advantages for their choice of NN, without comparing the performance of the other type on the same study. The applications of NNs in the power industry will expand, and at this stage there is no authoritative, fair comparison between the MLPN and the RBFN. This paper therefore carries out such a thorough comparison; a follow-on study will compare their suitability as feedback controllers. Factors considered in this paper include speed of convergence, computational memory requirements, and global stability, especially with continual on-line training. Only deviations of signals from their set points are used (as proposed by [3]-[5], [8]-[12], [14], [15], [17], [19]), and not their actual values.

The process/plant modeling and the MLPN/RBFN are described in Section II. The system configuration using the nonlinear autoregressive with exogenous inputs (NARX) model is presented in Section III. The MLPN and RBFN are compared for several case studies in Section IV. Finally, conclusions are given in Section V.

II. ADAPTIVE NEURAL NETWORK IDENTIFIER

A. Plant modeling

The synchronous generator, turbine, exciter, and transmission system connected to an infinite bus in Fig. 1 constitute the plant that has to be identified. Equations are used to describe the dynamics of the plant in order to generate the data for the NN identifiers; on a physical plant, this data would be measured. The generator (G), with its damper windings, is described by a seventh-order d-q axis set of equations with the machine currents, speed, and rotor angle as the state variables. In the plant, Pt and Qt are the real and reactive power at the generator terminals, respectively, Ze is the transmission line impedance, Pm is the mechanical input power to the generator, Vt is the terminal voltage, Vb is the



infinite bus voltage, Vref is the exciter input voltage, and Pin is the turbine input power. Note that no feedback control loops are used, since the MLPN and RBFN are evaluated on the plant without controllers. The turbine input power deviation (∆Pin) and the exciter input voltage deviation (∆Vref) are generated as small pseudo-random binary signals (PRBSs) (as proposed by [8], [9], [16], [19]) to perturb the plant in order to measure its response in the speed deviation (∆ω) and the terminal voltage deviation (∆Vt); the plant is thus a two-input two-output system. The same sequence of PRBS signals is presented to both NNs in order to ensure a fair comparison. The magnitude of these random signals is limited to a maximum of ±10% of their nominal values.

Fig. 1. Plant model used for on-line identification: Synchronous machine connected to an infinite bus

B. Multilayer perceptron network (MLPN)

In this paper, the MLPN consists of three layers of neurons (input, hidden, and output, as shown in Fig. 2) interconnected by weights. The MLPN transforms n inputs to m outputs (also called an input-output mapping) through some nonlinear function f̂ : Rⁿ → Rᵐ. The weights of the MLPN are solved or trained by the error backpropagation algorithm. The activation function for the neurons in the hidden layer is the following sigmoidal function (block MLPN in Fig. 2):

$$f(x) = \frac{1}{1 + \exp(-x)} \qquad (1)$$

The output layer neurons are pure summations of inner products between the nonlinear regression vector from the hidden layer and the output weights. The MLPN requires no off-line training. During on-line training, the MLPN starts with random initial values for its weights, and then computes a one-pass backpropagation algorithm at each time step k, which consists of a forward pass propagating the input vector through the network layer by layer, and a backward pass updating the weights by the gradient descent rule. By trial and error, fourteen neurons in the hidden layer were found to be the optimal choice for on-line identification.

Fig. 2. The three layers (input, hidden, and output) of a feedforward neural network, illustrating either an MLPN or an RBFN

After the network has trained on-line for a period of time, the training error should have converged to a value so small that, if training were stopped and the weights frozen, the neural network would continue to identify the plant correctly while the operating condition remained fixed; this means the NN is able to generalize well. The training of an NN is said to have reached a global minimum when, after the operating conditions are changed while the weights remain frozen, the network's response is still reasonably acceptable.
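To make the one-pass training procedure concrete, the following is a minimal illustrative sketch (not the authors' code): a 12-input, 14-hidden-neuron, 2-output MLP with the sigmoidal activation of (1), trained by per-sample gradient descent. The class name, learning rate, and initialization scale are assumptions made for this example.

```python
import numpy as np

def sigmoid(x):
    # Hidden-layer activation, eq. (1)
    return 1.0 / (1.0 + np.exp(-x))

class MLPIdentifier:
    """One-pass (per-sample) backpropagation MLP: sigmoidal hidden layer,
    purely linear output layer, trained by gradient descent. Sizes follow
    the paper: n = 12 inputs, 14 hidden neurons, m = 2 outputs."""
    def __init__(self, n_in=12, n_hidden=14, n_out=2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))  # hidden -> output
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)  # nonlinear regression vector
        return self.W2 @ self.h + self.b2        # pure-summation output layer

    def train_step(self, x, y):
        # One forward pass and one backward (gradient descent) pass at step k.
        e = y - self.forward(x)                  # residual E(k)
        # Backpropagate through the sigmoid derivative h(1 - h) first,
        # using the pre-update output weights.
        delta_h = (self.W2.T @ e) * self.h * (1.0 - self.h)
        self.W2 += self.lr * np.outer(e, self.h)
        self.b2 += self.lr * e
        self.W1 += self.lr * np.outer(delta_h, x)
        self.b1 += self.lr * delta_h
        return e
```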

C. Radial basis function network (RBFN)

A more detailed description is given of the RBFN, since the MLPN is relatively well understood by researchers in power systems. Like the MLPN, the RBFN also consists of three layers (Fig. 2). However, the input values are each assigned to a node in the input layer and passed directly to the hidden layer without weights. The hidden layer nodes are called RBF units, each determined by a parameter vector called the center and a scalar called the width. The Gaussian density function (block RBFN in Fig. 2) is used as the activation function in the hidden layer. The linear weights wji between the hidden and output layers are solved or trained by a linear least squares optimization algorithm [6], [18], [20]. The overall input-output mapping, f̂ : X ∈ Rⁿ → Y ∈ Rᵐ, is as follows:

$$y_i = w_o + \sum_{j=1}^{h} w_{ji} \exp\left( - \left\| X - C_j \right\|^2 / \sigma_j^2 \right) \qquad (2)$$

where X is the input vector, Cj ∈ Rⁿ is the center of the jth RBF unit in the hidden layer, σj is its width, h is the number of RBF units, wo and wji are the bias term and the weight between the hidden and output layers, respectively, and yi is the ith output in the m-dimensional space. Once the centers of the RBF units are established, the width of the ith center in the hidden layer is calculated by (3):

$$\sigma_i = \left( \frac{1}{h} \sum_{j=1}^{h} \sum_{k=1}^{n} \left( c_{ki} - c_{kj} \right)^2 \right)^{1/2} \qquad (3)$$

where cki and ckj are the kth values of the centers of the ith and jth RBF units. In (2) and (3), ‖·‖ denotes the Euclidean norm.

There are four different ways of performing the input-output mapping with the RBFN, depending on how the input data is fed to the network [21]:
1) Batch mode clustering of centers and batch mode gradient descent for linear weights.
2) Batch mode clustering of centers and pattern mode gradient descent for linear weights.
3) Pattern mode clustering of centers and pattern mode gradient descent for linear weights.
4) Pattern mode clustering of centers and batch mode gradient descent for linear weights.
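As an illustration of (2) and (3), here is a short sketch (not from the paper) of the RBFN forward mapping and the width calculation; the function names and array shapes are assumptions for this example.

```python
import numpy as np

def rbf_widths(centers):
    """Width of each RBF unit per eq. (3): sigma_i is the square root of
    the mean, over the h centers, of the squared distance to every center.
    centers: (h, n) array."""
    h = centers.shape[0]
    # d2[i, j] = ||C_i - C_j||^2, summed over the n coordinates
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.sum(axis=1) / h)

def rbf_forward(x, centers, widths, W, w0):
    """RBFN output per eq. (2): bias plus weighted Gaussian activations.
    x: (n,) input, centers: (h, n), widths: (h,), W: (m, h), w0: (m,)."""
    d2 = ((centers - x) ** 2).sum(axis=1)   # ||X - C_j||^2 for each unit
    phi = np.exp(-d2 / widths ** 2)         # Gaussian RBF unit outputs
    return w0 + W @ phi
```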


Centers of an RBF unit: The pattern mode clustering of centers is not feasible for on-line identification because of its excessive memory and computational requirements, since the center vectors and their widths would have to be adapted with every changing input pattern. Hence, the batch mode k-means clustering algorithm is used to determine the centers off-line, instead of the recursive k-means algorithm [18], [20]. The batch mode k-means clustering algorithm can be described as follows.
Step (1): Initialize the center of each cluster to a different, randomly selected training pattern.
Step (2): Assign each training pattern to the cluster nearest to it, by calculating the Euclidean distances between the training patterns and the centers.
Step (3): When all training patterns are assigned, calculate the average position of each cluster center.
Step (4): Repeat steps (2) and (3) until the cluster center changes become acceptably small according to the previously chosen stopping criterion.
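A compact sketch of steps (1)-(4), assuming the training patterns are rows of a NumPy array; the tolerance-based stopping test stands in for whatever criterion was previously chosen.

```python
import numpy as np

def batch_kmeans(patterns, h, tol=1e-6, seed=0):
    """Batch mode k-means clustering of RBF centers, following
    steps (1)-(4) above. patterns: (N, n) training patterns."""
    rng = np.random.default_rng(seed)
    # Step (1): initialize each center to a different random pattern.
    centers = patterns[rng.choice(len(patterns), size=h, replace=False)].copy()
    while True:
        # Step (2): assign each pattern to its nearest center
        # (Euclidean distance).
        d2 = ((patterns[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        # Step (3): move each center to the mean of its assigned patterns.
        new_centers = centers.copy()
        for i in range(h):
            members = patterns[nearest == i]
            if len(members) > 0:
                new_centers[i] = members.mean(axis=0)
        # Step (4): stop when the center changes are acceptably small.
        if np.linalg.norm(new_centers - centers) < tol:
            return new_centers
        centers = new_centers
```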

Weights: After the vector of cluster centers and the width of each center have been calculated off-line, the values of the weights wji are updated during on-line training by using either the linear least squares algorithm based on the pattern mode, or the pseudo-inverse technique based on the batch mode in the orthogonal least squares (OLS) algorithm [13], [17]. The pseudo-inverse technique takes less time for learning and can be more effective for one-step-ahead prediction than the pattern mode least squares algorithm. However, this technique has the following drawback for on-line identification when the NNs are tested for generalization or global minimum convergence without further weight updates. Consider the memory allocation structure given in Fig. 3: window 1 contains the system data from time step nT to (n+p)T, and window 2 contains the data from (n+p)T to (n+2p)T. When the pseudo-inverse technique is used, the RBFN identifier processes all the data in each window as a single set, so that training for one-step-ahead prediction is fast. The drawback of the pseudo-inverse technique is that it has no learning mechanism for generalization, because the weights calculated in each window carry the system information for only that particular window.

Fig. 3. Memory allocation structure considered for determination of output weights using the RBFN

In other words, the weight information learned in one window is lost in the other windows. Moreover, using the data of all the windows together is not possible for on-line identification because of memory limitations. Consequently, the centers are calculated off-line by the batch mode k-means clustering algorithm, and the weights are trained on-line by the pattern mode gradient descent algorithm.
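The contrast between the two weight-training options can be sketched as follows (illustrative only): a per-window pseudo-inverse (batch) solution versus a pattern mode gradient descent update. Phi denotes the matrix of hidden-layer RBF activations over one window, an assumed intermediate quantity.

```python
import numpy as np

def window_pseudo_inverse(Phi, Y):
    """Batch (per-window) solution: least-squares weights fitted to the p
    samples of one window only. Phi: (p, h) RBF activations, Y: (p, m)
    targets. Weights fitted to window 1 carry no information about
    window 2, which is the drawback noted above."""
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W                                   # (h, m) weight matrix

def pattern_mode_step(W, phi_k, y_k, lr=0.05):
    """Pattern mode gradient descent: one small correction per sample,
    so information accumulates across windows instead of being
    recomputed from scratch. phi_k: (h,), y_k: (m,), W: (h, m)."""
    e_k = y_k - phi_k @ W                      # residual at step k
    return W + lr * np.outer(phi_k, e_k)
```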

III. SYSTEM CONFIGURATION FOR ON-LINE IDENTIFICATION

With the plant and NN identifiers described in Section II, the NARX model in Fig. 4 is now used to represent the MIMO mapping of the plant in Fig. 1, since this structure has been used by others as a benchmark for on-line identification [7]-[9], [12], [15], [17]-[20], [22].

Fig. 4. NARX model based structure for the adaptive NN identifier: tapped delay lines (z⁻¹) feed U(k−1), U(k−2), U(k−3) and Y(k−1), Y(k−2), Y(k−3) to the MLPN or RBFN, whose output Ŷnet(k) is subtracted from the plant output Y(k) to form the error E(k)

The input vector U(k) consists of the turbine input power deviation (∆Pin) and the exciter input voltage deviation (∆Vref), that is, U(k) = [∆Pin(k), ∆Vref(k)], and is fed into the plant together with the vector Ref(k) = [Pin(k), Vref(k)]. The input signals of


U(k) are generated as 50 Hz PRBSs, the same as the mains supply frequency used in this study. The output vector Y(k) consists of the speed deviation (∆ω) and the terminal voltage deviation (∆Vt), that is, Y(k) = [∆ω(k), ∆Vt(k)]. The neural network output is Ŷnet(k) = f̂(X(k)), where X(k) is the input vector to the identifier, consisting of three time lags of the system output and input, that is, X(k) = [Y(k−1) U(k−1) Y(k−2) U(k−2) Y(k−3) U(k−3)]ᵀ. The residual or error vector used for updating the weights during training is given by E(k) = Y(k) − Ŷnet(k).
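A small sketch (not the authors' code) of how the regression vector X(k) can be assembled from the three most recent input-output pairs; the class and method names are assumptions.

```python
import numpy as np
from collections import deque

class NARXRegressor:
    """Assembles X(k) = [Y(k-1) U(k-1) Y(k-2) U(k-2) Y(k-3) U(k-3)]^T
    from the three most recent plant outputs and inputs."""
    def __init__(self, lags=3):
        self.buf = deque(maxlen=lags)   # newest (Y, U) pair kept at the left

    def push(self, Y_k, U_k):
        # Call once per 50 Hz sampling instant with the measured pair.
        self.buf.appendleft((np.asarray(Y_k, float), np.asarray(U_k, float)))

    def x(self):
        # Valid once three samples have been pushed; newest lag first.
        assert len(self.buf) == self.buf.maxlen, "need three pushed samples"
        return np.concatenate([np.concatenate(pair) for pair in self.buf])
```

At each sampling instant, X(k) is fed to the identifier and the residual E(k) = Y(k) − Ŷnet(k) drives the weight update.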

IV. COMPUTER SIMULATION ANALYSES

To compare the performance of the proposed MLPN and RBFN on-line identifiers, the following evaluations are conducted by simulation:
1) The computational complexity required to process the data set is compared.
2) One-step-ahead prediction is tested.
3) Generalization properties and the on-line global minimum are tested.
4) The plant is subjected to a three-phase short circuit.


A. The computational complexity

During a 0.2 s training interval, the computational complexity required by each identifier to process the data set is measured by counting the number of floating-point operations (FLOPS), which are the basic arithmetic operations of addition/subtraction and multiplication/division. For the RBFN, the computational complexity depends on the number of RBF unit centers, which contain information about the entire operating range. In Table I, the RBFNs with 10, 12, 15, and 21 centers are denoted RBFN10, RBFN12, RBFN15, and RBFN21, respectively, for convenience of presentation. Simulation tests show that larger numbers of RBF unit centers do not necessarily yield better performance. Table I clearly shows that the RBFNs require fewer FLOPS and less time than the MLPN. The absolute time taken in a practical implementation by dedicated microprocessor hardware would be much less than the values shown in Table I. After trial and error, the RBFN12 is chosen for the comparison with the MLPN in the following evaluations.

TABLE I
THE NUMBER OF FLOATING-POINT OPERATIONS AND ELAPSED TIME REQUIRED DURING 0.2 S

Identifiers     MLPN       RBFN10     RBFN12     RBFN15     RBFN21
FLOPS           22,920     14,180     15,700     17,980     22,540
Elapsed time    0.6510 s   0.5337 s   0.6210 s   0.6410 s   0.6510 s

TABLE II
THE OPERATING CONDITION DATA SET

Operating condition    A             B
Pt (pu)                0.2           0.3
Qt (pu)                0.0084        0.0240
Pin (pu)               0.2001        0.3002
(rad/s)                0.5250        0.7460
Ze (pu)                0.025+j0.75   0.025+j0.75

B. Prediction during training and generalization

The two identifiers were both trained with PRBSs at operating condition A in Table II. The one-step-ahead prediction average absolute error (EP) and mean square error (MSEP) for on-line training, using the MLPN and the RBFN12, are shown in part (a) of Table III. After 1400 s of on-line training, the RBFN12 reached a significantly smaller MSEP of ∆Vt (4.138×10⁻⁹) than the MLPN could achieve (2.922×10⁻⁷). In fact, the MLPN training had to be continued to 2800 s before its MSEP of ∆Vt decreased to the same order of magnitude as that of the RBFN12. This clearly shows that the learning capability of the RBFN12 (with 12 neurons in the hidden layer) is much stronger than that of the MLPN (with 14 neurons in its hidden layer) within the given training time of 1400 s.

To test the generalization performance of the two NNs, their weights were fixed after 2800 s of on-line training for the MLPN, and after 1400 s for the RBFN12, and they were expected to continue to identify the plant correctly without a change in the operating condition. The weights were frozen at t = 0 s in Figs. 5 and 6, which illustrate the generalization performance of the two identifiers over a further 30 s when subjected to PRBS inputs.
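For reference, the error measures reported in Table III can be computed along these lines. This is a hedged sketch: the paper does not give the exact formula for the percentage error, so expressing the average absolute error in percent of per-unit is an assumption here.

```python
import numpy as np

def prediction_errors(y_true, y_pred):
    """Average absolute error (in %, as E_P/E_G/E_M in Table III) and
    mean square error (MSE) over a test record.
    y_true, y_pred: (N,) arrays of one deviation signal in pu."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    e_avg_pct = 100.0 * np.mean(np.abs(err))   # assumed percentage form
    mse = np.mean(err ** 2)
    return e_avg_pct, mse
```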


Fig. 5. Generalization test: Speed deviation


Fig. 6. Generalization test: Terminal voltage deviation


The generalization average absolute error (EG) and mean square error (MSEG) at the end of the 30 s period appear in part (b) of Table III. The MLPN is slightly better at identifying ∆ω, and the RBFN is slightly better at identifying ∆Vt. These values are a more scientific measure of error than simply examining the curves in Figs. 5 and 6.


C. On-line global minimum test

In Section IV-B above, when testing the generalization performance, both identifiers could continue identifying correctly after their weights were frozen, but the operating condition was not changed during the 30 s. This section repeats that test, but changes the operating condition from A to B (in Table II) at t = 10 s during the 30 s period. The results in Figs. 7 and 8 show that the RBFN12 does better than the MLPN in identifying both ∆ω and ∆Vt after the change at 10 s. The average absolute error (EM) and mean square error (MSEM) values appear in part (c) of Table III, and they confirm that the RBFN12 identifier is better than the MLPN.

TABLE III
THE COMPARISON OF PERFORMANCES IN TWO IDENTIFIERS

(a) One-step ahead prediction test
             On-line           ∆ω                       ∆Vt
Identifiers  training time     EP (%)   MSEP            EP (%)   MSEP
MLPN         2800 s            0.0314   1.746×10⁻⁷      0.0073   9.263×10⁻⁹
MLPN         1400 s            0.0450   3.202×10⁻⁷      0.0389   2.922×10⁻⁷
RBFN12       1400 s            0.0519   4.193×10⁻⁷      0.0051   4.138×10⁻⁹

(b) Generalization test
             On-line           ∆ω                       ∆Vt
Identifiers  training time     EG (%)   MSEG            EG (%)   MSEG
MLPN         2800 s            0.0501   3.891×10⁻⁷      0.0333   1.551×10⁻⁷
RBFN12       1400 s            0.0830   1.075×10⁻⁶      0.0244   8.634×10⁻⁸

(c) On-line global minimum test
             On-line           ∆ω                       ∆Vt
Identifiers  training time     EM (%)   MSEM            EM (%)   MSEM
MLPN         2800 s            0.2084   1.556×10⁻⁵      0.4234   3.783×10⁻⁵
RBFN12       1400 s            0.2480   1.100×10⁻⁵      0.1189   2.963×10⁻⁶

Fig. 7. On-line global minimum test: Speed deviation


Fig. 8. On-line global minimum test: Terminal voltage deviation

D. Three phase short circuit test

The initial weights of both identifiers in this test have the same values as at t = 0 s in Figs. 5 to 8, and they remain fixed throughout the test. The plant is operating at condition A in Table II when a three-phase short circuit fault is applied at the infinite bus for a duration of 100 ms, from t = 0.5 s to t = 0.6 s. The sampling frequency remains 50 Hz. The results of this test in Figs. 9 and 10 clearly show that the RBFN12 identifies both ∆ω and ∆Vt more accurately than the MLPN. This is further proof that the RBFN was closer to a global minimum than the MLPN when the weights were frozen.

Fig. 9. Three phase short circuit test: Speed deviation

Fig. 10. Three phase short circuit test: Terminal voltage deviation

V. CONCLUSIONS

This paper compared the performance of a multilayer perceptron network (MLPN) and a radial basis function network (RBFN), using deviation signals in a continually on-line training mode, to identify the dynamics of a synchronous generator connected to an infinite bus. Both neural networks were able to retain past learned information when tested for the generalization property. However, when the operating condition changed, or a severe fault occurred during testing with frozen weights, the RBFN's response showed that it had converged closer to a global minimum during the on-line training than the MLPN. The RBFN also required less training time to converge and less computational complexity to train the network. The general conclusion, therefore, is that the RBFN should be preferred to the MLPN for on-line identification using deviation signals.

VI. REFERENCES

[1] P. M. Anderson and A. A. Fouad, Power System Control and Stability, IEEE Press, New York, 1994, ISBN 0-7803-1029-2.
[2] B. Adkins and R. G. Harley, The General Theory of Alternating Current Machines, Chapman and Hall, London, 1975, ISBN 0-412-15560-5.
[3] P. Shamsollahi and O. P. Malik, "Direct neural adaptive control applied to synchronous generator," IEEE Trans. on Energy Conversion, Vol. 14, No. 4, pp. 1341-1346, December 1999.
[4] Y.-M. Park, M.-S. Choi, and K. Y. Lee, "A neural network-based power system stabilizer using power flow characteristics," IEEE Trans. on Energy Conversion, Vol. 11, No. 2, pp. 435-441, June 1996.
[5] T. Kobayashi and A. Yokoyama, "An adaptive neuro-control system of synchronous generator for power system stabilization," IEEE Trans. on Energy Conversion, Vol. 11, No. 3, pp. 621-630, September 1996.
[6] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall International, Inc., 1999, ISBN 0-13-908385-5.
[7] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. on Neural Networks, Vol. 1, No. 1, pp. 4-27, March 1990.
[8] G. K. Venayagamoorthy and R. G. Harley, "A continually online trained neurocontroller for excitation and turbine control of a turbogenerator," IEEE Trans. on Energy Conversion, Vol. 16, No. 3, pp. 261-269, September 2001.
[9] G. K. Venayagamoorthy and R. G. Harley, "Implementation of an adaptive neural network identifier for effective control of turbogenerators," in Proc. 1999 Electric Power Engineering, IEEE Power Tech Conference, pp. 134-139.
[10] J. He and O. P. Malik, "An adaptive power system stabilizer based on recurrent neural networks," IEEE Trans. on Energy Conversion, Vol. 12, No. 4, pp. 413-418, December 1997.
[11] Y.-M. Park, S.-H. Hyun, and J.-H. Lee, "A synchronous generator stabilizer design using neuro inverse controller and error reduction network," IEEE Trans. on Power Systems, Vol. 11, No. 4, pp. 1969-1975, November 1996.
[12] Q. H. Wu, G. W. Irwin, and B. W. Hogg, "A neural network regulator for turbogenerators," IEEE Trans. on Neural Networks, Vol. 3, No. 1, pp. 95-100, January 1992.
[13] R. Segal, M. L. Kothari, and S. Madnani, "Radial basis function (RBF) network adaptive power system stabilizer," IEEE Trans. on Power Systems, Vol. 15, No. 2, pp. 722-727, May 2000.
[14] P. K. Dash, S. Mishra, and G. Panda, "A radial basis function neural network controller for UPFC," IEEE Trans. on Power Systems, Vol. 15, No. 4, pp. 1293-1299, November 2000.
[15] G. Ramakrishna and O. P. Malik, "Radial basis function based identifiers for adaptive PSSs in a multi-machine power system," in Proc. 2000 IEEE PES Winter Meeting, Vol. 1, pp. 116-121.
[16] E. Swidenbank, S. McLoone, D. Flynn, G. W. Irwin, M. D. Brown, and B. W. Hogg, "Neural network based control for synchronous generators," IEEE Trans. on Energy Conversion, Vol. 14, No. 4, pp. 1673-1678, December 1999.
[17] G. Ramakrishna and O. P. Malik, "RBF identifier and pole-shifting controller for PSS application," in Proc. 1999 IEEE International Electric Machines and Drives Conference, pp. 589-591.
[18] M. A. Abido and Y. L. Abdel-Magid, "On-line identification of synchronous machines using radial basis function neural networks," IEEE Trans. on Power Systems, Vol. 12, No. 4, pp. 1500-1506, November 1997.
[19] D. Flynn, S. McLoone, G. W. Irwin, M. D. Brown, E. Swidenbank, and B. W. Hogg, "Neural control of turbogenerator systems," Automatica, Vol. 33, No. 11, pp. 1961-1973, November 1997.
[20] S. Chen, S. A. Billings, and P. M. Grant, "Recursive hybrid algorithm for non-linear system identification using radial basis function neural networks," Int. J. Control, Vol. 55, No. 5, pp. 1051-1070, 1992.
[21] Z. Uykan, C. Guzelis, M. E. Celebi, and H. N. Koivo, "Analysis of input-output clustering for determining centers of RBFN," IEEE Trans. on Neural Networks, Vol. 11, No. 4, pp. 851-858, July 2000.
[22] C. Alippi and V. Piuri, "Experimental neural networks for prediction and identification," IEEE Trans. on Instrumentation and Measurement, Vol. 45, No. 2, pp. 670-676, April 1996.

VII. BIOGRAPHIES

Jung-Wook Park (S'00) was born in Seoul, South Korea, on July 18, 1973. He received the B.S. degree (summa cum laude) from the Department of Electrical Engineering, Yonsei University, Seoul, South Korea, in 1999, and the M.S. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, USA, in 2000, where he is currently pursuing the Ph.D. degree. His current research interests are power system dynamics, FACTS (flexible AC transmission systems), electric machines, power electronics, and applications of artificial neural networks.

Ronald G. Harley (M'77-SM'86-F'92) was born in South Africa. He obtained a BScEng degree (cum laude) from the University of Pretoria in 1960, an MScEng degree (cum laude) from the same university in 1965, and a PhD from London University in 1969. In 1970 he was appointed to the Chair of Electrical Machines and Power Systems at the University of Natal in Durban, South Africa. He is currently at the Georgia Institute of Technology, Atlanta, USA. He has co-authored some 220 papers in refereed journals and international conferences, nine of which have attracted prizes from journals and conferences. He is a Fellow of the SAIEE, a Fellow of the IEE, and a Fellow of the IEEE. He is also a Fellow of the Royal Society in South Africa, a Fellow of the University of Natal, and a Founder Member of the Academy of Science in South Africa, formed in 1994. He was elected as a Distinguished Lecturer by the IEEE Industry Applications Society for the years 2000 and 2001. His research interests are the dynamic and transient behavior of electric machines and power systems, and their control by the use of power electronics and control algorithms.

Ganesh K. Venayagamoorthy (S'93-M'98) was born in Jaffna, Sri Lanka. He received a BEng (Honors) degree in Electrical and Electronics Engineering from the Abubakar Tafawa Balewa University, Nigeria, in 1994, and an MScEng degree in Electrical Engineering from the University of Natal, South Africa, in April 1999. He is completing his PhD degree in Electrical Engineering at the University of Natal, South Africa. He was a Research Associate at Texas Tech University, USA, in 1999 and is currently at the University of Missouri-Rolla, USA.
