

A Cascade of Artificial Neural Networks to Predict Transformers Oil Parameters

Khaled Shaban, Department of Computer Science and Engineering, Qatar University, Doha, Qatar
Ayman El-Hag, Department of Electrical Engineering, American University of Sharjah, Sharjah, UAE
Andrei Matveev, Department of Electrical and Computer Engineering, University of Waterloo, Ontario, Canada

ABSTRACT
In this paper, artificial neural networks have been constructed to predict different transformer oil parameters. The prediction is performed by modeling the relationship between the insulation resistances, measured between the distribution transformer's high voltage winding, low voltage winding and ground, and the breakdown strength, interfacial tension, acidity and water content of the transformer oil. The process of predicting the statuses of these oil parameters is carried out using various configurations of neural networks. First, a multilayer feed forward neural network with a back-propagation learning algorithm was implemented. Subsequently, a cascade of these neural networks was deemed to be more promising, and four variations of a three stage cascade were tested. The first configuration takes four inputs and outputs four parameter values, while the other configurations have four neural networks, each with two or three inputs and a single output; the outputs from some networks are pipelined to others to produce the final values. Both configurations are evaluated using real-world training and testing data, and the accuracy is calculated across a variety of hidden layer and hidden neuron combinations. The results indicate that, even with a lack of sufficient data to train the network, accuracy levels of 84% for breakdown voltage, 95% for interfacial tension, 56% for water content, and 75% for oil acidity predictions were obtained by the cascade of neural networks.

Index Terms – Transformer insulation aging, Megger test, transformer oil, multilayer feed forward artificial neural networks, back-propagation learning algorithm.

1 INTRODUCTION
The transformer is one of the most important apparatuses in the power system network. During operation, electrical partial discharges or even arcing may take place inside the transformer tank, which may lead to the degradation of the transformer oil and eventually to insulation failure. Both on-line and off-line monitoring of the transformer insulation system is usually conducted using different techniques like dissolved gas analysis (DGA), partial discharge and dielectric response measurements [1]. While these techniques are justified for power transformers, the cost involved in them is not justified for distribution transformers.

Manuscript received on 26 September 2008, in final form 20 December 2008.

Alternatively, standard tests on oil samples are usually conducted for both power and distribution transformers to evaluate the condition of the transformer oil. Examples of these tests are breakdown voltage, water content, total acidity and interfacial tension. Based on the values of these standard tests, a decision can be made to change, de-gas, or keep the same oil. As part of a maintenance contract, the electric utility in the Eastern province of Saudi Arabia requested a local transformer manufacturer to evaluate the status of many used distribution transformers. As part of the contract, it was requested to conduct several tests on the transformer oil along with other transformer routine tests. Testing transformer oil involves taking a sample of the oil and sending it to a laboratory for the results. The cost involved in each oil sample is around $200

1070-9878/09/$25.00 © 2009 IEEE. IEEE Transactions on Dielectrics and Electrical Insulation, Vol. 16, No. 2; April 2009.

and since the number of transformers to be tested is between 100-150 transformers per year, the cost of oil testing alone will be high. Thus, it would be very helpful if the values of the transformer oil tests could be predicted. Most of the research involving transformer oil prediction is geared towards predicting the transformer insulation status based on DGA results [2]. Few studies have been conducted to estimate transformer oil characteristics like water content and breakdown voltage. A polynomial regression model has been developed to predict the breakdown voltage as a function of the transformer service period, total acidity and water content [3]. Except for a few cases, the percentage error between the actual and predicted values of the transformer breakdown voltage was less than 10%. However, the model needs the water content and total acidity as inputs to predict the breakdown voltage. Hence, while saving the cost of conducting the breakdown voltage test, there is still a need to conduct two other oil tests. Moreover, the values of the water content and total acidity need to be collected at different time intervals to formulate the mathematical model and predict the value of the transformer oil breakdown voltage. Such historical data for transformer oil tests might be available for power transformers, as oil samples are usually collected on a regular basis to continuously monitor the transformer oil condition. However, the high number of distribution transformers makes such practice very costly, and usually historical data for distribution transformer oil tests are not available. Several studies have been performed using Artificial Neural Networks (ANNs) in the field of electrical insulation and high voltage measurements. For instance, in the field of outdoor insulators, the classification of leakage current waveforms measured in the clean-fog test has been investigated.
A back-propagation feed forward ANN has been used to categorize the leakage current into four classes, based on the magnitudes of the fundamental and harmonic components of the leakage current. Another network has been trained to classify the waveform as sinusoidal, nonlinear, or containing discharge. A feed forward back-propagation ANN with two layers has been employed successfully for the classification [4]. The classification of surface conditions for polymeric materials, assessed in the inclined plane test, has been studied using an ANN [5]. The process of identifying the surface condition of non-ceramic insulators was automated in this study. An ANN-based classifier was used to categorize the leakage current measured in the inclined plane test. A multilayer feed forward ANN with a back-propagation learning algorithm has been employed. Although the authors have mentioned that the classification was accurate, no comments have been made about the percentage error that was encountered during the study. Using monthly rainfall values, a feed forward ANN has been employed to predict flashover in 15 kV overhead lines [6]. Although the accuracy of the prediction was satisfactory in most cases, the error was more than 100% in certain cases. Also, ANNs have been used extensively in the area of partial discharge classification and identification. Evagorou et al. used a Probabilistic Neural Network (PNN) to identify different sources of partial discharge like corona discharge in air,


floating discharge in oil, and internal discharges [7]. Needle-shaped void samples, made from epoxy resin, were used to generate an electrical tree under ac voltage. The partial discharge patterns before and after the tree initiation were learned and identified by the neural network using the back-propagation method [8]. A method using the wavelet packet transform and a neural network has been used to separate PD pulses from corona in air, which enables more accurate detection of insulation breakdown in gas-insulated substations (GIS) [9]. In this paper, we estimate the values of different oil parameters using various configurations of neural networks for different distribution transformers. Unlike what has been reported in [3], the input data for the neural networks are the standard Megger tests between the transformer high voltage winding, low voltage winding and ground. A similar attempt has been carried out through the development of a polynomial regression modeling technique [10]. That modeling approach achieved a prediction accuracy of 93% for the interfacial tension, while the breakdown voltage could be predicted with an accuracy of 84%. On the other hand, the prediction accuracy was around 35% for the transformer oil water content.

2 ARTIFICIAL NEURAL NETWORKS
Artificial neural networks are a family of information processing techniques inspired by the way biological nervous systems process information. The fundamental concept of neural networks is the structure of the information processing system. Composed of a large number of highly interconnected processing elements, or neurons, a neural network uses the human-like technique of learning by example to solve problems. A neural network is often configured for a specific application, such as data classification or pattern recognition, through a learning process called training. Just as in biological systems, learning involves adjustments to the synaptic connections that exist between the neurons. Neural networks can differ in the way their neurons are connected, the specific kinds of computations their neurons perform, and the way they transmit patterns of activity throughout the network. Neural networks are being applied to an increasingly large number of real-world problems. Their primary advantage is that they can solve problems that are too complex for conventional technologies: problems that do not have an algorithmic solution, or for which an algorithmic solution is too complex to be defined.

2.1 MULTI-LAYER FEED FORWARD NEURAL NETWORKS
In this study, multi-layer feed forward neural networks are utilized. In a multi-layer feed forward neural network, the artificial neurons are arranged in layers, and all the neurons in each layer have connections to all the neurons in the next layer. Associated with each connection between these artificial neurons, a weight value is defined to represent the connection weight. Figure 1 shows the architecture of a multi-layer feed forward neural network with an input layer, an output layer, and one hidden layer. The operation of the network consists of a forward pass through the network.
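As a concrete illustration, the forward pass of such a fully connected feed-forward network can be sketched in a few lines of NumPy. The topology below (four inputs, one hidden layer of five neurons, four outputs) and the random weights are purely illustrative, not the tuned networks of this study; a sigmoid activation is used for each neuron.

```python
import numpy as np

def sigmoid(h):
    # S-shaped "squashing" function mapping any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-h))

def forward(x, weights, biases):
    """One forward pass through a fully connected feed-forward network.

    weights[l] has shape (n_out, n_in) and biases[l] has shape (n_out,):
    every neuron in one layer is connected to every neuron in the next.
    """
    y = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        y = sigmoid(b + W @ y)  # linear combination plus bias, then squashing
    return y

# Illustrative topology: 4 inputs, one hidden layer of 5 neurons, 4 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 4)), rng.normal(size=(4, 5))]
biases = [np.zeros(5), np.zeros(4)]
out = forward([0.2, 0.5, 0.3, 0.1], weights, biases)
```

Because the last layer is also a sigmoid, every output lands in (0, 1), which is why the training data must be scaled into that range.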



Figure 1. A multi-layer feed forward ANN with one hidden layer.

All problems that can be solved by a neural network can be solved with only one hidden layer, but it is sometimes more efficient to use two or more hidden layers. Each neuron of a layer other than the input layer first computes a linear combination of the outputs of the neurons of the previous layer, plus a bias. The coefficients of the linear combinations plus the biases are called the weights. Neurons in the hidden layer (called hidden neurons) then compute a non-linear function of their input. Generally, the non-linear function is the sigmoid function. A sigmoid function is an S-shaped "squashing function" that maps a real value, which may be arbitrarily large in magnitude, positive or negative, to a real value that lies within some narrow range. The result of this sigmoid function lies in the range 0-1. In the neural computation literature, the sigmoid is sometimes also referred to as the logistic function. To meet the requirements of this function, the original data need to be scaled into the range between 0 and 1. One pass over the entire training data set represents a training "epoch". The criterion of convergence in training is based on minimizing the mean squared error (MSE) to a level where a satisfactory agreement is found between the training set results and the network results. Once the network is considered to be trained, testing data are presented to it and outputs are compared with the experimental or observed results.

2.2 BACK-PROPAGATION LEARNING ALGORITHM
A number of learning rules are available to train neural networks. The Back-Propagation (BP) learning algorithm is used in this study to train the multi-layer feed-forward neural network. Signals are received at the input layer, pass through the hidden layer, and reach the output layer; the resulting error is then propagated back through the network for learning. The learning process primarily involves determining the connection weights and patterns of connections. The BP neural network approximates the non-linear relationship between the input and the output by adjusting the weight values internally instead of giving the function expression explicitly. Further, the BP neural network can generalize to inputs that are not included in the training patterns. The BP algorithm looks for the minimum of the error function in weight space using the method of gradient descent. The combination of weights that minimizes the error function is considered to be a solution to the learning problem. The algorithm can be described in the following steps [11]:

Once the input vector is presented to the input layer, the input to the hidden layer can be calculated as:

h_j^H = b_j + \sum_{i=1}^{N_I} w_{ji} x_i    (1)

where h indicates a hidden layer input, i and j are indices of different neurons in the network, H denotes the hidden layer, b is a bias, N_I is the size of the input vector, w is a weight, and x is an input element.

Each neuron of the hidden layer takes its input h_j^H, uses it as the argument of a function, and produces an output y_j^H given by:

y_j^H = f(h_j^H)    (2)

The input to the neurons of the output layer is then calculated as:

h_k^o = b_k + \sum_{j=1}^{N_H} w_{kj} y_j^H    (3)

where k and j are indices of the network, o denotes the output layer, N_H is the number of neurons in the hidden layer, and y is a hidden layer element. The network output is then given by:

y_k = f(h_k^o)    (4)

where f represents the activation function.

An error vector can be defined as the difference between the target output value and the network output:

e_k = t_k - y_k    (5)

Based on the error vector, the sum of squared errors can be calculated as:

\varepsilon = \frac{1}{2} \sum_{k=1}^{N_O} e_k^2    (6)

where N_O is the number of neurons in the output layer. This is the cost function to be minimized during the learning process. The sum-squared error \varepsilon is a function of all the variables of the network. Using the chain rule, the gradient of the error with respect to the weight matrix connecting the hidden layer to the output layer can be calculated as follows:

\frac{\partial \varepsilon}{\partial w_{kj}} = \frac{\partial \varepsilon}{\partial e_k} \frac{\partial e_k}{\partial y_k} \frac{\partial y_k}{\partial h_k^o} \frac{\partial h_k^o}{\partial w_{kj}}    (7)

Computing each term of this expression yields:

\frac{\partial \varepsilon}{\partial e_k} = e_k, \quad \frac{\partial e_k}{\partial y_k} = -1, \quad \frac{\partial y_k}{\partial h_k^o} = f_k'(h_k^o), \quad \frac{\partial h_k^o}{\partial w_{kj}} = y_j^H    (8)

Combining the expressions above results in:

\frac{\partial \varepsilon}{\partial w_{kj}} = -e_k f_k'(h_k^o) y_j^H    (9)

The correction \Delta w_{kj} applied to the weight matrix connecting the hidden layer to the output layer is:

\Delta w_{kj} = -\alpha \frac{\partial \varepsilon}{\partial w_{kj}}    (10)

where \alpha is a constant known as the step size or the learning rate. To update the weights connecting the input layer to the hidden layer, the procedure above is repeated:

\frac{\partial \varepsilon}{\partial w_{ji}} = \frac{\partial \varepsilon}{\partial e_k} \frac{\partial e_k}{\partial y_k} \frac{\partial y_k}{\partial h_k^o} \frac{\partial h_k^o}{\partial y_j^H} \frac{\partial y_j^H}{\partial h_j^H} \frac{\partial h_j^H}{\partial w_{ji}}    (11)

After calculating each of the terms above, the correction to the weight matrix is written as:

\Delta w_{ji} = -\alpha \delta_j x_i    (12)

where

\delta_j = f_j'(h_j^H) \sum_{k=1}^{N_O} \delta_k w_{kj}    (13)

Table 1. Conducted tests and their corresponding standards.
Test                                   ASTM standard   Accepted value for aged oil
Dielectric breakdown (kV) (min.)       D-877           26
Interfacial tension (dynes/cm) (min.)  D-971           24
Water content (ppm) (max.)             D-1583          35
Total acidity (mg KOH/g oil) (max.)    D-644           0.3
Transformer color                      D-1500          N/A

Table 2. Sample test results for Megger and transformer oil.
HV/LV (GΩ)   HV/G (GΩ)   LV/G (GΩ)   Total acidity (mg KOH/g oil)   Color   BD (kV)   Water content (ppm)   Interfacial tension (dynes/cm)
2.0          2.5         2.0         0.040                          3       44        14.35                 22.8
7.4          5.3         4.1         0.037                          2.5     57        12.06                 31.7
4.7          5.5         2.8         0.049                          1.5     60        14.50                 21.7
5.8          6.6         4.1         0.020                          1       52        11.45                 27.0
4.1          4.5         2.7         0.087                          1.5     49        22.69                 19.7
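The back-propagation steps above amount to one forward pass followed by one gradient-descent weight update. A minimal NumPy sketch for a single hidden layer with sigmoid activations, for which f'(h) = y(1 - y), might look as follows; the variable names mirror the symbols of the equations, and the topology is illustrative.

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def train_step(x, t, W_h, b_h, W_o, b_o, alpha=0.1):
    """One back-propagation update for a single-hidden-layer network.

    Mirrors eqs. (1)-(13): forward pass, output error, chain-rule gradient,
    and in-place weight corrections Delta_w = -alpha * d(eps)/dw.
    """
    # Forward pass, eqs. (1)-(4)
    h_H = b_h + W_h @ x       # hidden-layer input, eq. (1)
    y_H = sigmoid(h_H)        # hidden-layer output, eq. (2)
    h_o = b_o + W_o @ y_H     # output-layer input, eq. (3)
    y = sigmoid(h_o)          # network output, eq. (4)

    # Error terms; for the sigmoid, f'(h) = y * (1 - y)
    e = t - y                                        # error vector, eq. (5)
    delta_o = e * y * (1.0 - y)                      # from eqs. (8)-(9)
    delta_h = y_H * (1.0 - y_H) * (W_o.T @ delta_o)  # eq. (13)

    # Gradient-descent corrections, eqs. (10) and (12)
    W_o += alpha * np.outer(delta_o, y_H)
    b_o += alpha * delta_o
    W_h += alpha * np.outer(delta_h, x)
    b_h += alpha * delta_h
    return 0.5 * np.sum(e ** 2)                      # cost eps, eq. (6)
```

Repeating this step over the training set for a number of epochs drives the cost down, which is the MSE-based convergence criterion described in Section 2.1.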

3 MATERIALS AND METHODS

3.1 EXPERIMENTAL SETUP
Nearly forty different oil-filled distribution transformers have been used in this study. All transformers have been installed and operated in the Eastern province of Saudi Arabia. The ratings of the transformers range from 300-1000 kVA and their high voltage ratings range from 13.8-34.5 kV. The units were manufactured between 1983 and 1994 by different transformer manufacturers, and the oil used in all units is mineral oil. The units were taken from the distribution network for routine maintenance to evaluate their working conditions. The electrical insulation resistances between the high voltage and low voltage windings, between the high voltage winding and ground, and between the low voltage winding and ground were measured. A high voltage of 2 kV is applied and the insulation resistance is measured after 15 seconds. A schematic showing the connection of the Megger test to measure the electrical insulation between the high voltage winding and ground is shown in Figure 2. Also, an oil sample was taken from each transformer, and the tests listed in Table 1 have been conducted on each oil sample. Samples of the measured values for both the Megger test and the transformer oil are shown in Table 2.

Figure 2. Schematic diagram showing the measurement of electrical insulation resistance between high voltage winding and ground.

3.2 NEURAL NETWORKS SIMULATIONS
Different configurations of neural networks have been constructed in this study. First, a multilayer feed forward neural network with a back-propagation learning algorithm was implemented. This configuration is depicted in Figure 3. Subsequently, a cascade of these neural networks was deemed to be more promising, and four variations of a three stage pipeline were tested. The single network configuration takes four inputs and outputs four parameter values, while the cascade configurations have four neural networks, each with two or three inputs and a single output; the outputs from some networks are pipelined to be inputs of successive networks to produce the final values. The Megger test results discussed earlier, along with the color test, are used as inputs to the ANNs, and the other oil test results are used in the training process.

Figure 3. Configuration #1: Basic feed forward back-propagation ANN.

Figure 4 shows the first cascade configuration, which only utilizes the Megger test results as inputs. The purpose of the first stage of this configuration is to calculate the outputs that are known to have a high correlation with the voltage inputs; namely, these are the breakdown voltage and the interfacial tension [10]. The purpose of the second network is to predict the water content of the oil inside the transformer. The inputs to the second network are the outputs of the first stage of the cascade. The justification for using the breakdown voltage to derive the water content is the strong correlation between the transformer oil breakdown voltage and its water content [12]. The usage of the interfacial tension to derive the water content is apparent when considering that the value of the interfacial tension at the interface of the water and oil is


dependent on the water level inside the oil. The third stage of the cascade consists of a neural network for deriving the acidity of the oil based on the breakdown voltage and the water content. The color of the oil inside the transformer is not used in this first cascade setting.
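The data flow of this first cascade can be summarized in a short sketch. The net_* callables below are placeholders standing in for the four trained feed-forward ANNs; their names and the tuple-based interface are illustrative, not from the paper.

```python
# Placeholder predictors: each net_* stands for one trained feed-forward ANN
# of the kind described in Section 2; names and interfaces are illustrative.
def cascade_predict(hv_lv, hv_g, lv_g, net_bd, net_ift, net_water, net_acid):
    """Three-stage cascade with no oil-color input (Configuration #2).

    Stage 1 predicts the two parameters strongly correlated with the Megger
    readings; stages 2 and 3 are fed the predictions of the earlier stages.
    """
    megger = (hv_lv, hv_g, lv_g)    # the three insulation resistances
    bd = net_bd(megger)             # stage 1: breakdown voltage
    ift = net_ift(megger)           # stage 1: interfacial tension
    water = net_water((bd, ift))    # stage 2: water content
    acid = net_acid((bd, water))    # stage 3: total acidity
    return bd, ift, water, acid
```

Pipelining the stage-1 outputs in this way is what lets the later networks exploit the breakdown-voltage/water-content and interfacial-tension/water-content correlations noted above.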

Figure 7. Configuration #5: A cascade of ANNs and Oil Color is used to derive the Water Content and Acidity Level.

4 RESULTS AND DISCUSSIONS

Figure 4. Configuration #2: A cascade of ANNs with no oil color input.

The color of the transformer oil can be interpreted as an indication of transformer oil aging. The color of the transformer oil changes from a bright clear color (for new oil) to a dark one (for aged oil). According to ASTM D-1500, the transformer oil color is represented as a number that starts from 0.5 for bright new oil and ends at 4 for dark old oil. This color change is essentially due to the oxidization of oil in service which, consequently, leads to the formation of acidic products [13]. Also, the introduction of material pigments such as paint, varnish, and other similar materials may cause a color change. Furthermore, the age of the oil could have a potential impact on how much water can coexist with the oil. Figures 5-7 depict the variations of the cascade of ANNs that integrate the oil color as an input.

The analysis for each cascade configuration was carried out in two phases. First, for each configuration a simulation was run for the 324 possible topology parameter combinations consisting of the values specified in Table 3. Second, the topology parameter combination that yielded the best results was run an additional 10 times to see how good a value could be achieved. It should be noted that for the first phase of this analysis the order of selection of data for testing and training was randomized. This was done so that none of the 324 configurations would have an advantage over the others by being better at interpreting a fixed combination of data. The second phase, however, was conducted for a fixed set of training and testing data (hence the order was the same for all of the 10 runs). This was done so that it would be possible to "drill down" within a specific network configuration, for a specific dataset partitioning, to see how good the network can get. For all of the runs, a ratio of 2/3 of the available data was used for training and 1/3 was used for testing.

Table 3. Topology parameters used for simulation (values yielding 324 combinations).
Hidden Layers – Stage 1: 1, 2
Hidden Layers – Stages 2 & 3: 1, 2
Hidden Neurons – Stage 1: 3, 5, 10
Hidden Neurons – Stages 2 & 3: 3, 5, 10
Epochs – Stage 1: 500, 1000, 5000
Epochs – Stages 2 & 3: 500, 1000, 5000
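The phase-one sweep can be reproduced schematically: enumerating the Cartesian product of the Table 3 values gives exactly 2 × 2 × 3 × 3 × 3 × 3 = 324 topology combinations, and each run draws a randomized 2/3-1/3 train/test split. The sketch below assumes 39 transformer samples, matching the "nearly forty" units of the study; the dataset handling is illustrative.

```python
from itertools import product
from random import Random

# Parameter grid of Table 3; 2 * 2 * 3 * 3 * 3 * 3 = 324 combinations.
grid = {
    "hidden_layers_s1": [1, 2],
    "hidden_layers_s23": [1, 2],
    "hidden_neurons_s1": [3, 5, 10],
    "hidden_neurons_s23": [3, 5, 10],
    "epochs_s1": [500, 1000, 5000],
    "epochs_s23": [500, 1000, 5000],
}
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
assert len(combos) == 324

def split(samples, seed):
    """Randomized 2/3 training, 1/3 testing split used in phase one."""
    samples = list(samples)
    Random(seed).shuffle(samples)
    cut = (2 * len(samples)) // 3
    return samples[:cut], samples[cut:]

train, test = split(range(39), seed=1)  # ~40 transformers in the study
print(len(combos), len(train), len(test))  # → 324 26 13
```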

Figure 5. Configuration #3: Color is used to derive the Water Content.

As Table 3 demonstrates, each cascade configuration was essentially divided into two parts: stage 1, which used first-hand derivation (based on input values), and stages 2 & 3, which used the values produced by stage 1 for derivations. This was done in order to simplify the problem and also to limit the simulation time for the possible configurations. In the following subsections the results of each ANN configuration will be discussed.

4.1 CONFIGURATION NO. 1: BASIC FEED FORWARD BACK-PROPAGATION ANN

Figure 6. Configuration #4: A cascade of ANNs and Oil Color is used to derive the Acidity Level.

As previously discussed, a simple feed forward back-propagation ANN, as shown in Figure 3, has been implemented to forecast the transformer oil parameters. The configuration


tuned parameter values and results of the first ANN are shown in Tables 4 and 5, respectively. It can be noticed that the level of accuracy achieved using this configuration is acceptable for both the breakdown voltage and the interfacial tension. These results are comparable with the ones reported in [10]. This confirms the observation previously mentioned that there is a strong correlation between the transformer insulation resistance measurement and both the transformer oil breakdown voltage and interfacial tension. On the other hand, and similarly to what has been reported in [10], a very weak correlation between transformer insulation resistance and water content exists, as depicted in Table 5. Water in the oil can be a product of oxidation or insulation paper degradation [12]. The water in a paper/oil insulation system is always in a state of temperature-dependent equilibrium. This means that moisture moves from the insulation paper to the oil as the temperature increases, and vice versa. The electrical insulation resistance measurement will be affected by the water content in the transformer oil and paper. However, since the oil was sampled at different ambient temperatures, this may result in different percentages of water in the transformer oil/paper, which is reflected in the weak correlation between the transformer oil water content and the insulation resistance measurement. Oil acidity has a moderate correlation with the electrical insulation measurement. Oxidation of the insulation and oil forms acids as the transformer ages. Acid attacks the cellulose paper and accelerates its aging process, and hence contributes to the reduction in insulation resistance.

Table 4. Network parameters used for configuration No. 1 (values yielding the simulated combinations).
Hidden Layers: 1, 2
Hidden Neurons: 3, 5, 10
Epochs: 500, 1000, 5000

Table 5. Transformer oil prediction accuracy results for Configuration No. 1.
Breakdown Voltage: 84%
Interfacial Tension: 94%
Water Content in Oil: 34%
Oil Acidity: 62%
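The paper reports these per-parameter accuracies without defining the metric. One conventional choice for regression, assumed here purely for illustration, is 100 × (1 − mean absolute relative error) over the test set:

```python
def prediction_accuracy(actual, predicted):
    """Accuracy as 100 * (1 - mean absolute relative error).

    An assumed metric: the paper reports percentage accuracies without
    defining them; this is one conventional definition for regression.
    """
    errors = [abs(a - p) / abs(a) for a, p in zip(actual, predicted)]
    return 100.0 * (1.0 - sum(errors) / len(errors))

# Illustrative breakdown-voltage values (kV) against hypothetical predictions.
print(round(prediction_accuracy([44, 57, 60], [40, 60, 57]), 1))  # → 93.5
```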

4.2 CONFIGURATION NO. 2: A CASCADE OF ANNS WITH NO OIL COLOR INPUT
It is apparent from the previous discussion that the predictions of both breakdown voltage and interfacial tension are accurate enough to be used instead of measuring their values. However, the predictions of both oil water content and acidity need to be improved. In the cascade configuration, the predicted breakdown voltage and interfacial tension values are both used as inputs to predict the other two oil parameters, i.e. water content and acidity. In this configuration, oil color was not used as an input to predict any of the transformer oil parameters. Tables 6 and 7 show the optimal network parameters and the predicted values of transformer oil, respectively. It is evident from Tables 5 and 7 that using the cascade configuration increased the accuracy of predicting the transformer oil water content from 34.3% to 51.4%. Such improvement


could be attributed to the fact that the breakdown voltage and the interfacial tension are both used as inputs to the second stage ANN. The breakdown voltage is sensitive to the presence of physical contaminants and water in the oil, and hence it is correlated with the water content. Also, the interfacial tension is a measure of the strength of the interface between oil and water, and hence it is correlated with the transformer oil water content. Conversely, little improvement in the total acidity prediction has been achieved, which reflects a weaker correlation between transformer total acidity and the oil breakdown voltage and interfacial tension.

Table 6. Optimal network parameters used for configuration No. 2.
Hidden Layers – Stage 1: 2
Hidden Layers – Stages 2 & 3: 2
Hidden Neurons – Stage 1: 5
Hidden Neurons – Stages 2 & 3: 10
Epochs – Stage 1: 500
Epochs – Stages 2 & 3: 1000

Table 7. Transformer oil prediction accuracy results for Configuration No. 2.
Breakdown Voltage: 84%
Interfacial Tension: 94%
Water Content in Oil: 51%
Oil Acidity: 70%

4.3 CONFIGURATIONS NOS. 3, 4 AND 5: A CASCADE OF ANNS WITH COLOR AS AN INPUT
Color as an input to the ANN is implemented in configurations Nos. 3-5, as shown in Figures 5-7. The objective of this step is to examine the effect of including the transformer oil color as an input on the accuracy of predicting the water content and total acidity. Tables 8 and 9 show the optimal network parameters and the predicted values of transformer oil for configurations Nos. 3-5. It is evident from Table 9 that including the transformer color contributed slightly to the improvement in transformer oil acidity prediction accuracy, and the best accuracy level achieved is around 75%. The measured transformer oil acidity values for all transformers are shown in Figure 8-a. All tested oil samples are either well below the acceptable limit (for most cases) or much higher than the acceptable limit (for one case only). The ultimate goal of measuring the transformer oil parameters is to investigate whether the transformer parameters exceed the allowed limits or not. So having 75% accuracy in predicting the transformer oil acidity will be sufficient, and there is no need to conduct the measurement. However, the level of accuracy in predicting water content is still poor compared to the other transformer oil parameters. Also, it can be noticed from Figure 8-b that the water content of several oil samples is close to the maximum allowed limit. Having an oil water content accuracy of only 56% makes it unacceptable to use the prediction as an alternative to the water content measurement. This low level of accuracy could be improved by providing more samples for better ANN training.


Table 8. Optimal network parameters used for configurations Nos. 3, 4 and 5.
                                 Config. No. 3   Config. No. 4   Config. No. 5
Hidden Layers – Stage 1:               1               1               1
Hidden Layers – Stages 2 & 3:          1               1               2
Hidden Neurons – Stage 1:              3               3              10
Hidden Neurons – Stages 2 & 3:         3               5              10
Epochs – Stage 1:                    500             500            1000
Epochs – Stages 2 & 3:              5000            1000            1000

Table 9. Transformer oil prediction accuracy results for Configurations Nos. 3, 4 and 5.
                          Config. No. 3   Config. No. 4   Config. No. 5
Breakdown Voltage:              83%             84%             84%
Interfacial Tension:            94%             94%             95%
Water Content in Oil:           53%             53%             56%
Oil Acidity:                    70%             74%             75%

Figure 8. Transformer oil testing values for all transformers; (a) total acidity number, (b) water content.

5 CONCLUSION
The values of oil breakdown voltage, interfacial tension, acidity and water content were predicted using various configurations of the artificial neural networks technique. The values of the transformer Megger tests along with the transformer oil color have been used as the inputs to the neural networks. Despite the fact that the number of samples was relatively low, the results revealed that there is a strong correlation between the transformer electrical insulation and both the transformer oil breakdown and interfacial tension. The prediction of these oil parameters' statuses was performed using different configurations of artificial neural networks. First, a multilayer feed forward neural network with a back-propagation learning algorithm was implemented. Then, a cascade of these neural networks was deemed to be more promising, and four variations of a three stage cascade were tested. The first configuration takes four inputs and outputs four parameter values, while the other configurations have four neural networks, each with two or three inputs and a single output; the outputs from some networks were pipelined to others to produce the final values. Both configurations were evaluated using real-world training and testing data, and the accuracy was considered over a variety of hidden layer and hidden neuron arrangements. The results indicated that, even with a lack of sufficient data to train the network, accuracy levels of 83.96% for breakdown voltage, 94.60% for interfacial tension, 56.35% for water content, and 75.38% for oil acidity predictions were obtained by the cascade of neural networks.

ACKNOWLEDGMENT
The authors would like to thank the Saudi Transformer Co., located in Dammam, Saudi Arabia, for providing the test results used in this study.

REFERENCES
[1] B. H. Ward, "A Survey of New Techniques in Insulation Monitoring of Power Transformers", IEEE Electr. Insul. Mag., Vol. 17, No. 3, pp. 16-23, 2001.
[2] S. A. Ward, "Evaluating transformer condition using DGA oil analysis", IEEE Conf. Electr. Insul. Dielectr. Phenomena (CEIDP), pp. 463-468, 2003.
[3] M. A. Wahab, M. M. Hamada, A. G. Zeitoun and G. Ismail, "Novel modeling for the prediction of aged transformer oil characteristics", Electric Power System Research, Vol. 51, pp. 61-70, 1999.
[4] M. A. R. M. Fernando and S. M. Gubanski, "Leakage current patterns on contaminated polymeric surfaces", IEEE Trans. Dielectr. Electr. Insul., Vol. 6, pp. 688-694, 1999.
[5] R. Sarathi and S. Chandrasekar, "Diagnostic study of the surface condition of the insulation structure using wavelet transform and neural networks", Electric Power System Research, No. 68, pp. 137-147, 2004.
[6] A. D. Tsanakas, G. I. Papaefthimiou and D. P. Agoris, "Pollution flashover fault analysis and forecasting using neural network", CIGRE, Paper 15-105, 2002.
[7] D. Evagorou, A. Kyprianou, P. Lewin, A. Stavrou, V. Efthymiou and G. E. Georghiou, "Classification of Partial Discharge Signals using Probabilistic Neural Network", IEEE Intern. Conf. Solid Dielectrics, Winchester, UK, pp. 609-612, 2007.
[8] N. Hozumi, T. Okamoto and T. Imajo, "Discrimination of Partial Discharge Patterns Using a Neural Network", IEEE Trans. Electr. Insul., Vol. 27, pp. 550-556, 1992.
[9] C. Chang, J. Jin, C. Chang, T. Hoshino, M. Hanai and N. Kobayashi, "Separation of Corona Using Wavelet Packet Transform and Neural Network for Detection of Partial Discharge in Gas-Insulated Substations", IEEE Trans. Power Delivery, Vol. 20, pp. 1363-1369, April 2005.
[10] K. Assaleh and A. El-Hag, "Estimating Transformer Oil Parameters Using Polynomial Networks", Intern. Conf. Condition Monitoring and Diagnosis, Beijing, China, pp. 1335-1338, 2008.
[11] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, New Jersey, pp.
161-175, 1999. [12] NYNAS, Transformer oil handbook, June 2004. [13] S. Abdi, A. Boubakeur, and A. Haddad, “Influence of thermal ageing on transformer oil properties”, IEEE Intern. Conf. Dielectric Liquids, pp. 14, 2008.


IEEE Transactions on Dielectrics and Electrical Insulation, Vol. 16, No. 2; April 2009

Khaled Bashir Shaban received the Ph.D. degree in 2006 from the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada. His research experience in academic and industrial institutions covers a variety of domains in intelligent systems design and application. He is currently an Assistant Professor in the Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar.

Ayman H. El-Hag (S'99-M'04-SM'08) received the B.S. and M.S. degrees from King Fahd University of Petroleum and Minerals and the Ph.D. degree from the University of Waterloo in 1993, 1998, and 2003, respectively. He worked at the Saudi Transformer Co. as a Quality Control Engineer from 1993 to 1999. From January to June 2004, Dr. El-Hag was a Postdoctoral Fellow at the University of Waterloo; he then held a Postdoctoral Fellowship at the University of Toronto from 2004 to 2006. He is currently an Assistant Professor in the Electrical Engineering Department at the American University of Sharjah. Dr. El-Hag's main areas of interest are condition monitoring and diagnostics of electrical insulation and pulsed power applications in biological systems.

Andrei Matveev graduated from the University of Waterloo in Waterloo, Ontario, Canada with a B.A.Sc. degree in the Honours Computer Engineering Co-op program. He is currently employed at Google Inc. in New York, NY, USA as a Software Engineer in Test. His responsibilities at Google include developing testing frameworks for distributed software systems, as well as amending product designs with the goal of increasing testability. His other interests include researching how best to solve real-world problems with established artificial intelligence techniques.

