Modelling of Gas Liquid Separation through Stacked Neural Network

Dr. Nadeem Qazi (Corresponding Author)
Intelligent System Research Centre, University of Ulster, Londonderry, UK
Address: 10 Ernest Street, Londonderry, UK BT48 0HB
Email: [email protected]

Professor Dr. Hoi Yeung
Department of Process and System Engineering, Cranfield University, Bedfordshire, MK43 0AL, UK

Modelling of Gas Liquid Separation through Stacked Neural Network

Nadeem Qazi 1, Hoi Yeung 2
1 University of Ulster, UK; 2 Cranfield University, UK

Abstract. In recent years, a few mechanistic models have been proposed for design optimization and performance prediction of reverse-flow cyclonic separators. However, little literature is available on the design and performance evaluation of axial-flow compact separators. The work presented in this paper demonstrates the use of a stacked neural network to model the nonlinear process of gas-liquid separation through a novel axial-flow compact separator (referred to as the I-SEP in this study). Several neural networks were trained with standard back-propagation training algorithms to predict the separation efficiency of the I-SEP; these base models were then combined to form a single composite model. Principal component regression techniques were used to estimate the weights of the participant trained neural networks in the composite model. It is shown in this work that the combination of trained neural networks improves the prediction accuracy of gas-liquid separation compared to the individual participant neural network models. The stacked neural network model produced satisfactory predictions on unseen experimental data and may help operators control the separator effectively by predicting the separation efficiency under changing inlet operating conditions.

Keywords: Gas liquid cyclone, separation efficiency, Principal Component Regression, Generalization, Stacked Neural Network, Non-Linear modeling

1. Introduction

The use of bulky gravity separators for multiphase separation is common in the oil and gas industry; however, the industry is exploring more economically viable options to tackle the issues of capital and operational cost, equipment weight and space utilization associated with them. One such option is the use of compact separators with acceptable separation efficiency. In recent years, a few mechanistic models have been proposed for design optimization and performance prediction of reverse-flow cyclonic separators (Movafaghian et al., 2000).
However, little literature is available on the design and performance evaluation of the axial-flow compact separator (I-SEP) used in this study. Being able to accurately predict the efficiency of the I-SEP at different inlet operating conditions is a major requirement in order to size and design the system for a particular application. However, the existing models of gas-liquid separators are not applicable to this separator due to its special design. The artificial neural network is a relatively new prediction and forecasting tool that has been used successfully in solving problems related to empirical modeling, classification, pattern recognition and automatic control. Its applications cover research in aerospace (Kim & Calise, 2008), banking (Adewole Adetunji Philip et al., 2011), telecommunications (Klevecka, 2010), image segmentation (Sanggil Kang, 2009), image enhancement (Yinghua Li et al., 2010) and image compression (Dinesh, 2007). More recently in robotics, (Hachour Ouarda, 2010) proposed a neural-network-based navigation scheme for intelligent autonomous mobile robots. Following this trend, the artificial neural network is now gradually taking its place as a modeling tool for problems in the oil and gas industry. The use of self-organizing maps in multiphase flow metering (Blaney & Yeung, 2007), the prediction of porosity and permeability in gas reservoirs (Olson, 1998), the prediction of PVT properties for crude oil systems (Gharbi et al., 1999), zone identification in complex reservoirs (White et al., 1995), two-phase flow pattern prediction (Hemant B. et al., 2013) and, most recently, gas/solid two-phase flow regime identification (Hu et al., 2011) are a few examples in this domain. In this paper, we present an ensemble model consisting of several trained neural networks to predict the gas-liquid separation efficiency of an axial-flow cyclonic separator at varying inlet conditions. It is shown that a combination of trained neural networks can be used to improve the accuracy of an empirical model. The paper is divided into six sections.
The compact separator (I-SEP) used in this study, along with the process rig and a brief description of the conducted experiments, is discussed in Section 2. Section 3 discusses in detail the proposed neural network modeling for estimating the nonlinear process of gas-liquid separation through the compact axial-flow cyclone. A technique to improve the generalization error of the neural network is also presented in this section. The linear regression technique for combining the several neural networks to improve accuracy is presented in the fourth section, followed by results and conclusions in the final sections of the paper.

2. The Compact Separator Rig

The "I-SEP", as shown in Figure 1, is the name given to a novel axial-flow cyclonic separator patented by its inventors, Caltec Ltd., UK. It is suitable for a wide range of gas-liquid, liquid-liquid and solid-liquid separation applications, and presently it is mainly targeted at the offshore oil and gas industry. The fluids enter the I-SEP through an involute inlet path where they are made to spin, producing high 'g' forces, and progress up to a compact separating chamber. In the separation chamber, gas-liquid separation takes place: the heavier fluid moves radially outwards through the tangential outlet, while the lighter fluid moves inwards and axially upward towards another outlet, also known as the overflow. The separation efficiency of the I-SEP may be defined as the ratio of the mass flow rates of the lighter and heavier phases at the axial and tangential outlets, respectively, to the total input mass of both liquid and gas. The separation efficiency, and hence the performance of the separator, is thus measured by the proportion of liquid coming out along with the gas through the axial outlet, commonly called the liquid carry-over (LCO), and the proportion of gas coming out along with the liquid through the tangential outlet, commonly called the gas carry-under (GCU). Therefore, GCU and LCO can be regarded as the main performance-measuring parameters, along with the pressure drop between the inlet and the axial outlet.

Figure 1. A novel axial-flow cyclonic separator (I-Sep)
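The LCO and GCU definitions above can be written as a small calculation. This is only an illustrative sketch: it assumes, following the text, that LCO is the fraction of the inlet liquid leaving with the gas at the axial outlet and GCU the fraction of the inlet gas leaving with the liquid at the tangential outlet; the function and variable names are hypothetical.

```python
def separation_performance(liq_in, gas_in, liq_axial, gas_tangential):
    """Illustrative cyclone performance metrics (hypothetical helper).

    Assumes LCO = fraction of inlet liquid escaping through the axial
    (gas) outlet, and GCU = fraction of inlet gas escaping through the
    tangential (liquid) outlet. Arguments are mass flow rates in any
    consistent units.
    """
    lco = liq_axial / liq_in          # liquid carry-over
    gcu = gas_tangential / gas_in     # gas carry-under
    return lco, gcu

# Example: 2% of the liquid escapes axially, 5% of the gas tangentially
lco, gcu = separation_performance(liq_in=10.0, gas_in=2.0,
                                  liq_axial=0.2, gas_tangential=0.1)
print(round(lco, 4), round(gcu, 4))  # 0.02 0.05
```

A lower LCO and GCU thus correspond directly to a better-performing separator.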

The I-SEP used in this study was connected serially on its axial and tangential ends with two gravity separators, as shown in Figure 2. Water was pumped from a water tank to the rig; its flow rate was adjusted using a manual valve. After the gas and liquid inlet streams were metered separately by V-cone flow meters, they were commingled to form a gas-liquid (G-L) mixture. After passing through the I-SEP, this mixture was separated into a liquid-rich stream and a gas-rich stream at the tangential and axial outlets of the I-SEP, respectively. The gas and liquid flow rates were measured by the single-phase liquid and gas flow meters connected at the outlets of both gravity separators. The experiments, covering mainly slug flow conditions, were performed at room temperature in two sets of constant gas volume fractions and constant mixture velocities, with the gas volume fraction (GVF) between 80 and 95% and the mixture velocity between 5 m/s and 60 m/s.

Figure 2. I-SEP connected with gravity separators in the two-phase flow rig

A data acquisition system (DAS) using LABVIEW was developed to acquire the data from the two-phase flow rig. The acquired data consisted of pressure measurements at different locations of the rig, and gas and liquid flow rates at the inlets and outlets of the I-SEP and the two gravity separators. The separation performance parameters, such as LCO, GCU, the pressure at the tangential outlet (Pt) and the pressure at the axial outlet (Pa), were then calculated from the acquired data. However, these performance parameters were found to have

a nonlinear relationship with the inlet operating conditions, i.e. mixture velocity, GVF, inlet pressure, etc. Neural networks, because of their ability to model nonlinearity, were therefore used in this study to model the nonlinear relationship between the gas-liquid separation efficiency and the inlet conditions.

3. Neural Network Modelling

The first step in modeling the separation efficiency of the I-SEP is the selection of appropriate input and output variables. The gas volume fraction, mixture velocity and inlet pressure Pin were taken as the inputs to train several neural networks, whereas the GCU, LCO, and the pressures at the tangential and axial outlets of the I-SEP, represented by Pt and Pa respectively, were taken as the outputs of these models.

The modeling process for gas-liquid separation through multiple artificial neural networks is shown in Figure 3 and discussed in detail in the next section.

3.1. Artificial Neural Networks

Artificial neural networks are mathematical models that can be trained to perform a specific task based on available experiential knowledge. An artificial neural network model usually consists of several layers of neurons, as shown in Figure 4. Neurons are linked by weighted connections, and a linear or nonlinear algebraic function is used to produce the output of a single neuron. When a pattern is presented to the network, the weights and biases are adjusted to obtain the desired output; this process is called training the neural network. After a satisfactory level of training, a trained neural network can be used on unseen data for pattern classification or prediction.

Figure 3. Model flow diagram for gas liquid separation using artificial neural network

In this study we selected the feed-forward neural network architecture for training, as this type of architecture can approximate any function with a finite number of discontinuities (Sandhya, 2007). This type of network consists of input, hidden and output layers. Each layer receives input from the previous layer, computes the product of the weights and the given inputs, adds a bias, and forwards the result to the next layer through a transfer function. The general equation for the feed-forward neuron is given by equation (1) (Martin, 1996).

๐‘Ž๐‘–+1 = ๐‘“ ๐‘–+1 (๐‘Š ๐‘–+1 ๐‘Ž1 + ๐‘ ๐‘–+1 ), ๐‘– = 0,1,2 โ€ฆ . ๐‘š โˆ’ 1

(1)

Where ๐‘š is the number of layers in the network, ๐‘“ ๐‘–+1 are the transfer functions, ๐‘Š ๐‘–+1 are neuron weights and ๐‘ ๐‘–+1 are neuron biases and ๐‘Ž๐‘–+1 is the neuron output at ith layer.

The architecture of the feed-forward neural network used in this study consists of one input layer, one hidden layer with the log-sigmoid transfer function f^{1}(n) = 1/(1 + e^{-n}), and one output layer with the purelin transfer function f^{2}(n) = n.
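A forward pass through this architecture, following equation (1), can be sketched in a few lines. This is a pure-Python illustration with made-up weights; the study itself used the MATLAB toolbox, and the layer sizes here are only a toy example.

```python
import math

def logsig(n):
    # hidden-layer transfer function: f1(n) = 1 / (1 + e^-n)
    return 1.0 / (1.0 + math.exp(-n))

def forward(x, W1, b1, W2, b2):
    """One forward pass: hidden layer logsig, output layer purelin.
    Weights/biases are plain nested lists (illustrative values only)."""
    a1 = [logsig(sum(w * xi for w, xi in zip(row, x)) + b)
          for row, b in zip(W1, b1)]               # a1 = f1(W1*a0 + b1)
    a2 = [sum(w * ai for w, ai in zip(row, a1)) + b
          for row, b in zip(W2, b2)]               # a2 = W2*a1 + b2 (purelin)
    return a2

# Toy example: 3 inputs (GVF, Vmix, Pin), 2 hidden neurons, 1 output
x  = [0.9, 0.5, 0.3]
W1 = [[0.2, -0.1, 0.4], [-0.3, 0.5, 0.1]]
b1 = [0.1, -0.2]
W2 = [[1.0, -1.0]]
b2 = [0.05]
out = forward(x, W1, b1, W2, b2)
print(out)
```

In the actual model the output layer has four neurons (GCU, LCO, Pa and Pt) and the hidden layer twenty, as determined in the next subsection.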

3.2. Determining the Number of Hidden Neurons in the Hidden Layer

The number of neurons in the hidden layer is proportional to the complexity of the relationship between input and output, and choosing it is one of the critical tasks in determining the neural network architecture for a given problem. It was tackled here by following the constructive algorithm approach: the smallest possible network, with one hidden neuron, was used at the start of training, and the number of neurons in the hidden layer was then increased to improve performance. The number of neurons was varied from 1 to 50 but was settled at 20 hidden neurons, as beyond this number the validation error and effective parameters did not change much, as shown in Figure 5.

Figure 4. An artificial neural network model

Figure 5. Relationship between hidden neurons and effective parameters
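The constructive search described above can be sketched as a simple stopping rule. In this sketch `val_error` is a hypothetical stand-in for a full train-and-validate step, and the error curve below is purely synthetic (flattening near 20 neurons, roughly as Figure 5 does); real errors would come from training the networks.

```python
def choose_hidden_neurons(val_error, tol=1e-3, max_h=50):
    """Constructive-approach sketch: grow the hidden layer from 1 neuron
    and stop once the validation error stops improving by more than tol.
    `val_error(h)` is a hypothetical helper that trains a network with h
    hidden neurons and returns its validation error."""
    best = val_error(1)
    for h in range(2, max_h + 1):
        err = val_error(h)
        if best - err < tol:      # no meaningful improvement: stop growing
            return h - 1
        best = err
    return max_h

# Synthetic validation-error curve that flattens out at h = 20
toy_curve = lambda h: 1.0 / min(h, 20)
print(choose_hidden_neurons(toy_curve))  # 20
```

With a real training loop one would also average several random restarts per size, since a single run's validation error is noisy.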

3.3. Training the Neural Network

The whole process of training the feed-forward neural network to predict the separation efficiency was accomplished in several stages, starting from feeding the input patterns to the network, followed by weight and bias initialization, propagation of the error, and finally adjustment of the weights and biases to minimize this error. These stages are discussed in detail in the next section.

3.3.1. Stage 1: Data preprocessing and handling generalization

A total of 241 data sets were collected during the experiments. The experimental data were pre-processed before being passed to a neural network in order to remove noise and outliers. The data were then scaled using Z-score normalization, followed by principal component analysis to remove multicollinearity and redundant data. Overfitting is one of the common problems that occur during the training of a neural network: an overtrained network shows a large error on unseen data. One solution is to reduce the size of the network, but it is hard to know the optimal size of the network beforehand. The critical issue in developing a neural network is generalization, i.e. how well the network will make predictions on unseen data. Early stopping and Bayesian regularization are the two most common techniques used to overcome this problem. The early stopping technique requires training, validation and testing data sets; it continuously monitors the validation error and stops the training if the validation error begins to rise. Bayesian regularization, on the other hand, modifies the performance function. This research work, however, followed a relatively new approach by combining Bayesian regularization with early stopping to handle generalization. The early stopping technique was implemented by dividing the data into training, validation and testing sets: one half of the data was set aside for training, and the remaining half was divided between the validation and testing data sets.
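The Z-score scaling and the half/quarter/quarter split described above can be sketched as follows; this is an illustrative pure-Python version (the study used MATLAB preprocessing), and the exact partitioning of odd-sized sets is an assumption.

```python
def zscore(column):
    # Z-score normalization of one input variable: (x - mean) / std
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

def split_sets(samples):
    """Early-stopping split as described in the text: half for training,
    the remaining half divided between validation and testing."""
    half = len(samples) // 2
    train, rest = samples[:half], samples[half:]
    mid = len(rest) // 2
    return train, rest[:mid], rest[mid:]

gvf = [80, 85, 90, 95]
print(zscore(gvf))                               # zero-mean, unit-variance
train, val, test = split_sets(list(range(241)))  # the 241 experimental points
print(len(train), len(val), len(test))           # 120 60 61
```

In practice the split should be randomized (or stratified over GVF and mixture velocity) so that each subset covers the whole operating envelope.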

3.3.2. Stage 2: Feeding input data to the neural network

The training data set, consisting of the selected input variable triplets q = (GVF, Vmix, Pin), representing the gas volume fraction, inlet mixture velocity and inlet pressure at the I-SEP respectively, was presented as the external input to the network during the training phase of the model development:

q = \begin{bmatrix} GVF_1 & GVF_2 & \dots & GVF_n \\ V_{mix,1} & V_{mix,2} & \dots & V_{mix,n} \\ P_{in,1} & P_{in,2} & \dots & P_{in,n} \end{bmatrix}

3.3.3. Stage 3: Weight initialization

The weights and biases at the start of the training were initialized using the Nguyen-Widrow method. Following this method, the input vectors were normalized within the range of +1 and -1. The biases were initialized with random values in the range of the scale factor F = 0.7 h^{1/i} / R, where i is the number of inputs, h is the number of hidden neurons, and R is the range of the input vector. Finally, the weights of the network were initialized with random values between -0.5 and +0.5 times the scaling factor F.

3.3.4. Stage 4: Weight adjustment through back propagation

After the initialization of the weights, the network output (a) was computed using equation (1) and compared with the desired output. Following the back-propagation training algorithm, the error between the target output (t) and the actual network output (a) was propagated back from the output nodes to the inner nodes, with the weight and bias of every neuron j in the output layer updated at every kth iteration using equations (2) and (3) respectively:

SE = \frac{1}{2} \sum_{j=1}^{N} (a_j - t_j)^2    (2)

W_{k+1} = m W_k + \alpha_k g_k    (3)

Where; ๐‘Š is weight vector, ๐‘š is momentum constant, โˆ is the learning rate and ๐‘” represents the error gradient. It should be worthwhile to mention here that back propagation training algorithm since adopts gradient descent algorithm for weight adjustment hence converges slowly and may be trapped in local minima. Therefore both heuristic and standard numerical optimization techniques were used to speed up the back propagation training algorithm. The heuristic category improves the performance by adaptively changing the optimal learning rate during the training process thus avoiding local minima. Gradient descent with adaptive learning rate, gradient descent with momentum, gradient descent with momentum and adaptive learning rate, and the resilient algorithm are commonly used approaches for this purpose. This study used an adaptive learning rate of 0.03 along with a momentum constant of 0.7 under the MATLAB training function traingdx. The training was stopped when the performance was minimized to set goal of 0.01. The numerical optimization approach to improve the convergence performance includes conjugate gradient, quasi-Newton, and Levenberg-Marquardt (LM) algorithm. Conjugate gradient algorithms convergence faster than heuristic methods, however, slower than QuasiNewton as later updates an approximate Hessian matrix rather than calculating a second order derivate at each iteration. The LM method combines the best features of the Gauss-Newton technique and the steepest-descent method through the Jacobian matrix that requires less

computation than the Hessian matrix. Therefore for the purpose improve convergence performance both Levenberg-Marquardt (LM) and Conjugate gradient was used to estimate the weight of network with equation (4) and equation (5) respectively.

๐‘Š๐‘˜+1 = ๐‘Š๐‘˜ โˆ’ [๐ฝ๐‘‡ ๐ฝ + ๐‘ข๐ผ]โˆ’1 ๐ฝ๐‘‡ ๐‘’

(4)

Where ๐ฝ is the Jacobian matrix that contains first derivatives of the network errors with respect to the weights and biases, and ๐‘’ is a vector of network errors.

๐‘Š๐‘˜+1 = ๐‘Š๐‘˜ +โˆ ๐‘ƒ๐‘˜

(5)

Where ๐‘ƒ๐‘˜ is the conjugate and was determine using the fletehcer method.

3.3.5. Stage 5: Testing the neural network on unseen data

All three training algorithms mentioned above were implemented using the MATLAB toolbox. The testing data set was then used to evaluate the performance of the trained models, using the average absolute percentage error (AAPE):

AAPE = \frac{|experimental\ value - predicted\ value|}{Range} \times 100    (6)
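Equation (6) can be computed directly; the sketch below averages the range-normalized absolute errors over a data set (the averaging over samples is implicit in equation (6) as written, and the data values here are made up).

```python
def aape(experimental, predicted):
    """Average absolute percentage error per equation (6): the absolute
    prediction error normalized by the range of the measured values,
    expressed in percent and averaged over the data set."""
    rng = max(experimental) - min(experimental)
    errors = [abs(e - p) / rng * 100.0
              for e, p in zip(experimental, predicted)]
    return sum(errors) / len(errors)

measured  = [2.0, 4.0, 6.0, 8.0]   # illustrative values only
predicted = [2.3, 3.7, 6.0, 8.6]
print(round(aape(measured, predicted), 2))  # 5.0
```

Normalizing by the range (rather than by each measured value) keeps the metric stable when individual measurements, such as GCU near zero, are very small.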

4. Combining Neural Networks through Linear Regression

The performance comparison of the individual neural networks based on AAPE%, shown in Table 1, revealed that no single candidate neural network was able to extract all the relevant information from the data for all of the outputs. It is, however, possible that aggregating these networks can improve the prediction accuracy of the model. The combination of neural networks is based on the idea that different networks capture different aspects of the process behavior, and their aggregation would reduce the uncertainty and provide a more robust and accurate model. The overall output of the stacked neural network model F(x) is determined as a weighted combination of the outputs of the individual neural networks, i.e. F(x) = \sum_i w_i f_i(x), where f_i(x) represents the ith trained neural network, as shown in Figure 6.

Figure 6. Combination of neural network models

Let Y = [y_1 y_2 y_3] represent the output matrix consisting of the predicted output vectors of the three trained neural networks. The predicted output of the stacked (combined) neural network, Y_c, can then be represented by the following equation:

๐‘Œ๐‘ = wY = w1 y1 + w2 y2 + w3 y3 The determination of the weights (๐‘ค1 , ๐‘ค2 , ๐‘ค3 )for individual networks can be determined using more than one ways, of which the simplest one is the equal weight technique in which each participant neural network is weighted equally given by wi ๏€ฝ

1 where N is the total number N

of the participant neural network in the combined model. (AHMAD Zainal , 2006) has used combination of neural network to model the non linear relationship between the inlet water flow rate and water level in the tank using the equal weight technique. The stacked neural network

model in this research, however, used a linear regression estimation technique to calculate the weights of the individual neural networks. The equation used for the determination of the weight vector is given below.

๐‘ค = (๐‘Œ ๐‘ก ๐‘Œ)โˆ’1 ๐‘Œ ๐‘ก ๐‘ฆ

(7)

where Y is the output matrix of the individual networks and y is the measured output vector.
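Equation (7) is an ordinary least-squares solve of the normal equations. The sketch below does this in pure Python for three base networks with synthetic outputs; the principal-component step the paper applies before the regression is omitted, and the data are fabricated purely to show that the blend weights are recovered.

```python
def stack_weights(Y, y):
    """Solve w = (Y^T Y)^-1 Y^T y (equation (7)) via the normal
    equations and Gaussian elimination. Y: one row per sample, one
    column per base network's prediction; y: measured outputs."""
    k = len(Y[0])
    # Normal equations: A w = b with A = Y^T Y, b = Y^T y
    A = [[sum(r[i] * r[j] for r in Y) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(Y, y)) for i in range(k)]
    # Forward elimination (no pivoting; fine for this small example)
    for col in range(k):
        for j in range(col + 1, k):
            f = A[j][col] / A[col][col]
            A[j] = [a - f * c for a, c in zip(A[j], A[col])]
            b[j] -= f * b[col]
    # Back-substitution
    w = [0.0] * k
    for i in reversed(range(k)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, k))) / A[i][i]
    return w

# Three base networks; the "measured" output is a 0.5/0.3/0.2 blend of them
Y = [[1.0, 2.0, 0.0], [2.0, 1.0, 1.0], [0.0, 1.0, 2.0], [1.0, 0.0, 1.0]]
y = [0.5 * r[0] + 0.3 * r[1] + 0.2 * r[2] for r in Y]
w = stack_weights(Y, y)
print([round(wi, 6) for wi in w])  # [0.5, 0.3, 0.2]
```

When the base networks' outputs are highly correlated, Y^T Y is near-singular, which is precisely why the paper applies principal component regression before inverting it.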

5. Results

The performance accuracy in terms of AAPE for all the individual neural networks and the stacked neural network is shown in Table 1. It can be seen that the combination of trained neural networks gave a better result. The comparison of the actual values of GCU, LCO, Pa and Pt with those predicted by the stacked neural network is presented in the form of cross plots in Figure 7; the majority of the predictions lie within 5% for GCU and LCO and within 1% to 2% for the axial and tangential outlet pressures.

6. Conclusion

The research work described in this paper has demonstrated that the separation performance of a compact axial-flow cyclonic separator can be modeled using neural networks, and that a stacked neural network produced better results than the individually trained neural networks. It was also observed that the generalization of the neural network may be improved by combining Bayesian regularization with the early stopping technique.

Figure 7. Comparison of predicted and actual outputs of the stacked neural network model

Table 1. Accuracy comparison of the individual and stacked neural networks

Training Algorithm        AAPE% (Efficiency)      AAPE% (Pressure)
                          GCU        LCO          Pa         Pt
trainlm                   4.5        8.5          2.2        6.5
traingdx                  5.2        6.6          3.1        4.2
traincgf                  4.89       7.0          2.8        5.4
Stacked Neural Network    5.43       8.29         2.12       5.42

Acknowledgements. This research work was carried out in the Department of Process and System Engineering at the School of Engineering, Cranfield University, UK, under the supervision of Professor Dr. Hoi Yeung.

References:
[1] Adewole Adetunji Philip, Akinwale Adio Taofiki and Akintomide Ayo Bidemi (2011), "Artificial Neural Network Model for Forecasting Foreign Exchange Rate", World of Computer Science and Information Technology Journal (WCSIT), Vol. 1, No. 3, pp. 110-118.
[2] Ahmad Zainal, J. Z. (2006), "A Nonlinear Model Predictive Control Strategy Using Multiple Neural Network Models", Lecture Notes in Computer Science, Vol. 3972, pp. 943-948.
[3] Anandarajan, M., Lee, P. and Anandarajan, A. (2001), "Bankruptcy prediction of financially stressed firms: an examination of the predictive accuracy of artificial neural networks", International Journal of Intelligent Systems in Accounting, Finance & Management, Vol. 10, No. 2, pp. 69-81.
[4] Battiti, R. (1994), "Using mutual information for selecting features in supervised neural net learning", IEEE Transactions on Neural Networks, Vol. 5, No. 4, pp. 537-550.
[5] Blaney, S. and Yeung, H. (2007), "Gamma Radiation Methods for Cost-effective Multiphase Flow Metering", 13th International Conference on Multiphase Production Technology, 13-15, Edinburgh.
[6] Dinesh K. Sharma, Loveleen Gaur and Daniel Okunbor (2007), "Image compression and feature extraction using Kohonen's self-organizing map neural network", Journal of Strategic E-Commerce.
[7] Florin Leon and Mihai Horia Zaharia (2010), "Stacked Heterogeneous Neural Networks for Time Series Forecasting", Mathematical Problems in Engineering.
[8] Florin Leon, Ciprian George Piuleac and Silvia Curteanu (2010), "Stacked Neural Network Modeling Applied to the Synthesis of Polyacrylamide-Based Multicomponent Hydrogels", Macromol. React. Eng., Vol. 4, pp. 591-598.
[9] Gharbi, R. B., Elsharkawy, A. M. and Karkoub, M. (1999), "Universal Neural-Network-Based Model for Estimating the PVT Properties of Crude Oil Systems", Energy & Fuels, Vol. 13, No. 2, pp. 454-458.
[10] Khairiyah Mohd. Yusof and Ani Idris (2008), "Utilization of Stacked Neural Network for Pore Size Prediction of Asymmetric Membrane", Jurnal Teknologi, 49(F), pp. 251-260.
[11] Hachour Ouarda (2010), "A Neural Network Based Navigation for Intelligent Autonomous Mobile Robots", International Journal of Mathematical Models and Methods in Applied Sciences, Issue 3, Volume 4, pp. 177-186.
[12] Hemant B. Mehta, Manish P. Pujara and Jyotirmay Banerjee (2013), "Prediction of Two Phase Flow Pattern using Artificial Neural Network", International Conference on Chemical and Environmental Engineering, Johannesburg, South Africa.
[13] Hu, H.L., Dong, J., Zhang, J., Cheng, Y.J. and Xu, T.M. (2011), "Identification of gas/solid two-phase flow regimes using electrostatic sensors and neural-network techniques", Flow Measurement and Instrumentation, Vol. 22, No. 5, Elsevier.
[14] Klevecka, I. (2010), "Forecasting Traffic Loads: Neural Networks vs. Linear Models", Computer Modelling and New Technologies, Vol. 14, No. 2, pp. 20-28.
[15] Kouba, G.E., Shoham, O. and Shirazi, S. (1997), "Design and performance of gas liquid cylindrical cyclone separators", in: Proceedings of the BHR Group 7th International Meeting on Multiphase Flow, Cannes, France, pp. 307-327.
[16] Kim, N. and Calise, A. J. (2008), "Neural network based adaptive output feedback augmentation of existing controllers", Aerospace Science and Technology, Vol. 12, No. 3, pp. 248-255.
[17] Movafaghian, S., Jaua-Marturet, J., Mohan, R., Shoham, O. and Kouba, K. (2000), "The Effects of Geometry, Fluid Properties and Pressure on the Hydrodynamics of Gas-Liquid Cylindrical Cyclone Separators", Int. J. Multiphase Flow, Vol. 26, No. 6, pp. 999-1018.
[18] Martin T. Hagan and Howard B. Demuth (1996), "Neural Network Design".
[19] Olson, T. (1998), "Porosity and Permeability Prediction in Low-Permeability Gas Reservoirs From Well Logs Using Neural Networks", Society of Petroleum Engineers, SPE 3996485, pp. 563-572.
[20] Hodhod, O.A. and Ahmed, H.I. (2013), "Developing an artificial neural network model to evaluate chloride diffusivity in high performance concrete", HBRC Journal.
[21] Sandhya, S. (2007), "Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition", CRC Press, ISBN 084933375X.
[22] Paranjape, S. and Ishii, M. (2006), "Fast classification of two-phase flow regimes based on conductivity signals and artificial neural networks", Meas. Sci. Technol., pp. 1511-1521.
[23] Sanggil Kang and Sungjoon Park (2009), "A fusion neural network classifier for image classification", Pattern Recognition Letters, Vol. 30, pp. 789-793.
[24] Sirilak Areerachakul, Prem Junsawang and Auttapon Pomsathit (2011), "Prediction of Dissolved Oxygen Using Artificial Neural Network", 2011 International Conference on Computer Communication and Management, Proc. of CSIT, Vol. 5.
[25] Trappenberg, T., Ouyang, J. and Back, A. (2006), "Input variable selection: mutual information and linear mixing measures", IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 1, pp. 37-46.
[26] White, A. C., Molnar, D., Aminian, K., Mohaghegh, S., Ameri, S. and Esposito, P. (1995), "The application of ANN for zone identification in a complex reservoir", Proceedings of the SPE Eastern Regional Conference, Morgantown, WV, SPE 30977.
[27] Ye, Jiamin and Peng, Lihui (2012), "Flow regime identification of gas liquid two phase flow in vertical tube with small diameter", American Institute of Physics Conference Proceedings, Volume 1428.
[28] Yinghua Li, Tian Pu and Jian Cheng (2010), "A Biologically Inspired Neural Network for Image Enhancement", International Symposium on Intelligent Signal Processing and Communication Systems.
[29] Zainal Ahmad and Hafizan Jhahir (2006), "Neural Model for Prediction of Discharge from Catchments of Langat River, Malaysia", IIUM Engineering Journal, Vol. 7, No. 1.