International Journal of Computer Applications (0975 – 8887) Volume 34– No.1, November 2011

Neural Network based Stochastic Particle Swarm Optimization for Prediction of Minimum Miscible Pressure

Mohammad Ali Ahmadi

Petroleum University of Technology, Ahwaz Faculty of Petroleum Engineering, Kut Abdollah, Ahwaz, P.O. Box: 63134, Iran

Seyed Reza Shadizadeh

Petroleum University of Technology, Abadan Faculty of Petroleum Engineering, North Bavardeh, Abadan, Iran

Mahdi Hasanvand

Arvandan Oil and Gas Company (AOGC), Khoramshahr, P.O. Box: 63134, Iran

ABSTRACT

Miscible gas injection processes are among the most effective methods for enhanced oil recovery. A key parameter in the design of a gas injection project is the minimum miscibility pressure (MMP), since the local displacement efficiency of gas injection depends strongly on it. Because experimental determination of the MMP is very expensive and time-consuming, fast and robust mathematical methods for determining the gas-oil MMP are in demand. In this work, a model based on a feed-forward artificial neural network (ANN) optimized by Stochastic Particle Swarm Optimization (SPSO) is proposed to estimate the gas-oil MMP. SPSO is used to determine the initial weights of the neural network. The performance of the SPSO-ANN model is compared with experimental gas-oil MMP values and with the results of common gas-oil MMP correlations. The results demonstrate the effectiveness of the SPSO-ANN model.

General Terms

Evolutionary Algorithms, Artificial Intelligence

Keywords

Minimum Miscible Pressure, Neural Network, Particle Swarm Optimization

1. INTRODUCTION

Gas injection above the minimum miscibility pressure (MMP) is a widely used means of improving oil recovery in many reservoirs. The MMP is the lowest pressure at which a gas can develop miscibility with a given reservoir oil at reservoir temperature through a multi-contact process. A reservoir to which the process is applied must be operated at or above the MMP to develop multi-contact miscibility; reservoir pressures below the MMP result in immiscible displacement and consequently lower oil recoveries. The primary experimental methods for evaluating miscibility under reservoir conditions are slim-tube displacement, the rising bubble apparatus, and pressure-composition diagrams. Rao and Lee [1] reported that direct measurement of the interfacial tension of an oil-solvent mixture at reservoir conditions can provide a rapid means of determining the MMP.

Because such experiments are very expensive and time-consuming, high-accuracy mathematical methods for determining the gas-oil MMP are usually sought. This paper therefore presents a newly developed model, based on a neural network and Particle Swarm Optimization, to determine the gas-oil MMP for miscible displacement. A soft sensor is a conceptual device whose output, or inferred variable, can be modeled in terms of other parameters relevant to the same process [1]. The determination of the network structure and parameters is very important; algorithms such as the Genetic Algorithm (GA) [2], Back Propagation (BP) [3], pruning algorithms [4], and Simulated Annealing [5] can be used for this task.

In the present work, SPSO is proposed for optimizing the weights of a feed-forward neural network. Simulation results demonstrate the effectiveness of the proposed network in predicting experimental gas-oil MMP data reported in the literature [6-15] and data from one of the northern Persian Gulf oil fields of Iran, compared with experimental data and the results of common gas-oil MMP correlations.

2. ARTIFICIAL NEURAL NETWORK

Artificial neural networks are parallel information-processing methods that can express complex, nonlinear relationships using a number of input-output training patterns from experimental data. ANNs have an intrinsic ability to provide a nonlinear mapping between inputs and outputs [16]. Success in obtaining a reliable and robust network depends strongly on correct data preprocessing, correct architecture selection, and correct choice of network training [17]. The most common neural network architecture is the feed-forward neural network, in which information or signals propagate in only one direction, from input to output. A three-layered feed-forward neural network with the back-propagation algorithm can approximate any nonlinear continuous function to arbitrary accuracy [18, 19]. The network is trained by optimizing the weights of each node interconnection and the bias terms until the values produced at the output layer neurons are as close as possible to the actual outputs. The mean squared error (MSE) of the network is defined as:

MSE = (1/2) ∑_{k=1}^{G} ∑_{j=1}^{m} (Y_j(k) − T_j(k))²   (1)

where m is the number of output nodes, G is the number of training samples, Y_j(k) is the expected output, and T_j(k) is the actual output. The data are split into two sets, a training data set and a validating data set. The model is produced using only the training data; the validating data are used to estimate the accuracy of the model's performance. In training a network, the objective is to find an optimum set of weights. When the number of weights is larger than the number of available data, the fitting error on non-trained data initially decreases but then increases as the network becomes over-trained. In contrast, when the number of weights is smaller than the number of data, the over-fitting problem is not crucial.
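As a concrete illustration of Eq. (1), the network error over G training samples and m output nodes can be computed as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def mse(Y, T):
    """Eq. (1): MSE = 1/2 * sum over samples k and output nodes j
    of (Y_j(k) - T_j(k))^2.
    Y, T: arrays of shape (G, m) -- network outputs and targets."""
    return 0.5 * np.sum((Y - T) ** 2)

# Toy example: G = 3 samples, m = 1 output node
Y = np.array([[0.5], [0.2], [0.9]])  # network outputs
T = np.array([[0.4], [0.0], [1.0]])  # target outputs
print(mse(Y, T))  # 0.5 * (0.01 + 0.04 + 0.01) = 0.03
```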

3. STOCHASTIC PARTICLE SWARM OPTIMIZATION

3.1 Restrictions of the Conventional PSO

The conventional PSO updating principle with inertia weight is expressed as follows:

v_id(n+1) = W_i v_id(n) + c1 r1id(n) (P^d_id(n) − X_id(n)) + c2 r2id(n) (P^g_id(n) − X_id(n))   (2)

X_id(n+1) = X_id(n) + v_id(n+1)   (3)

where d = 1, 2, …, D; D is the dimension of the solution space; v_i represents the current velocity of particle i; and X_i = [x_i1, x_i2, …, x_iD]^T represents the current position of particle i. The second term on the right side of the velocity update is the "cognitive" component, which represents the personal experience of each particle. The third term is the "social" component, which represents the collaborative behavior of the particles in finding the global optimal solution. The random exploration ability is clearly determined by P^g_i(n) − X_i(n) and P^d_i(n) − X_i(n). This induces a drawback of PSO exploration: the intensity of the exploration behavior is entirely determined by the rate of decrease of P^d_i(n) − X_i(n) and P^g_i(n) − X_i(n). Therefore, for a high-dimensional optimization problem such as NN training, when PSO converges quickly the exploration behavior also weakens so quickly that particles may not gather sufficient information about the solution space and may converge to a suboptimal solution. Since this relatively low exploration ability is induced by the constrained direction and intensity of the cognitive and social components, one method to overcome the constrained exploration behavior is to add to the updating principle a random exploration velocity that is independent of the positions. Based on the explicit representation (ER) of PSO [20], we propose a new stochastic PSO (SPSO), defined as follows.
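The conventional update of Eqs. (2)-(3) can be sketched in a few lines (our own vectorized rendering; parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, P_best, G_best, w=0.7, c1=1.5, c2=1.5):
    """One conventional PSO update, Eqs. (2)-(3).
    X, V: (M, D) positions and velocities; P_best: (M, D) personal bests;
    G_best: (D,) best position found by the neighborhood."""
    r1 = rng.random(X.shape)  # uniform [0, 1], per particle and dimension
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (P_best - X) + c2 * r2 * (G_best - X)  # Eq. (2)
    X = X + V                                                    # Eq. (3)
    return X, V
```

Note that once P_best and G_best coincide with X, the update reduces to V = w V, so exploration dies out at the rate of the inertia weight; this is exactly the drawback the text describes.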

3.2 Definition of Stochastic PSO (SPSO)

A stochastic PSO (SPSO) is described as follows. Given a swarm of M particles, the position of particle i is defined as X_i = [x_i1, x_i2, …, x_iD]^T, where D represents the dimension of the swarm space. The updating principle for an individual particle is defined as

v_id(n+1) = ε(n) [v_id(n) + c1 r1id(n) (P^d_id(n) − X_id(n)) + c2 r2id(n) (P^g_id(n) − X_id(n)) + ξ_id(n)]   (4)

X_id(n+1) = α X_id(n) + v_id(n+1) + ((1 − α)/∅_id(n)) (c1 r1id(n) P^d_id(n) + c2 r2id(n) P^g_id(n))   (5)

where d = 1, 2, …, D; c1 and c2 are positive constants; P^d_i(n) represents the best solution found by particle i so far; P^g_i(n) represents the best position found by particle i's neighborhood; and ∅_i(n) = ∅_1i(n) + ∅_2i(n), where ∅_1i(n) = c1 r1i(n) and ∅_2i(n) = c2 r2i(n). If the following assumptions hold:

1. ξ_i(n) is a random velocity with constant expectation,
2. ε(n) → 0 as n increases,
3. 0 < α < 1,
4. r1id(n) and r2id(n) are independent variables with a continuous uniform distribution on [0, 1], whose expectations are 0.5,

then the updating principle converges with probability one. Let P* = inf_{λ∈R^D} F(λ) represent the unique optimal position in the solution space. Then the swarm must converge to P* if lim_n P^d_i(n) → P* and lim_n P^g_i(n) → P*.
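A sketch of the SPSO update as we read Eqs. (4)-(5) follows; the decay schedule for ε(n) and the distribution of ξ are our own illustrative choices (the paper only requires ε(n) → 0 and a constant-expectation ξ):

```python
import numpy as np

rng = np.random.default_rng(1)

def spso_step(X, V, P_best, G_best, n, c1=1.5, c2=1.5, alpha=0.5, xi_scale=0.1):
    """One SPSO update, Eqs. (4)-(5).
    X, V: (M, D); P_best: (M, D); G_best: (D,); n: iteration index."""
    eps = 1.0 / (1.0 + 0.01 * n)            # assumed decay satisfying eps(n) -> 0
    xi = rng.normal(0.0, xi_scale, X.shape)  # zero-expectation random velocity
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    phi = c1 * r1 + c2 * r2                  # phi_i(n) = phi_1i(n) + phi_2i(n)
    # Eq. (4)
    V = eps * (V + c1 * r1 * (P_best - X) + c2 * r2 * (G_best - X) + xi)
    # Eq. (5)
    X = alpha * X + V + (1 - alpha) / phi * (c1 * r1 * P_best + c2 * r2 * G_best)
    return X, V
```

A useful sanity check on Eq. (5): if V = 0, ξ = 0, and X already equals both P_best and G_best, the update leaves X unchanged, since α X + (1 − α) X = X; the best positions are fixed points of the dynamics.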

3.3 Properties of SPSO

Property 1: Inherent Exploration Behavior. There is a threshold N_k such that when n < N_k the individual updating principle is non-convergent, so particles move away from the best positions recorded by themselves and their neighborhoods. During this divergent process, each particle still records its individual best solution and exchanges information with its neighborhood. Hence this phenomenon can be viewed as a strong exploration phase: during a period shortly after the beginning, i.e., n < N_k, all particles wander through the solution space and record the best solutions found so far. When n > N_k, the swarm starts to aggregate through interaction among the particles.

Property 2: Controllable Exploration and Convergence. ξ(n) in SPSO is a stochastic component that can be designed freely. Without the additional stochastic behavior, i.e., ξ(n) = 0, SPSO behaves much like the conventional PSO, with a relatively fast convergence rate, so the intensity of the exploration behavior weakens quickly. To maintain exploration ability, a nonzero ξ(n) is very useful, since it makes particles more able to escape from local minima. Moreover, in applications a ξ(n) with zero expectation is preferable to a nonzero one, because zero expectation gives particles similar exploration behavior in all directions. In the description of SPSO, the only requirement on ξ(n) is that its expectation be constant; there is no restriction on its bound. This permits a very useful improvement: the bound of ξ(n) can be time-varying. If the bound of ξ(n) is constant, then as n increases, ε(n)ξ(n) may remain strong enough to overwhelm the convergence behavior brought by the cognitive and social components, so the convergence of SPSO would be delayed significantly. To overcome this drawback, a time-varying bounded ξ(n) is proposed instead of a constant one, expressed as ξ(n) = w(n) ξ̄(n), where ξ̄(n) represents a stochastic velocity with zero expectation and constant value range, and w(n) represents a time-varying positive coefficient whose dynamic strategy can be designed freely. For example, the following strategy for w(n) balances the exploration and convergence behaviors of SPSO:

w(n) = 1,            n < (3/4) N_b
w(n) = η w(n − 1),   n ≥ (3/4) N_b          (6)

where N_b represents the maximal number of iterations and η is a positive constant less than 1. Hence, when n < (3/4) N_b, a relatively strong random velocity is applied to the particles to increase their exploration ability. In the last quarter of the iterations, the range of ξ(n) decreases iteration by iteration, so that the stochastic behavior brought by ξ(n) finally becomes trivial. In a sense, during the last part of the iterations such a weakened ξ(n) helps particles explore the vicinity of the best solution carefully. Since SPSO has stronger exploration ability than the conventional PSO, NN training can be accomplished with relatively few particles, reducing computational cost.

4. SPSO-ANN MODEL

In this study, an artificial neural network was used to build a model to predict the gas-oil MMP using the data sets reported in the literature and experimental data obtained from one of the northern Persian Gulf oil fields of Iran. The best ANN architecture was 9-4-10-1 (9 input units, 4 neurons in the first hidden layer, 10 neurons in the second hidden layer, and 1 output neuron). The ANN model (Fig. 1) was trained by back propagation with Levenberg-Marquardt to predict the gas-oil MMP using nine input parameters: reservoir temperature; oil composition parameters such as the MW of C5+ and the intermediate molar percent; and the CO2, H2S, C1, C2-C4, and N2 compositions. The transfer functions in the hidden and output layers are sigmoid and linear, respectively. SPSO is used as the neural network optimization algorithm, with the mean square error (MSE) as its cost function; the goal of the proposed algorithm is to minimize this cost function. Every weight in the network is initially set in the range [-1, 1], and every initial particle is a set of weights generated randomly in the range [-1, 1]. 200 data samples, chosen by a random number generator, were used for network training; the remaining 100 samples were set aside for testing the network's integrity and robustness. The gas-oil MMP predictions in the training and test phases are shown in Figures 2 and 3, respectively. The simulation performance of the SPSO-ANN model was evaluated on the basis of the mean square error (MSE) and the efficiency coefficient R². Table 1 gives the MSE and R² values of the different models in the validation phase. It can be observed that the performance of the SPSO-ANN model is better than that of the other models. In general, an R² value greater than 0.9 indicates a very satisfactory model performance, an R² value in the range 0.8-0.9 signifies good performance, and a value less than 0.8 indicates unsatisfactory model performance. Figures 4 to 7 show the extent of the match between the measured gas-oil MMP values and those predicted by the SPSO-ANN, Alston et al., Emera and Sarma, and Yuan et al. models, respectively, in terms of scatter diagrams.

Table 1. Comparison between the performances of SPSO-ANN and common models

Model           | MSE      | R²
SPSO-ANN        | 0.00296  | 0.99422
Yuan et al.     | 0.903254 | 0.92477
Emera and Sarma | 0.95249  | 0.87992
Alston et al.   | 0.72354  | 0.93144
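The weight-search loop described above can be sketched as follows. This is a minimal toy version: a 2-3-1 network instead of the paper's 9-4-10-1 architecture, synthetic data instead of MMP measurements, and (for brevity) the conventional PSO velocity update rather than the SPSO update of Eqs. (4)-(5), which slots in unchanged. All names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(w, X, n_in=2, n_hid=3):
    """Tiny 1-hidden-layer net: sigmoid hidden layer, linear output,
    with all weights and biases packed into one flat vector w."""
    W1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid : n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid : n_in * n_hid + 2 * n_hid]
    b2 = w[-1]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid hidden layer
    return h @ W2 + b2                         # linear output layer

def cost(w, X, T):
    return 0.5 * np.sum((forward(w, X) - T) ** 2)  # MSE cost, Eq. (1)

D = 2 * 3 + 3 + 3 + 1           # total number of weights and biases
M = 20                           # particles: candidate weight vectors
X_data = rng.random((30, 2))
T_data = X_data.sum(axis=1)      # toy regression target
swarm = rng.uniform(-1, 1, (M, D))  # initial weights in [-1, 1], as in the paper
vel = np.zeros((M, D))
p_best = swarm.copy()
p_cost = np.array([cost(w, X_data, T_data) for w in swarm])
init_best = p_cost.min()

for n in range(200):
    g_best = p_best[p_cost.argmin()]
    r1, r2 = rng.random((2, M, D))
    vel = 0.7 * vel + 1.5 * r1 * (p_best - swarm) + 1.5 * r2 * (g_best - swarm)
    swarm = swarm + vel
    c = np.array([cost(w, X_data, T_data) for w in swarm])
    better = c < p_cost
    p_best[better], p_cost[better] = swarm[better], c[better]

print(p_cost.min())  # best MSE found by the swarm
```

In the paper's setup, the best weight vector found by the swarm serves as the starting point for the network, which is then refined by Levenberg-Marquardt training.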

Fig. 1. Architecture of the three-layer ANN

Fig. 2. Comparison between measured and predicted gas-oil MMP (SPSO-ANN), training phase

Fig. 3. Comparison between measured and predicted gas-oil MMP (SPSO-ANN), testing phase

Fig. 4. R² of the SPSO-ANN model (train: R² = 0.99446; test: R² = 0.99442)

Fig. 5. R² of the Alston et al. model (R² = 0.93144)

Fig. 6. R² of the Emera and Sarma model (R² = 0.87992)

Fig. 7. R² of the Yuan et al. model (R² = 0.92477)

5. CONCLUSIONS


A new model has been developed to predict the impure and pure CO2-oil MMP, and a comparison of its predicted values against experimental data and the widely used impure and pure CO2-oil MMP correlations has been carried out. Based on the results of this new model, the following conclusions are drawn:

1. The SPSO-ANN model is more reliable than other conventional methods for predicting the MMP. In particular, under conditions with limited field information, the SPSO-ANN approach can produce higher accuracy than other forecasting methods.
2. The new model is strictly valid only for C1, N2, H2S, and C2-C4 contents in the injected CO2 stream.
3. Stochastic Particle Swarm Optimization is a powerful optimization technique, especially when the objective function has several local minima.
4. Other evolutionary algorithms can be combined with Stochastic Particle Swarm Optimization to further improve soft-sensor performance.

6. ACKNOWLEDGMENTS

The authors would like to acknowledge the Petroleum University of Technology (PUT) for its support throughout this study.

7. REFERENCES

[1]. Rallo, R., Ferre-Gine, J., Arenas, A., Giralt, F. (2002). Neural virtual sensor for the inferential prediction of product quality from process variables. Computers and Chemical Engineering, 26, pp. 1735-1754.

[2]. Qu, X., Feng, J., Sun, W. (2008). IEEE Intl. Conf. on Intelligent Information Hiding and Multimedia Signal Processing, pp. 897-900.

[3]. Tang, P., Xi, Z. (2008). Second Intl. Symp. on Intelligent Information Technology Application, IITA '08, Vol. 2, Dec., pp. 13-16.

[4]. Reed, R. (1993). Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4, pp. 740-747.

[5]. Souto, M.C.P. de, Yamazaki, A., Ludermir, T.B. (2002). Optimization of neural network weights and architecture for odor recognition using simulated annealing. Proc. 2002 Intl. Joint Conf. on Neural Networks, Vol. 1, pp. 547-552.

[6]. Alston, R.B., Kokolis, G.P., James, C.F. (1985). CO2 minimum miscibility pressure: a correlation for impure CO2 streams and live oil systems. Society of Petroleum Engineers Journal, (4), pp. 268-274.

[7]. Emera, M.K., Sarma, H.K. (2005). Use of genetic algorithm to predict minimum miscibility pressure between flue gases and oil in design of flue gas injection project. Paper SPE 93478, Middle East Oil & Gas Show and Conference, Bahrain, 12-15 March.

[8]. Emera, M.K., Sarma, H.K. (2004). Use of genetic algorithm to estimate CO2-oil minimum miscibility pressure - a key parameter in design of CO2 miscible flood. J. Pet. Sci. Eng., 46, pp. 37-52.

[9]. Dong, M. (1999). Task 3 - minimum miscibility pressure (MMP) studies. In the technical report: Potential of Greenhouse Gas Storage and Utilization through Enhanced Oil Recovery. Petroleum Research Center, Saskatchewan Research Council (SRC Publication No. P-10-468-C-99).

[10]. Dong, M., Huang, S., Dyer, S.B., Mourits, F.M. (2001). A comparison of CO2 minimum miscibility pressure determinations for Weyburn crude oil. Journal of Petroleum Science and Engineering, 31, pp. 13-22.

[11]. Dong, M., Huang, S., Srivastava, R. (2000). Effect of solution gas in oil on CO2-oil minimum miscibility pressure. J. Can. Pet. Technol., 39(11), pp. 53-61.

[12]. Rathmell, J.J., Stalkup, F.J., Hassinger, R.C. (1971). A laboratory investigation of miscible displacement by carbon dioxide. Paper SPE 3483, Annual Fall Meeting of the Society of Petroleum Engineers of AIME, New Orleans, LA, pp. 1-16.

[13]. Yellig, W.F., Metcalfe, R.S. (1980). Determination and prediction of CO2 minimum miscibility pressure. J. Pet. Technol., pp. 160-168.

[14]. Metcalfe, R.S. (1982). Effects of impurities on minimum miscibility pressure and minimum enrichment levels for CO2 and rich-gas displacements. Society of Petroleum Engineers Journal, (4), pp. 219-225.

[15]. Eakin, B.E., Mitch, F.J. (1988). Measurement and correlation of miscibility pressures of reservoir oils. Paper SPE 18065, Annual Technical Conference and Exhibition, Houston, TX, pp. 75-81.

[16]. Hornik, K., Stinchcombe, M., White, H. (1990). Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3(5), pp. 551-560.

[17]. Garcia-Pedrajas, N., Hervas-Martinez, C., Munoz-Perez, J. (2003). COVNET: a cooperative co-evolutionary model for evolving artificial neural networks. IEEE Transactions on Neural Networks, 14, pp. 575-596.

[18]. Brown, M., Harris, C. (1994). Neural Fuzzy Adaptive Modeling and Control. Englewood Cliffs, NJ: Prentice-Hall.

[19]. Hornik, K., Stinchcombe, M., White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2, pp. 359-366.

[20]. Clerc, M., Kennedy, J. (2002). The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), pp. 58-73.