Data-Driven Evolutionary Algorithm For Oil Reservoir Well-Placement and Control Optimization
Abstract
Optimal well placement and well injection-production control are crucial for reservoir development to maximize the financial profits during the project lifetime. Meta-heuristic algorithms have shown good performance in solving such complex, nonlinear problems, but a large number of expensive simulation runs are involved during the optimization process. In this work, a novel data-driven differential evolutionary algorithm (GDDE) is proposed to reduce the number of simulation runs on well-placement and control optimization problems. A probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates, and the most uncertain candidate based on Euclidean distance is prescreened and evaluated with a numerical simulator. Subsequently, a local surrogate model is built by radial basis function (RBF) and the optimum of the surrogate, found by the optimizer, is evaluated by the numerical simulator to accelerate the convergence. It is worth noting that the shape factors of the RBF model and the PNN are optimized via solving hyper-parameter sub-optimization problems. The results of a two-dimensional well-placement case and the Egg model joint optimization case show that the proposed optimization algorithm converges efficiently and effectively.
1 Introduction
Obtaining the optimal oil reservoir development plan, such as determining the locations of wells and the flow rates or bottom-hole pressures of wells, plays an important role in reservoir engineering, as such optimization can maximize the financial profits during the oilfield development period. The optimal well location and well-control scheme can be impacted by many factors, such as the heterogeneous permeability of the reservoir, the properties of fluids and the development scheme. Well placement is a discrete optimization problem and the decision variables are integers [6]. Such optimization problems are difficult to solve. Reynolds [13] used a gradient-based algorithm to solve the joint optimization of well number, locations and controls. However, these algorithms easily get stuck in local optima, and the gradient calculated by the adjoint or finite-difference method may not be precise for such discrete problems. Derivative-free algorithms, such as genetic algorithms (GA), particle swarm optimization (PSO), differential evolution (DE) [21, 22] and the covariance matrix adaptation-evolution strategy (CMA-ES) [2, 23], have been commonly used for well-placement optimization problems, as these heuristic methods are able to jump out of local optima and converge to the global optimum [24-26]. Beckner and Song [24] adopted the simulated annealing method to determine the optimal economic well control and placement. Burak et al. [14] presented a related optimization study, and Onwunalu and Durlofsky [19] adopted PSO for well placement and type optimization. Ding et al. [18] proposed a hybrid objective function with PSO as the optimizer for well-placement problems. Awotunde et al. [2] used CMA-ES as the optimizer to determine the well locations. However, these derivative-free algorithms need to consume a large number of simulation runs, especially for high-dimensional problems, which makes the computational cost prohibitive [21].
Surrogate models (also called proxies) have gained increasing attention recently due to their promising ability to reduce the number of simulation runs during the optimization process. Commonly used proxies include the reduced-order model (ROM) [30], the capacitance-resistance model (CRM) [31] and machine learning methods [32-34]. Jansen and Durlofsky [30] developed proper orthogonal decomposition to reduce the order of the model and accelerate the computation of a single simulation. However, ROM needs to obtain explicit information from the simulator, which is impossible for commercial simulators like Eclipse. CRM infers the interwell connectivity and the degree of fluid storage between wells from production and injection data [31]. Zhao et al. proposed a physics-based interwell-numerical-simulation model (INSIM) that solves simplified flow equations for the control units [32]. However, INSIM and CRM are less faithful than full numerical simulation, and their prediction accuracy can be limited for complex reservoirs.
Machine learning methods, which are computationally cheap mathematical models, can approximate the input/output relationship between the decision variables and the objective function [27, 35]. Commonly used machine learning surrogates are: Gaussian process (GP) [36, 37], radial basis function (RBF) [38], polynomial regression surface (PRS) and support vector machine (SVM) [39, 40]. Taking the prediction and the uncertainty of sample points into consideration, kriging-based infill criteria, such as the lower confidence bound (LCB) [37], expected improvement (EI) and probability of improvement (PoI), can balance exploration and exploitation. Bernardo et al. applied such criteria to related optimization problems. Guo and Reynolds [42] proposed a workflow with support vector regression as the surrogate. However, most surrogate-assisted methods degrade on high-dimensional expensive problems, and researchers have made efforts to solve this problem recently. Liu et al. [43] proposed a surrogate-assisted cooperative learning PSO algorithm (SACOSO) to cooperatively guide the search. Yu et al. [45] proposed a surrogate-assisted algorithm for 30-100 dimensional benchmark functions. Wang et al. [46] developed an evolutionary-sampling-assisted optimizer for solving 20-100 dimensional benchmark functions. Cai et al. [47] proposed a surrogate-assisted algorithm with a prescreening strategy. Li et al. [48] used a boosting strategy for model management and a localized data generation method to alleviate the small-sample problem, combining a global strategy and a local acceleration strategy on solving 20-100 dimensional problems, and Wei et al. [49] proposed a classifier-assisted level-based learning swarm optimizer (CA-LLSO) using a gradient boosting classifier and the level-based learning swarm optimizer for 20-300 dimensional benchmark functions. However, when dealing with high-dimensional expensive problems, these methods still suffer from the curse of dimensionality.
The objective of this study is to use machine learning methods to guide the search of the evolutionary algorithm, alleviate the curse of dimensionality and accelerate the convergence during the optimization process. The proposed method is a data-driven differential evolutionary algorithm (GDDE) and consists of two stages. In the first stage, PNN is adopted as the classifier to select informative and promising candidates, and the most uncertain candidate based on Euclidean distance is evaluated with the simulator. In the second stage, a surrogate with RBF is built in a small promising area and the optimum of the surrogate is evaluated by the numerical simulator to accelerate the convergence. The shape factor of the RBF model is optimized via solving a hyper-parameter sub-optimization problem. The main contributions of this work are: 1) The study adopts both a classification model and a regression model to assist the evolutionary search. PNN is used to identify the promising individuals from the offspring, and the most uncertain individual is selected from the promising individuals to evaluate with a real function evaluation. Besides, RBF is adopted to approximate the landscape of the objective function in the local promising area to accelerate the convergence. 2) To obtain the optimal shape factors of the PNN and RBF models, sub-optimization problems are solved to minimize the validation error. After getting the optimal shape factor, the accuracy of the local surrogate can be improved greatly and the convergence speed can be accelerated significantly.
The rest of this paper is organized as follows. Firstly, the well-placement optimization and joint optimization problems are introduced in section 2. Then the related techniques are described in section 3 and the proposed GDDE in section 4. Section 5 presents two case studies, and section 6 concludes this work.
2 Problem statement
In recent years, the joint optimization of well-placement and control problems has gained
increasing interest to identify the optimal well locations and corresponding control
scheme [50, 51]. Traditional methods for such problems determine the optimal development scheme in a sequential way, which easily gets trapped in local optima [52, 53]. Li and Jafarpour [54] proposed to use a derivative-free stochastic search method for the well-placement problem and a gradient-based algorithm for the well-control problem alternately.
In this study, a joint rather than a sequential manner is adopted to achieve better water-
flooding reservoir development plan and maximize the profit during the development
period by improving the sweep efficiency of the reservoir [55, 56].
The joint optimization of well placement and control can be defined as follows:
$$\max_{\mathbf{x}_1,\,\mathbf{x}_2} \ NPV(\mathbf{x}_1, \mathbf{x}_2) \quad (1)$$
subject to
$$\mathbf{x}_1^l \le \mathbf{x}_1 \le \mathbf{x}_1^u, \quad \mathbf{x}_1 \in \mathbb{Z}^{n_1} \quad (2)$$
$$\mathbf{x}_2^l \le \mathbf{x}_2 \le \mathbf{x}_2^u, \quad \mathbf{x}_2 \in \mathbb{R}^{n_2} \quad (3)$$
where $NPV(\mathbf{x}_1, \mathbf{x}_2)$ denotes the objective function of the problem; $\mathbf{x}_1$ denotes the integer vector of well locations; $\mathbf{x}_2$ denotes the continuous vector of well controls (i.e., the flow rate or the bottom-hole pressure of each well); $n_2 = n_t (n_p + n_i)$ is the number of well-control variables (i.e., the number of time steps multiplied by the number of wells); $\mathbf{x}_1^l$ and $\mathbf{x}_1^u$ are the lower and upper boundaries of the well locations, respectively; $\mathbf{x}_2^l$ and $\mathbf{x}_2^u$ are the lower and upper boundaries of the well-control variables, respectively. Note that all the wells to be optimized are vertical and only bound constraints are considered in this study. The NPV is calculated as:
$$NPV(\mathbf{x}_1, \mathbf{x}_2) = \sum_{t=1}^{k} \frac{\left[ Q_{o,t}\, r_o - Q_{w,t}\, r_w - Q_{i,t}\, r_i \right] \Delta t}{(1+b)^{p_t}} \quad (4)$$
where $k$ denotes the number of time steps; $\Delta t$ is the length of each time step; $Q_{o,t}$, $Q_{w,t}$ and $Q_{i,t}$ are the oil production rate, water production rate and water injection rate at the $t$th time step, respectively; $r_o$ is the oil price while $r_w$ and $r_i$ are the costs of water removal and water injection, respectively; $b$ is the discount rate; $p_t$ is the elapsed time in years.
3 Related techniques
3.1 Probabilistic neural network
The probabilistic neural network (PNN) is a kind of feed-forward neural network consisting of four layers, i.e., the input layer, pattern layer, summation layer and output layer [57].
The first layer is input layer, used to receive the value from sample points and pass to
the network. The number of the neurons of input layer equals the dimension of the
optimization problem. The second hidden layer uses RBF as kernel function with
training sample points as the neuron center to calculate the distance between input
vector and the neuron center. The input/output relationship from the input vector $\mathbf{x}$ to the $j$th neuron of the $i$th class is:
$$\phi_{ij}(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\, \sigma^{d}} \exp\left[ -\frac{(\mathbf{x} - \mathbf{x}_{ij})^{T} (\mathbf{x} - \mathbf{x}_{ij})}{2\sigma^{2}} \right] \quad (5)$$
where $d$ is the dimension of the input vector, $\sigma$ is the shape factor and $\mathbf{x}_{ij}$ is the $j$th neuron center of the $i$th class.
The summation layer estimates the probability density of each class with the help of the pattern layer; the density is calculated by averaging the values of the neurons that belong to the same class. The value of the $i$th summation neuron is:
$$p_i(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\, \sigma^{d}\, N_i} \sum_{j=1}^{N_i} \exp\left[ -\frac{(\mathbf{x} - \mathbf{x}_{ij})^{T} (\mathbf{x} - \mathbf{x}_{ij})}{2\sigma^{2}} \right] \quad (6)$$
where $N_i$ is the number of training sample points in the $i$th class. The class with the maximum probability density is selected as the output. The output layer uses competitive neurons as the classifier.
Fig. 1 The structure of the probabilistic neural network (input layer, pattern layer, summation layer and output layer)
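A minimal sketch of the PNN classification rule of Eqs. (5)-(6), assuming a single shared shape factor σ: the Gaussian kernel is averaged over the training points of each class and the class with the highest estimated density is output. The data and σ below are illustrative.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma):
    """Classify x per Eqs. (5)-(6): average a Gaussian kernel over the
    training points of each class, output the class with the highest
    density. The normalization constant cancels in the comparison but is
    kept for fidelity to Eq. (6)."""
    X_train = np.asarray(X_train, dtype=float)
    x = np.asarray(x, dtype=float)
    d = X_train.shape[1]
    norm = (2 * np.pi) ** (d / 2) * sigma ** d
    densities = {}
    for cls in np.unique(y_train):
        pts = X_train[np.asarray(y_train) == cls]
        sq = np.sum((pts - x) ** 2, axis=1)          # squared distances
        densities[cls] = np.mean(np.exp(-sq / (2 * sigma ** 2))) / norm
    return max(densities, key=densities.get)

# Two well-separated classes: a point near the first cluster is labeled 0
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
y = [0, 0, 1, 1]
label = pnn_predict(X, y, [0.2, 0.1], sigma=0.5)  # -> 0
```

In GDDE this binary decision plays the role of the prescreening classifier: class 0 would correspond to the "promising" first class of the sorted population.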
3.2 Radial basis function
RBF is widely used as a surrogate model for expensive optimization problems, because it is relatively insensitive to the dimension and can approximate the landscape of the objective function [27]. Suppose $\{(\mathbf{x}_i, f(\mathbf{x}_i)), i = 1, 2, \ldots, n\}$ are the training sample points, where $\mathbf{x}_i \in \mathbb{R}^d$ and $f(\mathbf{x}_i) \in \mathbb{R}$ are the decision vectors and objective function values, respectively. The RBF model takes the form:
$$\hat{f}(\mathbf{x}) = \sum_{i=1}^{n} \omega_i\, \varphi(\lVert \mathbf{x} - \mathbf{x}_i \rVert) \quad (7)$$
where $\hat{f}$ is the mathematical model of RBF, $\omega_i$ is the $i$th weight parameter and $\varphi(\cdot)$ is the kernel function. In this work, the Gaussian kernel is adopted as the basis function:
$$\varphi(x) = \exp\left( -\frac{x^{2}}{\sigma^{2}} \right) \quad (8)$$
where $\sigma$ is the shape parameter. The weight parameters can be calculated as follows:
$$\boldsymbol{\omega} = \mathbf{\Phi}^{-1} \mathbf{f} \quad (9)$$
where $\mathbf{\Phi}$ is the $n \times n$ matrix with entries $\Phi_{ij} = \varphi(\lVert \mathbf{x}_i - \mathbf{x}_j \rVert)$ and $\mathbf{f}$ is the vector of objective function values at the training points.
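Eqs. (7)-(9) amount to solving one linear system. The sketch below, on an illustrative one-dimensional data set, builds the Gaussian kernel matrix, solves for the weights and evaluates the interpolant; by construction the fitted model reproduces the training values exactly.

```python
import numpy as np

def rbf_fit(X, f, sigma):
    """Solve Eq. (9): weights w = Phi^-1 f with the Gaussian kernel of Eq. (8)."""
    X = np.asarray(X, dtype=float)
    # pairwise squared distances between training points
    r2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-r2 / sigma ** 2)
    return np.linalg.solve(Phi, np.asarray(f, dtype=float))

def rbf_predict(X, w, sigma, x):
    """Evaluate the interpolant of Eq. (7) at a new point x."""
    X = np.asarray(X, dtype=float)
    r2 = np.sum((X - np.asarray(x, dtype=float)) ** 2, axis=1)
    return float(np.exp(-r2 / sigma ** 2) @ w)

# The interpolant passes through the training data
X = [[0.0], [1.0], [2.0]]
f = [1.0, 3.0, 2.0]
w = rbf_fit(X, f, sigma=1.0)
```

For larger or nearly duplicated sample sets the kernel matrix can become ill-conditioned, which is one practical reason the shape factor σ must be chosen carefully (section 4.4).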
3.3 Polynomial regression surface
PRS can approximate the objective function $y = f(\mathbf{x})$ in the following form (second order):
$$\hat{f}(\mathbf{x}) = \beta_0 + \sum_{i=1}^{d} \beta_i x_i + \sum_{i=1}^{d} \sum_{j=i}^{d} \beta_{ij} x_i x_j \quad (10)$$
where $\boldsymbol{\beta} = [\beta_0, \beta_1, \ldots, \beta_d, \beta_{11}, \ldots, \beta_{dd}]^{T}$ is the coefficient vector, which can be calculated by least squares:
$$\boldsymbol{\beta} = (\mathbf{X}^{T} \mathbf{X})^{-1} \mathbf{X}^{T} \mathbf{y} \quad (11)$$
where $\mathbf{X}$ is the design matrix of the training sample points and $\mathbf{y}$ is the vector of observed objective values.
3.4 Differential evolution
DE, a population-based heuristic algorithm, has been widely used in many complex optimization problems. The population is denoted as $P = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{NP}]$ and each individual is a vector with dimension $d$, e.g., the $i$th individual is $\mathbf{x}_i = (x_i^1, x_i^2, \ldots, x_i^d)$. Two commonly used mutation strategies are:
DE/best/1
$$\mathbf{v}_i = \mathbf{x}_{best} + Mu \cdot (\mathbf{x}_{i_1} - \mathbf{x}_{i_2}) \quad (12)$$
DE/current-to-best/1
$$\mathbf{v}_i = \mathbf{x}_i + Mu \cdot (\mathbf{x}_{best} - \mathbf{x}_i) + Mu \cdot (\mathbf{x}_{i_1} - \mathbf{x}_{i_2}) \quad (13)$$
where $\mathbf{v}_i$ is the $i$th donor vector, $\mathbf{x}_i$ denotes the $i$th individual, $\mathbf{x}_{best}$ is the best individual of the current population, $Mu$ is the mutation operator, and $i_1, i_2 \in [1, NP]$ are randomly generated and mutually different integers. In the crossover step, the trial vector is generated as:
$$u_i^j = \begin{cases} v_i^j, & \text{if } rand \le CR \text{ or } j = j_{rand} \\ x_i^j, & \text{otherwise} \end{cases} \quad (14)$$
where $u_i^j$ denotes the $j$th dimension of the $i$th trial vector, $CR$ denotes the crossover operator, $rand$ is a random number between 0 and 1, and $j_{rand}$ is a random integer from 1 to $d$.
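The mutation of Eq. (12) and the binomial crossover of Eq. (14) can be sketched as follows; minimization is assumed when picking the best individual, and the population and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_best_1(pop, fitness, Mu):
    """DE/best/1 mutation per Eq. (12): perturb the best individual with a
    scaled difference of two distinct, randomly chosen individuals."""
    NP = len(pop)
    best = pop[int(np.argmin(fitness))]          # minimization convention
    donors = np.empty_like(pop)
    for i in range(NP):
        i1, i2 = rng.choice(NP, size=2, replace=False)
        donors[i] = best + Mu * (pop[i1] - pop[i2])
    return donors

def binomial_crossover(pop, donors, CR):
    """Binomial crossover per Eq. (14): take each dimension from the donor
    with probability CR, forcing at least one donor dimension (j_rand)."""
    NP, d = pop.shape
    trials = pop.copy()
    for i in range(NP):
        j_rand = rng.integers(d)
        mask = rng.random(d) <= CR
        mask[j_rand] = True                      # guarantee one donor gene
        trials[i, mask] = donors[i, mask]
    return trials

pop = rng.random((10, 4))
fit = pop.sum(axis=1)
donors = de_best_1(pop, fit, Mu=0.5)
trials = binomial_crossover(pop, donors, CR=0.9)
```

For the well-placement variables, which are integers, the resulting trial vectors would additionally be rounded and clipped to the grid bounds before evaluation.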
4 Methodology
In this study, a novel data-driven evolutionary algorithm called GDDE is proposed for the well-placement and joint optimization problems.
4.1 General framework of GDDE
The generic diagram of the proposed GDDE is shown in Fig. 3, and the pseudo-code is given in Algorithm 1. Firstly, Latin hypercube sampling (LHS) is used to generate the initial sample points. Numerical simulation is used to conduct the real function evaluation, and the sample points are added into database D. GDDE consists of two
stages. In the first stage, PNN is adopted as classifier to select informative and
promising candidates, and the most uncertain candidate based on Euclidean distance is
evaluated with simulator. In the second stage, surrogate with RBF is built in a small
promising area and the optimum of the surrogate is evaluated by numerical simulator
to accelerate the convergence. The shape factors of the RBF and PNN models are optimized via solving hyper-parameter sub-optimization problems. The most promising individual can be prescreened by Euclidean distance and PNN to explore the search space, while the optimum of the local surrogate can exploit the most promising area to accelerate the convergence. By combining the two strategies, the proposed algorithm can balance exploration and exploitation and enhance the search efficiency.
Fig. 3 The generic diagram of the proposed GDDE
Algorithm 1: Pseudo-code of GDDE
Input: Population size NP, size of the first class λ, mutation operator Mu and crossover operator CR;
01: Generate initial sample points with LHS from the decision space, conduct function evaluation with numerical simulation, and add the sample points into database D;
02: While the stopping criterion is not met do
03: Select NP best sample points P = {x1, x2, ..., xNP} from database D;
04: Sort the population P into two classes: λ individuals as the first class {xk}1 and NP − λ individuals as the second class {xk}2;
05: Train the PNN as classifier with the sorted population {xk}1 and {xk}2;
06: Obtain the optimal shape factor of PNN based on Algorithm 4 (see section 4.4);
07: Generate NP trial solutions u1, u2, ..., uNP with the DE operators;
08: Prescreen the promising sample points belonging to the first class {uk}1;
09: Prescreen the most uncertain individual uunc from the promising individuals {uk}1;
10: Evaluate uunc with the numerical simulator;
11: Add uunc and its fitness value into database D;
12: Choose the λ best sample points x1, x2, ..., xλ as training sample points from database D;
13: Construct a local RBF model fˆl with the selected sample points and calculate the range of the local space;
14: Obtain the optimal shape factor of RBF based on Algorithm 4 (see section 4.4);
15: Locate the optimum x̂best of fˆl with the DE optimizer in the local space;
16: Evaluate x̂best with the numerical simulator;
17: Add x̂best and its fitness value into database D;
18: End while
19: Output: the best solution in database D.
In the following subsections, the two stages and the shape-factor sub-optimization are introduced.
4.2 Classification-based prescreening strategy
In the first stage, PNN is adopted as the classifier to select informative and promising candidates, and the most uncertain candidate based on Euclidean distance is evaluated with the simulator. The pseudo-code of this stage is shown in Algorithm 2. Firstly, select
NP best sample points P = {x1, x2, ..., xNP} from the database D as the population. To search for promising individuals, the population P is classified into two classes: λ individuals as the first class {xk}1 and NP − λ individuals as the second class {xk}2, and PNN is trained with the sorted population. Then the DE operators are applied to generate NP trial solutions u1, u2, ..., uNP. Afterward, the classifier is used to prescreen the promising sample points belonging to the first class {uk}1, and Euclidean distance is used to prescreen the most uncertain one of them:
$$dis(\mathbf{u}_k, \mathbf{x}) = \lVert \mathbf{u}_k - \mathbf{x} \rVert \quad (15)$$
where $dis(\mathbf{u}_k, \mathbf{x})$ is the Euclidean distance between the trial vector $\mathbf{u}_k$ and the training sample point $\mathbf{x}$. The most uncertain candidate is then selected as:
$$\mathbf{u}_{unc} = \arg\max_{\mathbf{u}_k \in \{\mathbf{u}_k\}_1} \min_{\mathbf{x} \in D} dis(\mathbf{u}_k, \mathbf{x}) \quad (16)$$
where $\mathbf{u}_{unc}$ denotes the most uncertain individual from the promising solutions $\{\mathbf{u}_k\}_1$. Subsequently, evaluate the fitness value of $\mathbf{u}_{unc}$ with the numerical simulator and record it into database D.
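The max-min Euclidean-distance rule used to pick the most uncertain promising candidate can be sketched as follows; `candidates` stands for the promising trial vectors {uk}1 and `database` for the evaluated samples in D (both illustrative here).

```python
import numpy as np

def most_uncertain(candidates, database):
    """Return the index of the promising candidate whose nearest evaluated
    sample is farthest away (max-min Euclidean distance), i.e. the most
    uncertain one in the sense used by the first stage of GDDE."""
    C = np.asarray(candidates, dtype=float)
    D = np.asarray(database, dtype=float)
    # distance from every candidate to every evaluated point
    dists = np.linalg.norm(C[:, None, :] - D[None, :, :], axis=-1)
    nearest = dists.min(axis=1)        # distance to the closest known sample
    return int(np.argmax(nearest))     # most isolated candidate

# The candidate far from all evaluated points is selected
db = [[0.0, 0.0], [1.0, 0.0]]
cands = [[0.1, 0.0], [5.0, 5.0]]
idx = most_uncertain(cands, db)  # -> 1
```

Evaluating the most isolated promising candidate adds the sample where the surrogate and classifier are least informed, which is what gives this stage its exploratory character.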
Algorithm 2: Pseudo-code of the classification-based prescreening stage
Input: Database D, the population size NP, the size of the first class λ, mutation operator Mu and crossover operator CR;
01: Select NP best sample points P = {x1, x2, ..., xNP} from database D;
02: Sort the population P into two classes: λ individuals as the first class {xk}1 and NP − λ individuals as the second class {xk}2;
03: Train the PNN as classifier with the sorted population {xk}1 and {xk}2;
04: Generate NP trial solutions u1, u2, ..., uNP with the DE operators;
05: Prescreen the promising sample points belonging to the first class {uk}1;
06: Prescreen the most uncertain individual uunc from the promising individuals {uk}1;
07: Evaluate uunc with the numerical simulator and add it into database D;
Output: Database D.
4.3 Local surrogate-based acceleration strategy
In the second stage, a local surrogate with RBF is built in a small promising area and the optimum of the surrogate is evaluated by the numerical simulator to accelerate the convergence. The shape factor of the RBF model is optimized via solving a hyper-parameter sub-optimization problem. The pseudo-code of this stage is provided in Algorithm 3. Firstly, the λ best sample points x1, x2, ..., xλ are selected as training sample points from database D. Then a local RBF model is constructed to approximate the landscape of the small promising area. The range of the local space is determined by the selected training points:
$$lb_l^i = \min\{x_1^i, x_2^i, \ldots, x_\lambda^i\} \quad (17)$$
$$ub_l^i = \max\{x_1^i, x_2^i, \ldots, x_\lambda^i\} \quad (18)$$
where $lb_l^i$ and $ub_l^i$ are the lower and upper boundaries of the $i$th variable of the local space, respectively, and $x_\lambda^i$ is the $i$th variable of the $\lambda$th best solution. After determining the local range,
the surrogate fˆl (x) is built in the local space. To accelerate the convergence during
the optimization process, the optimum x̂best of the local surrogate is found by the DE optimizer. Then the fitness value of the pseudo-optimal solution x̂best is evaluated with the numerical simulator and recorded into database D. It is worth noting that the promising local space will gradually shrink to the area near the optimum as the algorithm converges.
Algorithm 3: Pseudo-code of the local surrogate-based acceleration stage
Input: Database D and the number of training sample points λ;
01: Choose the λ best sample points x1, x2, ..., xλ as training sample points from database D;
02: Construct a local RBF model fˆl with the selected sample points, calculate the range of the local space, and obtain the optimal shape factor of RBF based on Algorithm 4 (see section 4.4);
03: Locate the optimum x̂best of fˆl with the DE optimizer in the local space;
04: Evaluate x̂best with the numerical simulator and add it into database D;
Output: Database D.
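The local search range (coordinate-wise bounds of the best samples) can be sketched as follows; the exact selection size and bound formula are a reconstruction of the text around Algorithm 3, and the data are illustrative.

```python
import numpy as np

def local_bounds(database_X, database_f, n_best):
    """Sketch of the local-space construction: take the n_best samples with
    the lowest objective value (minimization) and bound each variable by
    their coordinate-wise min/max. As the best samples cluster together,
    this local space shrinks around the optimum."""
    X = np.asarray(database_X, dtype=float)
    f = np.asarray(database_f, dtype=float)
    best = X[np.argsort(f)[:n_best]]       # n_best lowest-objective points
    return best.min(axis=0), best.max(axis=0)

X = [[0.0, 0.0], [0.2, 0.1], [0.9, 0.8], [0.1, 0.3]]
f = [1.0, 0.5, 5.0, 0.8]
lb, ub = local_bounds(X, f, n_best=3)
```

The surrogate optimum found by DE inside `[lb, ub]` is then evaluated with the simulator, which is the exploitation half of GDDE.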
4.4 Shape factor optimization
For a predefined database D = {(xi, f(xi)), i = 1, 2, ..., N}, the approximation accuracy of RBF and the classification accuracy of PNN are impacted by the shape factor of the kernel functions. Fig. 4 indicates the impact of different shape factor values on the prediction, which motivates searching for an optimal shape factor value. The pseudo-code of this procedure is shown in Algorithm 4. The validation error over the database is defined as:
$$e(\sigma) = \sum_{i=1}^{N} e_i = \sum_{i=1}^{N} \left( f(\mathbf{x}_i) - \hat{f}(\mathbf{x}_i) \right)^{2} \quad (19)$$
where $e_i$ is the prediction error at the $i$th sample point. Therefore, the sub-optimization problem can be defined as:
$$\min_{\sigma} \ e(\sigma) \quad (20)$$
$$\text{s.t.} \quad \sigma > 0 \quad (21)$$
This sub-optimization problem is itself computationally expensive, since each function evaluation needs to train the model and calculate the validation error. Hence, a one-dimensional second-order PRS is adopted as the surrogate of the sub-problem: m training shape factors are sampled and the corresponding errors {e(σ1), e(σ2), ..., e(σm)} are calculated according to Eq. (19). Subsequently, the coefficient vector β = [β0, β1, β11]T of the PRS can be calculated by Eq. (11). Thus, the optimum of the quadratic surrogate is −β1/(2β11), and the optimal shape factor is chosen as:
$$\sigma^{*} = -\frac{\beta_1}{2 \beta_{11}} \quad (22)$$
Algorithm 4: Pseudo-code of the shape factor optimization
Input: Number of training shape factors m, upper bound σmax and lower bound σmin;
01: Select m training shape factors {σ1, σ2, ..., σm} uniformly from [σmin, σmax];
02: Calculate the corresponding validation errors {e(σ1), e(σ2), ..., e(σm)} according to Eq. (19);
03: Calculate the coefficient vector β of the PRS by Eq. (11);
04: Calculate the optimum of the quadratic surrogate σ* = −β1/(2β11);
05: Choose the optimal shape factor min{max{σ*, σmin}, σmax};
Output: the optimal shape factor.
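Algorithm 4 reduces to a one-dimensional quadratic fit. The sketch below samples shape factors, fits e(σ) with a second-order PRS via least squares (cf. Eq. (11)) and returns the clipped stationary point; the sampled values are illustrative.

```python
import numpy as np

def optimal_shape_factor(sigmas, errors, s_min, s_max):
    """Fit a 1-D quadratic PRS e(s) ~ b0 + b1*s + b11*s^2 by least squares
    and return its stationary point -b1/(2*b11), clipped to [s_min, s_max],
    following Algorithm 4."""
    s = np.asarray(sigmas, dtype=float)
    e = np.asarray(errors, dtype=float)
    X = np.column_stack([np.ones_like(s), s, s ** 2])  # design matrix
    b0, b1, b11 = np.linalg.lstsq(X, e, rcond=None)[0]
    s_star = -b1 / (2.0 * b11)
    return float(min(max(s_star, s_min), s_max))

# Errors sampled from a quadratic with a minimum at sigma = 2
sig = [1.0, 2.0, 3.0, 4.0]
err = [(x - 2.0) ** 2 + 0.5 for x in sig]
best = optimal_shape_factor(sig, err, 0.1, 10.0)  # -> 2.0
```

If the fitted β11 is negative (the quadratic opens downward), the stationary point is a maximum, so in practice one would fall back to the sampled σ with the lowest error; that guard is omitted here for brevity.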
5 Case study
In this section, the proposed GDDE is tested on a well-placement optimization problem of a 2D reservoir model and a joint optimization problem of the Egg model. The proposed algorithm is also compared with commonly used differential evolution (DE), GDDE with only the classification-based prescreening strategy (namely Classifier) and GDDE with only the local search strategy (namely Local search).
5.1 Case 1: Well-placement optimization (2D reservoir model)
The first case is a 2D heterogeneous reservoir model for well-placement optimization. The model contains 100×100×1 grid blocks with the size of each grid block 100×100×20 ft³. Fig. 5 presents the permeability
distribution of the 2D reservoir model. The reservoir contains several distinct high-
permeability channels. There are totally 10 wells, with 5 water-injection wells and 5
production wells. Since the reservoir has no aquifer or gas cap, primary oil production
is negligible. The initial reservoir pressure is 6000 psi. The porosity of the reservoir is
0.2. The initial water saturation is 0.2. The compressibility of the reservoir is
6.9 105 psi1 . The viscosity of oil is 2.2 cp. The control scheme for each injection and
production well is fixed. The injection rate for each injection well is set to 1000
STB/day, while the bottom-hole-pressure for each production well is set to 3000 psi.
The lifetime of the project is 7200 days, and each time step is 360 days, which means
there are totally 20 time steps. The main goal is to determine the locations of 5 water-
injection wells and 5 production wells to further maximize the profits in the lifetime of
the reservoir project. Therefore, there are totally 20 variables for the well-placement
optimization problem. The oil price is set to 80 USD/STB, the cost of water injection
is 5 USD/STB, and the cost of water treatment is 5 USD/STB. The discount rate is set
to 0%.
Fig. 5 The permeability distribution of the 2D reservoir model (mDarcy)
Fig. 6 NPV vs. number of simulation runs by DE, Classifier, Local search and GDDE
for case 1
To build the surrogate model accurately, the number of initial sample points provided by LHS is set to 100 for case 1. Since the heuristic algorithms are stochastic, 5 independent runs are performed to show the statistical performance. The optimization results obtained by DE, Classifier, Local search and GDDE for case 1 are shown in Fig. 6. Local search and GDDE converge fast in the early stage of optimization, while DE and Classifier converge more slowly. Fig. 7 shows the distributions of the convergence curves for case 1 with 5 independent runs. Classifier tends to consume more real function evaluations on exploring uncertain areas. After combining the classification-based prescreening strategy and the local search strategy, GDDE converges efficiently and effectively. Fig. 8 presents the optimal well-
placement provided by DE, Classifier, Local search and GDDE and corresponding oil
saturation fields after 720, 3600 and 7200 days. As shown in Fig. 8, 3 production wells are placed in the middle high-permeability channel and no injection well is placed in that channel, which can reduce the water production and increase the oil production. The well locations provided by GDDE can achieve more oil production and a higher NPV. It is worth noting that some well locations are quite close, or even overlap (Fig. 8). Such a scheme of well locations may not be practical, and a minimum well-spacing constraint can be added if the developer wishes to avoid such layouts.
Fig. 7 The distributions of convergence curves for case 1 with 5 independent runs
Fig. 8 The optimal well-placement by DE, Classifier, Local search and GDDE and
corresponding oil saturation fields after 720, 3600 and 7200 days
5.2 Case 2: Joint optimization of well-placement and control scheme (Egg model)
Egg model is chosen as the 3D reservoir model for well-placement and production joint
optimization problem. Egg model has been extensively used for well-placement and
production optimization [7, 37, 38]. The permeability distribution of Egg model is
shown in Fig. 9. The model has 60×60×7 grid blocks, with the size of each grid block 30×30×10 m³, of which 18,553 are active. The detailed description of the Egg model can be found in Jansen et al. [59]. For this case, the life cycle of the project is 3600 days,
and each time step is 360 days, for a total of 10 time steps. The main goal is to determine
the locations of 8 water injection wells and 4 production wells and the control scheme
of 12 wells on each time step to maximize the profits in the lifetime of the Egg model.
Therefore, there are totally (8 + 4) × 2 + (8 + 4) × 10 = 144 variables for the well-placement
and production joint optimization problem. The lower and upper bounds of the injection rate of each injection well are 0 m³/day and 80 m³/day, respectively, and those of the production rate of each production well are 0 m³/day and 120 m³/day, respectively. The oil price, the cost of
water injection and treatment, and the discount rate are the same as case 1.
Fig. 9 The permeability distribution of the Egg model (md)
In case 2, the outline of the reservoir is not regular. When new well locations of
candidate solutions generated by heuristic operators are not at active grid blocks, the
well locations will be moved to the nearest active grid block. The number of initial sample points provided by LHS is set to 200 for this case. 5 independent runs are performed to show the statistical performance of the algorithms. The optimization results provided by DE,
Classifier, Local search and GDDE for case 2 are shown in Fig. 10. In case 2, Local
search and GDDE also converge fast in comparison with Classifier and DE. Fig. 11 shows the distributions of convergence curves for case 2 with 5 independent runs. In this high-dimensional case, Local search and GDDE are more efficient. The performance of Local search is better than Classifier, because Classifier consumes most function evaluations on exploring uncertain areas with sparse sample points, so the uncertainty distribution of the optimization results for Classifier and Local search is high (Fig. 11). After combining the classification-based prescreening strategy and the local search strategy, the proposed GDDE converges efficiently, and the final result after 1000 simulation runs is promising.
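The nearest-active-cell repair rule described above can be sketched with a boolean mask of active grid blocks; the mask and proposed location below are illustrative, not the Egg geometry.

```python
import numpy as np

def snap_to_active(i, j, active):
    """Move a proposed well location (i, j) onto the nearest active grid
    block, as done when heuristic operators place a well outside the
    irregular reservoir outline. `active` is a boolean mask of the grid."""
    if active[i, j]:
        return i, j
    ii, jj = np.nonzero(active)                     # coordinates of active cells
    k = np.argmin((ii - i) ** 2 + (jj - j) ** 2)    # squared Euclidean distance
    return int(ii[k]), int(jj[k])

# 3x3 grid where only the left column is active
mask = np.zeros((3, 3), dtype=bool)
mask[:, 0] = True
loc = snap_to_active(1, 2, mask)  # -> (1, 0)
```

This repair keeps every candidate feasible without rejecting it, so the evolutionary operators never waste a simulation run on an inactive cell.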
Fig. 10 NPV vs. number of simulation runs by DE, Classifier, Local search and GDDE for case 2
Fig. 11 The distributions of convergence curves for case 2 with 5 independent runs
Fig. 12 presents the optimal well control schemes by DE, Classifier, Local search and
GDDE for Egg model. Fig. 13 indicates the simulation results of optimal control
calculated by DE, Classifier, Local search and GDDE. As illustrated in Fig. 13, the
developing plan of Local search can provide more oil in comparison with other methods.
However, Local search also leads to more cumulative water injection and treatment, which increases the cost of reservoir development and lowers the NPV. The extra injected water results in more water output from the production wells, but no increase in cumulative oil production. The developing plan obtained by GDDE cannot significantly increase oil
production compared to Local search, but can substantially reduce the water injection
and water production, thereby saving development costs and improving the NPV of the project.
(a) Cumulative oil production (b) Cumulative water injection (c) Cumulative water production
Fig. 13 Simulation results of optimal control calculated by DE, Classifier, Local search
and GDDE
6 Conclusion
In this paper, a data-driven evolutionary algorithm named GDDE is proposed for well-placement and control optimization problems. GDDE adopts PNN as the classifier to select informative and promising candidates, and the most uncertain candidate selected by Euclidean distance is evaluated with numerical simulations. Moreover, a local surrogate is adopted to accelerate the convergence. To obtain a high-quality approximation model and classification model, the shape parameters of the kernel functions are optimized via sub-optimization problems assisted with PRS. It is worth noting that the proposed optimization framework can incorporate other evolutionary algorithms as the optimizer, such as PSO, GA, and artificial bee colony.
The proposed algorithm is tested on a 2D well-placement optimization problem and the Egg model joint optimization problem in comparison with other algorithms. The convergence speed and final results of the surrogate-assisted methods are significantly better than DE. After combining the sub-optimization-assisted prescreening strategy and the local search strategy, the proposed GDDE converges efficiently and effectively, and the final optimization results on the two cases are also promising. Therefore, the computational cost of the optimization process can be saved significantly, and the NPV of the project throughout the life cycle can be maximized. In the future, the focus will be on optimizing the number and type of wells.
Nomenclature
CR = crossover operator
d = number of variables in vector
D = Database
e = validation error
Mu = mutation operator
ni = number of injection wells
np = number of production wells
nt = number of time steps
Qi ,t = injection-flow-rate, STB/day
u = trial vector
v = donor vector
β = coefficient vector
σ = shape factor
ω = weight parameters
φ = basis function
Superscripts
l = lower boundary
u = upper boundary
Subscripts
g = global surrogate
i = offspring index
l = local surrogate
Acknowledgement
This study was supported by RAE Improvement Fund of the Faculty of Science, The
University of Hong Kong, the grants from the Research Grants Council of Hong Kong
Special Administrative Region, China, (Project No. 17303519 and 17307620). The
code of the algorithm is available from Guodong Chen upon request by email.
References
[1] Bangerth W, Klie H, Wheeler MF, Stoffa PL, Sen MK. On optimization
Geosciences 2006;10(3):303-19.
[6] Wang H, Ciaurri DE, Durlofsky LJ, Cominelli A. Optimal well placement under
2012;17(01):112-21.
[7] Chen G, Zhang K, Xue X, Zhang L, Yao C, Wang J, et al. A radial basis function
[9] Zandvliet MJ, Handels M, Essen GMv, Brouwer DR, Jansen JD. Adjoint-Based
2008;13(04):392-9.
[10] Wang Y, Alpak F, Gao G, Chen C, Vink J, Wells T, et al. An Efficient Bi-
2022;27(01):364-80.
2002.
Order, and Controls Given a Set of Potential Drilling Paths. SPE Journal
2020;25(3).
Geosciences 2010;14(1):183-98.
[21] Chen G, Zhang K, Zhang L, Xue X, Yang Y. Global and Local Surrogate-Model-
[24] Beckner BL, Song X. Field Development Planning Using Simulated Annealing
2020;264:116758.
2013;18(06):1003-11.
[30] Jansen JD, Durlofsky LJ. Use of reduced-order models in well control
[31] Yousef AA, Gentil PH, Jensen JL, Lake LW. A Capacitance Model To Infer
[32] Zhao H, Kang Z, Zhang X, Sun H, Cao L, Reynolds AC. A Physics-Based Data-
Driven Numerical Model for Reservoir History Matching and Prediction With
[33] Xue X, Zhang K, Tan KC, Feng L, Wang J, Chen G, et al. Affine
[34] Tang H, Durlofsky LJ. Use of low-fidelity models with machine-learning error
2021.
[35] Islam J, Nazir A, Hossain MM, Alhitmi HK, Kabir MA, Jallad A-H. A Surrogate
2020;192:107192.
Engineering 2020;185:106633.
[40] Ahmadi M-A, Bahadori A. A LSSVM approach for determining well placement
[41] Horowitz B, Afonso S, Mendona CVP. Using Control Cycle Switching Times
[46] Wang X, Wang GG, Song B, Wang P, Wang Y. A novel evolutionary sampling
[48] Li JY, Zhan ZH, Wang C, Jin H, Zhang J. Boosting Data-Driven Evolutionary
Computation 2020;PP(99):1-.
[49] Wei FF, Chen WN, Yang Q, Deng J, Zhang J. A Classifier-Assisted Level-Based
[52] Han X, Zhong L, Wang X, Liu Y, Wang H. Well Placement and Control
[53] Islam J, Vasant PM, Negash BM, Laruccia MB, Myint M, Watada J. A holistic
[55] Wang X, Haynes RD, Feng Q. A multilevel coordinate search algorithm for well
2016;95:75-96.
[56] Bellout MC, Echeverría Ciaurri D, Durlofsky LJ, Foss B, Kleppe J. Joint
2012;16(4):1061-79.
Network.
[58] Rippa S. An algorithm for selecting a good value for the parameter c in radial
1999;11(2):193-210.
[59] Jansen J-D, Fonseca R-M, Kahrobaei S, Siraj MM, Van Essen GM, Van den Hof
PMJ. The egg model–a geological ensemble for reservoir simulation.