
Hybrid-DNNs: Hybrid Deep Neural Networks for Mixed Inputs


Zhenyu Yuan¹*, Yuxin Jiang², Jingjing Li³, Handong Huang¹
¹ China University of Petroleum-Beijing, Beijing, China.
² PST Service Corporation, Beijing, China.
³ Beijing Power Concord Technology Co. Ltd., Beijing, China.

Abstract
The rapid development of big data and high-performance computing has encouraged explosive studies of deep learning in geoscience. However, most studies take only single-type data as input, frittering away invaluable multi-source, multi-scale information. We develop a general architecture of hybrid deep neural networks (HDNNs) to support mixed inputs. Regarded as a combination of feature learning and target learning, the newly proposed networks provide great capacity in high-hierarchy feature extraction and in-depth data mining. Furthermore, the hybrid architecture is an aggregation of multiple networks, demonstrating good flexibility and wide applicability. The configuration of the multiple networks depends on the application task and varies with the inputs and targets. For reservoir production prediction, a specific HDNN model is configured. Considering their contributions to hydrocarbon production, core photos, logging images and curves, and geologic and engineering parameters can all be taken as inputs. After preprocessing, the mixed inputs are prepared as regularly sampled structural and numerical data. For feature learning, a convolutional neural network (CNN) and a multilayer perceptron (MLP) network are configured to separately process the structural and numerical inputs. The learned features are then concatenated and fed to subsequent networks for target learning. Comparison with typical MLP and CNN models highlights the superiority of the proposed HDNN model, with high accuracy and good generalization.

Keywords
Deep Learning; Hybrid Networks; Deep Neural Networks; CNNs; Mixed Inputs; Geoscience

Highlights
• We develop a novel architecture of hybrid deep neural networks (HDNNs) to support mixed inputs.
• Mixed inputs comprise data of various types or formats, providing more aspects of features.
• The combination of feature learning and target learning provides great capacity in high-hierarchy feature extraction and in-depth data mining.
• The aggregation of multiple networks demonstrates good flexibility and wide applicability.
• The application to reservoir production prediction highlights the accuracy and generalization of the proposed HDNNs.

* Corresponding author. zhenyuyuan@outlook.com.


1. Introduction

With the rapid development of big data and high-performance computing, it has become achievable to extract more information and gain new insights from extensive datasets. Techniques from the rapidly evolving field of machine learning play a key role in this effort. Machine learning provides scientists with a set of tools for discovering new patterns, structures, and relationships in scientific datasets that are not easily revealed through conventional techniques (Bergen et al., 2019). Nowadays, machine learning is widely applied in various industries to facilitate tasks such as data analysis, pattern recognition and target prediction. In the oil and gas industry, machine learning techniques have been introduced to help geoscientists and engineers answer persistent questions about how to locate and develop economic hydrocarbon resources. Bergen et al. (2019) reviewed applications of machine learning for data-driven discovery in solid earth geoscience. Focusing on petrophysics, Xu et al. (2019) investigated the capacity and performance of machine learning in dealing with big data.

Efforts to understand the solid earth are challenged by the fact that nearly all of the earth's interior is, and remains, inaccessible to direct observation (Bergen et al., 2019). Instead, knowledge of interior properties and processes is based on measurements taken at or near the surface and discovered by solving inverse problems connecting measurements and targets. Due to the heterogeneity and complexity of sedimentary deposits and the limitations of measurements, the solutions of these inverse problems are often indeterminate (Koltermann and Gorelick, 1996). However, the largest obstruction is not our inability to solve the equations, but knowing what the interior structure of the earth is really like and what parameters should go into those equations (Bergen et al., 2019). Theoretical knowledge is still incomplete, and many tasks are difficult for humans to perform or explain. Commonly used physics-driven methods are often assumption-based and data-restricted, restraining their applicability and generalization. In comparison, machine learning takes advantage of big data and can excavate complex relationships between measurements and observations. Therefore, it is well suited to address these problems.

Over the past decade, the amount of data available to geoscientists has grown enormously, through larger deployments of traditional sensors and through new data sources and sensing modes (Bergen et al., 2019). Xu et al. (2019) summarized the data typically acquired from various sources in the petroleum industry, including core measurements, wellbore measurements, remotely sourced measurements and reservoir performances. These data fall into categories such as geological, geophysical, petrophysical and reservoir engineering. Furthermore, the types of the above data include numerical values, categories, images, text and so on. In this "big data" world, we are often presented with an abundance of features that could be used for machine learning. Some will be more useful than others, and some will be essentially useless noise. Even if the data consist of only a few features, we may find that two or more are highly correlated. In such a situation, it is common practice to drop the correlated features and use only one for modeling purposes. In problems where dozens, hundreds, or even thousands of possible features exist, statistical techniques are usually utilized to decide which features are the most important. Taking a machine learning regression application as an example, Yuan et al. (2018b) performed feature representation under the principles of high contribution, good consistency and strong orthogonality, through single-attribute and multi-attribute analyses. Instead of explicit statistical techniques, appropriate architectures of deep neural networks (DNNs) can effectively extract features and their corresponding weights through representation learning (Bengio et al., 2013). As a subfield of machine learning, deep learning, usually by means of DNNs, uses multiple layers to progressively extract higher-level features from raw inputs (Deng and Yu, 2014).

During the past few years, deep learning has been introduced in various aspects of geoscience applications. Here
we highlight some of them in exploration geophysics and petrophysics. For exploration geophysics, deep learning
techniques have been introduced into seismic processing for first-break picking (Duan et al., 2018; Hu et al., 2019;
Yuan et al., 2018a), data regularization (Lu et al., 2019a; Wang et al., 2019b), denoising and image enhancement
(Dong et al., 2019; Dutta et al., 2019; Halpert, 2018; Siahkoohi et al., 2019; Sun and Demanet, 2018; Sun et al.,
2019; Zhang et al., 2019a; Zhang et al., 2019b), velocity modeling (Li et al., 2018; Park and Sacchi, 2019; Wang
and Ma, 2019; Wu and Lin, 2019) and imaging (Herrmann et al., 2019). In addition, there have been seismic
interpretation studies, including fault detection (Huang et al., 2017; Wu et al., 2019a; Wu et al., 2019b; Wu et al.,
2019c; Xiong et al., 2018; Yuan et al., 2019), seismic facies segmentation (Duan et al., 2019; Krasnov et al., 2018;
Mukhopadhyay and Mallick, 2019; Pham et al., 2019; Titos et al., 2019; Waldeland et al., 2018; Zhao, 2018; Zhao
et al., 2016), automatic horizon picking (Di et al., 2019; Yang and Sun, 2019) as well as seismic inversion (Biswas
et al., 2019; She et al., 2019; Wang et al., 2019a). In the petrophysics discipline, there have been deep learning applications for permeability prediction (Zhong et al., 2019), reservoir thickness estimation (Lu et al., 2019b) and lithology facies recognition (Jaikla et al., 2019; Zhang et al., 2018). The above studies have made considerable progress in certain tasks, but they commonly consider only a single type of data as input. In practice, multiple types of data may contribute to the target performance. Therefore, it is preferable to take more types of measurements into consideration for deep learning. Accordingly, more advanced network architectures are required.

It is well accepted that CNNs play an important role in learning excellent features for image processing tasks. Traditionally, however, they only allow connections between adjacent layers, limiting the integration of multi-scale or mixed-type information. Li et al. (2017) presented a framework concatenating multi-scale features via shortcut connections to the later fully-connected layer and achieved better results than traditional CNNs on image classification and recognition. Considering that target features can be submerged in complex background loads, Wu and Wang (2019) developed a concatenate convolutional neural network to separate the feature of the target load from the load mixed with the background. In this paper, we develop a hybrid architecture for constructing DNNs that handle mixed inputs. In the following sections, we first introduce the general architecture of the proposed hybrid deep neural networks (HDNNs) and some relevant theory. Then a specific network for reservoir production prediction is presented and its application to a practical survey is demonstrated. Finally, we draw some conclusions and give some suggestions.

2. Architecture of Hybrid Deep Neural Networks

2.1 General architecture

In general, things are interconnected, and an output is determined by more than a single factor. Taking multiple factors into consideration, a general architecture of HDNNs is proposed to evaluate the influence of each factor on the target. The general architecture is shown in Figure 1; it can be treated as a combination of two parts, namely feature learning and target learning.


Figure 1 General architecture of hybrid deep neural networks, where f and g denote nonlinear functions defined by certain neural networks, θ and φ indicate the corresponding weights learned by deep learning, and the numbers 1 to N index the input data.

Multiple inputs from different measurements are usually of different types or formats, so it is not feasible or convenient to feed all of these data into a single network. The proposed HDNNs handle each input separately with an appropriate network to perform feature learning. All learned features are then concatenated into an ensemble feature. This ensemble feature contains valid information from the different inputs and is fed to a subsequent neural network to perform target learning.
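To make the data flow concrete, the sketch below wires this architecture together with the Keras functional API (an illustrative framework choice, not the authors' code; the branch types, input shapes and layer widths are hypothetical placeholders).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Input 1: a 1D sequence, handled by a 1D CNN feature-learning branch f_1.
x1 = layers.Input(shape=(128, 1), name="sequence_input")
t1 = layers.Conv1D(16, 5, activation="relu")(x1)
t1 = layers.GlobalAveragePooling1D()(t1)      # learned feature T_1

# Input 2: a vector of numerical values, handled by an MLP branch f_2.
x2 = layers.Input(shape=(8,), name="numeric_input")
t2 = layers.Dense(16, activation="relu")(x2)  # learned feature T_2

# Ensemble feature Z = concat(T_1, T_2), then the target-learning network g.
z = layers.concatenate([t1, t2])
y = layers.Dense(16, activation="relu")(z)
y = layers.Dense(1, name="target")(y)         # target Y (regression head)

model = Model(inputs=[x1, x2], outputs=y)
```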

As a general framework for mixed-input tasks, the inputs and neural networks are not restricted. For different applications, the inputs can be video, image, audio, text, numerical values or categorical tags. The dimension of the input data varies from a single point, through 1D, 2D and 3D, to even higher. Moreover, the sampling index of each input can be either continuous or discrete. To handle the various inputs, the corresponding neural networks can be multilayer perceptrons (MLPs), CNNs, recurrent neural networks (RNNs) or their many variants. Furthermore, the so-called neural networks can be traditional machine learning algorithms as well, such as support vector machines, random forests and others.

2.2 Network representation

Considering N types of data as inputs, the mapping from inputs to learned features is expressed as

T_i = f_{\theta_i}(X_i) ,    (1)

where Xi and Ti indicate a certain type of input and its learned feature, the corresponding network algorithm is denoted fθi, θi indicates its model weights, and i varies from 1 to N. It should be noted that the algorithm fθi varies according to the type or format of Xi and Ti. For instance, if the input data is a 1D series, the corresponding algorithm could be a 1D CNN or an RNN, while if the input data is an image or video, the corresponding algorithm should preferably be a 2D or 3D CNN.

Concatenating all learned features, an ensemble feature Z is achieved:

Z = \mathrm{concat}(T_1, T_2, \ldots, T_N) ,    (2)

where "concat" indicates the concatenate operation. The concatenate operation could be channel-wise or feature-wise, depending on the formats of the inputs. As an illustration, if there are inputs of point dimension, the learned features should preferably be concatenated feature-wise.
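A small illustration of the two options (array shapes are arbitrary assumptions): channel-wise concatenation stacks feature maps that share their other dimensions, while feature-wise concatenation flattens each feature to a vector first.

```python
import tensorflow as tf

# Channel-wise: two (batch, length, channels) maps from parallel branches.
a = tf.random.normal((4, 64, 8))
b = tf.random.normal((4, 64, 16))
z_channel = tf.concat([a, b], axis=-1)  # shape (4, 64, 24)

# Feature-wise: flatten to (batch, features) first, the safe choice when
# some inputs are plain points/vectors.
c = tf.random.normal((4, 5))            # point-dimension feature
z_feature = tf.concat([tf.reshape(z_channel, (4, -1)), c], axis=1)
```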

After the concatenate operation, the ensemble feature is taken as the input to the target learning model gφ. Since the target label can be either a continuous value or a categorical class, the target learning model applies to both regression and classification applications. Denoting the target label as Y, it is derived from

Y = g_{\varphi}(Z) .    (3)

Based on the above derivation, the proposed HDNNs realize an end-to-end deep learning model, expressed as

Y = F(X_1, X_2, \ldots, X_N) ,    (4)

where F is an integrated function representing the nonlinear mapping from the multiple inputs to the target label. It is a composition of fθi, the concatenate operation and gφ. The model weights θ and φ are determined by solving an optimization problem through deep learning.

2.3 Optimization expression

A loss function is requisite to perform the optimization and varies according to the category of the task. For regression applications, the mean squared error (MSE) loss function is commonly adopted, expressed as

\mathrm{MSE} = \frac{1}{M} \sum_{j=1}^{M} \left( F(X_1^j, X_2^j, \ldots, X_N^j) - Y_j \right)^2 ,    (5)

where j indicates the instance index, varying from 1 to M.

For classification applications, the cross-entropy measure is taken as an example loss function,

\mathrm{CE} = -\sum_{j=1}^{M} \sum_{k=1}^{C} Y_{jk} \log(p_{jk}) ,    (6)

where k indicates the class index, varying from 1 to C, Yjk indicates the binary indicator of class k for instance j, and pjk indicates the predicted probability of class k for instance j. The calculation of the probability is subject to the selection of F.
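Both losses transcribe directly into NumPy; the sketch below restates Eqs. (5) and (6), with a small clipping constant added for numerical stability (an implementation detail, not part of the formulas).

```python
import numpy as np

def mse_loss(preds, targets):
    # Eq. (5): mean squared error over the M instances.
    return np.mean((preds - targets) ** 2)

def cross_entropy_loss(probs, onehot_labels, eps=1e-12):
    # Eq. (6): -sum_j sum_k Y_jk * log(p_jk); probabilities clipped to avoid log(0).
    return -np.sum(onehot_labels * np.log(np.clip(probs, eps, 1.0)))
```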

3. Application on Reservoir Production Prediction

In the oil and gas industry, there are multiple sources of measurements, such as core measurements, wellbore measurements, remotely sourced measurements and reservoir performances, contributing to the discovery and evaluation of hydrocarbon resources. To evaluate the potential of underground reservoirs, hydrocarbon production is often regarded as one essential property: a reservoir is economically exploitable only if its productivity is high enough to cover the exploitation cost.

3.1 Related works on production prediction

To predict hydrocarbon production, much research has been done considering not only the storage and permeability of reservoir formations (Cheng et al., 1999; Hogg et al., 1996; Liu et al., 2000), but also engineering factors such as hydraulic fracturing (Chen et al., 2019; Huang et al., 2015). However, physics-based methods often apply only to certain types of reservoirs and require in-depth geological understanding to achieve better predictions. Taking advantage of machine learning, Pan et al. (2015) and Hu et al. (2018) introduced neural networks into production prediction. It should be noted that production is a criterion of the reservoir formation as a whole. However, the aforementioned methods took average values of log curves as inputs, discarding the structural characteristics of reservoir formations. In contrast, our newly developed HDNN architecture is quite appropriate for production prediction.

3.2 Specific HDNN architecture

Measurements including cores, well logging, well tests and engineering operations may all influence reservoir productivity. The data types of the above measurements include images, sequences, numerical values and categorical tags. To be specific, image data can come from image logging (e.g. the formation microimager, FMI), core photos or scanning electron microscopy. Well logging provides a variety of sequential curves. Some engineering parameters are numerical values, for example the propping agent volume in hydraulic fracturing. Well tests or geologic analyses provide categorical tags such as fluid type, lithology type or facies description. The sampling presentation of the various data also differs: numerical values and categorical tags are discrete, representing the integral effect of reservoir formations, while images and sequences are presented as structural data, reflecting more details within the formations.

Taking all these mixed data as inputs, a specific HDNN architecture (shown in Figure 2) for production prediction is proposed. This architecture is a combination of an MLP and a CNN. Specifically, the MLP is adopted to deal with the numerical and categorical inputs, while the CNN extracts high-hierarchy features from the structural data. The MLP consists of multiple fully connected (FC) layers; the CNN is composed of several convolutional network units. After feature learning for the different formats of data separately, the outputs of the MLP and CNN are concatenated feature-wise to achieve a final evaluation of production.

Figure 2 Architecture of the HDNN for productivity prediction, after Yuan et al. (2020).
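To make Figure 2 concrete, the following is a hedged Keras sketch of such a two-branch model (an illustrative framework choice, not the authors' published code). The layer widths, kernel sizes and input shapes, four numerical attributes and seven log curves resampled to 256 depth samples, are assumptions for illustration.

```python
from tensorflow.keras import layers, Model

# MLP branch for discrete numerical/categorical attributes (assumed: 4 inputs).
num_in = layers.Input(shape=(4,), name="numerical_attrs")
m = layers.Dense(32, activation="relu")(num_in)
m = layers.Dense(32, activation="relu")(m)

# 1D CNN branch for sequential log curves (assumed: 256 samples x 7 curves).
seq_in = layers.Input(shape=(256, 7), name="log_curves")
c = layers.Conv1D(16, 5, activation="relu")(seq_in)
c = layers.MaxPooling1D(2)(c)
c = layers.Conv1D(32, 5, activation="relu")(c)
c = layers.GlobalAveragePooling1D()(c)

# Feature-wise concatenation, then FC layers for the production estimate.
z = layers.concatenate([m, c])
out = layers.Dense(32, activation="relu")(z)
out = layers.Dense(1, name="initial_production")(out)

hdnn = Model(inputs=[num_in, seq_in], outputs=out)
```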


Data preparation and preprocessing are essential for deep-learning production prediction. The inputs to the HDNN are first gathered from various wells. Then preprocessing steps are taken to prepare a well-sampled dataset. For categorical data, one-hot encoding is utilized to perform quantitative transformation. Log curves are extracted according to the depth range of each reservoir formation of each well, as are FMI images and core photos. Since the thicknesses of the various formations generally differ, resampling is required to obtain uniformly sampled data. Furthermore, the numerical and structural data are normalized to avoid the effect of inconsistent scales. Besides the features, the hydrocarbon production of each formation is set as the target label. Finally, the prepared dataset is separated into training data and test data for model training and performance evaluation.
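A minimal sketch of these preprocessing steps under the assumptions of this section; all column names, curve lengths and label values are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

def resample_curve(curve, n=256):
    # Linearly resample a variable-length formation curve to n uniform samples.
    old_grid = np.linspace(0.0, 1.0, len(curve))
    return np.interp(np.linspace(0.0, 1.0, n), old_grid, curve)

def normalize(x):
    # Zero-mean, unit-variance scaling to remove inconsistent ranges.
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# One-hot encode a hypothetical categorical tag.
attrs = pd.DataFrame({"fluid_type": ["oil", "water", "oil"],
                      "thickness": [96.1, 40.2, 15.9]})
attrs = pd.get_dummies(attrs, columns=["fluid_type"])

# Resample curves of different lengths to a common grid, then normalize.
curves = [np.random.rand(m) for m in (180, 240, 90)]
seq = normalize(np.stack([resample_curve(c) for c in curves]))

# Split prepared features and target labels for training and evaluation.
labels = np.array([12.0, 88.0, 30.0])  # dummy production labels, t/d
seq_train, seq_test, y_train, y_test = train_test_split(seq, labels, test_size=0.2)
```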

Through deep learning, the HDNN model for production prediction can be well trained, revealing the complex relationship between the various measurements and the target production. The well-trained model is then applied to the pre-prepared test data to evaluate its generalization performance. Finally, the possible production of unknown wells can be predicted, further guiding well location deployment and development engineering.

3.3 Data preparation and analysis

The proposed HDNN model is applied to an oil development block for production prediction. The block contains 180 development wells. Seven types of log curves, namely caliper, acoustic time, gamma ray, spontaneous potential, shale content, and deep and shallow lateral resistivity, are available for all wells. Initial oil production is set as the target label. Besides the log curves, some formation and engineering attributes related to initial production are also taken into consideration. These attributes include formation thickness, formation median depth, perforation thickness and perforation number. Table 1 shows the available attributes and their corresponding data types, together with their representations for an example formation.

Table 1 Features for deep-learning productivity prediction.

Data Type    Attribute Name                                Example (one formation)
Numerical    Formation Thickness /m                        96.1
Numerical    Formation Depth /m                            2355.95
Numerical    Perforation Thickness /m                      15.9
Numerical    Perforation Number                            5
Sequential   Log curves: CAL, AC, GR, LLD, LLS, SP, VSH    (curve plots, not reproduced here)

Figure 3 shows the relationships of the formation and perforation attributes with oil production. For the log curves, average values within each reservoir formation are computed to broadly inspect their correlation with the target production, as shown in Figure 4. From Figures 3 and 4, we see that these discrete attributes correlate only weakly with the target oil production. That is to say, it is not practicable to predict production from discrete numerical attributes alone. In contrast, further incorporating structural log curves through HDNNs takes the spatial correlation and variation of reservoir formations into account, and thus may enhance production prediction performance.

Figure 3 Crossplots of oil production varying with perforation thickness, perforation number, formation depth and formation thickness.

Figure 4 Crossplots of oil production varying with average values of log curves within a specific reservoir formation. The log curves are caliper, acoustic time, gamma ray, deep lateral resistivity, shallow lateral resistivity, spontaneous potential and shale content, in sequence.

3.4 Model configuration and training

Due to the restriction of available measurements, only discrete numerical values and sequential log curves are provided here for initial oil production prediction. A customized model with only 1D CNN and MLP networks is adopted, taking advantage of both the discrete numerical values and the sequential log curves. Advanced regularization techniques such as batch normalization (Ioffe and Szegedy, 2015) and dropout (Srivastava et al., 2014) are adopted to facilitate deep learning. Batch normalization (BN) draws its strength from performing normalization for each training mini-batch. Dropout is efficient for reducing overfitting by randomly dropping units from the neural network during training. Both techniques are appropriate for improving the speed, performance and stability of deep neural networks. The rectified linear unit (ReLU), which enables better training of deeper networks, is used as the activation function.
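As a sketch, one convolutional unit combining these techniques might look as follows in Keras; the filter count, kernel size and dropout rate are assumed values.

```python
from tensorflow.keras import layers

def conv_unit(x, filters=32, kernel=5, drop=0.3):
    x = layers.Conv1D(filters, kernel, padding="same")(x)
    x = layers.BatchNormalization()(x)  # normalize per training mini-batch
    x = layers.Activation("relu")(x)    # ReLU for better training of deep nets
    return layers.Dropout(drop)(x)      # randomly drop units to curb overfitting
```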

In addition to the HDNN model, an MLP model and a CNN model are adopted for comparison. The MLP and CNN models share the same architectures as the corresponding MLP and CNN modules assembled in the HDNN. For MLP or CNN model training, a single type of data is taken as input, and the corresponding network module is directly connected to the last FC layers to produce the output, without the concatenate operation. For the MLP model, 11 numerical attributes are prepared as inputs, where the sequential log curves are averaged to provide seven additional attributes. During model training, the same optimizer (adaptive moment estimation, Adam (Kingma and Ba, 2014)) and loss function (MSE) are adopted to perform the deep-learning optimization.
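A sketch of this shared training setup, reusing the hypothetical hdnn model from Section 3.2; the data shapes, batch size and epoch count are assumptions.

```python
import numpy as np

# Dummy arrays shaped like the prepared dataset (assumed shapes).
num_train = np.random.rand(100, 4)
seq_train = np.random.rand(100, 256, 7)
y_train = np.random.rand(100, 1)

# Adam optimizer and MSE loss, tracking mean absolute error as in Figure 5.
hdnn.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = hdnn.fit([num_train, seq_train], y_train,
                   validation_split=0.2, epochs=100, batch_size=16)
```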

Figure 5 Training performance of MLP model, CNN model and HDNN model in sequence.

Figure 5 shows the training performance of the MLP, CNN and HDNN models for production prediction. The performance of the MLP model indicates that it runs into overfitting, which may be caused by the low correlation of the features with the target label and the relatively deep network. However, the CNN and HDNN models both show better convergence, with training and validation errors declining. Comparatively, the HDNN model exhibits smoother fluctuations, better validation-error descent and lower mean absolute error. That is to say, by considering comprehensive mixed inputs, the HDNN model performs better than the typical CNN model.

Following model training, the test data are utilized to evaluate the models' performance. As shown in Figure 6, measured and predicted oil productions from the three DNN models are presented in crossplots. Meanwhile, the squared correlation coefficient (r²) is adopted as a quantitative criterion and displayed. In accordance with the analyses of the training performance, the HDNN model is superior to the typical CNN and MLP models, demonstrating the best correlation with the highest r². Rather than taking statistical averages as inputs, both the CNN and HDNN models handle the log curves with convolution operations and present good accuracy and generalization. It can be concluded that structural log curves contribute significantly to hydrocarbon production prediction.
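The r² criterion transcribes directly to NumPy; the names in the usage comment are the hypothetical ones from the earlier sketches.

```python
import numpy as np

def r_squared(measured, predicted):
    # Squared Pearson correlation coefficient between measured and predicted values.
    r = np.corrcoef(measured, predicted)[0, 1]
    return r ** 2

# e.g. r_squared(y_test.ravel(), hdnn.predict([num_test, seq_test]).ravel())
```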

Figure 6 Crossplots of predicted production versus measured production for MLP, CNN and HDNN model.

3.5 Production prediction

The well-trained HDNN model is finally applied to three new wells to evaluate their production potential. The three new wells were deployed and drilled based on the available geologic evaluation, and well logging was also implemented. From the above analyses, we know that structural log curves indicate possible geological capacity, while engineering parameters such as perforation settings affect its realization. Setting the probable target formation and corresponding perforation parameters, the well-trained HDNN model is adopted to predict the possible production. Meanwhile, oil test engineering is performed. The predicted and measured initial oil productions are shown in Table 2. Though the predicted productions are somewhat larger than the measured ones, the predictions basically indicate the potential of each formation of the various wells.

Table 2 Predicted and measured oil production of three new wells.

Well Name   Formation Range /m   Perforation Thickness /m   Perforation Number   Predicted Production (t/d)   Tested Production (t/d)
W1          2056.1-2058.8        2.7                        2                    33                           20
W2          1669.2-1676.5        4                          4                    120                          105
W3          1665.8-1997.5        3.7                        3                    20                           5

4. Conclusions
Taking mixed data from various sources as inputs, we developed a general architecture of HDNNs. The consideration of mixed data takes full advantage of big data and is suitable for discovering more accurate relationships between measurements and targets. In addition, the proposed HDNN model realizes end-to-end learning, avoiding tedious work such as feature extraction and selection. The general HDNNs can be regarded as an aggregation of multiple network modules. The configuration of these modules depends on the target application and varies with the mixed inputs and target labels. This innovative architecture provides HDNNs with good flexibility and wide applicability. The principle that multiple factors contribute to an outcome, together with the availability of diverse measurements, supports the versatility of the proposed HDNNs for diverse deep learning applications. Concentrating on hydrocarbon production prediction, the HDNN model takes images, log curves, geologic analyses and engineering parameters into account, instead of only statistical averages of log curves. The mixed data provide more aspects of features. In particular, structural images and curves contain more details and are fit for exploiting the cumulative effect within reservoir formations. The HDNN model was applied to an oil development block to predict production, with an MLP model and a CNN model as comparisons. The results highlighted the superiority of the HDNN model over the MLP and CNN models, with better optimization convergence and generalization performance. In addition, the performance of the HDNN and CNN models demonstrated that structural logs are especially suitable for productivity evaluation. Further application to three new wells for production prediction validated the feasibility of the HDNN model in practice. It can be used to predict the production potential of target formations or wells, and further guide the engineering operations of oil exploitation. In conclusion, the proposed HDNNs are conducive to incisive data mining, especially for tasks with mixed inputs.

Acknowledgement
The authors acknowledge the support and permission of PST Service Corporation to publish this paper. We also
thank J. Qiu for valuable discussions on production characterization of the adopted oil block.

References
Bengio, Y., Courville, A., Vincent, P., 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 1798-1828.
Bergen, K.J., Johnson, P.A., de Hoop, M.V., Beroza, G.C., 2019. Machine learning for data-driven discovery in
solid Earth geoscience. Science 363, eaau0323.
Biswas, R., Sen, M.K., Das, V., Mukerji, T., 2019. Pre-stack and Post-stack inversion using a Physics-Guided
Convolutional Neural Network. Interpretation 7, 1-76.
Chen, Y., Ma, G., Jin, Y., Wang, H., Wang, Y., 2019. Productivity evaluation of unconventional reservoir
development with three-dimensional fracture networks. Fuel.
Cheng, M.L., Leal, M.A., McNaughton, D., 1999. Productivity Prediction From Well Logs In Variable Grain Size
Reservoirs Cretaceous Qishn Formation, Republic Of Yemen. The Log Analyst 40, 9.
Deng, L., Yu, D., 2014. Deep learning: methods and applications. Now Publishers.
Di, H., Li, Z., Maniar, H., Abubakar, A., 2019. Seismic stratigraphy interpretation via deep convolutional neural
networks, SEG Technical Program Expanded Abstracts 2019. Society of Exploration Geophysicists, pp. 2358-
2362.
Dong, X.T., Li, Y., Yang, B.J., 2019. Desert low-frequency noise suppression by using adaptive DnCNNs based
on the determination of high-order statistic. Geophysical Journal International 219, 1281-1299.
Duan, X., Zhang, J., Liu, Z., Liu, S., Chen, Z., Li, W., 2018. Integrating seismic first-break picking methods with
a machine learning approach, SEG Technical Program Expanded Abstracts 2018, pp. 2186-2190.
Duan, Y., Zheng, X., Hu, L., Sun, L., 2019. Seismic facies analysis based on deep convolutional embedding
clustering. Geophysics 84, 1-45.
Dutta, P., Power, B., Halpert, A., Ezequiel, C., Subramanian, A., Chatterjee, C., Hari, S., Prindle, K., Vaddina, V.,
Leach, A., 2019. 3D Conditional Generative Adversarial Networks to enable large-scale seismic image
enhancement. arXiv preprint arXiv:1911.06932.
Halpert, A.D., 2018. Deep learning-enabled seismic image enhancement, SEG Technical Program Expanded
Abstracts 2018. Society of Exploration Geophysicists, pp. 2081-2085.
Herrmann, F.J., Siahkoohi, A., Rizzuti, G., 2019. Learned imaging with constraints and uncertainty quantification.
arXiv preprint arXiv:1909.06473.
Hogg, A.J.C., Mitchell, A.W., Young, S., 1996. Predicting well productivity from grain size analysis and logging
while drilling. Petroleum Geoscience 2, 1-15.
Hu, G., Zhao, Y., Wang, L., Li, T., Tang, Z., Guo, D., 2018. Application of BP neural network model in productivity
prediction and evaluation of CBM wells fracturing. IOP Conference Series: Materials Science and Engineering
397, 012070.
Hu, L., Zheng, X., Duan, Y., Yan, X., Hu, Y., Zhang, X., 2019. First arrival picking with a U-net convolutional
network. Geophysics 84, 1-58.
Huang, L., Dong, X., Clee, T.E., 2017. A scalable deep learning platform for identifying geologic features from
seismic attributes. The Leading Edge 36, 249-256.
Huang, S.J., Zhang, J., Cheng, L.S., 2015. A new formula for calculating the productivity of fracturing directional
wells in low permeability reservoirs. Journal of Xian Shiyou University.
Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal
covariate shift, International conference on machine learning, pp. 448-456.
Jaikla, C., Devarakota, P., Auchter, N., Sidahmed, M., Espejo, I., 2019. FaciesNet: Machine Learning Applications for Facies Classification in Well Logs, NeurIPS 2019.
Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Koltermann, C.E., Gorelick, S.M., 1996. Heterogeneity in Sedimentary Deposits: A Review of Structure-Imitating,
Process-Imitating, and Descriptive Approaches. Water Resources Research 32, 2617-2658.
Krasnov, F., Butorin, A., Sitnikov, A., 2018. Automatic Detection of Channels in Seismic Images via Deep
Convolutional Neural Networks Learning. International Journal of Open Information Technologies 6, 20-26.
Li, Y., Zhang, T., Liu, Z., Hu, H., 2017. A concatenating framework of shortcut convolutional neural networks.
arXiv preprint arXiv:1710.00974.
Li, Z., Zhang, J., Liu, Z., Liu, S., Chen, Z., Li, W., 2018. Characterizing the near-surface velocity structures by
applying machine learning, SEG Technical Program Expanded Abstracts 2018, pp. 2712-2716.
Liu, C.B., Schwab, K., Lin, X.J., Chun, K.L., 2000. Layer Productivity Prediction Based on Wireline Logs and
Formation Tester Data, International Oil and Gas Conference and Exhibition in China. Society of Petroleum
Engineers, Beijing, China, p. 8.
Lu, P., Xiao, Y., Zhang, Y., Mitsakos, N., 2019a. Deep learning for 3D seismic compressive-sensing technique: A
novel approach. The Leading Edge 38, 698-705.
Lu, P., Zhang, Y., Yu, H., Morris, S., 2019b. Reservoir Characterizations by Deep-Learning Model: Detection of
True Sand Thickness. arXiv preprint arXiv:1909.06005.
Mukhopadhyay, P., Mallick, S., 2019. Bayesian deep learning for seismic facies classification and its uncertainty
estimation, SEG Technical Program Expanded Abstracts 2019. Society of Exploration Geophysicists, pp. 2488-
2492.
Pan, B., Shi, Y., Jiang, B., Liu, D., Zhang, H., Guo, Y., Yang, X., 2015. Research on Gas Yield and Level Prediction
for Post-Frac Tight Sandstone Reservoirs. Journal of Jilin University: Earth Science Edition 45, 649-654.
Park, M.J., Sacchi, M.D., 2019. Automatic velocity analysis using Convolutional Neural Network and Transfer
learning. Geophysics 85, 1-45.
Pham, N., Fomel, S., Dunlap, D., 2019. Automatic channel detection using deep learning. Interpretation 7, 1-41.
She, B., Wang, Y., Liu, Z., Cai, H., Liu, W., Hu, G., 2019. Seismic impedance inversion using dictionary learning-
based sparse representation and nonlocal similarity. Interpretation 7, SE51-SE67.
Siahkoohi, A., Verschuur, D.J., Herrmann, F.J., 2019. Surface-related multiple elimination with deep learning, SEG
Technical Program Expanded Abstracts 2019. Society of Exploration Geophysicists, pp. 4629-4634.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R., 2014. Dropout: A simple way to
prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1929-1958.
Sun, H., Demanet, L., 2018. Low frequency extrapolation with deep learning, SEG Technical Program Expanded
Abstracts 2018. Society of Exploration Geophysicists, pp. 2011-2015.
Sun, J., Slang, S., Elboth, T., Larsen Greiner, T., McDonald, S., Gelius, L.-J., 2019. A convolutional neural network
approach to deblending seismic data. Geophysics 85, 1-57.
Titos, M., Bueno, A., García, L., Benítez, C., Segura, J.C., 2019. Classification of Isolated Volcano-Seismic Events
Based on Inductive Transfer Learning. IEEE Geoscience and Remote Sensing Letters, 1-5.
Waldeland, A.U., Jensen, A.C., Gelius, L.-J., Solberg, A.H.S., 2018. Convolutional neural networks for automated
seismic interpretation. The Leading Edge 37, 529-537.
Wang, W., Ma, J., 2019. Velocity model building in a cross-well acquisition geometry with image-trained artificial neural networks. Geophysics 85, 1-42.
Wang, Y., Ge, Q., Lu, W., Yan, X., 2019a. Seismic impedance inversion based on cycle-consistent generative
adversarial network, SEG Technical Program Expanded Abstracts 2019. Society of Exploration Geophysicists, pp.
2498-2502.
Wang, Y., Wang, B., Tu, N., Geng, J., 2019b. Seismic Trace Interpolation for Irregularly Spatial Sampled Data
Using Convolutional Auto-Encoder. Geophysics 85, 1-84.
Wu, Q., Wang, F., 2019. Concatenate Convolutional Neural Networks for Non-Intrusive Load Monitoring across
Complex Background. Energies 12, 1572.
Wu, X., Geng, Z., Shi, Y., Pham, N., Fomel, S., Caumon, G., 2019a. Building realistic structure models to train
convolutional neural networks for seismic structural interpretation. Geophysics 85, 1-48.
Wu, X., Liang, L., Shi, Y., Fomel, S., 2019b. FaultSeg3D: using synthetic datasets to train an end-to-end
convolutional neural network for 3D seismic fault segmentation. Geophysics 84, 1-36.
Wu, X., Shi, Y., Fomel, S., Liang, L., Zhang, Q., Yusifov, A.Z., 2019c. FaultNet3D: Predicting Fault Probabilities,
Strikes, and Dips With a Single Convolutional Neural Network. IEEE Transactions on Geoscience and Remote
Sensing.
Wu, Y., Lin, Y., 2019. InversionNet: An Efficient and Accurate Data-driven Full Waveform Inversion. IEEE
Transactions on Computational Imaging, 1-1.
Xiong, W., Ji, X., Ma, Y., Wang, Y., BenHassan, N.M., Ali, M.N., Luo, Y., 2018. Seismic Fault Detection With
Convolutional Neural Network. Geophysics 0, 1-28.
Xu, C., Misra, S., Srinivasan, P., Ma, S., 2019. When Petrophysics Meets Big Data: What can Machine Do?, SPE
Middle East Oil and Gas Show and Conference. Society of Petroleum Engineers.
Yang, L., Sun, S.Z., 2019. Seismic horizon tracking using a deep convolutional neural network. Journal of
Petroleum Science and Engineering, 106709.
Yuan, S., Liu, J., Wang, S., Wang, T., Shi, P., 2018a. Seismic waveform classification and first-break picking using
convolution neural networks. IEEE Geoscience and Remote Sensing Letters 15, 272-276.
Yuan, Z., Huang, H., Jiang, Y., Tang, J., 2018b. Multiattribute reservoir parameter estimation based on a machine
learning technique, SEG Technical Program Expanded Abstracts 2018, pp. 2266-2270.
Yuan, Z., Huang, H., Jiang, Y., Tang, J., Li, J., 2019. An Enhanced Fault Detection Method based on Adaptive
Spectral Decomposition and Super-Resolution Deep Learning. Interpretation 7, T713–T725.
Yuan, Z., Jiang, Y., Huang, H., Li, J., 2020. Reservoir productivity prediction based on a hybrid deep neural
network, 82nd EAGE Conference and Exhibition 2020.
Zhang, G., Wang, Z., Chen, Y., 2018. Deep learning for seismic lithology prediction. Geophysical Journal
International 215, 1368-1387.
Zhang, M., Liu, Y., Bai, M., Chen, Y., 2019a. Seismic Noise Attenuation Using Unsupervised Sparse Feature
Learning. IEEE Transactions on Geoscience and Remote Sensing, 1-15.
Zhang, Y., Lu, P., Yu, H., Morris, S., 2019b. Enhancement of seismic imaging: An innovative deep learning
approach. arXiv preprint arXiv:1909.06016.
Zhao, T., 2018. Seismic facies classification using different deep convolutional neural networks, SEG Technical
Program Expanded Abstracts 2018, pp. 2046-2050.
Zhao, T., Zhang, J., Li, F., Marfurt, K.J., 2016. Characterizing a turbidite system in Canterbury Basin, New Zealand, using seismic attributes and distance-preserving self-organizing maps. Interpretation 4, SB79-SB89.
Zhong, Z., Carr, T.R., Wu, X., Wang, G., 2019. Application of a Convolutional Neural Network (CNN) in
Permeability Prediction: A Case Study in the Jacksonburg-Stringtown Oil Field, West Virginia, USA. Geophysics
84, 1-46.
