Article

Weak Signal Enhancement Based on Neural-Network-Assisted Empirical Mode Decomposition

1 National Demonstration Center for Experimental Electrical & Electronic Education, Yangtze University, Jingzhou 434023, China
2 School of Computer Science, Yangtze University, Jingzhou 434023, China
3 School of Electronic and Information, Yangtze University, Jingzhou 434023, China
4 Key Laboratory of Exploration Technologies for Oil and Gas Resources (Yangtze University), Ministry of Education, Wuhan 430100, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2020, 20(12), 3373; https://doi.org/10.3390/s20123373
Submission received: 9 April 2020 / Revised: 12 June 2020 / Accepted: 13 June 2020 / Published: 15 June 2020
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

Abstract: To enhance weak signals against a strong noise background, we propose a weak-signal enhancement method based on EMDNN (neural-network-assisted empirical mode decomposition). The method combines CEEMD (complementary ensemble empirical mode decomposition), GANs (generative adversarial networks), and LSTM (long short-term memory): it improves the efficiency of selecting effective intrinsic mode components in empirical mode decomposition, thereby raising the SNR (signal-to-noise ratio), and it can also reconstruct and enhance weak signals. Experimental results show that the method improves the SNR from 4.1 to 6.2 and clearly recovers the weak signal.

1. Introduction

Traditional signal analysis and processing starts from Fourier analysis. However, Fourier analysis applies only to the global transformation of stationary, linear signals and therefore cannot describe the local time-frequency characteristics of a signal. In particular, under strong noise, the traditional Fourier transform struggles to extract a weak signal while maintaining high fidelity. More recently, some scholars have proposed the non-linear wavelet threshold method [1,2], but the wavelet basis function is fixed and thus cannot match every real signal. In other words, wavelet analysis is not self-adaptive: once a wavelet basis is selected, it is used to analyze all the data, and an inappropriate choice of wavelet decomposition limits the denoising effect. It is therefore very difficult to enhance weak signals in a low-SNR environment [3,4,5]. In 1998, Huang proposed the Hilbert–Huang transform (HHT) [6], a signal processing method built on the Hilbert transform in two stages: empirical mode decomposition followed by Hilbert spectrum analysis. HHT improved the processing of non-linear and non-stationary signals [7,8]. Its key component is EMD (empirical mode decomposition), which decomposes a signal into a series of sub-signals called IMFs (intrinsic mode functions). EMD can separate intrinsic mode components from pseudo-components and concentrated noise, but it cannot select IMFs or enhance weak signals adaptively, so modal anomalies arise easily. Moreover, the Hilbert–Huang transform is an empirical approach with many deficiencies [9,10], such as mode mixing, boundary effects, and instantaneous frequency errors.
To address these defects, this paper proposes a fast weak-signal enhancement method based on an optimized EMD and neural network (EMDNN) model. EMDNN can adaptively select effective intrinsic mode components [11] and adaptively enhance the instantaneous amplitude to reconstruct weak signals. As exploration of the subsurface deepens, 3D seismic exploration technology is becoming more widespread; but if the SNR of a 3D seismic signal is low, subsequent processing and interpretation suffer, possibly leading to large miscalculations. Studying efficient 3D seismic signal denoising algorithms is therefore of practical significance [12].

2. Weak Signal Reconstruction Method under EMDNN Model

This paper presents the EMDNN model, which mainly detects and reconstructs weak signals in a strong noise background and mitigates mode mixing [13,14] (when the signal is sifted, some IMF components containing multiple time scales appear; this is called mode mixing). The method decomposes the signal into mode functions distributed from high frequency to low frequency [15], reducing the loss of effective information. Applied to weak-signal processing and reconstruction, it not only removes the constraints of linearity and stationarity on the weak signal but also achieves good accuracy in both time and frequency [16]. It is also fully self-adaptive: while suppressing strong and weak noise of all kinds, it highlights the effective signal and improves the accuracy of the reconstructed signal. In addition, LSTM (long short-term memory) [17] and GANs (generative adversarial networks) [18,19] assist the method, further improving the recovery and enhancement of weak signals.
The volume of weak-signal data and the processing flow pose great challenges for the weak-signal enhancement method proposed in this paper. In this section, the first step is to pre-process the original weak signal. The second step uses complementary ensemble empirical mode decomposition to decompose the pre-processed signal into intrinsic mode components. In the third step, an LSTM model trained on correlation computations selects the effective intrinsic mode components adaptively, and the Hilbert transform yields the instantaneous amplitude and phase. In the fourth step, the instantaneous phase and amplitude are used to recover the weak signal without enhancement. Finally, a GAN model is trained to generate an enhanced, high-definition weak signal. Figure 1 shows the flow chart of our method. We used 3D seismic data [20] as the input signal; a 3D seismic signal is spatial data obtained from the propagation paths and travel times of artificially excited seismic waves in underground strata [21,22].

2.1. Original Signal Pre-Processing

We first pre-process the input signal. To prevent common denoising methods from destroying the smoothness and continuity of the horizontal slices of the seismic signal, we use the non-linear wavelet transform threshold method to denoise the seismic signal and eliminate the regular noise in the original signal [23].
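The core of the non-linear wavelet thresholding step can be illustrated with the standard soft-threshold rule applied to detail coefficients. This is only the thresholding kernel; the forward and inverse wavelet transforms that would surround it (e.g., via a library such as PyWavelets) are omitted, and the function name is ours:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-thresholding rule: shrink each coefficient toward zero by t,
    zeroing any coefficient whose magnitude is below t."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Applied to the detail coefficients of a wavelet decomposition, this suppresses small (noise-dominated) coefficients while preserving large (signal-dominated) ones.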

2.2. Empirical Mode Decomposition

After obtaining the pre-processed one-dimensional signal, we begin empirical mode decomposition. Gaussian white noise λ0w(i) is added to the original signal x (λ0 is the amplitude; w(i) is white noise with zero mean and unit variance): x(i) = x + λ0w(i), i = 1, 2, 3, …, n. EMD is applied to each noisy copy, and the first intrinsic mode function (IMF) components d1(i) are averaged to obtain the first ensemble mode component:
\tilde{s}_1 = \frac{1}{n} \sum_{i=1}^{n} d_1(i) = \bar{s}_1
Let E_j(·) be the operator that produces the j-th mode of an EMD decomposition. The first residual component r1 follows from the first step:
r_1 = x - \tilde{s}_1
EMD is then applied to r1 + λ1E1[w(i)], i = 1, 2, 3, …, n, and the resulting first-mode components are averaged to obtain the second ensemble mode function:
\tilde{s}_2 = \frac{1}{n} \sum_{i=1}^{n} E_1\{r_1 + \lambda_1 E_1[w(i)]\}
Iterating, the k-th residual component rk is obtained:
r_k = r_{k-1} - \tilde{s}_k
and the (k + 1)-th ensemble mode component is obtained by applying EMD to the noisy residual:
\tilde{s}_{k+1} = \frac{1}{n} \sum_{i=1}^{n} E_1\{r_k + \lambda_k E_k[w(i)]\}
The iteration stops when the residual can no longer be decomposed by EMD, i.e., when the residual satisfies the IMF stopping condition. After the condition is met at the m-th iteration, subtracting the sum of the decomposed intrinsic mode functions from the original signal gives the final residual component rm:
r_m = x - \sum_{j=1}^{m} \tilde{s}_j
Thus, the original signal can be reconstructed:
x = \sum_{j=1}^{m} \tilde{s}_j + r_m
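The noise-assisted averaging loop of Equations (1)–(7) can be sketched as follows. This is a minimal illustration only: the sifting operator E_1(·) is replaced by a crude moving-average detrend (a stand-in, not a real EMD implementation), and the noise term uses that same stand-in rather than the true k-th EMD mode of the noise. Function names are ours:

```python
import numpy as np

def first_imf(sig):
    # Stand-in for E_1(.): a real implementation would run one EMD
    # sifting pass; here a moving-average detrend keeps the sketch runnable.
    kernel = np.ones(5) / 5
    trend = np.convolve(sig, kernel, mode="same")
    return sig - trend

def ceemd_sketch(x, n_real=8, lam=0.1, n_modes=3, seed=0):
    """Noise-assisted EMD averaging in the spirit of Eqs. (1)-(7)."""
    rng = np.random.default_rng(seed)
    noises = [rng.standard_normal(len(x)) for _ in range(n_real)]
    modes = []
    residual = np.asarray(x, dtype=float).copy()
    for _ in range(n_modes):
        # Average the first IMF over all noise realisations (Eqs. 1, 3, 5).
        imfs = [first_imf(residual + lam * first_imf(w)) for w in noises]
        s_k = np.mean(imfs, axis=0)
        modes.append(s_k)
        residual = residual - s_k  # residual update, Eqs. (2) and (4)
    return modes, residual         # x == sum(modes) + residual, Eq. (7)
```

By construction the decomposition is exactly invertible: summing the extracted modes and the final residual reproduces the input, mirroring Equation (7).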

2.3. Adaptive Selection of Effective Inherent Modal Components by Using LSTM Model

Through a large number of experiments and years of tests, Huang [24] found that pseudo-components and concentrated noise are separated out alongside the eigenmode components during empirical mode decomposition. How to select the effective components from the full set is therefore an important issue in signal recovery [25].
Formula (8) shows that the a intrinsic mode components obtained by empirical mode decomposition can, owing to decomposition error, split into b effective intrinsic mode components and c pseudo-components or concentrated noise components:
\sum_{i=1}^{a} c_i = \sum_{j=1}^{b} c_j + \sum_{l=1}^{c} c_l
The correlation between the intrinsic mode components and the pre-decomposed signals obtained by empirical mode decomposition is as follows:
R_{h_4(t), c_i}(\tau) = E[h_4(t)\, c_i(t+\tau)] = E\Big[\sum_{j=1}^{a-1} c_j(t)\, c_i(t+\tau)\Big] = E(c_1(t)\, c_i(t+\tau)) + \cdots + E(c_{a-1}(t)\, c_i(t+\tau)) = R_{c_i, c_i}(\tau) + \sum_{j=1, j \neq i}^{a-1} R_{c_j, c_i}(\tau) \approx R_{c_i, c_i}(\tau)
Since the empirical mode decomposition process is a local orthogonal decomposition, it can be concluded from the premise:
\sum_{j=1, j \neq i}^{a-1} R_{c_j, c_i}(\tau) \approx 0
From the two formulas above, the correlation between a spurious component and the pre-decomposition signal can be derived. They show that the correlation between a decomposed intrinsic mode component and the original signal depends on that component's autocorrelation, while the correlation between a pseudo-component and the original signal is approximately zero. By the non-directionality of random noise, the correlation between noise components and the original signal is likewise low and approaches zero. Based on this inference, we adopt a rule: if the correlation satisfies R ≥ 0.02, the component is recorded as an effective intrinsic mode component, and adaptive selection proceeds by judging this correlation. In practice, however, a fixed, manually chosen correlation threshold leads to incomplete classification and cannot screen out all the effective intrinsic mode components. To capture as many effective components as possible, we train an LSTM model to continuously optimize the correlation threshold and improve the classification [26].
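The fixed-threshold rule before the LSTM refinement can be sketched directly; this is a minimal illustration (function name is ours) using the normalized correlation coefficient, keeping only IMFs with |R| ≥ 0.02. The LSTM-optimized threshold described above is omitted:

```python
import numpy as np

def select_imfs(signal, imfs, threshold=0.02):
    """Keep IMFs whose normalized correlation with the original
    signal meets the threshold (the paper's R >= 0.02 rule)."""
    selected = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]  # Pearson correlation
        if abs(r) >= threshold:
            selected.append(imf)
    return selected
```

A component proportional to the signal is kept (|R| = 1), while a component nearly orthogonal to it, like a noise-dominated IMF, falls below the threshold and is discarded.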
As Figure 1 shows, the LSTM model is needed to filter the effective intrinsic mode components more effectively. Figure 2 shows the flow chart of the RNN (recurrent neural network) model [27,28]. Compared with other neural network modules, RNN models have interconnected nodes between layers, which makes them perform well on sequential problems.
However, as the sequence length in an RNN grows, the error accumulated during backpropagation also grows, which can cause vanishing gradients. In response, researchers proposed a more complex structure, the LSTM, as the hidden unit of the RNN; using the LSTM model also improves accuracy to a certain extent. The memory unit of the LSTM is shown in Figure 3, and the flow chart of LSTM training in Figure 4.
When the serial index is t, x(t) denotes the input intrinsic mode component; likewise, x(t − 1) and x(t + 1) are the inputs at serial indices t − 1 and t + 1. In this model, the W matrices represent the linear relationship parameters and apply throughout the model. In the hidden unit of the module, x(t) and h(t − 1) determine the state of the forget gate f(t), which decides what information to discard (t indicates the serial index):
f(t) = \mathrm{sigm}\{W_f[h(t-1), g(x_t)] + n_f\}
where sigm(·) is the activation function shown in Formula (12), n_f is a linear bias, and g(x) is a function computing the correlation of the intrinsic mode components [29]:
\mathrm{sigm}(x) = \frac{1}{1 + e^{-x}}
The input gate computes the candidate cell state \tilde{C}_t and the vector i_t from x(t) and h(t − 1), as in Formulas (13) and (14) (t indicates the serial index):
\tilde{C}_t = \tanh\{W_c[h(t-1), g(x_t)] + n_c\}
i_t = \mathrm{sigm}\{W_i[h(t-1), g(x_t)] + n_i\}
where tanh(·) is the activation function shown in Formula (15):
\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}
The old cell state C_{t−1} is then updated using the outputs of the forget gate and the input gate, as in Formula (16):
C_t = C_{t-1} \odot f_t + \tilde{C}_t \odot i_t
The output gate computes its state, o(t), and the cell state C_t determines what information from o(t) is passed to the output unit:
o(t) = \mathrm{sigm}\{W_o[h(t-1), g(x_t)] + n_o\}
h(t) = o_t \odot \tanh(C_t)
At serial index t, the predicted output is:
\hat{y}(t) = \mathrm{sigm}[W_y h(t) + n_y]
Finally, we use the log-likelihood loss L(t) to quantify the model's loss at the current position by comparing \hat{y}(t) with y(t) [30].
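The gate equations (11), (13)–(18) can be collected into a single forward step. This is an illustrative numpy sketch, not the paper's trained model: g(x_t) is taken as the identity for simplicity, and the weight/bias containers W and n are hypothetical names holding one matrix and one bias per gate:

```python
import numpy as np

def sigm(x):
    # Logistic activation, Eq. (12)
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, n):
    """One LSTM step following Eqs. (11), (13)-(18).

    W and n map each gate name ('f', 'i', 'c', 'o') to its weight
    matrix and bias; g(x_t) is the identity in this sketch.
    """
    z = np.concatenate([h_prev, x_t])        # [h(t-1), g(x_t)]
    f_t = sigm(W["f"] @ z + n["f"])          # forget gate, Eq. (11)
    c_tilde = np.tanh(W["c"] @ z + n["c"])   # candidate state, Eq. (13)
    i_t = sigm(W["i"] @ z + n["i"])          # input gate, Eq. (14)
    c_t = c_prev * f_t + c_tilde * i_t       # cell update, Eq. (16)
    o_t = sigm(W["o"] @ z + n["o"])          # output gate, Eq. (17)
    h_t = o_t * np.tanh(c_t)                 # hidden state, Eq. (18)
    return h_t, c_t
```

Because o_t lies in (0, 1) and tanh in (−1, 1), every component of h(t) is bounded in magnitude below 1, which helps stabilize long sequences.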

2.4. Reconstructing and Enhancing Weak Signals

An intrinsic mode component is a stationary signal or a simple non-linear signal and is therefore narrowband. Any narrowband signal X(t) has a Hilbert transform Y(t), given by:
Y(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{X(\tau)}{t - \tau}\, d\tau
For the selected effective modal components, the Hilbert transform is used to construct the analytical signal:
Z_k(t) = X_k(t) + i\, Y_k(t) = a_k(t)\, e^{i \theta_k(t)}
a_k(t) = \sqrt{X_k^2(t) + Y_k^2(t)}
\theta_k(t) = \arctan \frac{Y_k(t)}{X_k(t)}
Instantaneous phase and amplitude can be used to recover weak signals:
\tilde{x}(t) = \mathrm{Re} \sum_{k=1}^{n} \tilde{a}_k(t)\, e^{i \theta_k(t)}
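The reconstruction in Equations (20)–(24) maps directly onto `scipy.signal.hilbert`, which returns the analytic signal Z_k(t) = X_k + iY_k; its modulus and angle are then a_k(t) and θ_k(t). This sketch (function name is ours) performs the unenhanced recovery; the amplitude enhancement that the GAN later applies to ã_k(t) is omitted:

```python
import numpy as np
from scipy.signal import hilbert

def reconstruct(imfs):
    """Recover the signal from selected IMFs via Eqs. (20)-(24)."""
    x_rec = np.zeros(len(imfs[0]))
    for imf in imfs:
        z = hilbert(imf)              # analytic signal Z_k(t), Eq. (21)
        a = np.abs(z)                 # instantaneous amplitude a_k(t), Eq. (22)
        theta = np.angle(z)           # instantaneous phase theta_k(t), Eq. (23)
        x_rec += a * np.cos(theta)    # Re[a_k e^{i theta_k}], Eq. (24)
    return x_rec
```

Since a_k cos(θ_k) equals the real part of the analytic signal, reconstructing from the unmodified amplitudes reproduces the sum of the selected IMFs exactly; the method's gain comes from enhancing ã_k(t) before this step.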
Traditional signal enhancement methods are very sensitive to noise, which can cause significant overshoot and blur the signal. Here we train a GAN (generative adversarial network) to generate the high-resolution signal.
In order to improve the stability and efficiency of training, we adopt an incremental enlargement method to train GAN, as shown in Figure 5:
When real seismic data are fed to the discriminator, the objective function should be maximized so that the discriminator classifies them as real. The second half of the formula shows that, when generated data are input, the generator tries to make the corresponding term for G(z) as small as possible. In this process, the generator deceives the discriminator into mistaking its output for real data [31], while the discriminator tries to identify it as fake. Trained against each other, the two networks reach a Nash equilibrium.
We start training with a generator (G) and discriminator (D) at low spatial resolution (4 × 4 at the beginning), and add a convolution layer to G and D after each training stage, so as to gradually raise the spatial resolution of the generated signal [32,33,34,35]. All convolution layers involved remain trainable throughout. "N × N" in the figure denotes a convolution layer operating at N × N spatial resolution.
We denote the distribution of the reconstructed signal data by P_data(x) and the model distribution by P_G(x; θ), which is controlled by θ. The derivation can be found in Appendix A.
We complete GAN training based on the KL divergence [36] and obtain the final signal enhancement model; by adding layers from low to high resolution, the reconstructed signal is steadily enhanced.
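The fixed points derived in Appendix A are easy to verify numerically for discrete distributions. This illustrative sketch (function names are ours, not part of the paper's code) evaluates the value function V(G, D) and the optimal discriminator D*(x):

```python
import numpy as np

def value_fn(p_data, p_g, d):
    """Discrete form of V(G, D) = E_data[log D(x)] + E_G[log(1 - D(x))]."""
    return np.sum(p_data * np.log(d) + p_g * np.log(1.0 - d))

def optimal_discriminator(p_data, p_g):
    """D*(x) = P_data(x) / (P_data(x) + P_G(x)), as derived in Appendix A."""
    return p_data / (p_data + p_g)
```

When P_G matches P_data, D* is 1/2 everywhere and V(G, D*) = −2 log 2, the global minimum of the minimax game; for any other P_G, V(G, D*) = −2 log 2 + 2 JSD(P_data‖P_G) is strictly larger, which is what drives the generator toward the data distribution.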

3. Experimental Results and Discussion

This section will be divided into two parts. First, we will introduce the model training and parameter setting in the experimental process, and then we will compare and analyze different signal processing methods. The framework of this section is shown in Figure 6.

3.1. Training Detail

The LSTM model used in this paper contains two LSTM layers, a dropout layer with p = 0.2, a fully connected layer, and a sigmoid layer. The LSTM has 32 hidden units, and the loss function is the log-likelihood function. We used the Adam optimizer to train the network. The learning rate was 0.01, decayed exponentially; the batch size was 20; and the number of epochs was 500. The change in the model's loss value is shown in Figure 7. After about 400 training iterations the model essentially converged, with a final loss value around 0.015.
When training the GAN, the reconstructed weak signal was used as input. In the generator, we used ReLU as the activation function, with tanh in the last layer. In the discriminator, we used LeakyReLU as the activation function. Within one epoch, we trained the discriminator first; after the epoch, we froze the discriminator's weights and trained the generator. At the end of each epoch, we added a 2 × 2 convolution layer to the generator and discriminator. During model training, we set a batch size of 16 and iterated for 300 epochs. We used an Adam optimizer with an initial learning rate of 2 × 10−4, reduced to 2 × 10−5 after 40 rounds of training.

3.2. Contrast and Verification

Weak signals are prevalent in biomedicine, mechanical fault diagnosis, and geophysics [37]. To verify the validity of the method, we chose weak 3D seismic data from geophysics as the test target for our experiments.
In seismic exploration, crustal vibration is excited artificially (e.g., by explosive charges or vibroseis trucks); precision instruments then record the vibration at each receiving point on the surface after the shot, and the underground geological structure is inferred from the processed records. As seismic waves travel underground, they encounter interfaces between rock strata with different media, producing reflections and refractions that geophones on the surface can receive. The received seismic signals depend on the characteristics of the source, the location of the detection point, and the nature and structure of the strata the waves pass through. By processing and interpreting the seismic wave records, the nature and shape of the underground rock strata can be inferred. The acquisition of the seismic signal is shown in Figure 8.
When effective waves are generated, various interference waves are also generated. By its generating law, interference can be divided into regular noise and random noise. Regular noise has a dominant frequency and apparent velocity; its apparent velocity, apparent frequency, and waveform all have their own propagation characteristics and rules, so it can be suppressed by exploiting differences in spectrum or propagation direction between it and the effective wave. Random noise is interference without a definite frequency or apparent velocity, produced mainly by natural and human factors. The method proposed in this paper mainly suppresses this kind of random noise, highlights the effective signal, and achieves signal enhancement.

3.2.1. Experimental Results and Analysis of Synthetic Weak Signal Data

To test the efficiency of our approach, we evaluated it on simulated 3D seismic data. In the horizontal direction, each dataset has 50 seismic traces, and each trace has 500 sampling points; the horizontal plane is shown in Figure 9. The x-axis represents the trace number of the simulated 3D seismic data, and the y-axis the vertical sampling points. Figure 9a shows the profile of the first line, which consists of a horizontal and a slanted reflection axis. Figure 9b is the profile of the first line after convolving the reflection axes with a Ricker wavelet across the whole data volume. Figure 9c is the profile of the first line after adding Gaussian white noise to the whole synthetic volume. Figure 9d is the profile of the first line of the synthetic 3D data processed by our method.
To test the feasibility of our method, we also ran contrast experiments with different methods under strong noise. Figure 10a is a section with Gaussian white noise at a signal-to-noise ratio of 0.5. Figure 10b shows the section enhanced by the wavelet transform, Figure 10c the section enhanced by the curvelet transform, and Figure 10d the section enhanced by our method.
Figure 11 shows that the profile of the line initially consisted of two reflection axes in the same direction. After convolution with the wavelet, parts of the two axes became indistinguishable; after processing by our method, the two in-phase axes can be distinguished clearly. To compare the results of the different methods more intuitively, we calculated the SNR of 10 lines of the synthetic data before and after processing with the following formula, where g(x, y) is the original 3D seismic data, \hat{g}(x, y) the denoised 3D data, A the number of seismic traces, and B the number of vertical sampling points:
\mathrm{SNR} = 20 \log_{10} \frac{\sum_{x=0}^{A-1} \sum_{y=0}^{B-1} [\hat{g}(x,y)]^2}{\sum_{x=0}^{A-1} \sum_{y=0}^{B-1} [g(x,y) - \hat{g}(x,y)]^2}
The comparison results are shown in Figure 11, where the y-axis is the SNR and the x-axis indexes the 10 processed lines of synthetic data. The results show that, under strong noise, the SNR after EMDNN processing is higher than that of the other methods. The simulation also shows that the 3D data processed by our method clearly identify the wedge-shaped geology, indicating that the EMDNN method not only improves the SNR but also increases the resolution.
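The SNR formula above translates directly into a few lines of numpy; this is a minimal sketch, with the function name ours:

```python
import numpy as np

def snr_db(g, g_hat):
    """SNR (dB) of denoised data g_hat against original data g, following
    the formula above: 20*log10 of the ratio of the denoised-data energy
    to the residual energy, summed over all traces and samples."""
    g = np.asarray(g, dtype=float)
    g_hat = np.asarray(g_hat, dtype=float)
    num = np.sum(g_hat ** 2)
    den = np.sum((g - g_hat) ** 2)
    return 20.0 * np.log10(num / den)
```

For example, if the residual energy is one-hundredth of the denoised-data energy, the formula yields 20·log10(100) = 40 dB.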

3.2.2. Experiment and Analysis of Actual Weak Signal Data

Next, we conducted experiments on actual 3D data with weak reflections. The actual 3D seismic data we used contained weak seismic signals hidden in a strong noise background: 1160 lines, 1040 traces per line, 6000 ms per seismic signal, and a 1 ms sampling interval. We processed the whole 3D volume, then examined the profile of one survey line, taking a section with weak reflected seismic signals containing 100 traces of 600 sampling points. The comparison before and after processing is shown in Figure 12 (the red rectangle marks the area of clearest difference). Figure 12a is the unprocessed profile of the line; Figure 12b shows recovery by the wavelet transform; Figure 12c shows recovery by the curvelet transform; and Figure 12d shows recovery by our method. Figure 12 shows that the unprocessed signal cannot be recognized at all. After wavelet and curvelet processing, some weak signals were recovered clearly compared with the original, but some in-phase axes remained hard to distinguish. The data processed by our method were markedly improved, and the weak signals were clearly revealed. Figure 13a,c are unprocessed sections of different lines, and Figure 13b,d are the corresponding sections after processing by our method; the geological horizons processed by our method are clearer.

3.2.3. Parallel Processing Experiments and Analysis

To verify the speed advantage of this method, we selected five sets of actual weak seismic signal data of different sizes and processed them with both a non-parallelized algorithm (CPU) and a parallelized algorithm (GPU) [38]. We recorded and tabulated the processing times of the two methods; the results are shown in Table 1. They show that the GPU-parallelized algorithm required only about one-fifth of the conventional CPU processing time, improving processing efficiency.
The comparison shows that the GPU's powerful parallel processing capability effectively raises the program's running speed and shortens the processing and reconstruction time of weak signals. When the data volume was small, the speedup was only 2–4× because the GPU's thread resources could not be fully utilized. As the data volume increased, all thread resources were mobilized and the speedup grew; when the processed data exceeded 10 GB, the speed increased by nearly 8×. The processing and reconstruction time for massive weak-signal data was greatly reduced, saving operator time and improving working efficiency.

4. Conclusions and Future Work

This paper proposes a weak-signal enhancement method based on the EMDNN model, tailored to the characteristics of weak signals. Theoretical and experimental results on weak signals from a typical field show that the signal-to-noise ratio of the recovered weak signal is significantly improved. The method uses GPU parallel computing to overcome its heavy computational load and slow speed, running 4–5 times faster than on a conventional CPU. By introducing the LSTM and GAN models into traditional weak-signal reconstruction and enhancement methods, we achieved further gains in adaptivity and weak-signal image enhancement, greatly improving the signal-to-noise ratio. In future work, we will study the application of the gated recurrent unit (GRU) model to selecting intrinsic mode components in empirical mode decomposition, and investigate variants of the generative adversarial network to further improve signal enhancement.

Author Contributions

K.C. conceived the algorithms, designed the experiments, and evaluated the experiments; K.X. conceived and initialized the research; C.W. analyzed the data; X.-G.T. reviewed the paper; K.C. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under grants 61701046, 41674107, 41874119, and 41574064, the Graduate Education and Teaching Fund of Yangtze University (YJY2019009), the Fund of Hubei Ministry of Education (B2019039), and the Undergraduate Training Programs for Innovation and Entrepreneurship of Yangtze University under Grant 2019100.

Acknowledgments

Many people have contributed, directly and indirectly, to our research. In particular, we would like to express our gratitude to Lei Yang for his selfless help and advice; he provided many helpful suggestions for the grammar and experimental sections of our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. KL Divergence Formula

\theta^* = \arg\max_\theta \prod_{i=1}^{m} P_G(x_i;\theta) = \arg\max_\theta \log \prod_{i=1}^{m} P_G(x_i;\theta) = \arg\max_\theta \sum_{i=1}^{m} \log P_G(x_i;\theta) \approx \arg\max_\theta E_{x \sim P_{data}}[\log P_G(x;\theta)]

= \arg\max_\theta \left[ \int_x P_{data}(x) \log P_G(x;\theta)\, dx - \int_x P_{data}(x) \log P_{data}(x)\, dx \right]

= \arg\min_\theta \int_x P_{data}(x)\, [\log P_{data}(x) - \log P_G(x;\theta)]\, dx = \arg\min_\theta KL(P_{data} \,\|\, P_G)

G^* = \arg\min_G \max_D V(G, D)

V = E_{x \sim P_{data}}[\log D(x)] + E_{x \sim P_G}[\log(1 - D(x))] = \int_x \left[ P_{data}(x) \log D(x) + P_G(x) \log(1 - D(x)) \right] dx

For a fixed x, writing a = P_{data}(x), b = P_G(x), and f(D) = a \log D + b \log(1 - D):

\frac{df(D)}{dD} = \frac{a}{D} - \frac{b}{1 - D} = 0 \;\Rightarrow\; D^* = \frac{a}{a + b}, \qquad D^*(x) = \frac{P_{data}(x)}{P_{data}(x) + P_G(x)}

\max_D V(G, D) = V(G, D^*) = E_{x \sim P_{data}}\!\left[\log \frac{P_{data}(x)}{P_{data}(x) + P_G(x)}\right] + E_{x \sim P_G}\!\left[\log \frac{P_G(x)}{P_{data}(x) + P_G(x)}\right]

= \int_x P_{data}(x) \log \frac{\tfrac{1}{2} P_{data}(x)}{\tfrac{1}{2}[P_{data}(x) + P_G(x)]}\, dx + \int_x P_G(x) \log \frac{\tfrac{1}{2} P_G(x)}{\tfrac{1}{2}[P_{data}(x) + P_G(x)]}\, dx

= -2 \log 2 + KL\!\left(P_{data} \,\Big\|\, \frac{P_{data} + P_G}{2}\right) + KL\!\left(P_G \,\Big\|\, \frac{P_{data} + P_G}{2}\right) = -2 \log 2 + 2\, JSD(P_{data} \,\|\, P_G)

where JSD(P_1 \| P_2) = \frac{1}{2} KL\!\left(P_1 \Big\| \frac{P_1 + P_2}{2}\right) + \frac{1}{2} KL\!\left(P_2 \Big\| \frac{P_1 + P_2}{2}\right)

Figure 1. Flow chart of the weak signal reconstruction method under the neural network-assisted empirical mode decomposition (EMDNN) model.
Figure 2. The flow chart of the recurrent neural network (RNN) model.
Figure 3. Long short-term memory (LSTM) internal unit structure diagram.
Figure 3. Long short term memory (LSTM) internal unit structure diagram.
Sensors 20 03373 g003
Figure 4. The flow chart of LSTM training.
Figure 5. The flow chart of generative adversarial networks (GAN) enhancement.
Figure 6. The flow chart of the experimental process.
Figure 7. Training process of the LSTM model.
Figure 8. Flow chart of seismic signal acquisition.
Figure 9. Processed synthetic seismic data: (a) the profile of reflection coefficients; (b) the convolved profile; (c) the profile after adding noise; (d) the profile processed by the EMDNN method.
Figure 10. Wedge model, original and processed: (a) the wedge model under a strong-noise background; (b) the wedge model processed with the wavelet transform; (c) the wedge model processed with the curvelet transform; (d) the wedge model processed with the EMDNN method.
Figure 11. SNR comparison between the original and the processed simulated data.
Figure 12. Comparison among different processing methods: (a) original seismic data; (b) seismic data processed by the wavelet transform; (c) seismic data processed by the curvelet transform; (d) seismic data processed by our method.
Figure 13. Actual seismic data processing: (a) actual seismic profile 1; (b) profile 1 processed by this method; (c) actual seismic profile 2; (d) profile 2 processed by this method.
Table 1. The comparison of processing speed between CPU and GPU.
| Test Data | Data Size (MB) | CPU Program Running Time (s) | GPU Program Running Time (s) | Speed-up Ratio |
|-----------|----------------|------------------------------|------------------------------|----------------|
| Data1     | 43.2           | 32.03                        | 12.41                        | 2.58           |
| Data2     | 262.8          | 256.15                       | 68.49                        | 3.74           |
| Data3     | 568.7          | 532.47                       | 122.69                       | 4.34           |
| Data4     | 1020.3         | 2209.41                      | 355.78                       | 6.21           |
| Data5     | 14,328.8       | 40,160.92                    | 5001.25                      | 8.03           |
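The speed-up ratio in Table 1 is simply CPU running time divided by GPU running time, and it grows with data size. A quick sanity check of the tabulated values (the row tuples below just restate the table):

```python
# Verify Table 1: speed-up ratio = CPU time (s) / GPU time (s).
rows = [
    # (name, size_mb, cpu_s, gpu_s, reported_ratio)
    ("Data1", 43.2,     32.03,    12.41,   2.58),
    ("Data2", 262.8,    256.15,   68.49,   3.74),
    ("Data3", 568.7,    532.47,   122.69,  4.34),
    ("Data4", 1020.3,   2209.41,  355.78,  6.21),
    ("Data5", 14328.8,  40160.92, 5001.25, 8.03),
]
for name, size_mb, cpu_s, gpu_s, ratio in rows:
    # Reported ratios are rounded to two decimals.
    assert abs(cpu_s / gpu_s - ratio) < 0.01, name
```

All five rows are internally consistent, confirming that the larger data sets benefit more from GPU acceleration.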

Chen, K.; Xie, K.; Wen, C.; Tang, X.-G. Weak Signal Enhance Based on the Neural Network Assisted Empirical Mode Decomposition. Sensors 2020, 20, 3373. https://doi.org/10.3390/s20123373