Article

The Systematic Bias of Entropy Calculation in the Multi-Scale Entropy Algorithm

Jue Lu 1 and Ze Wang 2,*
1 School of Mathematics, Physics and Information Science, Shaoxing University, Shaoxing 312000, China
2 Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
* Author to whom correspondence should be addressed.
Entropy 2021, 23(6), 659; https://doi.org/10.3390/e23060659
Submission received: 30 April 2021 / Revised: 19 May 2021 / Accepted: 21 May 2021 / Published: 24 May 2021

Abstract

Entropy indicates the irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system through subsampling or coarse graining has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined by the "logarithmic likelihood" that a small section of the data (within a window of length m) that "matches" other sections will still "match" them when the window length increases by one. A "match" is defined by a threshold of r times the standard deviation of the entire time series. A problem of the current MSE algorithm is that the SampEn calculations at all scales use the same matching threshold, defined from the original time series, even though the data standard deviation actually changes with the subsampling scale. Using a fixed threshold therefore automatically introduces a systematic bias into the calculation results. The purpose of this paper is to present this systematic bias mathematically and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculations.

1. Introduction

Complexity is an important property of complex systems such as living organisms, the Internet, and traffic systems. Measuring system complexity has long been of great interest in many research fields. Since complexity remains difficult to define precisely, a few approximate metrics have been used to quantify it. One widely used measure is entropy, which quantifies irregularity or randomness. Complexity and entropy, however, diverge once complexity reaches its peak: before the peak, complexity increases with entropy, but after the peak complexity decreases as entropy continues to increase. To provide an approximate solution to this dilemma, many empirical measures have been proposed. A popular one is the multi-scale entropy (MSE) proposed by Costa et al. [1]. MSE is based on sample entropy (SampEn) [2,3], which is an extension of the well-known approximate entropy (ApEn) [3,4] that removes the bias induced by self-matching. SampEn has gained popularity in many applications such as neurophysiological data analysis [5] and functional MRI data analysis [6,7] because of its relative insensitivity to data length [2,8]. Because a complex signal often presents self-similarity when observed at different time scales, Costa et al. first applied SampEn to the same signal at different time scales obtained by coarse graining. When applied to Gaussian noise and 1/f noise, the SampEn of Gaussian noise was observed to decrease with the subsampling scale, while it stayed at roughly the same level across most scales of the 1/f process. Since a 1/f process is known to have higher complexity (defined by its higher self-similarity) than Gaussian noise, the diverging MSE curves of 1/f noise and Gaussian noise appear to support the idea that MSE may provide an approximate approach to measuring system complexity. Since its introduction, MSE has been widely used in many different applications, as reflected by thousands of citations [1,9]. While MSE and its variants have been shown to be effective for differentiating system states in simulations and real data, the algorithm introduces a bias by using the same matching threshold for identifying repeating patterns at all time scales. Nikulin and Brismar [10] first observed that MSE does not purely measure entropy but rather a mixture of entropy and variation at different scales. We claim here that the changing variation captured by MSE is mainly caused by incomplete scaling during the coarse-graining process, and that the resulting variance-induced entropy change should be considered a systematic bias to be removed.
The rest of this report is organized as follows. Section 2 provides background: to trace the evolution of these entropy measures, we introduce Shannon entropy, ApEn, SampEn, and MSE. Section 3 describes the bias caused by the coarse-graining process combined with the one-threshold-for-all-scales MSE algorithm; both a mathematical solution and a practical solution are provided to correct the bias. Section 4 presents the numerical results, and Section 5 concludes the paper.

2. Entropy and MSE

This section provides a brief history of the evolution of entropy and approximate-entropy measures.
Hartley and Nyquist first used the logarithm to quantify information [11,12]. Shannon then proposed the concept of Shannon entropy as a measure of information through the sum of logarithmically weighted probabilities [13]. Denoting a discrete random variable by X and its probability mass function by p(x), the Shannon entropy of X is formulated as:
H(X) = -\sum_{x \in X} p(x)\log p(x) = E\!\left[\log\frac{1}{p(x)}\right];
In an analogous manner, Shannon defined the entropy of a continuous distribution with probability density function (pdf) p(x) by:
H(X) = -\int_{x \in X} p(x)\log p(x)\,dx = E[-\log p(x)],
where E represents the expectation operator. Without loss of generality, in this paper we use natural logarithms to calculate entropy. When entropy is calculated with a logarithm to base b, it can be obtained as $H_b(X) = \frac{1}{\log b} H(X)$.
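As a quick illustration of the discrete formula and of the change of logarithm base, a minimal Python sketch (the function name and example distribution are ours, not from the paper):

import numpy as np

def shannon_entropy(p, base=np.e):
    """Shannon entropy of a discrete distribution p; natural log by default."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # 0 * log(0) is treated as 0
    return -np.sum(p * np.log(p)) / np.log(base)

p = [0.5, 0.25, 0.25]
print(shannon_entropy(p))               # entropy in nats
print(shannon_entropy(p, base=2))       # entropy in bits = H(X) / log(2)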
Shannon entropy was then extended into the Kolmogorov–Sinai (K-S) entropy [14] for characterizing a dynamic system. Assume that the F-dimensional phase space is partitioned into a collection of cells of size $r^F$ and that the state of the system is measured at constant time intervals $\delta$. Let $p(c_1, \ldots, c_n)$ be the joint probability that the state of the system $x(t = \delta)$ is in cell $c_1$, $x(t = 2\delta)$ is in cell $c_2$, ..., and $x(t = n\delta)$ is in cell $c_n$. The K-S entropy is defined as
\text{K-S entropy} = -\lim_{\delta \to 0}\lim_{r \to 0}\lim_{n \to \infty}\frac{1}{n\delta}\sum_{c_1,\ldots,c_n} p(c_1,\ldots,c_n)\log p(c_1,\ldots,c_n).
The K-S entropy depends on several parameters and is not easy to estimate. To solve this problem, Grassberger and Procaccia [15] proposed the $K_2$ entropy as a lower bound of the K-S entropy. Given a time series $U = \{u_1, u_2, \ldots, u_N\}$ of length N, define a sequence of m-dimensional vectors $v_i(m) = [u_i, u_{i+1}, \ldots, u_{i+m-1}]$, $1 \le i \le N - m + 1$. The m-dependent functions are
C_i^m(r) = (N - m + 1)^{-1}\sum_{j=1}^{N-m+1}\theta\!\left(r - \lVert v_i(m) - v_j(m)\rVert\right)
and
C^m(r) = (N - m + 1)^{-1}\sum_{i=1}^{N-m+1} C_i^m(r),
where $\lVert v_i - v_j \rVert$ is the Euclidean metric $\lVert v_i - v_j \rVert = \left[\sum_{h=0}^{m-1}(u_{i+h} - u_{j+h})^2\right]^{1/2}$ and $\theta(\cdot)$ is the Heaviside step function. The $K_2$ entropy is defined as
K_2\ \text{entropy} = \lim_{r \to 0}\lim_{m \to \infty}\lim_{N \to \infty}\frac{1}{\delta}\log\frac{C^m(r)}{C^{m+1}(r)}.
By incorporating the embedding-vector-based phase space reconstruction idea proposed by Takens [16] and replacing the Euclidean metric with the Chebyshev metric $\lVert v_i - v_j \rVert = \max_{h=0,\ldots,m-1} |u_{i+h} - u_{j+h}|$, Eckmann and Ruelle [17] proposed an estimate of the K-S entropy through the so-called E-R entropy:
\Phi^m(r) = (N - m + 1)^{-1}\sum_{i=1}^{N-m+1}\log C_i^m(r)
\text{E-R entropy} = \lim_{r \to 0}\lim_{m \to \infty}\lim_{N \to \infty}\frac{1}{\delta}\left[\Phi^m(r) - \Phi^{m+1}(r)\right],
where the delay is often set to be δ = 1 .
The E-R entropy has been useful in classifying low-dimensional chaotic systems, but it becomes infinite for a process with superimposed noise of any magnitude [18]. Pincus [4] then extended the E-R entropy into the now well-known ApEn, which depends on a given embedding window length m and a distance cutoff r for the Heaviside function:
ApEn(U; m, r) = \Phi^m(r) - \Phi^{m+1}(r),
and
ApEn(m, r) = \lim_{N \to \infty} ApEn(U; m, r), \quad \text{where } N \text{ is the length of the discrete signal } U.
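For concreteness, a brute-force Python sketch of the ApEn estimate above (an O(N^2) illustration under our own naming, intended for short series; self-matches are counted, as in the original ApEn definition):

import numpy as np

def apen(u, m, r):
    """Approximate entropy ApEn(U; m, r) = Phi^m(r) - Phi^{m+1}(r)."""
    u = np.asarray(u, dtype=float)
    N = len(u)

    def phi(k):
        # embedding vectors v_i(k), i = 1, ..., N - k + 1
        emb = np.array([u[i:i + k] for i in range(N - k + 1)])
        # pairwise Chebyshev distances; self-matches included (O(N^2) memory)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)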
SampEn was proposed by Richman and Moorman [19] as an extension of ApEn that avoids the bias induced by counting the self-match of each embedding vector. Specifically, SampEn is formulated by:
B_i^m(r) = (N - m - 1)^{-1}\sum_{j=1, j \neq i}^{N-m}\theta\!\left(r - \lVert v_i(m) - v_j(m)\rVert\right),
B^m(r) = (N - m)^{-1}\sum_{i=1}^{N-m} B_i^m(r),
A_i^m(r) = (N - m - 1)^{-1}\sum_{j=1, j \neq i}^{N-m}\theta\!\left(r - \lVert v_i(m+1) - v_j(m+1)\rVert\right),
A^m(r) = (N - m)^{-1}\sum_{i=1}^{N-m} A_i^m(r),
SampEn(U; m, r) = -\log\frac{A^m(r)}{B^m(r)}, \quad \text{for fixed } m \text{ and } r,
SampEn(m, r) = \lim_{N \to \infty} SampEn(U; m, r), \quad \text{where } N \text{ is the length of the discrete signal } U.
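The same kind of brute-force sketch for SampEn; the key differences from ApEn are that self-matches are excluded and that the logarithm is taken of the ratio of total match counts rather than being averaged per template (again an illustrative implementation, not the authors' code):

import numpy as np

def sampen(u, m, r):
    """Sample entropy SampEn(U; m, r) = -log(A^m(r) / B^m(r))."""
    u = np.asarray(u, dtype=float)
    N = len(u)

    def n_matches(k):
        # templates of length k over the first N - m starting points, so that
        # m- and (m+1)-length templates are compared over the same index set
        emb = np.array([u[i:i + k] for i in range(N - m)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev
        np.fill_diagonal(d, np.inf)      # exclude self-matches
        return np.sum(d <= r)

    B = n_matches(m)
    A = n_matches(m + 1)
    return -np.log(A / B)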
The coarse-graining multi-scale entropy-based complexity measurement can be traced back to the work by Zhang [20] and Fogedby [21]. In [1,22], Costa et al. calculated entropy at each coarse-grained scale using SampEn and named this process MSE. As commented by Nikulin and Brismar [10], a problem of the MSE algorithm is the use of the same matching criterion r for all scales, which introduces a systematic bias into SampEn.

3. The Systematic Bias of Entropy Calculation in MSE

In MSE [1,22], the embedding vector matching threshold r is defined by the standard deviation of the original signal. Using this single threshold, the entropy of a Gaussian signal decreases with the scale used to downsample the original signal, whereas the entropy of a 1/f signal remains essentially unchanged as the scale increases. Since a 1/f signal is known to have high complexity while Gaussian noise has very low complexity, the monotonically decaying MSE trend, or the sum of MSE values across scales, was proposed as a metric for quantifying signal complexity.
However, the moving-average-based coarse-graining process automatically scales down the subsampled signal at each time scale. Without correction, this additional multiplicative scaling propagates into the standard deviation of the signal assessed at each time scale and artificially changes the sample entropy. This bias can easily be seen from the coarse graining of Gaussian noise.
Denote a Gaussian variable and its observations by $X = \{x_1, x_2, \ldots, x_N\}$, where N indicates the length of the time series. The coarse-graining or moving-averaging process can be described by $Y^{(\tau)} = \{y_j^{(\tau)}\}$, $y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} x_i$, where $\tau > 0$ is the coarse-graining level, or the so-called "scale". Given the mutual independence of the individual samples of X, the moving averaging of these samples can be considered an average of independent random variables rather than of observations of a single random variable. In other words, we can rewrite $Y^{(\tau)}$ as $Y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} X_i$, where each $X_i$ is a random variable. For Gaussian noise X, each $X_i$ is also Gaussian and can be fully characterized by the same mean $\mu$ and standard deviation (SD) $\sigma$. Through a simple calculation, we get $SD(Y^{(\tau)}) = \sigma/\sqrt{\tau}$. Because $SD(Y^{(\tau)})$ monotonically decreases with $\tau$, if we do not adjust the matching threshold, the number of matched embedding vectors will increase with $\tau$, resulting in a decreasing SampEn.
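The $1/\sqrt{\tau}$ shrinkage of the SD is easy to verify numerically. A short sketch (with our own helper names) comparing the empirical SD of coarse-grained Gaussian noise with $\sigma/\sqrt{\tau}$:

import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping moving average (coarse graining) at scale tau."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                       # Gaussian noise, sigma = 1
for tau in (1, 2, 5, 10, 20):
    y = coarse_grain(x, tau)
    print(tau, round(y.std(), 3), round(1 / np.sqrt(tau), 3))   # empirical SD vs. sigma/sqrt(tau)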
The entropy of a Gaussian-distributed variable can be calculated through Shannon entropy:
H(Y) = -\int_{-\infty}^{+\infty} p(y)\log p(y)\,dy = -\int_{-\infty}^{+\infty} p(y)\log\!\left(\frac{1}{\sigma_y\sqrt{2\pi}} e^{-\frac{(y-\mu_y)^2}{2\sigma_y^2}}\right) dy = -\int_{-\infty}^{+\infty} p(y)\log\!\left(\frac{1}{\sigma_y\sqrt{2\pi}}\right) dy - \int_{-\infty}^{+\infty} p(y)\log\!\left(e^{-\frac{(y-\mu_y)^2}{2\sigma_y^2}}\right) dy = -\log\!\left(\frac{1}{\sigma_y\sqrt{2\pi}}\right)\int_{-\infty}^{+\infty} p(y)\,dy + \frac{1}{2\sigma_y^2}\int_{-\infty}^{+\infty} (y-\mu_y)^2 p(y)\,dy = \frac{1}{2}\log(2\pi\sigma_y^2) + \frac{1}{2}.
For simplicity of description, we often normalize the random variable to have $\mu = 0$ and $\sigma = 1$. Considering the scale-dependent SD derived above, we then obtain the Shannon entropy of the Gaussian variable at scale $\tau$ as
H(Y^{(\tau)}) = \frac{1}{2}\log\!\left(\frac{2\pi}{\tau}\right) + \frac{1}{2}.
This equation clearly demonstrates the nonlinear but monotonically decreasing relationship between entropy and the scale $\tau$.
Below, we provide a mathematical derivation of the dependence of MSE on the signal subsampling scale. Given the m-dimensional embedding vectors $Z_j(m) = [Y_j, Y_{j+1}, \ldots, Y_{j+m-1}]$, the sample entropy can be expressed as [22]
SampEn(Y; m, r) = -\log\frac{\Pr\!\left(\lVert Z_j(m+1) - Z_i(m+1)\rVert \le r\right)}{\Pr\!\left(\lVert Z_j(m) - Z_i(m)\rVert \le r\right)} = -\log \Pr\!\left(\lVert Z_j(m+1) - Z_i(m+1)\rVert \le r \,\middle|\, \lVert Z_j(m) - Z_i(m)\rVert \le r\right),
where $\lVert \cdot \rVert$ denotes the Chebyshev distance.
For $m = 1$, we have
\{\lVert Z_j(m) - Z_i(m)\rVert \le r\} = \{|Y_j - Y_i| \le r\},
and
\{\lVert Z_j(m+1) - Z_i(m+1)\rVert \le r\} = \{\max\{|Y_j - Y_i|,\, |Y_{j+1} - Y_{i+1}|\} \le r\} = \{|Y_j - Y_i| \le r\} \cap \{|Y_{j+1} - Y_{i+1}| \le r\}.
Thus,
SampEn(Y; m, r) = -\log\frac{\Pr\!\left(\{|Y_j - Y_i| \le r\} \cap \{|Y_{j+1} - Y_{i+1}| \le r\}\right)}{\Pr\!\left(|Y_j - Y_i| \le r\right)} = -\log \Pr\!\left(|Y_{j+1} - Y_{i+1}| \le r \,\middle|\, |Y_j - Y_i| \le r\right).
Based on the i.i.d. property of the $Y_j$, we can conclude that
\Pr\!\left(|Y_{j+1} - Y_{i+1}| \le r \,\middle|\, |Y_j - Y_i| \le r\right) = \Pr\!\left(|Y_{j+1} - Y_{i+1}| \le r\right).
If $m \ge 2$, we get
\{\lVert Z_j(m) - Z_i(m)\rVert \le r\} = \left\{\max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right\},
and
\{\lVert Z_j(m+1) - Z_i(m+1)\rVert \le r\} = \left\{\max_{k \in \{0,\ldots,m\}} |Y_{j+k} - Y_{i+k}| \le r\right\} = \{|Y_{j+m} - Y_{i+m}| \le r\} \cap \left\{\max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right\}.
Therefore,
SampEn(Y; m, r) = -\log\frac{\Pr\!\left(\{|Y_{j+m} - Y_{i+m}| \le r\} \cap \left\{\max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right\}\right)}{\Pr\!\left(\max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right)} = -\log \Pr\!\left(|Y_{j+m} - Y_{i+m}| \le r \,\middle|\, \max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right),
and
\Pr\!\left(|Y_{j+m} - Y_{i+m}| \le r \,\middle|\, \max_{k \in \{0,\ldots,m-1\}} |Y_{j+k} - Y_{i+k}| \le r\right) = \Pr\!\left(|Y_{j+m} - Y_{i+m}| \le r\right),
given the mutual independence of the $Y_j$. Note that this conclusion does not require identical distributions; independence alone is sufficient.
For simplicity of description, we re-denote $Y_{j+m}$ and $Y_{i+m}$ by two independent, normally distributed random variables $\xi$ and $\eta$ with mean $\mu_y$ and SD $\sigma_y$. Their joint probability density function (PDF) is
p(\xi, \eta) = \frac{1}{2\pi\sigma_y^2}\, e^{-\frac{(\xi-\mu_y)^2 + (\eta-\mu_y)^2}{2\sigma_y^2}},
and the probability is
\Pr\!\left(|\xi - \eta| \le r\right) = \iint_{|\xi - \eta| \le r} \frac{1}{2\pi\sigma_y^2}\, e^{-\frac{(\xi-\mu_y)^2 + (\eta-\mu_y)^2}{2\sigma_y^2}}\,d\xi\,d\eta = \int_{-\infty}^{+\infty}\!\int_{\eta-r}^{\eta+r} \frac{1}{2\pi\sigma_y^2}\, e^{-\frac{(\xi-\mu_y)^2 + (\eta-\mu_y)^2}{2\sigma_y^2}}\,d\xi\,d\eta.
We can then get
SampEn(Y; m, r) = -\log \Pr\!\left(|Y_{j+1} - Y_{i+1}| \le r\right) = -\log \Pr\!\left(|\xi - \eta| \le r\right) = -\log \frac{1}{2\pi\sigma_y^2}\int_{-\infty}^{+\infty}\!\int_{\eta-r}^{\eta+r} e^{-\frac{(\xi-\mu_y)^2 + (\eta-\mu_y)^2}{2\sigma_y^2}}\,d\xi\,d\eta \;\overset{t = \frac{\eta-\mu_y}{\sigma_y},\; s = \frac{\xi-\mu_y}{\sigma_y}}{=}\; -\log \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\int_{t - \frac{r}{\sigma_y}}^{t + \frac{r}{\sigma_y}} e^{-\frac{s^2 + t^2}{2}}\,ds\,dt.
Similar to the Shannon entropy calculation, after normalizing the random variable to have $\mu = 0$ and $\sigma = 1$, the scale-dependent SD of the coarse-grained signal is $SD(Y^{(\tau)}) = 1/\sqrt{\tau}$. We can then get
SampEn(Y^{(\tau)}; m, r) = -\log \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\int_{t - r\sqrt{\tau}}^{t + r\sqrt{\tau}} e^{-\frac{s^2 + t^2}{2}}\,ds\,dt.
Since the interval $[t - r\sqrt{\tau},\, t + r\sqrt{\tau}]$ widens as $\tau$ increases, the above integral monotonically increases with $\tau$. Accordingly, the negative-logarithm-based sample entropy $SampEn(Y^{(\tau)}; m, r)$ monotonically decreases with $\tau$. This is consistent with the Shannon entropy-based description of the MSE bias given above.
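The monotonic decrease can also be checked numerically. The sketch below (using SciPy, our choice of tooling) evaluates the double integral above for a few scales; since $\xi - \eta$ is Gaussian with variance $2/\tau$, the integral also has the closed form $\mathrm{erf}(r\sqrt{\tau}/2)$, which is printed alongside as a sanity check:

import numpy as np
from scipy.integrate import quad
from scipy.special import erf
from scipy.stats import norm

def match_prob(r, tau):
    """Pr(|xi - eta| <= r) for xi, eta iid N(0, 1/tau), via the double integral."""
    inner = lambda t: norm.pdf(t) * (norm.cdf(t + r * np.sqrt(tau)) - norm.cdf(t - r * np.sqrt(tau)))
    p, _ = quad(inner, -np.inf, np.inf)
    return p

r = 0.15
for tau in (1, 2, 5, 10, 20):
    p = match_prob(r, tau)
    print(tau, -np.log(p), -np.log(erf(r * np.sqrt(tau) / 2)))   # SampEn decreases with tau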
The systematic bias in MSE can be corrected by using a scale-adaptive matching threshold. One approach is to adjust the threshold with the theoretical SD, $SD(\tau) = SD(0)/\sqrt{\tau}$ (where SD(0) denotes the SD of the original signal), during the $SampEn(Y^{(\tau)}; m, r)$ calculation. This works well for a Gaussian signal but may not be effective for other signals that have additional scale-dependent SD behavior beyond that induced by the subsampling scale, and finding the theoretical scale-dependent SD equation may not be trivial either. Instead, the SD can be calculated directly from the data after each coarse graining. This approach was proposed in [10].
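Putting the pieces together, a sketch of the data-adaptive correction (reusing the coarse_grain and sampen helpers sketched above; parameter names and defaults are ours):

import numpy as np

def mse(x, m=2, r_factor=0.15, max_scale=20, adaptive=True):
    """Multi-scale SampEn. adaptive=True recomputes the threshold from the SD of the
    coarse-grained series at each scale (bias-corrected); adaptive=False keeps the
    threshold fixed to r_factor * SD of the original series (original MSE)."""
    x = np.asarray(x, dtype=float)
    r_fixed = r_factor * x.std()
    values = []
    for tau in range(1, max_scale + 1):
        y = coarse_grain(x, tau)                       # helper sketched earlier
        r = r_factor * y.std() if adaptive else r_fixed
        values.append(sampen(y, m, r))                 # helper sketched earlier
    return np.array(values)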
To demonstrate the systematic bias of MSE and the effectiveness of the correction method, we used three synthetic time series with known entropy differences: Gaussian noise, 1/f noise, and a random walk. The length of each time series was $N = 2 \times 10^4$. MSE with and without bias correction was performed.
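The three test signals can be generated along the following lines; the paper does not specify its 1/f generator, so the spectral-shaping approach below is only one reasonable choice:

import numpy as np

rng = np.random.default_rng(0)
N = 20000

gaussian = rng.standard_normal(N)                     # white Gaussian noise
random_walk = np.cumsum(rng.standard_normal(N))       # integrated Gaussian noise

# 1/f (pink) noise by shaping a white spectrum: amplitude ~ 1/sqrt(f)
spectrum = np.fft.rfft(rng.standard_normal(N))
freqs = np.fft.rfftfreq(N)
gain = np.ones_like(freqs)
gain[1:] = 1 / np.sqrt(freqs[1:])                     # leave the DC bin untouched
pink = np.fft.irfft(spectrum * gain, n=N)
pink = (pink - pink.mean()) / pink.std()              # normalize to zero mean, unit SD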

4. Results

Figure 1 shows the three time series (Figure 1a) and the results of MSE with and without bias correction (Figure 1b,c). Parameters used for the SampEn calculation were $m = 2$ and $r = 0.15 \times SD$. Without bias correction, MSE produced a monotonically decaying SampEn for Gaussian noise as the scale increased. By contrast, the SampEn of Gaussian noise stayed at the same level across scales after bias correction. The SD bias showed only minor effects on the SampEn calculation for both the 1/f noise and the random walk; correcting the bias did not dramatically change their SampEn at different scales.

5. Discussion and Conclusions

We provided a full mathematical derivation of the systematic bias in MSE introduced by the coarse-graining process. We then used synthetic data to demonstrate the bias and its correction using a dynamically calculated SD. Bias correction for the Gaussian data MSE calculation works exactly as predicted by the theoretical descriptions given in this paper. The systematic bias does not appear to be a big issue for temporally correlated processes such as 1/f noise and the random walk. This is because the variance of a temporally correlated process does not change substantially with the subsampling process as long as the sampling rate remains higher than the maximal frequency. According to [23], both 1/f noise and the random walk can be considered special cases of the autoregressive integrated moving average (ARIMA) model. As we derive in Appendix A, an ARIMA model is still an ARIMA model after coarse graining, given that the residuals at different time points are independently and identically distributed (i.i.d.) Gaussian noise. In other words, the moving-averaging process will not change the signal variance and will not change SampEn.
While we only showed results based on one particular set of SampEn calculation parameters, we included additional figures in Appendix B showing that the bias and the bias correction also hold for other parameters. We did not show the effects of bias correction on real data, but the results shown on the synthetic data should generalize to real applications, since both the mathematical derivations and the correction process are independent of any specific data set and apply to any dynamic system.

Author Contributions

Conceptualization: Z.W.; mathematical derivations: J.L. and Z.W.; experiments: J.L. and Z.W.; draft preparation, J.L.; manuscript review and editing, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

Jue Lu was supported by the China Scholarship Council (CSC201908330018). Ze Wang was supported by NIH/NIA grant R01AG060054.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
MSE	multi-scale sample entropy
SampEn	sample entropy
ApEn	approximate entropy
ARIMA	autoregressive integrated moving average

Appendix A The Coarse-Grained ARIMA Process

Assume that a time series $\{X_i\}$ can be modeled by an ARIMA(p,d,q) process:
\left(1 - \sum_{h=1}^{p} \varphi_h L^h\right)(1 - L)^d X_i = \left(1 + \sum_{h=1}^{q} \theta_h L^h\right)\varepsilon_i,
where $\{\varepsilon_i\}$ are i.i.d. Gaussian noise and L is the lag operator.
Denote the consecutive coarse-grained time series of $\{X_i\}$ by $\{Y_j^{(\tau)}\}$:
Y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} X_i,
where $\tau$ is the scale.
Let
\epsilon_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} \varepsilon_i,
then $\{\epsilon_j^{(\tau)}\}$ are also i.i.d. Gaussian and we have
\left(1 - \sum_{h=1}^{p} \varphi_h L^h\right)(1 - L)^d \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} X_i = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} \left(1 - \sum_{h=1}^{p} \varphi_h L^h\right)(1 - L)^d X_i = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} \left(1 + \sum_{h=1}^{q} \theta_h L^h\right)\varepsilon_i = \left(1 + \sum_{h=1}^{q} \theta_h L^h\right)\frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} \varepsilon_i,
and
\left(1 - \sum_{h=1}^{p} \varphi_h L^h\right)(1 - L)^d Y_j^{(\tau)} = \left(1 + \sum_{h=1}^{q} \theta_h L^h\right)\epsilon_j^{(\tau)}.
This proves that, for any scale $\tau$, $\{Y_j^{(\tau)}\}$ is also an ARIMA(p,d,q) process.
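A quick empirical counterpart to this result and to the discussion in Section 5: for a temporally correlated series such as a random walk (ARIMA(0,1,0)), coarse graining barely changes the SD, in contrast to white Gaussian noise, whose SD shrinks like $1/\sqrt{\tau}$. A short sketch of our own for illustration:

import numpy as np

def coarse_grain(x, tau):
    n = len(x) // tau
    return np.asarray(x[:n * tau], dtype=float).reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(0)
N = 20000
white = rng.standard_normal(N)
walk = np.cumsum(rng.standard_normal(N))              # ARIMA(0,1,0)

for tau in (1, 2, 5, 10, 20):
    print(tau,
          round(coarse_grain(white, tau).std() / white.std(), 3),   # ~ 1/sqrt(tau)
          round(coarse_grain(walk, tau).std() / walk.std(), 3))     # stays close to 1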

Appendix B Numerical Results on Different m and r

Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6, Figure A7 and Figure A8 below show additional MSE calculation results for different SampEn parameters m and r, with and without bias correction. N denotes the length of the time series at scale 1. These figures confirm the systematic bias of the original MSE algorithm for different m and r and show that the data-adaptive correction successfully removed the bias for all assessed signals.
Figure A1. MSE calculation results with m = 2 and r = 0.1 × SD. Data length N = 2 × 10^4.
Figure A2. MSE calculation results with m = 2 and r = 0.2 × SD. Data length N = 2 × 10^4.
Figure A3. MSE calculation results with m = 2 and r = 0.3 × SD. Data length N = 2 × 10^4.
Figure A4. MSE calculation results with m = 2 and r = 0.4 × SD. Data length N = 2 × 10^4.
Figure A5. MSE calculation results with m = 3 and r = 0.1 × SD. Data length N = 2 × 10^4.
Figure A6. MSE calculation results with m = 3 and r = 0.2 × SD. Data length N = 2 × 10^4.
Figure A7. MSE calculation results with m = 3 and r = 0.3 × SD. Data length N = 2 × 10^4.
Figure A8. MSE calculation results with m = 3 and r = 0.4 × SD. Data length N = 2 × 10^4.

References

  1. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. 2002, 89, 068102. [Google Scholar] [CrossRef] [Green Version]
  2. Lake, D.E.; Richman, J.S.; Griffin, M.P.; Moorman, J.R. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2002, 283, R789–R797. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Delgado-Bonal, A.; Marshak, A. Approximate entropy and sample entropy: A comprehensive tutorial. Entropy 2019, 21, 541. [Google Scholar] [CrossRef] [Green Version]
  4. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [Green Version]
  5. Alcaraz, R.; Rieta, J.J. A review on sample entropy applications for the non-invasive analysis of atrial fibrillation electrocardiograms. Biomed. Signal Process. Control 2010, 5, 1–14. [Google Scholar] [CrossRef]
  6. Wang, Z.; Li, Y.; Childress, A.R.; Detre, J.A. Brain entropy mapping using fMRI. PLoS ONE 2014, 9, e89948. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Sokunbi, M.O. Sample entropy reveals high discriminative power between young and elderly adults in short fMRI data sets. Front. Neuroinform. 2014, 8, 69. [Google Scholar] [CrossRef]
  8. Richman, J.S.; Lake, D.E.; Moorman, J.R. Sample entropy. Methods Enzymol. 2004, 384, 172–184. [Google Scholar]
  9. Humeau-Heurtier, A. The multiscale entropy algorithm and its variants: A review. Entropy 2015, 17, 3110–3123. [Google Scholar] [CrossRef] [Green Version]
  10. Nikulin, V.V.; Brismar, T. Comment on “Multiscale entropy analysis of complex physiologic time series”. Phys. Rev. Lett. 2004, 92, 089803. [Google Scholar] [CrossRef]
  11. Nyquist, H. Certain factors affecting telegraph speed. Trans. Am. Inst. Electr. Eng. 1924, 43, 412–422. [Google Scholar] [CrossRef]
  12. Hartley, R.V. Transmission of information 1. Bell Syst. Tech. J. 1928, 7, 535–563. [Google Scholar] [CrossRef]
  13. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  14. Sinai, Y.G. On the notion of entropy of a dynamical system. Dokl. Russ. Acad. Sci. 1959, 124, 768–771. [Google Scholar]
  15. Grassberger, P.; Procaccia, I. Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. A 1983, 28, 2591–2593. [Google Scholar] [CrossRef] [Green Version]
  16. Takens, F. Invariants related to dimension and entropy. Atas Do 1983, 13, 353–359. [Google Scholar]
  17. Eckmann, J.; Ruelle, D. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 1985, 57, 617–656. [Google Scholar] [CrossRef]
  18. Pincus, S.M.; Gladstone, I.M.; Ehrenkranz, R.A. A regularity statistic for medical data analysis. J. Clin. Monit. 1991, 7, 335–345. [Google Scholar] [CrossRef]
  19. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049. [Google Scholar] [CrossRef] [Green Version]
  20. Zhang, Y.C. Complexity and 1/f noise. A phase space approach. J. De Phys. I 1991, 1, 971–977. [Google Scholar] [CrossRef]
  21. Fogedby, H.C. On the phase space approach to complexity. J. Stat. Phys. 1992, 69, 411–425. [Google Scholar] [CrossRef]
  22. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of biological signals. Phys. Rev. E 2005, 71, 021906. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Stadnitski, T. Measuring fractality. Front. Physiol. 2012, 3, 127. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Different signals and their SampEn calculated at different scales. (a) The original signals before coarse graining; (b) MSE without bias correction; (c) MSE with bias correction.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
