Article

Model Selection for Body Temperature Signal Classification Using Both Amplitude and Ordinality-Based Entropy Measures

by David Cuesta-Frau 1,*, Pau Miró-Martínez 2, Sandra Oltra-Crespo 1, Jorge Jordán-Núñez 2, Borja Vargas 3, Paula González 3 and Manuel Varela-Entrecanales 3

1 Technological Institute of Informatics, Universitat Politècnica de València, 03801 Alcoi Campus, Spain
2 Department of Statistics, Universitat Politècnica de València, 03801 Alcoi Campus, Spain
3 Internal Medicine Department, Teaching Hospital of Móstoles, 28935 Madrid, Spain
* Author to whom correspondence should be addressed.
Entropy 2018, 20(11), 853; https://doi.org/10.3390/e20110853
Submission received: 21 September 2018 / Revised: 31 October 2018 / Accepted: 5 November 2018 / Published: 6 November 2018
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)

Abstract:
Many entropy-related methods for signal classification have been proposed and exploited successfully in the last several decades. However, it is sometimes difficult to find the optimal measure and the optimal parameter configuration for a specific purpose or context. Suboptimal settings may therefore produce subpar results and not even reach the desired level of significance. In order to increase the signal classification accuracy in these suboptimal situations, this paper proposes statistical models created with uncorrelated measures that exploit the possible synergies between them. The methods employed are permutation entropy (PE), approximate entropy (ApEn), and sample entropy (SampEn). Since PE is based on subpattern ordinal differences, whereas ApEn and SampEn are based on subpattern amplitude differences, we hypothesized that a combination of PE with another method would enhance the individual performance of any of them. The dataset was composed of body temperature records, for which we did not obtain a classification accuracy above 80% with a single measure, in this study or even in previous studies. The results confirmed that the classification accuracy rose up to 90% when combining PE and ApEn with a logistic model.

1. Introduction

A great diversity of time series has been successfully analysed in the last several decades since the widespread availability of digital computers and the development of efficient data acquisition and processing methods: biology time series [1], econometrics records [2], environmental sciences data [3], industrial processes and manufacturing information [4], and many more. The case of non-linear methods, capable of extracting elusive features from any type of time series, is especially remarkable. However, these methods can sometimes be difficult to customize for a specific purpose, and some signal classification problems remain unsolved or scarcely studied. In this regard, this paper addresses the problem of physiological temperature record classification. This problem has only recently begun to be studied [5], with so far only marginally significant differences [6]. Instead of trying to find a single better non-linear optimal measure or parameter configuration, we propose a new approach, based on a combination of several sub-optimal methods.
Electroencephalographic (EEG) and heart rate variability (HRV) records are probably the two types of physiological time series most analyzed in signal classification studies using non-linear methods. The rationale of this scientific popularity is twofold. On the one hand, these records are frequently used in clinical practice, convenient and affordable monitoring devices abound, and a growing body of publicly available data has been created in the last several decades. On the other hand, recently developed non-linear measures suit well the features and requirements of these records with regard to a number of samples and noise levels.
As a consequence, there are a myriad of scientific papers describing successful classification approaches for different types of EEG or HRV signals. For instance, in [7], EEG signals were classified using several entropy statistics under noisy conditions. Despite high noise levels, most of the entropy methods were able to find differences among signals acquired from patients with disparate clinical backgrounds. The study in [8] used two of these measures to group epileptic recordings of very short length, only 868 samples. The work [9] also used EEG records to detect Alzheimer’s disease based on changes in the regularity of the signals. One out of the two methods assessed was able to find significant differences between pathological and healthy subjects. Regarding RR time series, studies such as [10] classified congestive heart failure and normal sinus rhythm again using two entropy measures and assessed the influence of the input parameters on these measures. One of the very first applications of sample entropy (SampEn) was the analysis of neonatal HRV [11]. Other approaches based on ordinal patterns instead of amplitude differences have also been successful in classifying HRV records, in this case for the diagnosis of cardiovascular autonomic neuropathy [12]. In summary, EEG and HRV records have been extensively processed using approximate entropy (ApEn), SampEn, distribution entropy (DistEn), fuzzy entropy (FuzzyEn), permutation entropy (PE), and many more, in isolation or in comparative studies.
Conversely, other biomedical records, such as blood or interstitial glucose, arterial blood pressure, or body temperature data, have not been studied as extensively. Despite their convenience and demonstrated diagnosis potential [13,14], the use of entropy, complexity, or regularity measures is still lacking in these contexts. These records are more often found in clinical settings as single readings instead of time series, and if continuous readings are available, they are usually very short and sampled at very low frequencies. As a result of this lack of scientific literature and data on which further studies can be rooted, the selection of methods and parameters is more difficult. Thus, the quest for suitable classification methods may become a brute force search, and often the results achieved are at the borderline of significance, at most. The question that arises in these cases can be: Are there no method differences because the records from several classes do not exhibit any particular feature or because the method has not been used to its full potential?
Obviously, when a single feature is not sufficient for a clear classification of objects, more features can be included in the classification function, following the general pattern recognition principles for feature extraction and selection. For instance, in [14], in addition to a non-linear measure for early fever peak detection, the classification function used other parameters such as the temperature gradient between core and peripheral body temperature. To detect atrial fibrillation in very short RR records, in [15], the authors proposed adding the heart rate as a predictor variable along with the entropy estimate. In these and other examples, a single non-linear method was combined with other parameters to improve the accuracy of the classifier employed. Along these lines, in this paper, we propose the use of two variables for classifying body temperature records. However, instead of using a single non-linear method combined with other unrelated parameters [15], the main novelty of our work is the utilization of two uncorrelated entropy measures as explanatory variables of the same equation: PE and ApEn/SampEn. There are other works with comparative analyses using several entropy measures independently, and some authors have even recommended applying more than one method together to reveal different features of the underlying dynamics [16], but our method combines more than one measure together in a single function after a correlation analysis. There are a few studies using pattern recognition techniques and more than one entropy statistic, such as [17], to improve the classification accuracy of a single method.
For many years, temperature recordings in standard clinical practice have been limited to scarce measurements (once per day or once per shift), which provides very little information about the processes underlying body temperature regulation [18,19]. For these reasons, physicians are only capable of distinguishing between febrile patients and afebrile patients. However, information from continuous body temperature recordings may be helpful in improving our understanding of body temperature disorders in patients with fever [5,20,21].
ApEn and SampEn are arguably the two families of statistics most extensively used in the non-linear biosignal processing realm, with ApEn accounting for more than 1100 citations in PubMed and SampEn for almost 800. PE is not that common yet, since it is a more recent method, but it is probably the best representative of the tools based on sample order differences instead of on sample amplitude differences, as is the case for ApEn and SampEn. Different values of SampEn/ApEn and PE between healthy individuals and patients with fever are likely to reflect subtle changes in body temperature regulation that may be more relevant than the mere identification of a fever peak. It seems reasonable to believe that the process of body temperature regulation may be altered during infectious diseases and it may return to normal during the recovery phase [22]. Therefore, information obtained by non-linear methods could be useful to evaluate the response to antimicrobial treatments or to adjust the length of those treatments.
Each method separately provides a borderline body temperature time series classification, as is the case in many other studies, but the two combined improve its accuracy significantly. The results of our study show that logistic models including SampEn/ApEn and PE have an accuracy that is acceptable for classifying temperature time series from patients with fever and healthy individuals. The ability of the models developed in this work to classify body temperature time series seems to be the first step in giving temperature recording a more significant role in clinical practice. As has been shown with other clinical signals like heart rate or glycaemia [10,13], many diseases reflect a deep disturbance of complex physiologic systems, which can be measured by non-linear statistics. This scheme could therefore be exported to other similar situations where several methods are assessed but none of them reaches the significance level desired. The solution to many of these problems probably lies in an approach similar to that described in the present paper, whose main contributions are an improvement in body temperature classification accuracy and the introduction of a logistic model to perform such a classification.

2. Materials and Methods

2.1. Entropy Measures

The input to all the entropy measures used in this study is a normalized time series of length $L$, $\mathbf{x} = \{x_1, x_2, x_3, \ldots, x_L\}$, from which embedded sequences of length $m$ starting at sample $t$ can be extracted as $\mathbf{x}_t = \{x_t, x_{t+1}, x_{t+2}, \ldots, x_{t+m-1}\}$, with $1 \le t \le L-(m-1)$. With this input, the main steps to compute ApEn, SampEn, and PE are defined next. The references included can be consulted for further details.
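As a quick illustration, the embedded sequences are simply overlapping windows of the input series. The following is a minimal Python/NumPy sketch (the function name `embed` is ours, not from the paper):

```python
import numpy as np

def embed(x, m):
    """Return the L - m + 1 embedded sequences of length m of a series x.

    Row t is x_t = (x[t], x[t+1], ..., x[t+m-1]).
    """
    x = np.asarray(x, dtype=float)
    L = len(x)
    return np.array([x[t:t + m] for t in range(L - m + 1)])

seqs = embed([3.0, 1.0, 4.0, 1.0, 5.0, 9.0], m=3)
print(seqs.shape)  # (4, 3): L = 6 and m = 3 give L - m + 1 = 4 sequences
```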

2.1.1. Approximate Entropy

ApEn is a very successful entropy measure for signal classification that was first introduced in [23]. Given the input time series $\mathbf{x}$, a distance $d$ between two embedded sequences $\mathbf{x}_i = \{x_i, x_{i+1}, x_{i+2}, \ldots, x_{i+m-1}\}$ and $\mathbf{x}_j = \{x_j, x_{j+1}, x_{j+2}, \ldots, x_{j+m-1}\}$ is defined as $d_{ij} = \max(|x_{i+k} - x_{j+k}|)$, $0 \le k \le m-1$. For each pair $1 \le i, j \le L-m+1$, this distance has to be computed. A variable termed $C_{ij}$ is assigned 1 each time the distance $d_{ij}$ between the associated two sequences is lower than a predefined threshold $r$, and 0 otherwise:

$$C_{ij} = \begin{cases} 0 & \text{if } d_{ij} \ge r \\ 1 & \text{if } d_{ij} < r. \end{cases}$$

All the $C_{ij}$ values are averaged to obtain the following statistic:

$$C_i^m(r) = \frac{1}{L-m+1} \sum_{j=1}^{L-m+1} C_{ij}.$$

This variable is then log-averaged:

$$\Phi^m(r) = \frac{1}{L-m+1} \sum_{i=1}^{L-m+1} \log C_i^m(r),$$

and the process is repeated for $m \leftarrow m+1$. Finally, ApEn can be obtained as

$$\mathrm{ApEn}(m, r, L) = \Phi^m(r) - \Phi^{m+1}(r).$$
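The steps above can be sketched directly in code. This is our own minimal, unoptimized Python/NumPy illustration, with $r$ taken as an absolute tolerance on the (already normalized) series:

```python
import numpy as np

def apen(x, m, r):
    """Approximate entropy ApEn(m, r, L) following the steps above."""
    x = np.asarray(x, dtype=float)

    def phi(m):
        n = len(x) - m + 1                     # number of embedded sequences
        seqs = np.array([x[t:t + m] for t in range(n)])
        # d_ij: Chebyshev (max-abs) distance between every pair of sequences
        d = np.max(np.abs(seqs[:, None, :] - seqs[None, :, :]), axis=2)
        C = np.mean(d < r, axis=1)             # C_i^m(r); self-matches included
        return np.mean(np.log(C))              # Phi^m(r)

    return phi(m) - phi(m + 1)

# A constant series is perfectly regular, so its ApEn is 0.
print(apen([0.5] * 50, m=2, r=0.2))  # 0.0
```

Note that the self-match ($i = j$) guarantees $C_i^m(r) > 0$, so the logarithm is always defined; this is the bias that SampEn removes.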

2.1.2. Sample Entropy

SampEn was introduced in [24] as an improvement on ApEn. The SampEn and ApEn algorithms are quite similar, especially in the first steps. The main differences are that self-matches are not counted ($j \ne i$) in Equation (1), since that case is not included in $C_{ij}$:

$$C_i^m(r) = \frac{1}{L-m} \sum_{j=1,\, j \ne i}^{L-m+1} C_{ij},$$

and that this variable is now linearly averaged instead (compare with Equation (2)):

$$\Phi^m(r) = \frac{1}{L-m+1} \sum_{i=1}^{L-m+1} C_i^m(r).$$

The process is again repeated for $m \leftarrow m+1$. Finally, SampEn can be obtained as

$$\mathrm{SampEn}(m, r, L) = -\log \frac{\Phi^{m+1}(r)}{\Phi^m(r)}.$$
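For comparison, a matching Python/NumPy sketch of SampEn (again our own illustration; note the two differences: self-matches are excluded, and the counts are averaged linearly before a single logarithm is taken):

```python
import numpy as np

def sampen(x, m, r):
    """Sample entropy SampEn(m, r, L) following the steps above."""
    x = np.asarray(x, dtype=float)

    def phi(m):
        n = len(x) - m + 1
        seqs = np.array([x[t:t + m] for t in range(n)])
        d = np.max(np.abs(seqs[:, None, :] - seqs[None, :, :]), axis=2)
        match = d < r
        np.fill_diagonal(match, False)     # exclude self-matches (j != i)
        C = match.sum(axis=1) / (n - 1)    # C_i^m(r)
        return C.mean()                    # Phi^m(r): linear average

    return -np.log(phi(m + 1) / phi(m))
```

Unlike ApEn, SampEn is undefined when no matches are found at all (the ratio becomes 0/0), which is why it is statistically better behaved on most data but can fail on very short records.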

2.1.3. Permutation Entropy

Contrary to ApEn and SampEn, PE is based on ordinal differences of $\mathbf{x}_t$ instead of amplitude differences [25]. The subsequences under comparison are not the sample amplitudes but the sample indices that result from repositioning the samples in ascending order. Initially, before the ordering process takes place, the default index set for $\mathbf{x}_t = \{x_t, x_{t+1}, x_{t+2}, \ldots, x_{t+m-1}\}$ is $\{0, 1, \ldots, m-1\}$. The vector $\boldsymbol{\pi}_t = \{\pi_0, \pi_1, \ldots, \pi_{m-1}\}$ is defined as the permutation of these indices that re-assembles $\mathbf{x}_t$ in sample ascending order, such that $x_{t+\pi_0} < x_{t+\pi_1} < \cdots < x_{t+\pi_{m-1}}$, with $\pi_i \in [0, m-1]$ and $\pi_i \ne \pi_j$ for $i \ne j$. The probability $p(\boldsymbol{\pi}_t)$ of each ordinal pattern can be estimated as its relative frequency, taking into account all the possible $m!$ permutations of $m$ symbols (indices) and all the $L-m+1$ embedded sequences $\mathbf{x}_t$ of length $m$:

$$p(\boldsymbol{\pi}_t) = \frac{\mathrm{card}(\boldsymbol{\pi}_t)}{L-m+1},$$

where $\mathrm{card}(\cdot)$ accounts for the cardinality of $\boldsymbol{\pi}_t$, namely, the number of times that order is found among the subsequences. There are potentially up to $m!$ different $\boldsymbol{\pi}_t$ patterns, although it is quite usual that some of them have a cardinality of 0. PE can then be computed as the Shannon entropy of the resulting probability distribution:

$$\mathrm{PE}(m, L) = -\sum_{p(\boldsymbol{\pi}_t) \ne 0} p(\boldsymbol{\pi}_t) \log_2 p(\boldsymbol{\pi}_t).$$
As for many other entropy statistics, a time scale could be applied to PE. This embedding delay, usually termed $\tau$, is often introduced as a subsampling factor in the input time series [26]. This study only uses the original time scale of the data for all the entropy methods, including PE, and therefore $\tau = 1$.
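A compact Python sketch of PE with $\tau = 1$ (our own illustration): the ordinal pattern of each window is obtained with `argsort`, and the pattern frequencies feed the Shannon entropy.

```python
import math
from collections import Counter

import numpy as np

def perm_entropy(x, m):
    """Permutation entropy PE(m, L) in bits, with tau = 1."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1                                   # number of windows
    patterns = [tuple(np.argsort(x[t:t + m])) for t in range(n)]
    counts = Counter(patterns)                           # card(pi) per pattern
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A strictly alternating series has exactly two ordinal patterns with
# equal frequency, so PE = 1 bit for m = 2.
x = np.tile([0.0, 1.0], 11)[:21]
print(perm_entropy(x, m=2))  # 1.0
```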

2.2. Classification Analysis

The classification is based on a quantitative model using PE, ApEn, SampEn, or a combination of two of them as input variables. Specifically, we have used a logistic regression probabilistic model [27]. The objective is to model the expected two classes of the temperature records (sick/healthy), one coded as 0 and the other coded as 1. One of the strengths of this model is that it does not require the data assumptions of other models, such as normality, linearity, or homoscedasticity, making it less restrictive than other methods such as discriminant analysis [28]. Moreover, this model is almost 10 times less data hungry than other classification techniques such as support vector machines or neural networks. With only 18–23 samples, logistic models are able to achieve a difference between the apparent AUC and the validated AUC smaller than 0.01 [29]. It is also one of the most stable classifiers, yielding consistent results for both training and validation experiments, even with imbalanced datasets [30].
Logistic models have been successfully applied in many time series classification tasks: EEG [31,32], HRV [33,34], as stated in the Introduction section, and others beyond the medical framework such as [35,36,37]. The general expression for this model is
$$p(\mathbf{z}) = \frac{\exp\left(b_0 + \sum_{i=1}^{q} b_i z_i\right)}{1 + \exp\left(b_0 + \sum_{i=1}^{q} b_i z_i\right)},$$
where $p(\mathbf{z})$ is the probability of the predicted class being 1, $b_i$ are the regression coefficients, and $z_i$ are the quantitative variables, in this case the entropy statistics employed (therefore, $q = 1$ or $q = 2$ if one or two statistics are included in the model, respectively).
Specifically, the model for each case becomes
$$p(z) = \frac{\exp(b_0 + b_1 z_1)}{1 + \exp(b_0 + b_1 z_1)}$$
for the individual (univariate) models, with $z_1$ accounting for the entropy measurement employed, that is, $z_1 = \mathrm{ApEn}(m, r, L)$ for the model using ApEn only, $z_1 = \mathrm{SampEn}(m, r, L)$ for the SampEn-based model, and $z_1 = \mathrm{PE}(m, L)$ for PE. The parameters $b_0$ and $b_1$ are the unknowns that the model computes. For the model using two measures (bivariate model), the general expression is
$$p(\mathbf{z}) = \frac{\exp(b_0 + b_1 z_1 + b_2 z_2)}{1 + \exp(b_0 + b_1 z_1 + b_2 z_2)},$$
with $z_1$ and $z_2$ accounting for the two measures employed: PE and either ApEn or SampEn. In this case, there are three parameters to compute: $b_0$, $b_1$, and $b_2$.
The computation of the model was carried out using the R statistical package [38]. The output includes the coefficients stated above ($b_0$, $b_1$, $b_2$), the standard error, the Wald statistic [39], the degrees of freedom, the significance, and the exponentiation of the coefficients, which gives the odds ratios. These results will be shown in Section 3.
The Akaike information criterion (AIC) [40] is the metric used for model assessment. This statistic is computed for all the models to be compared. The best model is that with the minimum AIC of all the models under comparison. In practical terms, the goodness of fit of the model will be assessed computing the probability of class membership for each record in the experimental database. The optimal model will be that with the minimum number of classification errors or with the maximum classification accuracy. This accuracy will be quantified in terms of specificity and sensitivity. These values can be directly obtained from the ROC curve, changing the threshold point. However, the model obtained using all the time series in the data set may not give a reliable idea of the classification capability of the model (over-fitting risk). It is better to apply the model to a subset not involved in the process of estimating it. Since the dataset is relatively small, we used the leave-one-out (LOO) method [41] and the same statistical package [42], in which all the time series except one for each class are used to estimate the model parameters. Specifically, one time series of each class is held out and not included in the model calculations (test set), and the remaining ones are used to obtain the model (training set). The resulting model is then applied to the unused pair of series in order to evaluate its performance on unseen data. This process is repeated for all the records in the dataset. The overall prediction error is finally obtained by averaging errors from each individual model obtained [43].
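The fitting and validation scheme can be sketched end to end. The following Python/NumPy code is our own minimal illustration, not the R code used in the paper: it fits the logistic coefficients by Newton-Raphson (IRLS), and runs a simplified LOO loop that holds out one record at a time (the paper holds out one series per class).

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit p(z) = sigmoid(b0 + b.z) by Newton-Raphson.

    X: (n, q) feature matrix (entropy values); y: (n,) labels in {0, 1}.
    Returns (b0, b1, ..., bq). A tiny ridge term keeps the Hessian
    invertible if the classes happen to be perfectly separable.
    """
    A = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    b = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ b, -30, 30)))
        W = p * (1.0 - p)                          # IRLS weights
        H = A.T @ (A * W[:, None]) + 1e-6 * np.eye(A.shape[1])
        b = b + np.linalg.solve(H, A.T @ (y - p))  # Newton step
    return b

def loo_accuracy(X, y):
    """Leave-one-out validation with a 0.5 decision threshold."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i              # training set: all but i
        b = fit_logistic(X[keep], y[keep])
        p = 1.0 / (1.0 + np.exp(-(b[0] + X[i] @ b[1:])))
        hits += int((p > 0.5) == bool(y[i]))
    return hits / len(y)
```

On well-separated one-dimensional features, every held-out record still lands on the correct side of the fitted boundary, so the LOO accuracy matches the apparent accuracy; on borderline data the two diverge, which is exactly what the LOO estimate is meant to expose.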
The proportion of variance explained by the model is quantified by the Nagelkerke [44] and Cox-Snell [45] $R^2$ coefficients. A value of the Nagelkerke $R^2$ greater than 0.5 indicates that the variance is well explained. The minimization criterion is based on the −2 log-likelihood (−2LL) [46], that is, the smallest achievable deviance or residual variance. These values will also be reported in the Results section.
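All of these assessment statistics are simple functions of the fitted and null log-likelihoods. A sketch in Python (our own; `k` is the number of fitted parameters, and the null model is taken as the intercept-only fit, whose constant prediction is the Class 1 rate):

```python
import numpy as np

def model_diagnostics(y, p, k):
    """Return (-2LL, AIC, Cox-Snell R^2, Nagelkerke R^2).

    y: (n,) labels in {0, 1}; p: (n,) fitted probabilities of class 1;
    k: number of estimated parameters (2 univariate, 3 bivariate).
    """
    y, p = np.asarray(y, float), np.asarray(p, float)
    n = len(y)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))      # model log-lik.
    p0 = y.mean()                                             # null model
    ll0 = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    cox_snell = 1.0 - np.exp(2.0 * (ll0 - ll) / n)
    nagelkerke = cox_snell / (1.0 - np.exp(2.0 * ll0 / n))    # rescaled to [0, 1]
    return -2.0 * ll, 2.0 * k - 2.0 * ll, cox_snell, nagelkerke
```

The Nagelkerke coefficient simply rescales Cox-Snell by its maximum attainable value, so a perfect fit approaches 1 instead of the Cox-Snell ceiling below 1.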

2.3. Experimental Dataset

The experimental dataset is composed of 30 body temperature records obtained from two groups of individuals. The first group included 16 healthy individuals (10 women and 6 men). They were asked to refrain from taking a shower and to avoid strenuous exercise, but otherwise they were allowed to follow their normal routine. The second group included 14 patients that had been admitted to a general internal medicine ward of a teaching hospital in Madrid (Spain). To be considered suitable for inclusion, patients were required to be over 18 and under 85 years of age, to have been admitted to the hospital for less than a week, and to have had at least one standard temperature reading above 38 °C on the day before they were monitored. Temperature monitoring was carried out with two probes, placed in the external auditory canal (Mono-a-Therm Tympanic Temperature Probe, Mallinckrodt) for central temperature and on the cubital aspect of the forearm (Mono-a-Therm Skin Temperature Probe, Mallinckrodt) for peripheral temperature. Measurements were obtained once per minute for 24 h and stored in a Holter device (TherCom©, Innovatec).
For the purposes of this work, an 8 h interval starting at 8:00 a.m. was selected. This way, the records were more uniform in terms of chronobiological effects, and this interval was available in all the time series. Recordings from healthy individuals are labelled as Class 0 and recordings from patients as Class 1. These records have been used in previous publications by our group, where further details can be found [20]. Ethical Review Board approval was granted, and written informed consent was obtained from each participant before inclusion.

3. Experiments and Results

The length of the time series was fixed at $L = 480$ samples, the 8 h interval stated above. ApEn and SampEn were first tested using different values of their input parameters in the vicinity of the usually recommended configuration of $r \in [0.1, 0.2]$ and $m = 2$ [47]. Specifically, the values for $m$ were 1 and 2, and $r$ varied between 0.1 and 0.25 in steps of 0.05. Except for $r = 0.1$, with a relatively low classification performance of 64%, all the tested parameter values yielded a very similar accuracy, around 70%, with $m = 1$ and $r = 0.25$ offering a slightly superior performance. This final parameter configuration is very similar to that used in previous similar studies [5].
The influence of the embedding dimension on PE was also analysed, with $m$ ranging from 3 up to 8. The classification results for each value are shown in Table 1. The value of the embedding dimension in PE was finally set at 8. This configuration was found to be optimal for the same time series in terms of classification performance and computational cost [48]. However, since $m = 8$ does not satisfy the recommendation $m! \ll L$, $m = 5$ was also used in the computation of the final model.
The three statistics, ApEn($m = 1$, $r = 0.25$, $L = 480$), SampEn($m = 1$, $r = 0.25$, $L = 480$), and PE($m = 8$, $L = 480$), were first computed for each record. Results are shown in Table 2.
The next step was to assess the independence between the input variables used to build the model. This step was carried out using a correlation matrix and by computing the p-values of the correlation test between variable pairs, as described in Table 3 and Table 4. This correlation analysis was used to assess the association degree between the information provided by ApEn and SampEn, in order to omit possible redundancy in the models fitted, and provide a rationale for not using both measures in the same model.
As expected, ApEn and SampEn are strongly correlated. However, PE exhibits very low correlations, and high p-values, which suggests there is no correlation between PE and any of the other two measures, ApEn or SampEn. This may be due to the fact that PE is based on ordinal differences, whereas the other two are based on amplitude differences, as was hypothesized.
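This kind of screening is a one-liner with a correlation matrix. The Python/NumPy sketch below uses purely hypothetical values (not the paper's data), shaped only to mimic the reported behaviour:

```python
import numpy as np

# Hypothetical entropy values for 6 records -- illustrative only.
# ApEn and SampEn move together; PE varies independently of both.
apen_v   = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.37])
sampen_v = np.array([0.40, 0.52, 0.30, 0.58, 0.47, 0.35])
pe_v     = np.array([8.80, 8.80, 8.30, 8.30, 8.20, 8.60])

R = np.corrcoef([pe_v, apen_v, sampen_v])
print(np.round(R, 3))
# A |r| near 1 between ApEn and SampEn makes using both in one model
# redundant, while a low |r| for PE against either one motivates the
# joint models studied next.
```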
In the following sections, the predictive capability of each one of the measures is assessed, using a logistic model for all the variables and their combinations, discarding the correlated cases. PE, ApEn, or SampEn are the temperature time series features used for classification.

3.1. Individual Models

Table 5 shows the results of the model using only PE. This model, whose assessment parameters are summarized in Table 6, achieves a significant classification performance, with 83.3% correctly classified records (Table 7) and an average classification performance of 77.6% using the LOO method (Table 6). The LOO method leaves out one time series (validation set) of each class, and a model is built using the remaining data (training data). This model is used to make a prediction about the validation set, and the final classification performance using LOO is obtained by averaging all the partial results. The classification achieved is expected to be lower than that for the entire dataset since training and test sets are different, but provides a good picture of the generalization capabilities of the model.
The percentages in Table 7 account for sensitivity (correct percentage for Class 0), specificity (correct percentage for Class 1), and classification accuracy (total). This will be repeated for the other models (confusion matrix).
Replacing the values obtained for the model coefficients, $p(z)$ can then be computed as

$$p(z) = \frac{\exp(-32.202 + 3.704\, z_{\mathrm{PE}})}{1 + \exp(-32.202 + 3.704\, z_{\mathrm{PE}})}.$$
For illustrative purposes, Table 8 shows the $p(z)$ values obtained for all the PE results ($z_{\mathrm{PE}}$) in Table 2. If $p(z) > 0.5$, the time series is classified as Class 1; otherwise, it is classified as Class 0. According to this threshold, there are 2 classification errors in Class 0 and 3 in Class 1. This process can be repeated for all the models fitted in this study.
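As a worked example of this thresholding, the fitted univariate PE model can be evaluated directly. The sketch below is ours; the coefficient magnitudes are those reported above, with the intercept taken as negative (its sign appears to have been lost in extraction), and the $z_{\mathrm{PE}}$ values are hypothetical, not taken from Table 2.

```python
import math

def p_of_z(z_pe, b0=-32.202, b1=3.704):
    """p(z) for the univariate PE logistic model (coefficients as above)."""
    eta = b0 + b1 * z_pe
    return math.exp(eta) / (1.0 + math.exp(eta))

# Decision boundary: p(z) = 0.5 at z_PE = -b0 / b1 = 32.202 / 3.704 ~ 8.69.
for z in (8.2, 9.2):  # hypothetical PE values either side of the boundary
    label = 1 if p_of_z(z) > 0.5 else 0
    print(f"z_PE = {z}: p(z) = {p_of_z(z):.3f} -> Class {label}")
```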
The results of the model using only SampEn are shown in Table 9. In contrast to results with PE, this model, whose parameters are summarized in Table 10, achieves a borderline classification performance instead, with 70% correctly classified records (Table 11), but only 57.1% for Class 1 records. The average classification performance was 68.7% using the LOO method (Table 10).
The last individual model, using only ApEn, achieves a better performance than that of SampEn. Its modelling results are shown in Table 12, and its summary in Table 13. The classification performance is also on the verge of significance (significance values of 0.014 and 0.021), with an overall accuracy of 73.3%, but with a better Class 1 classification of 64.3% (Table 14). The average classification performance was 69.7% using the LOO method (Table 13). The better performance of ApEn over SampEn on temperature records, although counterintuitive, is in accordance with other similar studies [5].

3.2. Joint Models

The joint models correspond to models where PE and SampEn, or PE and ApEn, are combined to improve the classification performance of the models described in the previous section. The model results using PE and SampEn are shown in Table 15, Table 16 and Table 17. In comparison with previous individual results for PE or SampEn, there is a compelling performance improvement, from 83.3% to 90% classification accuracy, although the performance for SampEn was only 70%. Arguably, there is a synergy between PE and SampEn, as expected. The average classification performance was 87.2% using the LOO method (Table 16).
Figure 1 summarizes the ROC plots of all the models studied. It becomes apparent in this figure how the performance significantly increases for the joint models.
Visually, the separability of the classes using SampEn and PE combined in a logistic model is shown in Figure 2. As numerically described in Table 17, only 1 or 2 objects are located in the opposite group.
The model results using PE and ApEn are shown in Table 18, Table 19 and Table 20. As for PE with SampEn, in comparison with previous individual results, there is a compelling performance improvement, from 83.3% to 93.3% classification accuracy, although the performance for ApEn was 73.3%. Again, there appears to be a synergy between PE and ApEn. The average classification performance was 90.1% using the LOO method (Table 19).
This is the model with the highest classification accuracy. Replacing the values obtained, the fitted logistic model becomes
$$p(z) = \frac{\exp(-24.94 + 3.433\, z_{\mathrm{PE}} - 12.806\, z_{\mathrm{ApEn}})}{1 + \exp(-24.94 + 3.433\, z_{\mathrm{PE}} - 12.806\, z_{\mathrm{ApEn}})},$$
from which $p(z)$ can be computed by replacing $z_{\mathrm{PE}}$ and $z_{\mathrm{ApEn}}$ with their values for each time series, as done for the univariate PE model.
The separability of the classes using ApEn and PE combined in a logistic model is depicted in Figure 3. As numerically described in Table 20, only one object of each class is located in the opposite group.
The LOO analysis was also performed using these joint models, omitting a record from each class in each experiment and averaging the classification results obtained. For the model with PE and SampEn, classification accuracy dropped from 90% to 87.22%. For the model with PE and ApEn, it also dropped, from 93.3% to 90.11%. These performance decrements can be expected in any LOO analysis. A 3% difference can be considered small enough to assume a reasonable generalization capability for the joint models. Table 21 summarizes the performance of all the models studied.
The computation of the final model was repeated using the PE results achieved with $m = 5$, as described in Table 1. In this case, the parameters of the model became $b_{\mathrm{PE}} = 3.1462$, $b_{\mathrm{ApEn}} = -14.54$, and $b_0 = -16.005$, with $p = 0.0000$. Using this model instead, there were 4 classification errors in Class 0 and 1 error in Class 1, with a global accuracy of 83.3%. This is the same performance as using only PE, but with a more conservative approach in terms of $m$. The classification was also improved by 10% in comparison with the results achieved by PE and ApEn in isolation with the same parameter configuration.
Finally, in order to further validate the approach proposed in this study, we applied the same scheme to EEG records of the Bonn database [49]. This database is publicly available and has been used in many studies, including ours [7,48,50] and others that have also proposed using more than one entropy statistic simultaneously [16,17] to improve classification performance. We therefore omit the details of this database, which can be obtained from those papers, since it is not the focus of the present study.
We applied the same SampEn and ApEn configuration as in [17], and the PE configuration used here. There is a great classification performance variation for each pair of classes, but the segmentation of EEGs from healthy subjects with eyes open (Group A in [17]) and from subjects with epilepsy during a seizure-free period from the epileptogenic zone (Group C in [17]) yielded a borderline significant classification performance (52% for PE, 78% for SampEn, and 72% for ApEn) that suited very well the case studied in the present paper.
A model including PE and SampEn was created as described above, with the following results: $b_{\mathrm{PE}} = 5.4482$, $b_{\mathrm{SampEn}} = -39.2787$, and $b_0 = -13.2617$, with $p < 0.01$. Applying that model in a similar way to that in Equation (9), only 7 objects of Group A and 4 of Group C were misclassified. Overall, the classification performance increased up to 94.5%.

4. Discussion

PE could initially be supposed to look at signal properties different from those that ApEn or SampEn capture. Indeed, the correlation analysis in Table 3 presents very low values (−0.2374 and −0.1342), whereas, as expected, ApEn and SampEn were strongly correlated. This initial test suggested that only models combining PE with either SampEn or ApEn should be studied. In addition, all the coefficients obtained were reasonably similar, without very large standard errors, which confirms that the S-shaped logistic model function is a suitable relationship for the data (there are no separation problems [51]).
Individual models were first computed for each measure in order to assess their performance independently. The classification results are acceptable for PE but, at most, borderline for ApEn and SampEn: 87.5% and 78.6% for PE (Class 0 and Class 1, respectively), against 81.3% and 64.3% for ApEn, and 81.3% and 57.1% for SampEn. While the classification accuracy for Class 0 is similar for all measures, it is very poor for Class 1 using ApEn or SampEn.
Two joint models were studied, one using PE and ApEn and the other using PE and SampEn, namely, the pairs of uncorrelated explanatory variables according to Table 3. The model with PE and ApEn improved on the best individual performances in all cases, up to 93.8% and 92.9%. The model with PE and SampEn also improved on the individual results, but to a lesser extent: 87.5% and 92.9%. Therefore, the classification results indicate that PE and ApEn are the best choice for a model in this case, as confirmed by the minimum AIC value (Table 21). According to these results, ApEn outperforms SampEn, which may seem counter-intuitive, but this also happened in a similar study with temperature records [5]. Moreover, the LOO analysis yielded a very similar classification performance, with only a 3% drop, still well above the individual performances. Specifically, the accuracy dropped from 83.3% to 77.6% using PE, from 70% to 68.7% using SampEn, and from 73.3% to 69.7% using ApEn. Regarding the joint models, it also dropped from 93.3% to 87.2% using PE and ApEn, whereas with PE and SampEn it was fairly constant: 90% against 90.1%. Therefore, it can be concluded that the models generalize well, given the small dataset available.
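A minimal leave-one-out scheme of this kind can be sketched as follows. The logistic classifier is hand-rolled with gradient descent, and the 30 two-feature records are synthetic stand-ins (not the actual dataset), drawn so that the first feature separates the classes roughly as PE does:

```python
import numpy as np

# Synthetic (PE, ApEn)-like data: 16 Class-0 and 14 Class-1 records.
rng = np.random.default_rng(1)
X0 = rng.normal([8.3, 0.30], 0.25, (16, 2))
X1 = rng.normal([9.3, 0.20], 0.25, (14, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 16 + [1] * 14)

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent fit of a logistic regression with intercept."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

# LOO: refit on n-1 records, test the held-out one, average the hits.
# Features are standardized with training-fold statistics only.
hits = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    mu, sd = X[mask].mean(axis=0), X[mask].std(axis=0)
    w = fit_logistic((X[mask] - mu) / sd, y[mask])
    hits += int(predict(w, (X[i:i + 1] - mu) / sd)[0] == y[i])
loo_accuracy = hits / len(y)
```

With only 30 records, LOO is a natural choice: each fold discards a single observation, so the fitted models stay close to the full-data model while still giving an out-of-sample estimate.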
The Nagelkerke R2 coefficients were smaller than 0.5 for the individual models using only ApEn or SampEn (0.493 and 0.399, respectively), whereas for PE it was 0.588. These values also confirm that the individual results can only be considered significant for PE, although ApEn almost reached the significance level in terms of R2. The two joint models also improved on this parameter, with values higher than 0.77.
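For reference, both pseudo-R2 coefficients can be recomputed directly from the reported deviances (−2·log-likelihood). The sketch below reproduces the PE-only figures of Table 6, assuming the null (intercept-only) model for the 16/14 class split:

```python
import math

# Deviance of the fitted PE-only model (Table 6) and the class counts.
n, n0, n1 = 30, 16, 14
d_model = 24.031
# Null deviance of the intercept-only model: each record gets p = n_k / n.
d_null = -2 * (n0 * math.log(n0 / n) + n1 * math.log(n1 / n))

# Cox-Snell R^2 and its Nagelkerke rescaling to a [0, 1] range.
r2_cox_snell = 1.0 - math.exp((d_model - d_null) / n)
r2_nagelkerke = r2_cox_snell / (1.0 - math.exp(-d_null / n))
# r2_cox_snell ≈ 0.441 and r2_nagelkerke ≈ 0.588, matching Table 6.
```

The same two lines reproduce the other entries of Tables 10, 13, 16, and 19 when fed the corresponding deviances.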
In terms of class balance, the individual model based solely on SampEn yields 6 and 3 errors for the two classes, which is slightly unbalanced. Results are more evenly distributed for the other two individual models (3 and 2 errors for PE, and 5 and 3 errors for ApEn). For the joint models, the classification is balanced, with 2 and 1 errors for PE with SampEn, and only 1 error per class for the model proposed. This can be considered another advantage of the proposed approach, since the classification is not only more accurate but also more evenly distributed.

5. Conclusions

Entropy measures are sometimes unable to find significant differences among time series from disjoint clusters. This can be due to a sub-optimal parameter configuration, specific signal features, or simply because the method chosen is not appropriate for that purpose in that specific context. However, even when statistically significant differences are not found, classification results are frequently well above simple guessing, falling just short of significance. Taking advantage of the fact that each measure usually focuses on a specific region of the parameter space, we hypothesized that a combination of uncorrelated statistics could improve the classification results achieved by each one independently and reach a suitable significance level.
With that purpose in mind, we analyzed the classification performance of a logistic model built from two entropy statistics: PE and ApEn/SampEn. These measures look at different relationships in the time series: ordinal and amplitude variations, respectively. Separately, they were less capable of tackling the difficult problem of body temperature time series classification (83% and 73% accuracy, respectively), but together the classification accuracy rose to 93% and 90% using a LOO approach. It is important to note that the main goal of this work was not to determine the exact percentage of correctly assigned records, but to demonstrate that a combined approach can improve the baseline performance, however high or low it already is.
This scheme could be applied to other classification problems where independent measures achieve borderline results when applied in isolation. Exploiting possible synergies between different methods is a novel approach that has not been used extensively so far, and it could open the door to more accurate classifiers.

Author Contributions

D.C.-F. and P.M.-M. designed and conducted the experiments. B.V., P.G. and M.V.-E. prepared and provided the experimental dataset and the clinical viewpoint. S.O.-C. and J.J.-N. developed the statistical analysis. D.C.-F. wrote the paper.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cerutti, S.; Carrault, G.; Cluitmans, P.; Kinie, A.; Lipping, T.; Nikolaidis, N.; Pitas, I.; Signorini, M. Non-linear algorithms for processing biological signals. Comput. Methods Programs Biomed. 1996, 51, 51–73. [Google Scholar] [CrossRef]
  2. Hashem, P.M.; Potter, S.M. Nonlinear dynamics and econometrics: An introduction. J. Appl. Econom. 1992, 7, S1–S7. [Google Scholar]
  3. Kian, R.; Horrillo, J.; Zaytsev, A.; Yalciner, A.C. Capturing Physical Dispersion Using a Nonlinear Shallow Water Model. J. Mar. Sci. Eng. 2018, 6. [Google Scholar] [CrossRef]
  4. Ge, Z.; Song, Z.; Ding, S.X.; Huang, B. Data Mining and Analytics in the Process Industry: The Role of Machine Learning. IEEE Access 2017, 5, 20590–20616. [Google Scholar] [CrossRef]
  5. Cuesta, D.; Varela, M.; Miró, P.; Galdós, P.; Abásolo, D.; Hornero, R.; Aboy, M. Predicting survival in critical patients by use of body temperature regularity measurement based on approximate entropy. Med. Biol. Eng. Comput. 2007, 45, 671–678. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Papaioannou, V.E.; Chouvarda, I.G.; Maglaveras, N.K.; Baltopoulos, G.I.; Pneumatikos, I.A. Temperature multiscale entropy analysis: A promising marker for early prediction of mortality in septic patients. Physiol. Meas. 2013, 34, 1449. [Google Scholar] [CrossRef] [PubMed]
  7. Cuesta-Frau, D.; Miró-Martínez, P.; Núñez, J.J.; Oltra-Crespo, S.; Picó, A.M. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics. Comput. Biol. Med. 2017, 87, 141–151. [Google Scholar] [CrossRef] [PubMed]
  8. Li, P.; Karmakar, C.; Yan, C.; Palaniswami, M.; Liu, C. Classification of 5-S Epileptic EEG Recordings Using Distribution Entropy and Sample Entropy. Front. Physiol. 2016, 7, 136. [Google Scholar] [CrossRef] [PubMed]
  9. Abásolo, D.; Hornero, R.; Espino, P.; Álvarez, D.; Poza, J. Entropy analysis of the EEG background activity in Alzheimer’s disease patients. Physiol. Meas. 2006, 27, 241. [Google Scholar] [CrossRef] [PubMed]
  10. Zhao, L.; Wei, S.; Zhang, C.; Zhang, Y.; Jiang, X.; Liu, F.; Liu, C. Determination of Sample Entropy and Fuzzy Measure Entropy Parameters for Distinguishing Congestive Heart Failure from Normal Sinus Rhythm Subjects. Entropy 2015, 17, 6270–6288. [Google Scholar] [CrossRef] [Green Version]
  11. Lake, D.E.; Richman, J.S.; Griffin, M.P.; Moorman, J.R. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2002, 283, R789–R797. [Google Scholar] [CrossRef] [PubMed]
  12. Naranjo, C.C.; Sanchez-Rodriguez, L.M.; Martínez, M.B.; Báez, M.E.; García, A.M. Permutation entropy analysis of heart rate variability for the assessment of cardiovascular autonomic neuropathy in type 1 diabetes mellitus. Comput. Biol. Med. 2017, 86, 90–97. [Google Scholar] [CrossRef] [PubMed]
  13. Rodriguez de Castro, C.; Vigil, L.; Vargas, B.; Garcia Delgado, E.; Garcia-Carretero, R.; Ruiz-Galiana, J.; Varela, M. Glucose time series complexity as a predictor of type 2 Diabetes. Diabetes Metab. Res. Rev. 2017, 30, e2831. [Google Scholar] [CrossRef] [PubMed]
  14. Jordan, J.; Miro-Martinez, P.; Vargas, B.; Varela-Entrecanales, M.; Cuesta-Frau, D. Statistical models for fever forecasting based on advanced body temperature monitoring. J. Crit. Care 2017, 37, 136–140. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Lake, D.E.; Moorman, J.R. Accurate estimation of entropy in very short physiological time series: The problem of atrial fibrillation detection in implanted ventricular devices. Am. J. Physiol. Heart Circ. Physiol. 2011, 300, H319–H325. [Google Scholar] [CrossRef] [PubMed]
  16. Keller, K.; Unakafov, A.M.; Unakafova, V.A. Ordinal Patterns, Entropy, and EEG. Entropy 2014, 16, 6212–6239. [Google Scholar] [CrossRef]
  17. Keller, K.; Mangold, T.; Stolz, I.; Werner, J. Permutation Entropy: New Ideas and Challenges. Entropy 2017, 19, 134. [Google Scholar] [CrossRef]
  18. Dakappa, P.H.; Bhat, G.K.; Bolumbu, G.; Rao, S.B.; Adappa, S.; Mahabala, C. Comparison of Conventional Mercury Thermometer and Continuous TherCom® Temperature Recording in Hospitalized Patients. J. Clin. Diagn. Res. 2016, 10, OC43–OC46. [Google Scholar] [CrossRef] [PubMed]
  19. Varela, M.; Ruiz-Esteban, R.; Martinez-Nicolas, A.; Cuervo-Arango, J.A.; Barros, C.; Delgado, E.G. Catching the spike and tracking the flow: Holter–temperature monitoring in patients admitted in a general internal medicine ward. Int. J. Clin. Pract. 2011, 65, 1283–1288. [Google Scholar] [CrossRef] [PubMed]
  20. Varela, M.; Cuesta, D.; Madrid, J.A.; Churruca, J.; Miro, P.; Ruiz, R.; Martinez, C. Holter monitoring of central and peripheral temperature: possible uses and feasibility study in outpatient settings. J. Clin. Monit. Comput. 2009, 23, 209–216. [Google Scholar] [CrossRef] [PubMed]
  21. Dakappa, P.H.; Prasad, K.; Rao, S.B.; Bolumbu, G.; Bhat, G.K.; Mahabala, C. A Predictive Model to Classify Undifferentiated Fever Cases Based on Twenty-Four-Hour Continuous Tympanic Temperature Recording. J. Healthc. Eng. 2017, 2017, 5707162. [Google Scholar] [CrossRef] [PubMed]
  22. Vargas, B.; Varela, M.; Ruiz-Esteban, R.; Cuesta-Frau, D.; Cirugeda-Roldan, E. What Can Biosignal Entropy Tell Us About Health and Disease? Applications in Some Clinical Fields. Nonlinear Dyn. Psychol. Life Sci. 2015, 19, 419–436. [Google Scholar]
  23. Pincus, S.; Gladstone, I.; Ehrenkranz, R. A regularity statistic for medical data analysis. J. Clin. Monit. Comput. 1991, 7, 335–345. [Google Scholar] [CrossRef]
  24. Richman, J.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049. [Google Scholar] [CrossRef] [PubMed]
  25. Bandt, C.; Pompe, B. Permutation Entropy: A Natural Complexity Measure for Time Series. Phys. Rev. Lett. 2002, 88, 174102. [Google Scholar] [CrossRef] [PubMed]
  26. Azami, H.; Escudero, J. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation. Comput. Methods Programs Biomed. 2016, 128, 40–51. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Bewick, V.; Cheek, L.; Ball, J.R. Statistics review 14: Logistic regression. Crit. Care 2005, 9, 112–118. [Google Scholar] [CrossRef] [PubMed]
  28. Josephat, P.K.; Ame, A. Effect of Testing Logistic Regression Assumptions on the Improvement of the Propensity Scores. Int. J. Stat. Appl. 2018, 8, 9–17. [Google Scholar]
  29. Van der Ploeg, T.; Austin, P.C.; Steyerberg, E.W. Modern modelling techniques are data hungry: A simulation study for predicting dichotomous endpoints. BMC Med. Res. Methodol. 2014, 14, 137. [Google Scholar] [CrossRef] [PubMed]
  30. Rahman, H.A.A.; Wah, Y.B.; He, H.; Bulgiba, A. Comparisons of ADABOOST, KNN, SVM and Logistic Regression in Classification of Imbalanced Dataset. In Soft Computing in Data Science; Berry, M.W., Mohamed, A., Yap, B.W., Eds.; Springer: Singapore, 2015; pp. 54–64. [Google Scholar]
  31. Subasi, A.; Erçelebi, E. Classification of EEG signals using neural network and logistic regression. Comput. Methods Programs Biomed. 2005, 78, 87–99. [Google Scholar] [CrossRef] [PubMed]
  32. Tomioka, R.; Aihara, K.; Robert Müller, K. Logistic regression for single trial EEG classification. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2007; Volume 19, pp. 1377–1384. [Google Scholar]
  33. Hu, W.; Jin, X.; Zhang, P.; Yu, Q.; Yin, G.; Lu, Y.; Xiao, H.; Chen, Y.; Zhang, D. Deceleration and acceleration capacities of heart rate associated with heart failure with high discriminating performance. Sci. Rep. 2016, 6, 23617. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Igasaki, T.; Nagasawa, K.; Murayama, N.; Hu, Z. Drowsiness estimation under driving environment by heart rate variability and/or breathing rate variability with logistic regression analysis. In Proceedings of the 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 189–193. [Google Scholar]
  35. Henry, F.; Herwindiati, D.E.; Mulyono, S.; Hendryli, J. Sugarcane Land Classification with Satellite Imagery using Logistic Regression Model. IOP Conf. Ser. Mater. Sci. Eng. 2017, 185, 012024. [Google Scholar] [CrossRef] [Green Version]
  36. Perelman, L.; Arad, J.; Housh, M.; Ostfeld, A. Event Detection in Water Distribution Systems from Multivariate Water Quality Time Series. Environ. Sci. Technol. 2012, 46, 8212–8219. [Google Scholar] [CrossRef] [PubMed]
  37. Zaidi, M. Forecasting Stock Market Trends by Logistic Regression and Neural Networks Evidence from Ksa Stock Market. Int. J. Econ. Commer. Manag. 2016, 4, 4–7. [Google Scholar]
  38. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013. [Google Scholar]
  39. Katsaragakis, S.; Koukouvinos, C.; Stylianou, S.; Theodoraki, E.M. Comparison of statistical tests in logistic regression: The case of hypernatreamia. J. Mod. Appl. Stat. Methods 2005, 4, 514–521. [Google Scholar] [CrossRef]
  40. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  41. Vehtari, A.; Gelman, A.; Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat. Comput. 2016, 27, 1413–1432. [Google Scholar] [CrossRef] [Green Version]
  42. Davison, A.C.; Hinkley, D.V. Bootstrap Methods and Their Applications; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  43. Sing, T.; Sander, O.; Beerenwinkel, N.; Lengauer, T. ROCR: Visualizing classifier performance in R. Bioinformatics 2005, 21, 3940–3941. [Google Scholar] [CrossRef] [PubMed]
  44. Nagelkerke, N.J.D. A note on a general definition of the coefficient of determination. Biometrika 1991, 78, 691–692. [Google Scholar] [CrossRef]
  45. Thoya, D.; Waititu, A.; Magheto, T.; Ngunyi, A. Evaluating Methods of Assessing Optimism in Regression Models. Am. J. Appl. Math. Stat. 2018, 6, 126–134. [Google Scholar]
  46. Hemmert, G.A.J.; Schons, L.M.; Wieseke, J.; Schimmelpfennig, H. Log-likelihood-based Pseudo-R2 in Logistic Regression: Deriving Sample-sensitive Benchmarks. Sociol. Methods Res. 2018, 47, 507–531. [Google Scholar] [CrossRef]
  47. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [PubMed]
  48. Cuesta-Frau, D.; Varela-Entrecanales, M.; Molina-Picó, A.; Vargas, B. Patterns with Equal Values in Permutation Entropy: Do They Really Matter for Biosignal Classification? Complexity 2018, 2018, 1324696. [Google Scholar] [CrossRef]
  49. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef] [PubMed]
  50. Cirugeda-Roldan, E.; Cuesta-Frau, D.; Miro-Martinez, P.; Oltra-Crespo, S. Comparative Study of Entropy Sensitivity to Missing Biosignal Data. Entropy 2014, 16, 5901–5918. [Google Scholar] [CrossRef] [Green Version]
  51. Mansournia, M.A.; Geroldinger, A.; Greenland, S.; Heinze, G. Separation in Logistic Regression: Causes, Consequences, and Control. Am. J. Epidemiol. 2018, 187, 864–870. [Google Scholar] [CrossRef] [PubMed]
Figure 1. ROC plots of all the models studied. The best classification performance is achieved with the PE+ApEn model (bold dotted line).
Figure 2. Clouds of points for temperature records using PE and SampEn as coordinates. The separability of the two classes can be easily observed, with Class 1 objects (triangles) located mainly at the lower right zone of the plot, whereas Class 0 objects (circles) are located at the higher left zone. Only two circles and one triangle are clearly misplaced, accounting for the errors in Table 17.
Figure 3. Clouds of points for temperature records using PE and ApEn as coordinates. The separability of the two classes can be easily observed, with Class 1 objects (triangles) located mainly at the lower right zone of the plot, whereas Class 0 objects (circles) are located at the higher left zone, as for the PE-SampEn case. Only one circle and one triangle are clearly misplaced, accounting for the errors in Table 20.
Table 1. Classification results using permutation entropy (PE) with m ranging from 3 up to 8.

m    Sensitivity (Class 0)    Specificity (Class 1)    Accuracy (Correct)
8    0.875                    0.786                    0.833
7    0.9375                   0.6428                   0.8
6    0.75                     0.7142                   0.733
5    0.8125                   0.5714                   0.7
4    0.6875                   0.5714                   0.633
3    0.5625                   0.6428                   0.6
Table 2. Individual results for each of the measures employed.

Record    ApEn        PE          SampEn      Class
1         0.375369    8.173881    0.259592    0
2         0.355621    7.354433    0.166436    0
3         0.427184    8.233846    0.305818    0
4         0.328920    8.437123    0.133882    0
5         0.742144    8.523112    0.589070    0
6         0.444839    8.458743    0.312369    0
7         0.444839    8.253519    0.331899    0
8         0.465783    8.422755    0.327447    0
9         0.649292    8.562641    0.411671    0
10        0.334577    8.737973    0.238648    0
11        0.404367    7.944453    0.282284    0
12        0.686112    8.683103    0.500797    0
13        0.211678    7.045437    0.125658    0
14        0.635363    8.702552    0.488307    0
15        0.375369    8.173881    0.259592    0
16        0.646044    8.023478    0.333306    0
17        0.132585    9.096728    0.102007    1
18        0.409274    9.480602    0.292390    1
19        0.083815    8.160992    0.074664    1
20        0.457642    8.042545    0.290163    1
21        0.143293    9.393343    0.090606    1
22        0.354052    9.858080    0.256855    1
23        0.407061    9.594236    0.275580    1
24        0.314689    9.703521    0.243019    1
25        0.269602    8.792517    0.161793    1
26        0.027989    9.439494    0.006332    1
27        0.376410    8.697850    0.230783    1
28        0.241392    9.959548    0.172143    1
29        0.053292    9.182114    0.011505    1
30        0.279302    8.660942    0.201991    1
Table 3. Correlation results obtained for the three measures. Clearly, approximate entropy (ApEn) and sample entropy (SampEn) are strongly correlated.

           ApEn       PE         SampEn
ApEn       1.0000     −0.2374    0.9604
PE         −0.2374    1.0000     −0.1342
SampEn     0.9604     −0.1342    1.0000
Table 4. Significance of the correlation analysis between measures.

           ApEn       PE        SampEn
ApEn       -          0.2066    <0.0001
PE         0.2066     -         0.4795
SampEn     <0.0001    0.4795    -
Table 5. Variables in the equation for PE.

Variable    Coefficient    Standard Error    Wald     df    Significance    Exp(B)
PE (b1)     3.704          1.395             7.050    1     0.008           40.598
b0          −32.202        12.025            7.171    1     0.007           0.000
Table 6. Individual model summary for PE. It includes some R2 measures to assess the model’s predictive power, the area under ROC curve (AUC), and the leave-one-out (LOO) average classification results.

Step    −2 Log Likelihood    Cox–Snell R2    Nagelkerke R2    AUC     LOO
1       24.031               0.441           0.588            0.87    77.6%
Table 7. Percentage agreement between observed and predicted classifications for temperature records using an individual model based on PE.

Observed    Predicted Class 0    Predicted Class 1    Percentage Correct
Class 0     14                   2                    87.5
Class 1     3                    11                   78.6
Total                                                 83.3
Table 8. Quantitative model probability results. Classification errors based on the computed p(z) value are marked with an asterisk.

Record    PE          Class    p(z)
1         8.173881    0        0.1270
2         7.354433    0        0.0069
3         8.233846    0        0.1537
4         8.437123    0        0.2783
5         8.523112    0        0.3465
6         8.458743    0        0.2946
7         8.253519    0        0.1634
8         8.422755    0        0.2677
9         8.562641    0        0.3803
10        8.737973    0        0.5402 *
11        7.944453    0        0.0585
12        8.683103    0        0.4895
13        7.045437    0        0.0022
14        8.702552    0        0.5075 *
15        8.173881    0        0.1270
16        8.023478    0        0.0769
17        9.096728    1        0.8161
18        9.480602    1        0.9484
19        8.160992    1        0.1218 *
20        8.042545    1        0.0821 *
21        9.393343    1        0.9301
22        9.858080    1        0.9867
23        9.594236    1        0.9655
24        9.703521    1        0.9767
25        8.792517    1        0.5898
26        9.439494    1        0.9404
27        8.697850    1        0.5032
28        9.959548    1        0.9909
29        9.182114    1        0.8589
30        8.660942    1        0.4690 *
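The p(z) column follows directly from the PE-only coefficients of Table 5, p(z) = 1/(1 + e^(−(b0 + b1·PE))). A quick check in Python against two tabulated records:

```python
import math

# Coefficients of the individual PE model (Table 5).
b0, b1 = -32.202, 3.704

def p_of_pe(pe):
    """Modeled probability of Class 1 for a given PE value."""
    z = b0 + b1 * pe
    return 1.0 / (1.0 + math.exp(-z))

# Record 13 (PE = 7.045437) reproduces the tabulated p(z) of 0.0022,
# and record 1 (PE = 8.173881) reproduces 0.1270.
p13 = p_of_pe(7.045437)
p1 = p_of_pe(8.173881)
```

Records with p(z) > 0.5 are assigned to Class 1, which is exactly how the asterisked errors arise.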
Table 9. Variables in the equation for SampEn.

Variable       Coefficient    Standard Error    Wald     df    Significance    Exp(B)
SampEn (b1)    −12.463        5.057             6.075    1     0.014           0.000
b0             2.861          1.276             5.026    1     0.025           17.481
Table 10. Individual model summary for SampEn. It includes some R2 measures to assess the model’s predictive power, the area under ROC curve (AUC), and the leave-one-out (LOO) average classification results.

Step    −2 Log Likelihood    Cox–Snell R2    Nagelkerke R2    AUC     LOO
1       30.798               0.299           0.399            0.82    68.7%
Table 11. Percentage agreement between observed and predicted classifications for temperature records using an individual model based on SampEn.

Observed    Predicted Class 0    Predicted Class 1    Percentage Correct
Class 0     13                   3                    81.3
Class 1     6                    8                    57.1
Total                                                 70.0
Table 12. Variables in the equation for ApEn.

Variable     Coefficient    Standard Error    Wald     df    Significance    Exp(B)
ApEn (b1)    −11.744        4.761             6.083    1     0.014           0.000
b0           4.096          1.770             5.354    1     0.021           60.077
Table 13. Individual model summary for ApEn. It includes some R2 measures to assess the model’s predictive power, the area under ROC curve (AUC), and the leave-one-out (LOO) average classification results.

Step    −2 Log Likelihood    Cox–Snell R2    Nagelkerke R2    AUC     LOO
1       27.636               0.369           0.493            0.83    69.7%
Table 14. Percentage agreement between observed and predicted classifications for temperature records using an individual model based on ApEn.

Observed    Predicted Class 0    Predicted Class 1    Percentage Correct
Class 0     13                   3                    81.3
Class 1     5                    9                    64.3
Total                                                 73.3
Table 15. Results for the logistic regression model using SampEn and PE.

Variable       Coefficient    Standard Error    Wald     df    Significance    Exp(B)
SampEn (b2)    −15.814        7.621             4.306    1     0.038           0.000
PE (b1)        3.564          1.532             5.409    1     0.020           35.293
b0             −26.899        12.955            4.311    1     0.038           0.000
Table 16. Summary for the joint model using SampEn and PE. It includes some R2 measures to assess the model’s predictive power, the area under ROC curve (AUC), and the leave-one-out (LOO) average classification results.

Step    −2 Log Likelihood    Cox–Snell R2    Nagelkerke R2    AUC     LOO
1       15.396               0.580           0.775            0.95    87.2%
Table 17. Percentage agreement between observed and predicted classifications for temperature records using the joint model with SampEn and PE.

Observed    Predicted Class 0    Predicted Class 1    Percentage Correct
Class 0     14                   2                    87.5
Class 1     1                    13                   92.9
Total                                                 90.0
Table 18. Results for the logistic regression model using ApEn and PE.

Variable     Coefficient    Standard Error    Wald     df    Significance    Exp(B)
ApEn (b2)    −12.806        6.457             3.934    1     0.047           0.000
PE (b1)      3.433          1.602             4.590    1     0.032           30.974
b0           −24.940        13.711            3.309    1     0.069           0.000
Table 19. Summary for the joint model using ApEn and PE. It includes some R2 measures to assess the model’s predictive power, the area under ROC curve (AUC), and the leave-one-out (LOO) average classification results.

Step    −2 Log Likelihood    Cox–Snell R2    Nagelkerke R2    AUC     LOO
1       14.813               0.589           0.786            0.94    90.1%
Table 20. Percentage agreement between observed and predicted classifications for temperature records using the joint model with ApEn and PE. Results from previous experiments are included for comparative purposes.

Observed                    Predicted Class 0    Predicted Class 1    Percentage Correct
Class 0                     15                   1                    93.8
Class 1                     1                    13                   92.9
Total                                                                 93.3
Previous: SampEn and PE                                               90.0 (87.5, 92.9)
Previous: ApEn                                                        73.3 (81.3, 64.3)
Previous: SampEn                                                      70.0 (81.3, 57.1)
Table 21. Best model fit based on the Akaike information criterion (AIC) for each case.

Explanatory Variable    Residual Variance    AIC
PE                      24.03126             28.03126
ApEn                    27.63616             31.63616
SampEn                  30.79841             34.79841
PE+ApEn                 14.81316             20.81316
PE+SampEn               15.39607             21.39607
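The AIC values in Table 21 are simply the residual deviance plus twice the number of fitted parameters (two for the individual models, three for the joint ones). A minimal check:

```python
# AIC = deviance + 2k, with k = number of fitted coefficients (incl. intercept).
models = {
    "PE": (24.03126, 2),
    "ApEn": (27.63616, 2),
    "SampEn": (30.79841, 2),
    "PE+ApEn": (14.81316, 3),
    "PE+SampEn": (15.39607, 3),
}
aic = {name: d + 2 * k for name, (d, k) in models.items()}
best = min(aic, key=aic.get)  # the model selected in the text: PE+ApEn
```

The penalty term means that a joint model is preferred only when its deviance reduction exceeds the cost of the extra coefficient, which both joint models clearly satisfy here.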
