Article

Kernel-Free Quadratic Surface Minimax Probability Machine for a Binary Classification Problem

College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1378; https://doi.org/10.3390/sym13081378
Submission received: 1 July 2021 / Revised: 24 July 2021 / Accepted: 26 July 2021 / Published: 28 July 2021

Abstract

In this paper, we propose a novel binary classification method called the kernel-free quadratic surface minimax probability machine (QSMPM), which makes use of the kernel-free techniques of the quadratic surface support vector machine (QSSVM) and inherits the parameter-free advantage of the minimax probability machine (MPM). Specifically, it attempts to find a quadratic hypersurface that separates two classes of samples with maximum probability. However, the optimization problem derived directly is too difficult to solve. Therefore, a nonlinear transformation is introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally becomes a second-order cone programming problem, which is solved efficiently by an alternating iteration method. It should be pointed out that our method is both kernel-free and parameter-free, making it easy to use. In addition, the quadratic hypersurface obtained by our method is allowed to be any general form of quadratic hypersurface, and it has better interpretability than methods with a kernel function. Finally, in order to demonstrate the geometric interpretation of our QSMPM, experiments on five artificial datasets were conducted, including one showing its ability to obtain a linear separating hyperplane. Furthermore, numerical experiments on benchmark datasets confirmed that the proposed method achieves better accuracy and less CPU time than the corresponding methods.

1. Introduction

Machine learning is an important branch of artificial intelligence, with a wide range of applications across contemporary science [1]. With the development of machine learning, the classification problem has received wide attention and study in the fields of pattern recognition [2], text classification [3], image processing [4], financial time series prediction [5], skin disease classification [6], intrusion detection systems [7], etc. The classification problem is a vital task in supervised learning: a classification rule is learned from a training set with known labels and then used to assign a new sample to a class.
At present, there are many well-known classification methods. Among these existing methods, Lanckriet et al. [8,9] proposed an excellent classifier, called the minimax probability machine (MPM). For a given binary classification problem, the MPM handles not only the linear case but also the nonlinear case via the kernel trick. It is worth noting that the MPM does not have any parameters, which is an important advantage. Therefore, it has been widely used in computer vision [10], engineering technology [11,12], agriculture [13], and novelty detection [14]. Moreover, many researchers have proposed a variety of improved versions of the MPM from different perspectives [14,15,16,17,18,19,20,21,22,23,24,25]. The representative works can be briefly reviewed as follows. In [15], Strohmann and Grudic proposed MPM regression (MPMR), which transformed the regression problem into a classification problem and then used the classifier MPM to obtain a regression function. To further exploit the structural information of the training set, Gu et al. [17] proposed the structural MPM (SMPM) by combining finite mixture models with the MPM. In addition, Yoshiyama et al. [21] proposed the Laplacian MPM (Lap-MPM), which improved the performance of the MPM in semisupervised learning. However, the nonlinear MPM using kernel techniques lacks interpretability and usually depends heavily on the choice of a proper kernel function and the corresponding kernel parameters. Furthermore, choosing the appropriate kernel function and adjusting its parameters may require much computational time and effort. Therefore, studying a kernel-free nonlinear MPM is naturally of great significance.
In 2008, Dagher [26] first proposed a kernel-free nonlinear classifier, namely the quadratic surface support vector machine (QSSVM). It is based on the maximum margin idea, and the training points are separated by a quadratic hypersurface without a kernel function, avoiding the time-consuming process of selecting an appropriate kernel function and its corresponding parameters. Furthermore, in order to improve classification accuracy and robustness, Luo et al. [27] proposed the soft-margin quadratic surface support vector machine (SQSSVM). After that, Bai et al. [28] proposed the quadratic kernel-free least-squares support vector machine for target diseases classification. Following these leading works, some scholars performed further studies; see, e.g., [29,30,31,32,33,34] for the classification problem, [35] for the regression problem, and [36] for the clustering problem. The good performance of these methods demonstrates that the quadratic hypersurface is an effective tool for flexibly capturing the nonlinear structure of data. Thus, it is very interesting to study a kernel-free nonlinear MPM using the above kernel-free technique.
In this paper, for the binary classification problem, a new kernel-free nonlinear method is proposed, called the kernel-free quadratic surface minimax probability machine (QSMPM). It is constructed on the basis of the MPM by using the kernel-free techniques of the QSSVM. Specifically, it tries to seek a quadratic hypersurface that separates the two classes of samples with maximum probability. However, the optimization problem derived directly is too difficult to solve. Therefore, a nonlinear transformation is introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally becomes a second-order cone programming problem, which is solved efficiently by an alternating iteration method. It is important to point out that our QSMPM addresses the following key issues. First, our method directly generates a nonlinear (quadratic) hypersurface without a kernel function, so there is no need to select an appropriate kernel. Second, our method does not need any parameters to be chosen. Third, the quadratic hypersurface obtained by our method has better interpretability than those produced by methods with a kernel function. Fourth, it is rather flexible, because the quadratic hypersurface obtained by our method can take any general form. In our experiments, the results on five artificial datasets showed that the proposed method can find general forms of quadratic surfaces and also has the ability to obtain a linear separating hyperplane. Numerical experiments on 14 benchmark datasets verified that the proposed method is superior to the corresponding methods in both accuracy and CPU time. More gratifyingly, when the number of samples or the dimension is relatively large, our method can obtain good classification performance quickly. In addition, the results of the Friedman test and the Nemenyi post-hoc test indicated that our QSMPM is statistically the best among the compared methods.
The rest of this paper is organized as follows. Section 2 briefly reviews the related works, the QSSVM, and the MPM. Section 3 presents our method QSMPM, gives its algorithm, and analyzes the computational complexity of the QSMPM. In Section 4, we show the interpretability of our method. In Section 5, the results of the numerical experiments on the artificial datasets and benchmark datasets are presented, and a further statistical analysis is performed. Finally, Section 6 gives the conclusion and future work of this paper.
Throughout this paper, we use lower case letters to represent scalars, lower case bold letters to represent vectors, and upper case bold letters to represent matrices. $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^d$ the space of $d$-dimensional vectors, and $\mathbb{R}^{d \times d}$ the space of $d \times d$ matrices. $S^d$ denotes the set of $d \times d$ symmetric matrices, $S_+^d$ the set of $d \times d$ symmetric positive semidefinite matrices, and $I_d$ the $d \times d$ identity matrix. $\|\mathbf{x}\|_2$ denotes the two-norm of the vector $\mathbf{x}$.

2. Related Work

In this section, we briefly introduce the QSSVM and the MPM. For a binary classification problem, the training set is given as:
$$T = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_{m_+ + m_-}, y_{m_+ + m_-})\}, \qquad (1)$$
where $\mathbf{x}_i \in \mathbb{R}^d$ is the $i$-th sample and $y_i \in \{+1, -1\}$ is the corresponding class label, $i = 1, 2, \ldots, m_+ + m_-$. The numbers of samples in class +1 and class −1 are $m_+$ and $m_-$, respectively. For the training set (1), we want to find a hyperplane or quadratic hypersurface:
$$g(\mathbf{x}) = 0, \qquad (2)$$
and then use a decision function:
$$f(\mathbf{x}) = \operatorname{sign}(g(\mathbf{x})) \qquad (3)$$
to determine whether a new sample $\mathbf{x} \in \mathbb{R}^d$ is assigned to class +1 or class −1.

2.1. Quadratic Surface Support Vector Machine

We first briefly outline the quadratic surface support vector machine (QSSVM) [26]. For the given training set (1), the goal of the QSSVM is to find a quadratic separating hypersurface:
$$g(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A \mathbf{x} + \mathbf{b}^T \mathbf{x} + c = 0, \qquad (4)$$
where $A \in S^d$, $\mathbf{b} \in \mathbb{R}^d$, $c \in \mathbb{R}$, which separates the samples into two classes with the largest margin. In order to obtain the quadratic hypersurface (4), the QSSVM establishes the following optimization problem:
$$\min_{A, \mathbf{b}, c} \; \sum_{i=1}^{m_+ + m_-} \|A\mathbf{x}_i + \mathbf{b}\|_2^2 \quad \text{s.t.} \quad y_i\Big(\frac{1}{2}\mathbf{x}_i^T A \mathbf{x}_i + \mathbf{b}^T \mathbf{x}_i + c\Big) \geq 1, \; i = 1, \ldots, m_+ + m_-. \qquad (5)$$
The optimization problem (5) is a convex quadratic programming problem.
After obtaining the optimal solution $A^*$, $\mathbf{b}^*$, and $c^*$ of the optimization problem (5), for a given new sample $\mathbf{x} \in \mathbb{R}^d$, its label is assigned to either class +1 or class −1 by the decision function:
$$f(\mathbf{x}) = \operatorname{sgn}\Big(\frac{1}{2}\mathbf{x}^T A^* \mathbf{x} + \mathbf{b}^{*T}\mathbf{x} + c^*\Big). \qquad (6)$$
To allow some samples in the training set (1) to be misclassified, Luo et al. further proposed the soft-margin quadratic surface support vector machine (SQSSVM); please refer to [27].
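As an illustration of how problem (5) can be handled in practice, the following Python sketch poses it with NumPy and CVXPY (our own choice of tools, not used in the paper); it assumes the two classes are separable by some quadratic hypersurface, since the hard-margin problem is otherwise infeasible.

```python
import numpy as np
import cvxpy as cp

def qssvm_fit(X, y):
    """Hard-margin QSSVM, problem (5): X has shape (n, d), y entries in {+1, -1}.
    Assumes the classes are quadratically separable; otherwise (5) is infeasible."""
    n, d = X.shape
    A = cp.Variable((d, d), symmetric=True)   # quadratic-term matrix
    b = cp.Variable(d)                        # linear-term vector
    c = cp.Variable()                         # offset
    objective = cp.Minimize(sum(cp.sum_squares(A @ X[i] + b) for i in range(n)))
    constraints = [y[i] * (0.5 * X[i] @ A @ X[i] + b @ X[i] + c) >= 1 for i in range(n)]
    cp.Problem(objective, constraints).solve()
    return A.value, b.value, c.value

def qssvm_predict(X, A, b, c):
    """Decision function (6)."""
    scores = 0.5 * np.einsum('ij,jk,ik->i', X, A, X) + X @ b + c
    return np.sign(scores)
```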

2.2. Minimax Probability Machine

Now, we briefly review the minimax probability machine (MPM) [8,9]. Let us leave the training set (1) aside for a moment and suppose that these samples come from some distributions. Specifically, assume that the samples in class +1 are drawn from a distribution with the mean vector $\boldsymbol{\mu}_+ \in \mathbb{R}^d$ and the covariance matrix $\Sigma_+ \in S_+^d$, without making other specific distributional assumptions. A similar assumption is also made for the samples in class −1, with the mean vector $\boldsymbol{\mu}_- \in \mathbb{R}^d$ and the covariance matrix $\Sigma_- \in S_+^d$. Denote the two distributions as $\mathbf{x}_+ \sim (\boldsymbol{\mu}_+, \Sigma_+)$ and $\mathbf{x}_- \sim (\boldsymbol{\mu}_-, \Sigma_-)$, respectively. Based on the above assumptions, the MPM attempts to obtain a separating hyperplane:
$$g(\mathbf{x}) = \mathbf{w}^T\mathbf{x} - b = 0, \qquad (7)$$
where $\mathbf{w} \in \mathbb{R}^d$, $b \in \mathbb{R}$, which separates the two classes of samples with maximal probability with respect to all distributions having these mean vectors and covariance matrices. This is expressed as:
$$\max_{\mathbf{w}, b, \alpha} \; \alpha \quad \text{s.t.} \quad \inf_{\mathbf{x}_+ \sim (\boldsymbol{\mu}_+, \Sigma_+)} \Pr\{\mathbf{w}^T\mathbf{x}_+ - b \geq 0\} \geq \alpha, \quad \inf_{\mathbf{x}_- \sim (\boldsymbol{\mu}_-, \Sigma_-)} \Pr\{\mathbf{w}^T\mathbf{x}_- - b \leq 0\} \geq \alpha, \qquad (8)$$
where $\alpha \in (0, 1)$ represents the lower bound of the accuracy for future data, namely the worst-case accuracy. The infimum "$\inf$" is taken over all distributions having these mean vectors $\boldsymbol{\mu}_\pm \in \mathbb{R}^d$ and covariance matrices $\Sigma_\pm \in S_+^d$.
The constraints of the above optimization problem (8) are probabilistic constraints, which are difficult to handle directly. In order to convert them into tractable constraints, the following lemma [9] is used:
Lemma 1
([9]). Let $\mathbf{x}$ be a $d$-dimensional random vector with mean vector $\boldsymbol{\mu}$ and covariance matrix $\Sigma$, where $\Sigma \in S_+^d$. Given $\mathbf{w} \in \mathbb{R}^d$, $b \in \mathbb{R}$ such that $\mathbf{w}^T\mathbf{x} \leq b$, and $\alpha \in (0, 1)$, the condition:
$$\inf_{\mathbf{x} \sim (\boldsymbol{\mu}, \Sigma)} \Pr\{\mathbf{w}^T\mathbf{x} - b \leq 0\} \geq \alpha \qquad (9)$$
holds if and only if:
$$b - \mathbf{w}^T\boldsymbol{\mu} \geq \kappa(\alpha)\sqrt{\mathbf{w}^T\Sigma\mathbf{w}}, \qquad (10)$$
where $\kappa(\alpha) = \sqrt{\dfrac{\alpha}{1-\alpha}}$.
Using the above Lemma 1, the optimization problem (8) is equivalent to:
$$\max_{\mathbf{w}, b, \alpha} \; \alpha \quad \text{s.t.} \quad -b + \mathbf{w}^T\boldsymbol{\mu}_+ \geq \kappa(\alpha)\sqrt{\mathbf{w}^T\Sigma_+\mathbf{w}}, \quad b - \mathbf{w}^T\boldsymbol{\mu}_- \geq \kappa(\alpha)\sqrt{\mathbf{w}^T\Sigma_-\mathbf{w}}. \qquad (11)$$
Then, through a series of algebraic operations (see Theorem 2 in [9] for the details), the above optimization problem (11) leads to:
$$\min_{\mathbf{w}} \; \|\Sigma_+^{1/2}\mathbf{w}\|_2 + \|\Sigma_-^{1/2}\mathbf{w}\|_2 \quad \text{s.t.} \quad \mathbf{w}^T(\boldsymbol{\mu}_+ - \boldsymbol{\mu}_-) = 1. \qquad (12)$$
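For completeness, a one-step sketch of this reduction (our paraphrase of the argument in [9], not a full proof): at the optimum, both constraints in (11) are active, and adding them eliminates $b$, giving
$$\mathbf{w}^T(\boldsymbol{\mu}_+ - \boldsymbol{\mu}_-) \geq \kappa(\alpha)\big(\|\Sigma_+^{1/2}\mathbf{w}\|_2 + \|\Sigma_-^{1/2}\mathbf{w}\|_2\big).$$
Since $\kappa(\alpha)$ is increasing in $\alpha$, maximizing $\alpha$ is equivalent to maximizing the ratio $\mathbf{w}^T(\boldsymbol{\mu}_+ - \boldsymbol{\mu}_-) / \big(\|\Sigma_+^{1/2}\mathbf{w}\|_2 + \|\Sigma_-^{1/2}\mathbf{w}\|_2\big)$, which is invariant to positive scaling of $\mathbf{w}$; fixing the numerator to 1 and minimizing the denominator yields exactly (12).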
When its optimal solution $\mathbf{w}^*$ is obtained, the optimal solution of the optimization problem (11) with respect to $b$ is given by:
$$b^* = \mathbf{w}^{*T}\boldsymbol{\mu}_- + \frac{\|\Sigma_-^{1/2}\mathbf{w}^*\|_2}{\|\Sigma_+^{1/2}\mathbf{w}^*\|_2 + \|\Sigma_-^{1/2}\mathbf{w}^*\|_2}, \qquad (13)$$
or:
$$b^* = \mathbf{w}^{*T}\boldsymbol{\mu}_+ - \frac{\|\Sigma_+^{1/2}\mathbf{w}^*\|_2}{\|\Sigma_+^{1/2}\mathbf{w}^*\|_2 + \|\Sigma_-^{1/2}\mathbf{w}^*\|_2}. \qquad (14)$$
Now, let us return to the training set (1). The required mean vectors $\boldsymbol{\mu}_\pm \in \mathbb{R}^d$ and covariance matrices $\Sigma_\pm \in S_+^d$ can be estimated from the training set (1) as follows:
$$\hat{\boldsymbol{\mu}}_\pm = \frac{1}{m_\pm}\sum_{i=1}^{m_\pm}\mathbf{x}_i \in \mathbb{R}^d, \qquad \hat{\Sigma}_\pm = \frac{1}{m_\pm}\sum_{i=1}^{m_\pm}(\mathbf{x}_i - \hat{\boldsymbol{\mu}}_\pm)(\mathbf{x}_i - \hat{\boldsymbol{\mu}}_\pm)^T \in S_+^d, \qquad (15)$$
where the sums are taken over the samples of the corresponding class. Therefore, in practice, the mean vectors $\boldsymbol{\mu}_\pm$ and covariance matrices $\Sigma_\pm$ in (12)–(14) are replaced by $\hat{\boldsymbol{\mu}}_\pm$ and $\hat{\Sigma}_\pm$, and the resulting optimal solutions for $\mathbf{w}$ and $b$ are denoted as $\hat{\mathbf{w}}^*$ and $\hat{b}^*$. Then, for a given new sample $\mathbf{x} \in \mathbb{R}^d$, its label is assigned to either class +1 or class −1 by the decision function:
$$f(\mathbf{x}) = \operatorname{sgn}(\hat{\mathbf{w}}^{*T}\mathbf{x} - \hat{b}^*). \qquad (16)$$
In addition, for nonlinear cases and more details, please refer to [8,9].
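For concreteness, a minimal Python sketch of the linear MPM pipeline, assuming NumPy and CVXPY as generic tools (the original work does not prescribe an implementation): it estimates the moments by (15), solves the second-order cone problem (12) with a generic solver, and recovers $\hat{b}^*$ by (14).

```python
import numpy as np
import cvxpy as cp

def sqrtm_psd(S):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def mpm_fit(X_pos, X_neg):
    """Linear MPM: estimate moments (15), solve SOCP (12), recover b* by (14)."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sp_half = sqrtm_psd(np.cov(X_pos, rowvar=False, bias=True))  # 1/m normalization as in (15)
    Sn_half = sqrtm_psd(np.cov(X_neg, rowvar=False, bias=True))

    w = cp.Variable(X_pos.shape[1])
    objective = cp.Minimize(cp.norm(Sp_half @ w, 2) + cp.norm(Sn_half @ w, 2))
    cp.Problem(objective, [w @ (mu_p - mu_n) == 1]).solve()

    w_star = w.value
    n_p = np.linalg.norm(Sp_half @ w_star)
    n_n = np.linalg.norm(Sn_half @ w_star)
    b_star = w_star @ mu_p - n_p / (n_p + n_n)   # Equation (14)
    kappa = 1.0 / (n_p + n_n)                    # optimal kappa(alpha)
    alpha = kappa**2 / (1.0 + kappa**2)          # worst-case accuracy
    return w_star, b_star, alpha

def mpm_predict(X, w, b):
    """Decision function (16)."""
    return np.sign(X @ w - b)
```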

3. Kernel-Free Quadratic Surface Minimax Probability Machine

In this section, we first formulate the kernel-free quadratic surface minimax probability machine (QSMPM). Then, its algorithm is given.

3.1. Optimization Problem

For the binary classification problem with the training set (1), we attempt to find a quadratic separating hypersurface:
$$g(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A \mathbf{x} + \mathbf{b}^T\mathbf{x} - c = 0, \qquad (17)$$
where $A \in S^d$, $\mathbf{b} \in \mathbb{R}^d$, $c \in \mathbb{R}$, which separates the two classes of samples. Inspired by the MPM, we construct the following optimization problem:
$$\max_{A, \mathbf{b}, c, \alpha} \; \alpha \quad \text{s.t.} \quad \inf_{\mathbf{x}_+ \sim (\boldsymbol{\mu}_+, \Sigma_+)} \Pr\Big\{\frac{1}{2}\mathbf{x}_+^T A\mathbf{x}_+ + \mathbf{b}^T\mathbf{x}_+ - c \geq 0\Big\} \geq \alpha, \quad \inf_{\mathbf{x}_- \sim (\boldsymbol{\mu}_-, \Sigma_-)} \Pr\Big\{\frac{1}{2}\mathbf{x}_-^T A\mathbf{x}_- + \mathbf{b}^T\mathbf{x}_- - c \leq 0\Big\} \geq \alpha, \qquad (18)$$
where $\alpha \in (0, 1)$ represents the lower bound of the accuracy for future data, namely the worst-case accuracy. The notation $\mathbf{x}_+ \sim (\boldsymbol{\mu}_+, \Sigma_+)$ refers to the class distribution that has the prescribed mean vector $\boldsymbol{\mu}_+ \in \mathbb{R}^d$ and covariance matrix $\Sigma_+ \in S_+^d$ but is otherwise arbitrary, and likewise for $\mathbf{x}_-$.
The above optimization problem (18) corresponds to the optimization problem (8), from which the optimization problem (11) was derived. Analogously, the optimization problem (18) should be reduced to a tractable problem. Unfortunately, Lemma 1 has no counterpart when the functions in the curly braces in (18) are quadratic. In order to overcome this difficulty, we convert the quadratic functions into a linear function by introducing a nonlinear transformation from $\mathbf{x} = ([\mathbf{x}]_1, [\mathbf{x}]_2, \ldots, [\mathbf{x}]_d)^T \in \mathbb{R}^d$ to:
$$\mathbf{z} = \mathbf{z}(\mathbf{x}) = \Big(\frac{1}{2}[\mathbf{x}]_1^2, [\mathbf{x}]_1[\mathbf{x}]_2, \ldots, [\mathbf{x}]_1[\mathbf{x}]_d, \frac{1}{2}[\mathbf{x}]_2^2, \ldots, [\mathbf{x}]_2[\mathbf{x}]_d, \ldots, \frac{1}{2}[\mathbf{x}]_d^2, [\mathbf{x}]_1, [\mathbf{x}]_2, \ldots, [\mathbf{x}]_d\Big)^T \in \mathbb{R}^{\frac{d^2+3d}{2}}. \qquad (19)$$
By representing the upper triangular entries of the symmetric matrix:
$$A = A^T = (a_{ij})_{d \times d} \in S^d \qquad (20)$$
as a vector:
$$\mathbf{a} = (a_{11}, a_{12}, \ldots, a_{1d}, a_{22}, \ldots, a_{2d}, \ldots, a_{dd})^T \in \mathbb{R}^{\frac{d^2+d}{2}}, \qquad (21)$$
and defining:
$$\mathbf{w} = (\mathbf{a}^T, \mathbf{b}^T)^T \in \mathbb{R}^{\frac{d^2+3d}{2}}, \qquad (22)$$
the quadratic function (17) of $\mathbf{x}$ in $d$-dimensional space becomes the linear function of $\mathbf{z}$ in $\frac{d^2+3d}{2}$-dimensional space:
$$g(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A\mathbf{x} + \mathbf{b}^T\mathbf{x} - c = \mathbf{w}^T\mathbf{z} - c. \qquad (23)$$
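A minimal Python/NumPy sketch of the transformation (19) and the packing (20)–(22) may clarify the bookkeeping; the helper names are ours.

```python
import numpy as np

def z_map(x):
    """Transformation (19): map x in R^d to z in R^{(d^2+3d)/2}.
    Entry order follows (19): for each i, (1/2)x_i^2 then x_i x_j for j > i,
    followed by the linear part x_1, ..., x_d."""
    d = x.shape[0]
    quad = []
    for i in range(d):
        quad.append(0.5 * x[i] ** 2)
        for j in range(i + 1, d):
            quad.append(x[i] * x[j])
    return np.concatenate([np.array(quad), x])

def unpack_w(w, d):
    """Inverse of the packing (20)-(22): split w into the symmetric matrix A and vector b."""
    a, b = w[: d * (d + 1) // 2], w[d * (d + 1) // 2 :]
    A = np.zeros((d, d))
    k = 0
    for i in range(d):
        for j in range(i, d):
            A[i, j] = A[j, i] = a[k]
            k += 1
    return A, b
```

With this ordering, $\frac{1}{2}\mathbf{x}^T A\mathbf{x} + \mathbf{b}^T\mathbf{x} = \mathbf{w}^T\mathbf{z}(\mathbf{x})$ holds exactly, since each off-diagonal product $[\mathbf{x}]_i[\mathbf{x}]_j$ appears once in $\mathbf{z}$ with coefficient $a_{ij}$ and each square appears with the factor $\frac{1}{2}$.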
Following the transformation (19), the training set (1) in the d-dimensional space correspondingly becomes:
$$\tilde{T} = \{(\mathbf{z}_1, y_1), (\mathbf{z}_2, y_2), \ldots, (\mathbf{z}_{m_+ + m_-}, y_{m_+ + m_-})\}, \qquad (24)$$
where $\mathbf{z}_i = \mathbf{z}(\mathbf{x}_i)$; in other words, $\mathbf{z}_i \in \mathbb{R}^{\frac{d^2+3d}{2}}$ is obtained by replacing $\mathbf{x}$ in formula (19) with $\mathbf{x}_i = ([\mathbf{x}_i]_1, [\mathbf{x}_i]_2, \ldots, [\mathbf{x}_i]_d)^T \in \mathbb{R}^d$, $i = 1, 2, \ldots, m_+ + m_-$. For the training set (24), it is naturally assumed that the samples of the two classes are drawn from $\mathbf{z}_+ \sim (\boldsymbol{\mu}_{z+}, \Sigma_{z+})$ and $\mathbf{z}_- \sim (\boldsymbol{\mu}_{z-}, \Sigma_{z-})$, respectively, where the mean vectors $\boldsymbol{\mu}_{z\pm} \in \mathbb{R}^{\frac{d^2+3d}{2}}$ and covariance matrices $\Sigma_{z\pm} \in S_+^{\frac{d^2+3d}{2}}$ can be estimated as:
$$\hat{\boldsymbol{\mu}}_{z\pm} = \frac{1}{m_\pm}\sum_{i=1}^{m_\pm}\mathbf{z}_i \in \mathbb{R}^{\frac{d^2+3d}{2}}, \qquad \hat{\Sigma}_{z\pm} = \frac{1}{m_\pm}\sum_{i=1}^{m_\pm}(\mathbf{z}_i - \hat{\boldsymbol{\mu}}_{z\pm})(\mathbf{z}_i - \hat{\boldsymbol{\mu}}_{z\pm})^T \in S_+^{\frac{d^2+3d}{2}}. \qquad (25)$$
Based on the transformation (19), the optimization problem (18) is replaced by:
$$\max_{\mathbf{w}, c, \alpha} \; \alpha \quad \text{s.t.} \quad \inf_{\mathbf{z}_+ \sim (\hat{\boldsymbol{\mu}}_{z+}, \hat{\Sigma}_{z+})} \Pr\{\mathbf{w}^T\mathbf{z}_+ - c \geq 0\} \geq \alpha, \quad \inf_{\mathbf{z}_- \sim (\hat{\boldsymbol{\mu}}_{z-}, \hat{\Sigma}_{z-})} \Pr\{\mathbf{w}^T\mathbf{z}_- - c \leq 0\} \geq \alpha. \qquad (26)$$
Now, Lemma 1 [9] is applicable to the optimization problem (26). Thus, we have:
$$\max_{\mathbf{w}, c, \alpha} \; \alpha \quad \text{s.t.} \quad -c + \mathbf{w}^T\hat{\boldsymbol{\mu}}_{z+} \geq \kappa(\alpha)\sqrt{\mathbf{w}^T\hat{\Sigma}_{z+}\mathbf{w}}, \quad c - \mathbf{w}^T\hat{\boldsymbol{\mu}}_{z-} \geq \kappa(\alpha)\sqrt{\mathbf{w}^T\hat{\Sigma}_{z-}\mathbf{w}}, \qquad (27)$$
where $\kappa(\alpha) = \sqrt{\frac{\alpha}{1-\alpha}}$. Moreover, a series of algebraic operations shows that the above optimization problem (27) is equivalent to the following second-order cone programming problem:
$$\min_{\mathbf{w}} \; \|\hat{\Sigma}_{z+}^{1/2}\mathbf{w}\|_2 + \|\hat{\Sigma}_{z-}^{1/2}\mathbf{w}\|_2 \quad \text{s.t.} \quad \mathbf{w}^T(\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}) = 1. \qquad (28)$$
When its optimal solution $\mathbf{w}^*$ is obtained, the optimal solution of the optimization problem (27) with respect to $c$ is given by:
$$c^* = \mathbf{w}^{*T}\hat{\boldsymbol{\mu}}_{z-} + \frac{\|\hat{\Sigma}_{z-}^{1/2}\mathbf{w}^*\|_2}{\|\hat{\Sigma}_{z+}^{1/2}\mathbf{w}^*\|_2 + \|\hat{\Sigma}_{z-}^{1/2}\mathbf{w}^*\|_2}, \qquad (29)$$
or:
$$c^* = \mathbf{w}^{*T}\hat{\boldsymbol{\mu}}_{z+} - \frac{\|\hat{\Sigma}_{z+}^{1/2}\mathbf{w}^*\|_2}{\|\hat{\Sigma}_{z+}^{1/2}\mathbf{w}^*\|_2 + \|\hat{\Sigma}_{z-}^{1/2}\mathbf{w}^*\|_2}. \qquad (30)$$
In the next subsection, we show how to solve the optimization problem (28).

3.2. Algorithm

Now, we present the solving process for the optimization problem (28), following [9]. We construct a matrix $F \in \mathbb{R}^{\frac{d^2+3d}{2} \times \frac{d^2+3d-2}{2}}$ whose orthonormal columns span the subspace of vectors orthogonal to $\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-} \in \mathbb{R}^{\frac{d^2+3d}{2}}$, so that the unknown variable $\mathbf{w} \in \mathbb{R}^{\frac{d^2+3d}{2}}$ is converted into $\mathbf{u} \in \mathbb{R}^{\frac{d^2+3d-2}{2}}$. Specifically, let $\mathbf{w} = \mathbf{w}_0 + F\mathbf{u}$, where $\mathbf{w}_0 = \frac{\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}}{\|\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}\|_2^2}$; the optimization problem (28) is then transformed into the unconstrained optimization problem:
$$\min_{\mathbf{u}} \; \|\hat{\Sigma}_{z+}^{1/2}(\mathbf{w}_0 + F\mathbf{u})\|_2 + \|\hat{\Sigma}_{z-}^{1/2}(\mathbf{w}_0 + F\mathbf{u})\|_2. \qquad (31)$$
In order to solve the above optimization problem (31), Lanckriet et al. [9] introduced two extra variables β and η and considered the following optimization problem:
$$\min_{\mathbf{u}, \beta, \eta} \; \beta + \frac{1}{\beta}\|\hat{\Sigma}_{z+}^{1/2}(\mathbf{w}_0 + F\mathbf{u})\|_2^2 + \eta + \frac{1}{\eta}\|\hat{\Sigma}_{z-}^{1/2}(\mathbf{w}_0 + F\mathbf{u})\|_2^2. \qquad (32)$$
This optimization problem (32) is solved by an alternating iteration. The variables are divided into two sets: one contains $\beta$ and $\eta$, and the other contains $\mathbf{u}$. At the $t$-th iteration, by first fixing $\beta$ and $\eta$ and setting the derivative of the objective in (32) with respect to $\mathbf{u}$ to zero, we obtain the following update equation for $\mathbf{u}_t$:
$$\Big(\frac{1}{\beta_t}P + \frac{1}{\eta_t}Q\Big)\mathbf{u}_t = -\Big(\frac{1}{\beta_t}\mathbf{p} + \frac{1}{\eta_t}\mathbf{q}\Big), \qquad (33)$$
where $P = F^T\hat{\Sigma}_{z+}F \in \mathbb{R}^{\frac{d^2+3d-2}{2} \times \frac{d^2+3d-2}{2}}$, $Q = F^T\hat{\Sigma}_{z-}F \in \mathbb{R}^{\frac{d^2+3d-2}{2} \times \frac{d^2+3d-2}{2}}$, $\mathbf{p} = F^T\hat{\Sigma}_{z+}\mathbf{w}_0 \in \mathbb{R}^{\frac{d^2+3d-2}{2}}$, and $\mathbf{q} = F^T\hat{\Sigma}_{z-}\mathbf{w}_0 \in \mathbb{R}^{\frac{d^2+3d-2}{2}}$. To ensure numerical stability, a regularization term $\delta I_{\frac{d^2+3d-2}{2}}$ ($\delta > 0$) is added. Therefore, Equation (33) is replaced by:
$$\Big(\frac{1}{\beta_t}P + \frac{1}{\eta_t}Q + \delta I\Big)\mathbf{u}_t = -\Big(\frac{1}{\beta_t}\mathbf{p} + \frac{1}{\eta_t}\mathbf{q}\Big). \qquad (34)$$
Next, by fixing $\mathbf{u}$ and setting the derivatives of the objective in (32) with respect to $\beta$ and $\eta$ to zero, we obtain the following update equations for $\beta_t$ and $\eta_t$:
$$\beta_t = \|\hat{\Sigma}_{z+}^{1/2}(\mathbf{w}_0 + F\mathbf{u}_t)\|_2, \qquad \eta_t = \|\hat{\Sigma}_{z-}^{1/2}(\mathbf{w}_0 + F\mathbf{u}_t)\|_2. \qquad (35)$$
When the optimal solution $\mathbf{u}^*$ is obtained by the above two update Formulas (34) and (35), the optimal solution $\mathbf{w}^*$ of the optimization problem (28) is $\mathbf{w}^* = \mathbf{w}_0 + F\mathbf{u}^*$. Then, we summarize the process of finding the optimal solution $A^*$, $\mathbf{b}^*$, $c^*$ of the optimization problem (18) in Algorithm 1.
Algorithm 1: Kernel-free quadratic surface minimax probability machine (QSMPM).
Input: Training set (1), $\delta = 1 \times 10^{-6}$, maximum number of iterations $\tau = 100$.
1: Initialize $\beta_1 = 1$, $\eta_1 = 1$, $t = 1$.
2: Obtain $\mathbf{z}_i$ by (19), $i = 1, 2, \ldots, m_+ + m_-$.
3: Calculate $\hat{\boldsymbol{\mu}}_{z\pm}$ and $\hat{\Sigma}_{z\pm}$ by (25), and calculate $\mathbf{w}_0 = \frac{\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}}{\|\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}\|_2^2}$.
4: Calculate $P = F^T\hat{\Sigma}_{z+}F$, $Q = F^T\hat{\Sigma}_{z-}F$, $\mathbf{p} = F^T\hat{\Sigma}_{z+}\mathbf{w}_0$, $\mathbf{q} = F^T\hat{\Sigma}_{z-}\mathbf{w}_0$, where $F$ is a matrix whose orthonormal columns span the subspace of vectors orthogonal to $\hat{\boldsymbol{\mu}}_{z+} - \hat{\boldsymbol{\mu}}_{z-}$.
5: while $t < \tau$ do
6:   Given $\beta_t$ and $\eta_t$, update $\mathbf{u}_t$ by (34);
7:   Given $\mathbf{u}_t$, update $\beta_t$ and $\eta_t$ by (35);
8:   $t \leftarrow t + 1$.
9: end
10: Assign $\mathbf{w}^* = \mathbf{w}_0 + F\mathbf{u}_t$, then obtain $A^*$, $\mathbf{b}^*$ by (20), (21), and (22); further, obtain $c^*$ by (29) or (30).
Output: $A^*$, $\mathbf{b}^*$, $c^*$.
After obtaining the optimal solution $A^*$, $\mathbf{b}^*$, and $c^*$ of the optimization problem (18), for a given new sample $\mathbf{x} \in \mathbb{R}^d$, its label is assigned to either class +1 or class −1 by the decision function:
$$f(\mathbf{x}) = \operatorname{sgn}\Big(\frac{1}{2}\mathbf{x}^T A^*\mathbf{x} + \mathbf{b}^{*T}\mathbf{x} - c^*\Big). \qquad (36)$$
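The following Python/NumPy sketch restates Algorithm 1 under our own implementation assumptions (e.g., the basis $F$ is obtained from an SVD); it reuses the hypothetical helpers z_map and unpack_w from the sketch in Section 3.1 and is not the authors' MATLAB code.

```python
import numpy as np

def qsmpm_fit(X_pos, X_neg, delta=1e-6, max_iter=100):
    """Sketch of Algorithm 1; relies on z_map from the earlier sketch."""
    Z_pos = np.array([z_map(x) for x in X_pos])
    Z_neg = np.array([z_map(x) for x in X_neg])

    mu_p, mu_n = Z_pos.mean(axis=0), Z_neg.mean(axis=0)
    Sig_p = np.cov(Z_pos, rowvar=False, bias=True)        # estimates (25)
    Sig_n = np.cov(Z_neg, rowvar=False, bias=True)

    diff = mu_p - mu_n
    w0 = diff / (diff @ diff)                             # w0 = (mu+ - mu-) / ||mu+ - mu-||^2

    # F: orthonormal basis of the subspace orthogonal to diff (here via an SVD)
    _, _, Vt = np.linalg.svd(diff.reshape(1, -1))
    F = Vt[1:].T                                          # shape (D, D-1) with D = (d^2+3d)/2

    P, Q = F.T @ Sig_p @ F, F.T @ Sig_n @ F
    p, q = F.T @ Sig_p @ w0, F.T @ Sig_n @ w0

    beta, eta = 1.0, 1.0
    I = np.eye(F.shape[1])
    w = w0
    for _ in range(max_iter):
        u = np.linalg.solve(P / beta + Q / eta + delta * I,
                            -(p / beta + q / eta))        # update (34)
        w = w0 + F @ u
        beta = np.sqrt(w @ Sig_p @ w)                     # updates (35)
        eta = np.sqrt(w @ Sig_n @ w)

    c = w @ mu_p - beta / (beta + eta)                    # offset via (30)
    return w, c

def qsmpm_predict(X, w, c):
    """Decision function (36), using unpack_w from the earlier sketch."""
    A, b = unpack_w(w, X.shape[1])
    scores = 0.5 * np.einsum('ij,jk,ik->i', X, A, X) + X @ b - c
    return np.sign(scores)
```

On small two-dimensional datasets such as the artificial examples of Section 5, this sketch is expected to recover separating quadrics of the kinds shown in Figures 1–5.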
It should be pointed out that our QSMPM is kernel-free, which avoids the time-consuming task of selecting an appropriate kernel function and its corresponding parameters. What is more, it does not require the choice of any parameter, which makes it simpler and more convenient to use. Furthermore, from the geometric point of view, the quadratic hypersurface (17) determined by our method is allowed to be any general form of quadratic hypersurface, including hyperplanes, hyperparaboloids, hyperspheres, hyperellipsoids, hyperhyperboloids, and so on, which is shown clearly by the five artificial examples in Section 5.

3.3. Computational Complexity

Here, we analyze the computational complexity of our QSMPM. Suppose that the number and the dimension of the samples are $N$ and $d$, respectively. Before reformulating the QSMPM as an SOCP problem, all $d$-dimensional samples need to be projected into the $\frac{d^2+3d}{2}$-dimensional space. Therefore, the total computational complexity of the QSMPM is $O\big((\frac{d^2+3d}{2})^3 + N(\frac{d^2+3d}{2})^2 + Nd^2\big)$. In addition, the computational complexities of the MPM and the SVM are $O(d^3 + Nd^2)$ [9] and $O(N^3)$ [19], respectively. Then, by analogy with the computational complexity of the SVM, the computational complexity of the QSSVM is $O(N^3 + Nd^2)$. According to the above analysis, assuming that $N$ is much larger than $d$, the computational complexity of the QSMPM is higher than that of the MPM but lower than that of the SVM and the QSSVM.

4. The Interpretability

In this section, we discuss the interpretability of our method QSMPM. Suppose that we have obtained the optimal solution $A^*$, $\mathbf{b}^*$, $c^*$ of the optimization problem (18); then, the quadratic hypersurface (17) has the following componentwise form:
$$g(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A^*\mathbf{x} + \mathbf{b}^{*T}\mathbf{x} - c^* = \frac{1}{2}\sum_{i=1}^d\sum_{j=1}^d a_{ij}^*[\mathbf{x}]_i[\mathbf{x}]_j + \sum_{i=1}^d b_i^*[\mathbf{x}]_i - c^* = 0, \qquad (37)$$
where $[\mathbf{x}]_i$ is the $i$-th component of the vector $\mathbf{x} \in \mathbb{R}^d$, $a_{ij}^*$ is the component in the $i$-th row and $j$-th column of the matrix $A^* \in S^d$, and $b_i^*$ is the $i$-th component of the vector $\mathbf{b}^* \in \mathbb{R}^d$. Each component of $\mathbf{x}$ contributes through a quadratic polynomial function. Specifically, $b_i^*$ is the linear effect coefficient of the $i$-th component, $a_{ii}^*$ is its quadratic effect coefficient, and $a_{ij}^*$ ($i \neq j$) is the interaction coefficient between the $i$-th and $j$-th components. Therefore, for the $i$-th component of $\mathbf{x}$, the larger $|a_{ii}^*| + \sum_{j \neq i}|a_{ij}^*| + |b_i^*|$ is, the greater the contribution of the $i$-th component. In particular, when $|a_{ii}^*| + \sum_{j \neq i}|a_{ij}^*| + |b_i^*| = 0$, the $i$-th component of $\mathbf{x}$ plays no role. Therefore, compared with the methods with a kernel function, the QSMPM has better interpretability.
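As a small illustration (with a hypothetical helper name of our own), the per-component score described above can be computed directly from the recovered $A^*$ and $\mathbf{b}^*$:

```python
import numpy as np

def feature_contributions(A, b):
    """Contribution of component i: |a_ii| + sum_{j != i} |a_ij| + |b_i|,
    i.e., the i-th row sum of |A| plus |b_i|; a zero score means the
    component does not appear in g(x) at all."""
    return np.abs(A).sum(axis=1) + np.abs(b)
```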

5. Numerical Experiments

In this section, we provide numerical experiments to verify the performance of our QSMPM. We compared it with the hard-margin support vector machine (H−SVM), the soft-margin support vector machine (S−SVM), and the MPM, each with the linear kernel, the quadratic polynomial kernel, and the RBF kernel (denoted H−SVM−L, H−SVM−P, H−SVM−R, S−SVM−L, S−SVM−P, S−SVM−R, MPM−L, MPM−P, and MPM−R, respectively). In addition, we also compared it with the QSSVM and the SQSSVM. In all numerical experiments, the penalty parameter $C$ of the S−SVM and the kernel parameter $\sigma$ of the RBF kernel were selected from $\{2^{-7}, 2^{-6}, \ldots, 2^7\}$ by 10-fold cross-validation. All numerical experiments were conducted using MATLAB R2016b on a computer equipped with a 2.50 GHz (i5-4210U) CPU and 4 GB of available memory.

5.1. Artificial Datasets

To show the geometric interpretation of the proposed QSMPM and to compare it with the original MPM−L, MPM−P, and MPM−R, we performed numerical experiments on five artificial examples. These five artificial examples were all generated in the 2-dimensional space, and each consists of 300 samples $\{\mathbf{x}_i = ([\mathbf{x}_i]_1, [\mathbf{x}_i]_2)^T\}_{i=1}^{300}$, of which the first 150 are samples in class +1 and the last 150 are samples in class −1. Here, we first explain the symbols in all figures. The red "+" represents the samples in class +1, and the blue "o" represents the samples in class −1. The values in the upper right give the accuracy of each method on the artificial example. The bold black curve represents the hyperplane or quadratic hypersurface. Now, let us introduce the numerical experiments on each artificial example in turn.
Example 1.
$$[\mathbf{x}_i]_2 = \frac{1}{2}[\mathbf{x}_i]_1 + 2 + \xi_i, \quad i = 1, \ldots, 150; \qquad [\mathbf{x}_i]_2 = \frac{1}{2}[\mathbf{x}_i]_1 - 3 + \xi_i, \quad i = 151, \ldots, 300,$$
where $[\mathbf{x}_i]_1 \sim U[3, 4]$, $\xi_i \sim N(0, 1)$.
Figure 1 illustrates the classification results of the MPM−L, the MPM−P, the MPM−R, and the QSMPM on Example 1, respectively. We can see from Figure 1 that our QSMPM can obtain classification results as good as the other three methods. In addition, the quadratic hypersurface found by our QSMPM is a straight line, that is a linear separating hyperplane.
Example 2.
$$[\mathbf{x}_i]_2 = \frac{1}{2}[\mathbf{x}_i]_1^2 + 2 + \xi_i, \quad i = 1, \ldots, 150; \qquad [\mathbf{x}_i]_2 = \frac{1}{2}[\mathbf{x}_i]_1^2 - 3 + \xi_i, \quad i = 151, \ldots, 300,$$
where $[\mathbf{x}_i]_1 \sim U[3, 4]$, $\xi_i \sim N(0, 1)$.
Figure 2 presents the classification results on Example 2. We can observe in Figure 2 that the classification result of our QSMPM is superior to the MPM−L and similar to the MPM−P and the MPM−R. Moreover, the quadratic hypersurface obtained by our QSMPM is a parabola.
Example 3.
$$[\mathbf{x}_i]_1 = r_i\cos(t_i), \; [\mathbf{x}_i]_2 = r_i\sin(t_i), \quad i = 1, \ldots, 150; \qquad [\mathbf{x}_i]_1 = 1.3 + r_i\cos(t_i), \; [\mathbf{x}_i]_2 = 1.3 + r_i\sin(t_i), \quad i = 151, \ldots, 300,$$
where $r_i \sim U[0, 1]$, $t_i \sim U[0, 1]$.
Figure 3 reports the classification results of the MPM−L, the MPM−P, the MPM−R, and the QSMPM on Example 3, respectively. Obviously, in Figure 3, we can see that the classification result of our QSMPM is superior to the MPM−L and is the same as the MPM−P and the MPM−R. Furthermore, the quadratic hypersurface of our QSMPM is a circle.
Example 4.
$$[\mathbf{x}_i]_1 = 2 + a_i\cos(t_i), \; [\mathbf{x}_i]_2 = b_i\sin(t_i), \quad i = 1, \ldots, 150; \qquad [\mathbf{x}_i]_1 = 3.5 + a_i\cos(t_i), \; [\mathbf{x}_i]_2 = 1.5 + b_i\cos(t_i), \quad i = 151, \ldots, 300,$$
where $a_i \sim U[0, 1]$, $b_i \sim U[0, 1]$, $t_i \sim U[0, 1]$.
Figure 4 shows the classification results on Example 4. We can observe in Figure 4 that the QSMPM can obtain the same classification performance as the MPM−P and the MPM−R and is better than the MPM−L. Our QSMPM can find an ellipse.
Example 5.
$$[\mathbf{x}_i]_2^2 = [\mathbf{x}_i]_1^2 - 1 + \xi_i, \quad i = 1, \ldots, 150; \qquad [\mathbf{x}_i]_1 \sim U[-0.6, 0.6], \; [\mathbf{x}_i]_2 \sim U[-6, 6], \quad i = 151, \ldots, 300,$$
where $[\mathbf{x}_i]_1 \sim U[-4, -1]$ or $U[1, 4]$, $\xi_i \sim N(0, 1)$, $i = 1, \ldots, 150$.
Figure 5 reports the classification results of the MPM−L, the MPM−P, the MPM−R, and the QSMPM on Example 5, respectively. We can observe in Figure 5 that the classification performance of QSMPM is better than the MPM−L and is similar to the MPM−P and the MPM−R. In addition, our QSMPM finds a hyperbola.
In summary, from Figure 1 to Figure 5, we can see that our QSMPM can find any general form of the quadratic hypersurface, such as the line, parabola, circle, ellipse, and hyperbola found in sequence in the above numerical experiments. Moreover, our method can achieve as good classification performance as the MPM−P and the MPM−R. In addition, it can be seen from Figure 1d that our method can obtain the linear separating hyperplane.
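For reference, data in the style of Example 1 can be generated as in the following Python sketch; the ranges follow the values as printed above, and the helper name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example1(n_per_class=150):
    """Two noisy parallel lines in the style of Example 1."""
    x1_pos = rng.uniform(3.0, 4.0, n_per_class)
    x1_neg = rng.uniform(3.0, 4.0, n_per_class)
    x2_pos = 0.5 * x1_pos + 2.0 + rng.normal(0.0, 1.0, n_per_class)
    x2_neg = 0.5 * x1_neg - 3.0 + rng.normal(0.0, 1.0, n_per_class)
    X = np.vstack([np.column_stack([x1_pos, x2_pos]),
                   np.column_stack([x1_neg, x2_neg])])
    y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])
    return X, y
```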

5.2. Benchmark Datasets

To verify the classification performance and computational efficiency of our QSMPM, we performed the following numerical experiments on 14 benchmark datasets. Table 1 summarizes the basic information of the 14 benchmark datasets in the UCI Machine Learning Repository.
We divided the datasets in Table 1 into two groups for the numerical experiments: the first group consists of the first seven datasets and the second group of the last seven datasets. All results were obtained through ten runs of 10-fold cross-validation and are reported as the mean and standard deviation of the accuracy, together with the CPU time of one experiment. The best results are highlighted in boldface. First of all, Table 2 shows the classification results on the first group.
It can be seen from Table 2 that, compared with the other methods, our QSMPM obtained better accuracy on the first group of benchmark datasets, achieving the best accuracy on four of them. More specifically, except for Haberman and Bupa, the accuracy of our method was the best compared to the QSSVM and the SQSSVM. The accuracy of our QSMPM was the best compared to the three original kernel versions of the MPM except for Bupa. Furthermore, the accuracy of our method was the best compared to the H−SVM and the S−SVM with the three kernel functions except for Heart and Haberman. In addition, we can observe that the QSMPM had short CPU times.
Then, the classification results on the second group are reported in Table 3. The symbol "−" indicates that the corresponding method could not obtain classification results, either because it could not choose the optimal parameter within a limited amount of time or because the dimension and the number of samples of the dataset are relatively large, resulting in insufficient memory.
From Table 3, we can see that our QSMPM had good classification results on the second group of benchmark datasets. Especially on QSAR and Turkiye, the H−SVM−R, the three kernel versions of the S−SVM, the QSSVM, and the SQSSVM could not obtain the corresponding classification results, but our QSMPM obtained good classification performance. Here, we mention the reason for this situation: according to the computational complexity of each method, when the sample dimension and the number of samples are relatively large, the SVM and the QSSVM need a larger memory space. In addition, our QSMPM had the fastest running time except for the MPM−L, and it ran quite fast when the number of samples or the dimension was large.

5.3. Statistical Analysis

To further compare the performance of the above 12 methods, the Friedman test and the Nemenyi post-hoc test were performed. The ranks of the 12 methods on all benchmark datasets are shown in Table 4.
First, the Friedman test was used to compare the average ranks of the different methods. The null hypothesis states that all methods have the same performance, that is, their average ranks are the same. Based on the average ranks displayed in Table 4, we can calculate the Friedman statistic $\tau_F$ by the following formula:
$$\tau_{\chi^2} = \frac{12N}{k(k+1)}\Big(\sum_{i=1}^k r_i^2 - \frac{k(k+1)^2}{4}\Big), \qquad \tau_F = \frac{(N-1)\tau_{\chi^2}}{N(k-1) - \tau_{\chi^2}}, \qquad (38)$$
where $N$ and $k$ are the numbers of datasets and methods, respectively, and $r_i$ is the average rank of the $i$-th method. According to formula (38), $\tau_F = 4.1825$. For $\alpha = 0.05$, we obtain $F_\alpha = 1.8526$. Since $\tau_F > F_\alpha$, we rejected the null hypothesis. Then, we proceeded with a post-hoc test (the Nemenyi test) to find out which methods differ significantly. To be more specific, the performance of two methods is considered significantly different if the difference of their average ranks is larger than the critical difference (CD). The CD can be calculated by:
$$CD = q_\alpha\sqrt{\frac{k(k+1)}{6N}}. \qquad (39)$$
For $\alpha = 0.05$, $q_\alpha = 3.2680$. Thus, we obtained $CD = 4.4535$ by formula (39).
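The two statistics in (38) and (39) are easy to reproduce; a minimal Python sketch, assuming the average ranks of Table 4 are supplied as a list:

```python
import numpy as np

def friedman_nemenyi(avg_ranks, n_datasets, q_alpha=3.2680):
    """Friedman statistic (38) and Nemenyi critical difference (39)."""
    r = np.asarray(avg_ranks)
    k, N = len(r), n_datasets
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(r ** 2) - k * (k + 1) ** 2 / 4.0)
    tau_f = (N - 1) * chi2 / (N * (k - 1) - chi2)
    cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
    return tau_f, cd

# With the 12 average ranks from Table 4 and N = 14 datasets, this returns
# approximately (4.18, 4.45), matching the values reported above.
```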
Figure 6 visually displays the results of the Friedman test and Nemenyi post-hoc test, where the average ranks of each method are marked along an axis. The axis is turned so that the lowest (best) ranks are to the right. Groups of methods that are not significantly different are linked by a red line. In Figure 6, we can see that our QSMPM was the best one statistically among the compared methods. Furthermore, there was no significant difference in performance between the QSMPM and the MPM−R.

6. Conclusions

For the binary classification problem, a new classifier, called the kernel-free quadratic surface minimax probability machine (QSMPM), was proposed by using the kernel-free techniques of the QSSVM and the classification idea of the MPM. Specifically, our goal was to find a quadratic hypersurface that separates the two classes of samples with maximum probability. However, the optimization problem derived directly was too difficult to solve. Therefore, a nonlinear transformation was introduced to change the quadratic function involved into a linear function. Through such processing, our optimization problem finally became a second-order cone programming problem, which was solved efficiently by an alternating iteration method. Here, we clarify the main contributions of this paper. Unlike kernel-based methods for nonlinear separation, our method is kernel-free and has better interpretability. Moreover, our method is easy to use because it does not have any parameters. Furthermore, numerical experiments on five artificial datasets showed that the quadratic hypersurfaces found by our method are rather general, including the ability to obtain a linear separating hyperplane. In addition, numerical experiments on benchmark datasets confirmed that the proposed method is superior to some relevant methods in both accuracy and computational time; especially when the number of samples or the dimension is relatively large, our method can still quickly obtain good classification performance. Finally, the results of the statistical analysis showed that our QSMPM was statistically the best compared with the corresponding methods. Our QSMPM focuses on the standard binary classification problem, and we will extend it to the multiclass classification problem.
In our future work, there are several issues to address in extending our QSMPM. For example, we need to investigate further how to add appropriate regularization terms to our method. Meanwhile, it will be interesting to consider the case where the worst-case accuracies of the two classes are not required to be the same. Furthermore, we will pay attention to how the QSMPM can achieve the dual purpose of feature selection and classification simultaneously. In addition, we can apply our method to practical problems in many fields in the future, especially image recognition in the medical field.

Author Contributions

Conceptualization, X.Y.; Data curation, Y.W.; Formal analysis, X.Y.; Methodology, Y.W. and Z.Y.; Software, Y.W.; Supervision, Z.Y.; Writing—original draft, Y.W.; Writing—review and editing, Z.Y. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Xinjiang Provincial Natural Science Foundation of China (No. 2020D01C028) and the National Natural Science Foundation of China (No. 12061071).

Data Availability Statement

All of the benchmark datasets used in our numerical experiments are from the UCI Machine Learning Repository, which are available at http://archive.ics.uci.edu/ml/ (accessed on 10 June 2021).

Conflicts of Interest

The authors declare that they have no conflict of interest regarding this work.

References

  1. Langley, P.; Simon, H.A. Applications of machine learning and rule induction. Commun. ACM 1995, 38, 54–64. [Google Scholar] [CrossRef]
  2. Vapnik, V.; Sterin, A. On structural risk minimization or overall risk in a problem of pattern recognition. Autom. Remote Control 1977, 10, 1495–1503. [Google Scholar]
  3. Forman, G. An extensive empirical study of feature selection metrics for text classification. J. Mach. Learn. Res. 2003, 3, 1289–1305. [Google Scholar]
  4. Ying, S.H.; Qiao, H. Lie group method: A new approach to image matching with arbitrary orientations. Int. J. Imaging Syst. Technol. 2010, 20, 245–252. [Google Scholar] [CrossRef]
  5. Cao, L.J.; Tay, F. Support vector machine with adaptive parameters in financial time series forecasting. IEEE Trans. Neural. Netw. 2003, 14, 1506–1518. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Srinivasu, P.N.; Sivasai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef] [PubMed]
  7. Panigrahi, R.; Borah, S.; Bhoi, A.K.; Ijaz, M.F.; Pramanik, M.; Jhaveri, R.H.; Chowdhary, C.L. Performance assessment of supervised classifiers for designing intrusion detection systems: A comprehensive review and recommendations for future research. Mathematics 2021, 9, 690. [Google Scholar] [CrossRef]
  8. Lanckriet, G.R.G.; Ghaoui, L.E.; Bhattacharyya, C.; Jordan, M.I. Minimax probability machine. Adv. Neural Inf. Process. Syst. 2001, 37, 192–200. [Google Scholar]
  9. Lanckriet, G.R.G.; Ghaoui, L.E.; Bhattacharyya, C.; Jordan, M.I. A robust minimax approach to classification. J. Mach. Learn. Res. 2003, 3, 555–582. [Google Scholar]
  10. Johnny, K.C.; Zhong, Y.; Yang, S. A comparative study of minimax probability machine-based approaches for face recognition. Pattern Recognit. Lett. 2007, 28, 1995–2002. [Google Scholar]
  11. Deng, Z.; Wang, S.; Chung, F.L. A minimax probabilistic approach to feature transformation for multi-class data. Appl. Soft Comput. 2013, 13, 116–127. [Google Scholar] [CrossRef]
  12. Jiang, B.; Guo, Z.; Zhu, Q.; Huang, G. Dynamic minimax probability machine based approach for fault diagnosis using pairwise discriminate analysis. IEEE Trans. Control Syst. Technol. 2017, 27, 806–813. [Google Scholar] [CrossRef]
  13. Yang, L.; Gao, Y.; Sun, Q. A new minimax probabilistic approach and its application in recognition the purity of hybrid seeds. Comput. Model. Eng. Sci. 2015, 104, 493–506. [Google Scholar]
  14. Kwok, J.T.; Tsang, W.H.; Zurada, J.M. A Class of Single-Class Minimax Probability Machines for Novelty Detection. IEEE. Trans. Neural Netw. 2007, 18, 778–785. [Google Scholar] [CrossRef]
  15. Strohmann, T.; Grudic, G.Z. A formulation for minimax probability machine regression. Adv. Neural Inf. Process. Syst. 2003, 76, 9–776. [Google Scholar]
  16. Huang, K.; Yang, H.; King, I.; Lyu, M.R.; Chan, L. The minimum error minimax probability machine. J. Mach. Learn. Res. 2004, 5, 1253–1286. [Google Scholar]
  17. Gu, B.; Sun, X.; Sheng, V.S. Structural minimax probability machine. IEEE Trans. Neural Netw. Learn. Syst. 2017, 1, 1646–1656. [Google Scholar] [CrossRef] [PubMed]
  18. Maldonado, S.; Carrasco, M.; Lopez, J. Regularized minimax probability machine. Knowl. Based Syst. 2019, 177, 127–135. [Google Scholar] [CrossRef]
  19. Yang, L.M.; Wen, Y.K.; Zhang, M.; Wang, X. Twin minimax probability machine for pattern classification. Neural Netw. 2020, 131, 201–214. [Google Scholar] [CrossRef]
  20. Cousins, S.; Shawe-Taylor, J. High-probability minimax probability machines. Mach. Learn. 2017, 106, 863–886. [Google Scholar] [CrossRef] [Green Version]
  21. Yoshiyama, K.; Sakurai, A. Laplacian minimax probability machine. Pattern Recognit. Lett. 2014, 37, 192–200. [Google Scholar] [CrossRef]
  22. He, K.X.; Zhong, M.Y.; Du, W.L. Weighted incremental minimax probability machine-based method for quality prediction in gasoline blending process. Chemometr. Intell. Lab. Syst. 2019, 196, 103909. [Google Scholar] [CrossRef]
  23. Ma, J.; Yang, L.M.; Wen, Y.K.; Sun, Q. Twin minimax probability extreme learning machine for pattern recognition. Knowl. Based Syst. 2020, 187, 104806. [Google Scholar] [CrossRef]
  24. Ma, J.; Shen, J.M. A novel twin minimax probability machine for classification and regression. Knowl. Based Syst. 2020, 196, 105703. [Google Scholar] [CrossRef]
  25. Deng, Z.H.; Chen, J.Y.; Zhang, T.; Cao, L.B.; Wang, S.T. Generalized hidden-mapping minimax probability machine for the training and reliability learning of several classical intelligent models. Inf. Sci. 2018, 436–437, 302–319. [Google Scholar] [CrossRef]
  26. Dagher, I. Quadratic kernel-free non-linear support vector machine. J. Glob. Optim. 2008, 41, 15–30. [Google Scholar] [CrossRef]
  27. Luo, J.; Fang, S.C.; Deng, Z.B.; Guo, X.L. Soft quadratic surface support vector machine for binary classification. Asia Pac. J. Oper. Res. 2016, 33, 1650046. [Google Scholar] [CrossRef]
  28. Bai, Y.Q.; Han, X.; Chen, T.; Yu, H. Quadratic kernel-free least squares support vector machine for target diseases classification. J. Comb. Optim. 2015, 30, 850–870. [Google Scholar] [CrossRef]
  29. Gao, Q.Q.; Bai, Y.R.; Zhan, Y.R. Quadratic kernel-free least square twin support vector machine for binary classification problems. J. Oper. Res. Soc. China. 2019, 7, 539–559. [Google Scholar] [CrossRef]
  30. Mousavi, A.; Gao, Z.M.; Han, L.S.; Lim, A. Quadratic surface support vector machine with L1 norm regularization. J. Ind. Manag. Optim. 2019. [Google Scholar] [CrossRef]
  31. Gao, Z.M.; Fang, S.C.; Luo, J.; Medhin, N. A kernel-free double well potential support vector machine with applications. Eur. J. Oper. Res. 2020, 290, 248–262. [Google Scholar] [CrossRef]
  32. Luo, J.; Fang, S.C.; Bai, Y.Q.; Deng, Z.B. Fuzzy quadratic surface support vector machine based on Fisher discriminant analysis. J. Ind. Manag. Optim. 2015, 12, 357–373. [Google Scholar] [CrossRef]
  33. Yan, X.; Bai, Y.Q.; Fang, S.C.; Luo, J. A kernel-free quadratic surface support vector machine for semi-supervised learning. J. Oper. Res. Soc. 2016, 67, 1001–1011. [Google Scholar] [CrossRef]
  34. Tian, Y.; Yong, Z.Y.; Luo, J. A new approach for reject inference in credit scoring using kernel-free fuzzy quadratic surface support vector machines. Appl. Soft Comput. 2018, 73, 96–105. [Google Scholar] [CrossRef]
  35. Zhai, Q.R.; Tian, Y.; Zhou, J.Y. Linear twin quadratic surface support vector regression. Math. Probl. Eng. 2020, 2020, 1–18. [Google Scholar] [CrossRef]
  36. Luo, J.; Tian, Y.; Yan, X. Clustering via fuzzy one-class quadratic surface support vector machine. Soft Comput. 2017, 21, 5859–5865. [Google Scholar] [CrossRef]
Figure 1. Classification results on Example 1.
Figure 2. Classification results on Example 2.
Figure 3. Classification results on Example 3.
Figure 4. Classification results on Example 4.
Figure 5. Classification results on Example 5.
Figure 6. Results of the Friedman test and the Nemenyi post-hoc test.
Table 1. Basic information for the 14 benchmark datasets.
| Datasets | Samples | Attributes | Class |
| Iris | 150 | 4 | 3 |
| Hepatitis | 155 | 19 | 2 |
| Wine | 178 | 13 | 3 |
| Heart | 270 | 13 | 2 |
| Heart-c | 303 | 14 | 2 |
| Haberman | 306 | 3 | 2 |
| Bupa | 345 | 6 | 2 |
| Pima | 768 | 8 | 2 |
| QSAR | 1055 | 41 | 2 |
| Winequality-red | 1599 | 11 | 6 |
| Wireless | 2000 | 7 | 4 |
| Image | 2310 | 19 | 7 |
| Abalone | 2649 | 8 | 2 |
| Turkiye | 5820 | 32 | 13 |
Table 2. Classification results on the first group.
| Methods | Iris | Hepatitis | Wine | Heart | Heart-c | Haberman | Bupa |
| H−SVM−L | 0.6413 ± 0.0245 | 0.5835 ± 0.0185 | 0.9573 ± 0.0054 | 0.6174 ± 0.0287 | 1.0000 ± 0.0000 | 0.5189 ± 0.0024 | 0.5573 ± 0.0021 |
|  | (4.1293) | (2.1965) | (1.8439) | (7.7049) | (3.9203) | (15.1649) | (11.2976) |
| H−SVM−P | 0.9447 ± 0.0055 | 0.5150 ± 0.0399 | 0.7571 ± 0.0365 | 0.6985 ± 0.0126 | 0.8678 ± 0.0386 | 0.7351 ± 0.0005 | 0.5798 ± 0.0003 |
|  | (1.4087) | (1.5304) | (2.6005) | (3.8797) | (4.5568) | (10.0121) | (13.3459) |
| H−SVM−R | 0.9467 ± 0.0000 | 0.5858 ± 0.0280 | 0.9341 ± 0.0116 | 0.7144 ± 0.0183 | 1.0000 ± 0.0000 | 0.7362 ± 0.0022 | 0.5930 ± 0.0192 |
|  | (0.8250) | (0.8734) | (1.0813) | (2.8344) | (2.4438) | (4.0609) | (3.7438) |
| S−SVM−L | 0.8693 ± 0.0155 | 0.6037 ± 0.0206 | 0.9582 ± 0.0147 | 0.8259 ± 0.0129 | 0.9888 ± 0.0156 | 0.7241 ± 0.0073 | 0.6812 ± 0.0096 |
|  | (1.4594) | (1.3416) | (2.6656) | (9.2391) | (8.2672) | (7.8422) | (10.7031) |
| S−SVM−P | 0.9653 ± 0.0053 | 0.6037 ± 0.0234 | 0.7274 ± 0.0235 | 0.8296 ± 0.0097 | 0.8605 ± 0.0087 | 0.7228 ± 0.0103 | 0.6788 ± 0.0095 |
|  | (1.4149) | (1.4703) | (1.9407) | (9.0609) | (11.3328) | (11.1953) | (12.3328) |
| S−SVM−R | 0.9547 ± 0.0061 | 0.5588 ± 0.0183 | 0.8921 ± 0.0131 | 0.7989 ± 0.0058 | 0.7642 ± 0.0101 | 0.7261 ± 0.0080 | 0.6826 ± 0.0088 |
|  | (0.9406) | (0.8656) | (1.1641) | (2.1344) | (2.6000) | (2.7719) | (3.2813) |
| MPM−L | 0.8280 ± 0.0082 | 0.6010 ± 0.0114 | 0.9731 ± 0.0052 | 0.8133 ± 0.0080 | 1.0000 ± 0.0000 | 0.7177 ± 0.0093 | 0.6220 ± 0.0079 |
|  | (0.3073) | (0.0244) | (1.7831) | (3.1887) | (1.7379) | (3.4944) | (0.1201) |
| MPM−P | 0.9747 ± 0.0028 | 0.5954 ± 0.0302 | 0.9759 ± 0.0046 | 0.8026 ± 0.0133 | 0.9681 ± 0.0066 | 0.7159 ± 0.0075 | 0.6891 ± 0.0114 |
|  | (1.9079) | (1.7051) | (2.9640) | (5.6504) | (6.6425) | (5.9514) | (5.5131) |
| MPM−R | 0.9620 ± 0.0063 | 0.5382 ± 0.0412 | 0.7860 ± 0.0121 | 0.6578 ± 0.0105 | 0.6981 ± 0.0101 | 0.6879 ± 0.0114 | 0.7212 ± 0.0088 |
|  | (4.5256) | (9.0683) | (4.3758) | (10.5238) | (5.6452) | (11.9107) | (8.5723) |
| QSSVM | 0.9533 ± 0.0070 | 0.5626 ± 0.0342 | 0.9608 ± 0.0052 | 0.6989 ± 0.0182 | 0.9980 ± 0.0023 | 0.7416 ± 0.0073 | 0.7203 ± 0.0047 |
|  | (1.2371) | (3.7612) | (2.1109) | (3.5241) | (4.2947) | (5.6426) | (8.0918) |
| SQSSVM | 0.9527 ± 0.0049 | 0.5650 ± 0.0117 | 0.9622 ± 0.0071 | 0.7970 ± 0.0098 | 0.9954 ± 0.0024 | 0.7296 ± 0.0050 | 0.7220 ± 0.0074 |
|  | (0.9266) | (4.6098) | (2.6832) | (5.2822) | (7.2993) | (5.0482) | (7.0653) |
| QSMPM | 0.9767 ± 0.0035 | 0.6069 ± 0.0313 | 0.9759 ± 0.0053 | 0.8293 ± 0.0114 | 1.0000 ± 0.0000 | 0.7205 ± 0.0069 | 0.7164 ± 0.0093 |
|  | (0.3089) | (2.0608) | (1.3884) | (0.8580) | (2.5179) | (0.2902) | (0.2870) |
Table 3. Classification results on the second group.
| Methods | Pima | QSAR | Winequality-Red | Wireless | Image | Abalone | Turkiye |
| H−SVM−L | 0.6799 ± 0.0035 | 0.3675 ± 0.0014 | 0.6156 ± 0.0014 | 0.7280 ± 0.0094 | 0.6470 ± 0.0011 | 0.7357 ± 0.0040 | 0.5051 ± 0.0005 |
|  | (128.9255) | (223.9172) | (813.7250) | (3956.2000) | (947.0836) | (2825.5000) | (3013.1000) |
| H−SVM−P | 0.5710 ± 0.0155 | 0.8133 ± 0.0081 | 0.4653 ± 0.0000 | 0.8463 ± 0.0234 | 0.6979 ± 0.0020 | 0.4934 ± 0.0000 | 0.5033 ± 0.0038 |
|  | (75.1160) | (451.2469) | (475.2321) | (4449.2000) | (996.2086) | (1429.9000) | (23897.0000) |
| H−SVM−R | 0.6919 ± 0.0117 | 0.8171 ± 0.0069 | 0.7628 ± 0.0058 | 0.9877 ± 0.0010 | 0.9698 ± 0.0019 | 0.8067 ± 0.0062 | − |
|  | (14.9234) | (110.5313) | (3707.5000) | (404.7438) | (567.4688) | (693.4125) | − |
| S−SVM−L | 0.7669 ± 0.0076 | 0.8340 ± 0.0152 | 0.7291 ± 0.0023 | 0.9139 ± 0.0044 | 0.7531 ± 0.0304 | 0.8108 ± 0.0009 | − |
|  | (82.0609) | (211.0188) | (640.9000) | (1793.9000) | (2595.7000) | (1189.7000) | − |
| S−SVM−P | 0.7585 ± 0.0066 | 0.8677 ± 0.0058 | 0.7492 ± 0.0021 | 0.9799 ± 0.0017 | 0.9602 ± 0.0015 | 0.8298 ± 0.0018 | − |
|  | (27.8571) | (259.1922) | (1123.8000) | (3120.7000) | (4820.8000) | (3120.7000) | − |
| S−SVM−R | 0.6941 ± 0.0078 | 0.8503 ± 0.0031 | 0.7352 ± 0.0024 | 0.9853 ± 0.0054 | 0.9715 ± 0.0020 | 0.8242 ± 0.0008 | − |
|  | (16.5313) | (103.9438) | (294.0688) | (717.2984) | (506.2984) | (984.2250) | − |
| MPM−L | 0.7405 ± 0.0048 | 0.8296 ± 0.0039 | 0.7409 ± 0.0025 | 0.9108 ± 0.0008 | 0.8555 ± 0.0012 | 0.8144 ± 0.0009 | 0.5779 ± 0.0026 |
|  | (0.1014) | (0.8377) | (0.2683) | (0.1560) | (0.2969) | (0.1266) | (0.3167) |
| MPM−P | 0.7442 ± 0.0035 | 0.8225 ± 0.0061 | 0.7326 ± 0.0020 | 0.9361 ± 0.0013 | 0.8499 ± 0.0011 | 0.8109 ± 0.0009 | 0.5780 ± 0.0014 |
|  | (32.9178) | (68.1350) | (122.7891) | (187.2266) | (257.7266) | (413.9861) | (2669.2000) |
| MPM−R | 0.7356 ± 0.0054 | 0.8373 ± 0.0037 | 0.7234 ± 0.0020 | 0.9850 ± 0.0006 | 0.9413 ± 0.0018 | 0.8264 ± 0.0014 | 0.5689 ± 0.0016 |
|  | (37.2874) | (83.8022) | (139.5734) | (248.8594) | (402.6281) | (423.6125) | (2386.7000) |
| QSSVM | 0.7663 ± 0.0049 | − | 0.4722 ± 0.0032 | 0.6315 ± 0.0078 | 0.5714 ± 0.0000 | 0.5125 ± 0.0012 | − |
|  | (70.7730) | − | (27.7094) | (14.7250) | (59.1266) | (29.5344) | − |
| SQSSVM | 0.7589 ± 0.0036 | − | 0.7452 ± 0.0036 | 0.9782 ± 0.0012 | 0.5714 ± 0.0000 | 0.8280 ± 0.0015 | − |
|  | (43.0610) | − | (36.8234) | (45.3036) | (63.7109) | (50.1641) | − |
| QSMPM | 0.7530 ± 0.0049 | 0.8482 ± 0.0047 | 0.7470 ± 0.0027 | 0.9427 ± 0.0012 | 0.8731 ± 0.0013 | 0.8299 ± 0.0011 | 0.5852 ± 0.0018 |
|  | (0.5585) | (16.6328) | (1.3525) | (1.0608) | (3.6953) | (1.1984) | (14.6360) |
Table 4. Ranks of accuracy.
| Datasets | H−SVM−L | H−SVM−P | H−SVM−R | S−SVM−L | S−SVM−P | S−SVM−R | MPM−L | MPM−P | MPM−R | QSSVM | SQSSVM | QSMPM |
| Iris | 12 | 9 | 8 | 10 | 3 | 5 | 11 | 2 | 4 | 6 | 7 | 1 |
| Hepatitis | 7 | 12 | 6 | 2.5 | 2.5 | 10 | 4 | 5 | 11 | 9 | 8 | 1 |
| Wine | 7 | 11 | 8 | 6 | 12 | 9 | 3 | 1.5 | 10 | 5 | 4 | 1.5 |
| Heart | 12 | 10 | 8 | 3 | 1 | 6 | 4 | 5 | 11 | 9 | 7 | 2 |
| Heart-c | 2.5 | 9 | 2.5 | 7 | 10 | 11 | 2.5 | 8 | 12 | 5 | 6 | 2.5 |
| Haberman | 12 | 3 | 2 | 6 | 7 | 5 | 9 | 10 | 11 | 1 | 4 | 8 |
| Bupa Liver | 12 | 11 | 10 | 7 | 8 | 6 | 9 | 5 | 2 | 3 | 1 | 4 |
| Pima | 11 | 12 | 10 | 1 | 4 | 9 | 7 | 6 | 8 | 2 | 3 | 5 |
| QSAR | 10 | 9 | 8 | 5 | 1 | 2 | 6 | 7 | 4 | 11.5 | 11.5 | 3 |
| Winequality-red | 10 | 12 | 1 | 8 | 2 | 6 | 5 | 7 | 9 | 11 | 4 | 3 |
| Wireless | 11 | 10 | 1 | 8 | 4 | 2 | 9 | 7 | 3 | 12 | 5 | 6 |
| Image Segmentation | 10 | 9 | 2 | 8 | 3 | 1 | 6 | 7 | 4 | 11.5 | 11.5 | 5 |
| Abalone | 10 | 12 | 9 | 8 | 2 | 5 | 6 | 7 | 4 | 11 | 3 | 1 |
| Turkiye | 5 | 6 | 9.5 | 9.5 | 9.5 | 9.5 | 3 | 2 | 4 | 9.5 | 9.5 | 1 |
| Average ranks | 9.3929 | 9.6429 | 6.0714 | 6.3571 | 4.9286 | 6.1786 | 6.0357 | 5.6786 | 6.9286 | 7.6071 | 6.0357 | 3.1429 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
