- Article, January 2023
One-Against-All Halfplane Dichotomies
Structural, Syntactic, and Statistical Pattern Recognition, Pages 183–192. https://doi.org/10.1007/978-3-031-23028-8_19
Abstract: Given M vectors in N-dimensional attribute space, it is much easier to find M hyperplanes that separate each of the vectors from all the others than to solve M arbitrary linear dichotomies with approximately equal class memberships. An explanation ...
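The claim in this abstract can be illustrated numerically. When M ≤ N, generic points are linearly independent, so every one-against-all hyperplane exists and can even be found in closed form. The least-squares construction below is a hypothetical illustration, not the paper's method:

```python
import numpy as np

# With M <= N, generic random points are linearly independent, so for
# each i we can solve X w = e_i exactly (lstsq returns the minimum-norm
# exact solution for a full-rank underdetermined system).  Then
# w.x_i = 1 while w.x_j = 0 for j != i, and the hyperplane w.x = 1/2
# separates x_i from all the others.
rng = np.random.default_rng(0)
M, N = 20, 50                      # M points in N-dimensional space
X = rng.standard_normal((M, N))

for i in range(M):
    target = np.zeros(M)
    target[i] = 1.0                # want w.x_i = 1, w.x_j = 0 otherwise
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    scores = X @ w
    assert scores[i] > 0.5
    assert np.all(np.delete(scores, i) < 0.5)
print(f"all {M} one-against-all dichotomies solved")
```

Solving an arbitrary dichotomy with roughly equal class sizes has no such closed form, which is one way to see the asymmetry the abstract describes.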
- Article, May 2020
Classification of Fish Species Using Silhouettes
Abstract: The classification of fish silhouettes allows a quick decision on the presence and amount of fish species in a given scene. The classical machine-learning approach is used to test the question of linear separability of fish species ...
- Research article, October 2018
On the separability of stochastic geometric objects, with applications
Computational Geometry: Theory and Applications (COGE), Volume 74, Issue C, Pages 1–20. https://doi.org/10.1016/j.comgeo.2018.06.001
Abstract: In this paper, we study the linear separability problem for stochastic geometric objects under the well-known unipoint and multipoint uncertainty models. Let S = S_R ∪ S_B be a given set of stochastic bichromatic points, and define n = ...
- Article, November 2015
Linearizing layers of radial binary classifiers with movable centers
Pattern Analysis & Applications (PAAS), Volume 18, Issue 4, Pages 771–781. https://doi.org/10.1007/s10044-015-0473-3
Abstract: Ranked layers of binary classifiers are used for the linearization of learning sets composed of multivariate feature vectors. After transformation by a ranked layer, each learning set can be separated by a hyperplane from the sum of the other learning sets. ...
- Article, July 2012
Linear separability and classification complexity
Expert Systems with Applications: An International Journal (EXWA), Volume 39, Issue 9, Pages 7796–7807. https://doi.org/10.1016/j.eswa.2012.01.090
Abstract: We study the relationship between linear separability and the level of complexity of classification data sets. Linearly separable classification problems are generally easier to solve than non-linearly separable ones. This suggests a strong correlation ...
- Article, January 2012
Analysis of complexity indices for classification problems: Cancer gene expression data
Currently, cancer diagnosis at a molecular level has been made possible through the analysis of gene expression data. More specifically, one usually uses machine learning (ML) techniques to build, from cancer gene expression data, automatic diagnosis ...
- Article, March 2011
Choice effect of linear separability testing methods on constructive neural network algorithms: An empirical study
Expert Systems with Applications: An International Journal (EXWA), Volume 38, Issue 3, Pages 2330–2346. https://doi.org/10.1016/j.eswa.2010.08.021
Abstract: Several algorithms exist for testing linear separability. The choice of a particular testing algorithm affects the performance of constructive neural network algorithms that are based on the transformation of a nonlinear separability ...
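One of the classical testing methods compared in studies like this is the perceptron test: run the perceptron learning rule and declare the sets separable if it converges (convergence is guaranteed on separable data; hitting an iteration cap is inconclusive). A minimal sketch of that idea, not any specific algorithm from the paper:

```python
import numpy as np

def perceptron_separable(A, B, max_epochs=1000):
    """Heuristic linear-separability test via the perceptron rule.

    Returns (separable, w), where the candidate hyperplane is
    w.[x, 1] = 0 in augmented coordinates.  Hitting the epoch cap
    is reported as False, though strictly it is only inconclusive.
    """
    X = np.vstack([A, B]).astype(float)
    X = np.hstack([X, np.ones((len(X), 1))])       # absorb the bias term
    y = np.hstack([np.ones(len(A)), -np.ones(len(B))])
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:                 # misclassified point
                w += yi * xi                       # perceptron update
                errors += 1
        if errors == 0:                            # a full clean pass
            return True, w
    return False, w

# Two well-shifted clusters are separable; XOR-style data is not.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) + 4.0
B = rng.standard_normal((30, 2)) - 4.0
print(perceptron_separable(A, B)[0])                       # True
xor_A = np.array([[0, 0], [1, 1]])
xor_B = np.array([[0, 1], [1, 0]])
print(perceptron_separable(xor_A, xor_B)[0])               # False
```

Linear programming gives an exact test with a definite answer in both directions, which is one reason the choice of testing method matters for the constructive algorithms the abstract mentions.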
- Article, September 2010
On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms
Boosting algorithms build highly accurate prediction mechanisms from a collection of low-accuracy predictors. To do so, they employ the notion of weak-learnability. The starting point of this paper is a proof which shows that weak learnability is ...
- Article, April 2010
Feature space versus empirical kernel map and row kernel space in SVMs
Neural Computing and Applications (NCAA), Volume 19, Issue 3, Pages 487–498. https://doi.org/10.1007/s00521-010-0337-0
Abstract: In machine-learning technologies, the support vector machine (SV machine, SVM) is a brilliant invention with many merits, such as freedom from local minima, the widest possible margins separating different clusters, and a solid theoretical foundation. ...
- Article, November 2009
Determine the Kernel Parameter of KFDA Using a Minimum Search Algorithm
ICIC '07: Proceedings of the 3rd International Conference on Intelligent Computing: Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence, Pages 418–426. https://doi.org/10.1007/978-3-540-74205-0_46
Abstract: In this paper, we develop a novel approach to kernel parameter selection for kernel Fisher discriminant analysis (KFDA), based on the viewpoint that the optimal kernel parameter is associated with the maximum linear separability of samples in the ...
- Article, August 2007
On linear separability of data sets in feature space
Neurocomputing (NEUROC), Volume 70, Issue 13-15, Pages 2441–2448. https://doi.org/10.1016/j.neucom.2006.12.002
Abstract: In this paper we focus on the linear separability of two data sets in feature space, including finite and infinite data sets. We first develop a method to construct a mapping that maps the original data set into a high-dimensional feature space, on ...
- Article, January 2006
Adaptive conjugate gradient algorithm for perceptron training
Neurocomputing (NEUROC), Volume 69, Issue 4-6, Pages 368–386. https://doi.org/10.1016/j.neucom.2005.03.007
Abstract: An adaptive algorithm for function minimization, based on conjugate gradients, is developed for the problem of finding linear discriminant functions in pattern classification. The algorithm converges to a solution in both consistent and inconsistent cases ...
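The general approach this abstract describes, finding a linear discriminant by applying conjugate gradients to a smooth criterion, can be sketched as follows. The squared-hinge criterion, the Armijo line search, and the Fletcher-Reeves update here are illustrative assumptions; the paper's adaptive step-size scheme is not reproduced:

```python
import numpy as np

def cg_discriminant(X, y, iters=100):
    """Linear discriminant via J(w) = mean(max(0, 1 - y_i w.x_i)^2),
    minimized with Fletcher-Reeves nonlinear conjugate gradient and
    Armijo backtracking (an illustrative sketch, not the cited
    adaptive algorithm)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])      # absorb the bias term
    n = len(Xa)
    J = lambda w: np.mean(np.maximum(0.0, 1.0 - y * (Xa @ w)) ** 2)
    def grad(w):
        s = np.maximum(0.0, 1.0 - y * (Xa @ w))    # hinge slacks
        return (-2.0 / n) * (Xa.T @ (s * y))
    w = np.zeros(Xa.shape[1])
    g = grad(w)
    d = -g
    for _ in range(iters):
        # Armijo backtracking line search along the direction d
        t, jw, slope = 1.0, J(w), g @ d
        while J(w + t * d) > jw + 1e-4 * t * slope:
            t *= 0.5
        w = w + t * d
        g_new = grad(w)
        if np.linalg.norm(g_new) < 1e-8:           # stationary point
            break
        beta = (g_new @ g_new) / (g @ g)           # Fletcher-Reeves
        d = -g_new + beta * d
        if g_new @ d >= 0:                         # restart if not descent
            d = -g_new
        g = g_new
    return w

# Demo on two shifted Gaussian clusters (hypothetical data).
rng = np.random.default_rng(2)
A = rng.standard_normal((25, 2)) + 2.0
B = rng.standard_normal((25, 2)) - 2.0
X = np.vstack([A, B])
y = np.hstack([np.ones(25), -np.ones(25)])
w = cg_discriminant(X, y)
Xa = np.hstack([X, np.ones((50, 1))])
print("training accuracy:", np.mean(np.sign(Xa @ w) == y))
```

Because the squared-hinge criterion stays finite on non-separable data, this style of minimization also yields a usable discriminant in the inconsistent case, which matches the convergence claim in the abstract.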
- Article, January 1993
Better learning for bidirectional associative memory
Neural Networks (NENE), Volume 6, Issue 8, Pages 1131–1146
Abstract: In this paper, we develop better learning rules for the bidirectional associative memory (BAM) based on three well-recognized optimality criteria: all desired attractors should be made not only stable but also asymptotically stable, and ...