- Article, September 2024
Frugal Generative Modeling for Tabular Data
Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track, pages 55–72. https://doi.org/10.1007/978-3-031-70371-3_4
Abstract: This paper presents a generative modeling approach called Gmda designed for tabular data, adapted to its arbitrary feature correlation structure. The generative model is trained so that sampled regions in the feature space contain the same ...
- Article, March 2022
Combination of One-Class Support Vector Machines for Classification with Reject Option
Machine Learning and Knowledge Discovery in Databases, pages 547–562. https://doi.org/10.1007/978-3-662-44848-9_35
Abstract: This paper focuses on binary classification with reject option, enabling the classifier to detect and abstain from hazardous decisions. While reject classification produces more reliable decisions, there is a tradeoff between accuracy and rejection ...
- Rapid communication, September 2021
CASCARO: Cascade of classifiers for minimizing the cost of prediction
Pattern Recognition Letters (PTRL), Volume 149, Issue C, pages 37–43. https://doi.org/10.1016/j.patrec.2021.06.010
Highlights:
- A model of cascade of classifiers with reject option
- Optimization of the reject area of the classifiers
- Ranking of the features for the cascade construction
- Minimization of the cost of classification
- Controlling the trade-...
Abstract: Although prediction performance is crucial for a classifier, its cost of use is also an essential issue for practical applications. The aim of this article is to propose a prediction method that controls not only the error rate but also the ...
- Research article, December 2019
Performance visualization spaces for classification with rejection option
Highlights:
- Analysis of three different visualization spaces to represent the performance of reject classifiers.
Classification with reject option consists in training a classifier that rejects examples when the confidence in its prediction is low. The objective is to improve the accuracy of the non-rejected examples and the reliability of ...
- Article, January 2019
Controlling and Visualizing the Precision-Recall Tradeoff for External Performance Indices
Machine Learning and Knowledge Discovery in Databases, pages 687–702. https://doi.org/10.1007/978-3-030-10925-7_42
Abstract: In many machine learning problems, the performance of the results is measured by indices that often combine precision and recall. In this paper, we study the behavior of such indices as a function of the precision-recall tradeoff. We present a new ...
- Article, February 2016
Controlling the Cost of Prediction in using a Cascade of Reject Classifiers for Personalized Medicine
BIOSTEC 2016: Proceedings of the International Joint Conference on Biomedical Engineering Systems and Technologies, pages 42–50. https://doi.org/10.5220/0005685500420050
Supervised learning in bioinformatics is a major tool to diagnose a disease, identify the best therapeutic strategy, or establish a prognosis. The main objective in classifier construction is to maximize the accuracy in order to obtain a ...
- Article, September 2014
Combination of one-class support vector machines for classification with reject option
This paper focuses on binary classification with reject option, enabling the classifier to detect and abstain from hazardous decisions. While reject classification produces more reliable decisions, there is a tradeoff between accuracy and rejection rate. ...
- Article, March 2014
Stability of Ensemble Feature Selection on High-Dimension and Low-Sample Size Data
ICPRAM 2014: Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods, pages 325–330. https://doi.org/10.5220/0004922203250330
Feature selection is an important step when building a classifier. However, feature selection tends to be unstable on high-dimensional, small-sample-size data. This instability reduces the usefulness of selected features for knowledge discovery: if ...
- Article, March 2014
Unsupervised Consensus Functions Applied to Ensemble Biclustering
ICPRAM 2014: Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods, pages 30–39. https://doi.org/10.5220/0004789800300039
Ensemble methods are very popular and can significantly improve the performance of classification and clustering algorithms. Their principle is to generate a set of different models, then aggregate them into a single one. Recent works have shown that ...
- Research article, March 2014
Analysis of feature selection stability on high dimension and small sample data
Feature selection is an important step when building a classifier on high-dimensional data. As the number of observations is small, feature selection tends to be unstable. It is common that two feature subsets, obtained from different datasets but ...
- Article, June 2013
Precision-recall space to correct external indices for biclustering
ICML'13: Proceedings of the 30th International Conference on Machine Learning - Volume 28, pages II-136–II-144.
Biclustering is a major tool of data mining in many domains, and many algorithms have emerged in recent years. All these algorithms aim to obtain coherent biclusters, and it is crucial to have a reliable procedure for their validation. We point out the ...
- Article, March 2013
The reliability of estimated confidence intervals for classification error rates when only a single sample is available
Pattern Recognition (PATT), Volume 46, Issue 3, pages 1067–1077. https://doi.org/10.1016/j.patcog.2012.09.019
Error estimation accuracy is the salient issue regarding the validity of a classifier model. When samples are small, training-data-based error estimates tend to suffer from inaccuracy and quantification of error estimation accuracy is difficult. ...
- Article, November 2012
Experimental analysis of feature selection stability for high-dimension and low-sample size gene expression classification task
BIBE '12: Proceedings of the 2012 IEEE 12th International Conference on Bioinformatics & Bioengineering (BIBE), pages 350–355. https://doi.org/10.1109/BIBE.2012.6399649
Gene selection is a crucial step when building a classifier from microarray or metagenomic data. As the number of observations is small, gene selection tends to be unstable. It is common that two gene subsets, obtained from different datasets but ...
- Article, November 2012
Ensemble methods for biclustering tasks
Pattern Recognition (PATT), Volume 45, Issue 11, pages 3938–3949. https://doi.org/10.1016/j.patcog.2012.04.010
Several biclustering algorithms have been proposed in different fields of microarray data analysis. We present a new approach that improves their performance by using ensemble methods. An ensemble biclustering is considered and formalized by a ...
- Article, September 2012
A New Measure of Classifier Performance for Gene Expression Data
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), Volume 9, Issue 5, pages 1379–1386. https://doi.org/10.1109/TCBB.2012.21
One of the major aims of many microarray experiments is to build discriminatory diagnosis and prognosis models. A large number of supervised methods have been proposed in the literature for microarray-based classification for this purpose. Model evaluation ...
- Article, August 2011
Improving the Biological Relevance of Biclustering for Microarray Data in Using Ensemble Methods
DEXA '11: Proceedings of the 2011 22nd International Workshop on Database and Expert Systems Applications, pages 413–417. https://doi.org/10.1109/DEXA.2011.44
Biclustering has undoubtedly become a common tool for microarray data analysis. Its objective is to identify a set of biclusters, i.e., submatrices of the original data matrix, presenting a particular pattern. A large number of biclustering methods ...
- Article, May 2011
Using the bagging approach for biclustering of gene expression data
Neurocomputing (NEUROC), Volume 74, Issue 10, pages 1595–1605. https://doi.org/10.1016/j.neucom.2011.01.013
Several methods have been proposed for microarray data analysis that make it possible to identify groups of genes with similar expression profiles only under a subset of examples. We propose to improve the performance of these biclustering methods by adapting ...
- Article, September 2010
Bagging for biclustering: application to microarray data
ECMLPKDD'10: Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I, pages 490–505. https://doi.org/10.1007/978-3-642-15880-3_37
One of the major tools of transcriptomics is biclustering, which simultaneously constructs a partition of both examples and genes. Several methods have been proposed for microarray data analysis that make it possible to identify groups of genes with similar ...
- Article, August 2010
Bagged Biclustering for Microarray Data
Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence, pages 1131–1132.
One of the major tools of transcriptomics is biclustering, which simultaneously constructs a partition of both examples and genes. Several methods have been proposed for microarray data analysis that make it possible to identify groups of genes with similar ...