Abstract
The bag-of-words approach to text document representation typically yields vectors on the order of 5000–20,000 components per document. To make effective use of various statistical classifiers, it may be necessary to reduce the dimensionality of this representation. We point out deficiencies in the class discrimination ability of two popular dimension-reduction methods: Latent Semantic Indexing (LSI) and sequential feature selection according to some relevant criterion. As a remedy, we suggest feature transforms based on Linear Discriminant Analysis (LDA). Since LDA requires operating on large, dense matrices, we propose an efficient intermediate dimension-reduction step using either a random transform or LSI. We report good classification results with the combined feature transform on a subset of the Reuters-21578 database. A drastic reduction of the feature vector dimensionality, from 5000 to 12, actually improves the classification performance.
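The pipeline the abstract outlines (high-dimensional bag-of-words vectors, an intermediate random projection, then an LDA-style discriminative transform, then classification in the reduced space) can be sketched roughly as follows. This is a minimal illustration on synthetic two-class data, not the paper's Reuters-21578 setup; the data generator, all dimensions, the regularization constant, and the nearest-class-mean classifier are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for bag-of-words vectors: two classes in 5000 dimensions,
# differing only in the first 50 components. (Hypothetical data, not Reuters.)
d, n, k = 5000, 200, 100
shift = np.zeros(d)
shift[:50] = 2.0
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(0.0, 1.0, (n, d)) + shift])
y = np.array([0] * n + [1] * n)

# Step 1: random projection to an intermediate dimension k,
# avoiding any eigendecomposition of the full 5000-dim matrices.
R = rng.normal(0.0, 1.0 / np.sqrt(k), (d, k))
Z = X @ R

# Step 2: Fisher LDA on the projected data. With two classes there is a
# single discriminant direction w = Sw^{-1} (m1 - m0).
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(k), m1 - m0)  # small ridge for stability

# Step 3: classify by thresholding at the midpoint of the projected class means.
proj = Z @ w
thresh = 0.5 * (proj[y == 0].mean() + proj[y == 1].mean())
pred = (proj > thresh).astype(int)
acc = (pred == y).mean()
print(f"training accuracy after {d} -> {k} -> 1 reduction: {acc:.2f}")
```

The random projection in step 1 is the cheap surrogate the abstract mentions: it keeps the class separation approximately intact while shrinking the matrices LDA must handle from 5000×5000 to k×k.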
Additional information
An erratum to this article can be found at http://dx.doi.org/10.1007/s10044-004-0216-3
Cite this article
Torkkola, K. Discriminative features for text document classification. Pattern Analysis and Applications 6, 301–308 (2004). https://doi.org/10.1007/s10044-003-0196-8