Abstract
High-dimensional data is a challenge for the KDD community. Feature Selection (FS) is an efficient preprocessing step for dimensionality reduction, since it removes redundant and/or noisy features. Few FS methods, most of them recent, have been proposed for clustering. Furthermore, most of them are "wrapper" methods that rely on a clustering algorithm to evaluate candidate feature subsets. Because the underlying clustering algorithms often require parameter settings (such as the number of clusters), and because there is no consensual criterion for evaluating clustering quality across different subspaces, the wrapper approach cannot be considered a universal way to perform FS within the clustering framework. We therefore propose and evaluate in this paper a "filter" FS method, which is completely independent of any clustering algorithm. It is based on two specific indices that assess the adequacy between two sets of features. As these indices exhibit very attractive computational properties (they require only one scan of the dataset), the proposed method is effective not only in terms of the quality of its results but also in terms of execution time.
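To illustrate the filter principle described above, here is a minimal sketch of a greedy forward filter selection loop. The paper's two adequacy indices are not reproduced here; `adequacy` below is a hypothetical stand-in (mean absolute correlation between each original feature and its best match in the candidate subset), used only to show that selection proceeds without ever invoking a clustering algorithm.

```python
import numpy as np

def adequacy(X_sub, X_full):
    """Hypothetical adequacy index (NOT the paper's): mean, over all
    original features, of the absolute correlation with the closest
    feature in the candidate subset. Computable from one data scan
    once the correlation matrix is accumulated."""
    d = X_full.shape[1]
    # cross-correlations between every original feature and every subset feature
    C = np.corrcoef(X_full.T, X_sub.T)[:d, d:]
    return float(np.mean(np.max(np.abs(C), axis=1)))

def filter_select(X, k):
    """Greedy forward selection of k features using only the filter
    criterion -- no clustering algorithm, no number-of-clusters setting."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining,
                   key=lambda j: adequacy(X[:, selected + [j]], X))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With this stand-in criterion, a feature that merely duplicates an already-selected one adds almost nothing to the index, so redundant features are naturally skipped, which mirrors the redundancy-removal goal stated in the abstract.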
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Jouve, PE., Nicoloyannis, N. (2005). A Filter Feature Selection Method for Clustering. In: Hacid, MS., Murray, N.V., Raś, Z.W., Tsumoto, S. (eds) Foundations of Intelligent Systems. ISMIS 2005. Lecture Notes in Computer Science(), vol 3488. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11425274_60
DOI: https://doi.org/10.1007/11425274_60
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-25878-0
Online ISBN: 978-3-540-31949-8