An evaluation of statistical approaches to text categorization

Y Yang - Information retrieval, 1999 - Springer
Abstract
This paper focuses on a comparative evaluation of a wide range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine the impact of configuration variations in five versions of Reuters on the observed performance of classifiers. Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, making those results difficult to interpret and leading to considerable confusion in the literature. Using the results evaluated on the other versions of Reuters which exclude the unlabelled documents, the performances of twelve methods are compared directly or indirectly. For indirect comparisons, kNN, LLSF and WORD were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performance; except for a Naive Bayes approach, the other learning algorithms also performed relatively well.
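To illustrate the kind of classifier the abstract refers to, the following is a minimal, hypothetical sketch of kNN-style text categorization: TF-IDF document vectors combined with cosine-distance nearest-neighbour voting. It is not the paper's implementation (the paper's kNN performs multi-label category ranking over Reuters topics), and the toy documents, labels, and parameter choices below are placeholders for illustration only.

```python
# Illustrative sketch only: single-label kNN text categorization with TF-IDF,
# not the evaluation setup or kNN variant described in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical Reuters-style training documents and topic labels.
train_docs = [
    "wheat prices rose after the harvest report",
    "corn exports fell sharply this quarter",
    "the central bank raised interest rates",
    "bond yields climbed as inflation persisted",
]
train_labels = ["grain", "grain", "money-fx", "money-fx"]

# TF-IDF weighting followed by k-nearest-neighbour voting over cosine distance.
model = make_pipeline(
    TfidfVectorizer(sublinear_tf=True, stop_words="english"),
    KNeighborsClassifier(n_neighbors=3, metric="cosine"),
)
model.fit(train_docs, train_labels)

# Predict the topic of a new document.
print(model.predict(["interest rates and inflation outlook"]))  # e.g. ['money-fx']
```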