Abstract
It is easy to design on-line learning algorithms that learn k out of n variable monotone disjunctions by simply keeping one weight per disjunction. Such algorithms use roughly O(n^k) weights, which can be prohibitively expensive. Surprisingly, algorithms like Winnow require only n weights (one per variable), and their mistake bounds are not much worse than the mistake bounds of the more costly algorithms. The purpose of this paper is to investigate how the exponentially many weights can be collapsed into only O(n) weights. In particular, we consider probabilistic assumptions that enable the Bayes optimal algorithm’s posterior over the disjunctions to be encoded with only O(n) weights. This results in a new O(n) algorithm for learning disjunctions which is related to Bylander’s BEG algorithm, originally introduced for linear regression. Besides providing a Bayesian interpretation for this new algorithm, we also obtain mistake bounds for the noise-free case resembling those that have been derived for the Winnow algorithm. The same techniques used to derive this new algorithm also provide a Bayesian interpretation for a normalized version of Winnow.
The first and third authors are supported by NSF grant CCR 9700201. The second author is supported by a research fellowship from the University of Milan and by a Eurocolt grant.
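To make the contrast concrete, the following is a minimal sketch (ours, not taken from the paper) of the indirect approach: Littlestone's Winnow keeps one weight per variable when learning a k-literal monotone disjunction over n Boolean variables. The function name winnow, the promotion factor alpha = 2, and the threshold n are illustrative textbook choices only; the paper's new BEG-related algorithm maintains its O(n) weights differently.

# Sketch of Winnow with one weight per variable (illustrative parameters,
# not the paper's algorithm).
def winnow(examples, n, alpha=2.0):
    """examples: iterable of (x, y) pairs, x a length-n 0/1 list, y in {0, 1}."""
    w = [1.0] * n            # one weight per variable
    threshold = float(n)     # predict 1 iff the weighted sum of active variables reaches n
    mistakes = 0
    for x, y in examples:
        y_hat = 1 if sum(w[i] for i in range(n) if x[i]) >= threshold else 0
        if y_hat != y:
            mistakes += 1
            factor = alpha if y == 1 else 1.0 / alpha
            for i in range(n):
                if x[i]:     # only weights of variables set to 1 are updated
                    w[i] *= factor
    return w, mistakes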
References
Auer, P., Warmuth, M.K.: Tracking the best disjunction. Machine Learning, Special issue on concept drift. 32(2) (1998)
Bylander, T.: The binary exponentiated gradient algorithm for learning linear functions. Unpublished manuscript, private communication (1997)
Domingos, P., Pazzani, M.: Beyond independence: conditions for the optimality of the simple Bayesian classifier. Proc. 13th International Conference on Machine Learning, (1996) 105–112
Duda, R.O., Hart, P.E.: Pattern Classification and Scene Analysis. Wiley (1973)
Gentile, C., Warmuth, M.K.: Hinge Loss and Average Margin. Advances in Neural Information Processing Systems. Morgan Kaufmann, (1998)
Littlestone, N.: Learning Quickly when Irrelevant Attributes Abound: A New Linear-threshold Algorithm. Machine Learning. 2 (1988) 285–318 (Earlier version in FOCS ’87)
Littlestone, N.: Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms. PhD thesis, Technical Report UCSC-CRL-89-11, University of California Santa Cruz (1989)
Littlestone, N.: Comparing several linear-threshold learning algorithms on tasks involving superfluous attributes. In Proc. 12th International Conference on Machine Learning. Morgan Kaufmann, (1995) 353–361
Littlestone, N.: Private communication
Littlestone, N.: Mistake-driven Bayes sports: bounds for symmetric apobayesian learning algorithms. Technical report, NEC Research Institute, Princeton, NJ. (1996)
Littlestone, N., Mesterharm, C.: An Apobayesian Relative of Winnow. Advances in Neural Information Processing Systems. Morgan Kaufmann, (1997)
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information and Computation. 108 (1994) 212–261 (Earlier version in FOCS ’89)
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Helmbold, D.P., Panizza, S., Warmuth, M.K. (1999). Direct and Indirect Algorithms for On-line Learning of Disjunctions. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_12
Print ISBN: 978-3-540-65701-9
Online ISBN: 978-3-540-49097-5