Abstract
Most learning paradigms impose a particular syntax on the class of concepts to be learned; the chosen syntax can dramatically affect whether or not the class is learnable. For classification paradigms, where the task is to determine whether the underlying world does or does not have a particular property, how that property is represented has no bearing on the power of a classifier that merely outputs 1’s or 0’s. But is it possible to give a canonical syntactic representation of the class of concepts that are classifiable according to the criteria of a given paradigm? We provide a positive answer to this question for classification-in-the-limit paradigms in a logical setting, with ordinal mind change bounds as a measure of complexity. The syntactic characterization that emerges makes it possible to show that if a possibly noncomputable classifier can perform the task set by the paradigm, then a computable classifier can perform it as well. The characterization is closely related to the difference hierarchy over the class of open sets of a topological space; this space arises naturally from the class of possible worlds and possible data of the learning paradigm.
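To make the notion of classification in the limit with a mind change bound concrete, the following is a minimal illustrative sketch, not the paper's construction: the toy concept and all names are invented for exposition. A classifier reads ever longer initial segments of a data stream, outputs a 0/1 conjecture after each datum, and is charged a mind change whenever it revises a previous conjecture; it succeeds if it converges to the correct answer without exceeding its bound.

```python
from typing import Iterable, List

def classify_in_limit(data_stream: Iterable[int], mind_change_bound: int) -> List[int]:
    """Toy limiting classifier (illustrative only).

    Concept to classify: "the stream eventually contains a 0".
    The classifier starts with conjecture 0 ("no 0 will ever appear") and
    switches to 1 the first time a 0 is observed, so it uses at most one
    mind change on any stream.
    """
    conjectures: List[int] = []
    current = 0          # initial conjecture
    changes = 0          # mind changes used so far
    for datum in data_stream:
        if datum == 0 and current == 0:
            if changes >= mind_change_bound:
                raise RuntimeError("mind change bound exceeded")
            current = 1  # revise the conjecture: this is a mind change
            changes += 1
        conjectures.append(current)
    return conjectures

if __name__ == "__main__":
    # On the stream 3, 5, 0, 7 the conjectures are 0, 0, 1, 1:
    # the classifier converges to 1 after a single mind change.
    print(classify_in_limit([3, 5, 0, 7], mind_change_bound=1))
```

This toy concept corresponds to an open set in the usual topology on streams, which is why a single mind change (from a fixed initial conjecture) suffices; this is meant only to convey the flavor of the correspondence the paper establishes between ordinal mind change bounds and levels of the difference hierarchy over open sets.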
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Martin, E., Sharma, A. (2005). On a Syntactic Characterization of Classification with a Mind Change Bound. In: Auer, P., Meir, R. (eds) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol. 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_28
DOI: https://doi.org/10.1007/11503415_28
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-26556-6
Online ISBN: 978-3-540-31892-7