DOI: 10.1145/167088.167200
Article
Free access

Efficient noise-tolerant learning from statistical queries

Published: 01 June 1993



    Published In

    STOC '93: Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing
    June 1993
    812 pages
    ISBN:0897915917
    DOI:10.1145/167088


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Conference

    STOC '93: 25th Annual ACM Symposium on the Theory of Computing
    May 16-18, 1993
    San Diego, California, USA

    Acceptance Rates

    Overall Acceptance Rate 1,469 of 4,586 submissions, 32%


    Article Metrics

    • Downloads (last 12 months): 37
    • Downloads (last 6 weeks): 2
    Reflects downloads up to 26 Jul 2024

    Cited By

    • (2024) Limits of Preprocessing. Computational Complexity, 33:1. DOI: 10.1007/s00037-024-00251-6. Online publication date: 23-May-2024.
    • (2023) Hardness of Learning AES with Gradient-Based Methods. Cryptology and Network Security, pages 126-133. DOI: 10.1007/978-981-99-7563-1_6. Online publication date: 31-Oct-2023.
    • (2021) Manipulation Attacks in Local Differential Privacy. 2021 IEEE Symposium on Security and Privacy (SP), pages 883-900. DOI: 10.1109/SP40001.2021.00001. Online publication date: May-2021.
    • (2021) An improved algorithm for learning sparse parities in the presence of noise. Theoretical Computer Science, 873:76-86. DOI: 10.1016/j.tcs.2021.04.026. Online publication date: Jun-2021.
    • (2021) On the noise estimation statistics. Artificial Intelligence, 293:103451. DOI: 10.1016/j.artint.2021.103451. Online publication date: Apr-2021.
    • (2020) Part-dependent label noise. Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 7597-7610. DOI: 10.5555/3495724.3496361. Online publication date: 6-Dec-2020.
    • (2020) Limits of preprocessing. Proceedings of the 35th Computational Complexity Conference, pages 1-22. DOI: 10.4230/LIPIcs.CCC.2020.17. Online publication date: 28-Jul-2020.
    • (2020) Revisiting inherent noise floors for interconnect prediction. Proceedings of the Workshop on System-Level Interconnect: Problems and Pathfinding, pages 1-7. DOI: 10.1145/3414622.3431907. Online publication date: 5-Nov-2020.
    • (2020) Obstacles to Depth Compression of Neural Networks. Artificial Neural Networks and Machine Learning (ICANN 2020), pages 104-115. DOI: 10.1007/978-3-030-61616-8_9. Online publication date: 15-Sep-2020.
    • (2019) Distribution-independent PAC learning of halfspaces with Massart noise. Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 4749-4760. DOI: 10.5555/3454287.3454714. Online publication date: 8-Dec-2019.
