Learning linear threshold functions in the presence of classification noise

Published: 16 July 1994
    Abstract

    I show that linear threshold functions are polynomially learnable in the presence of classification noise, i.e., polynomial in n, 1/ε, 1/δ, and 1/σ, where n is the number of Boolean attributes, ε and δ are the usual accuracy and confidence parameters, and σ indicates the minimum distance of any example from the target hyperplane, which is assumed to be lower than the average distance of the examples from any hyperplane. This result is achieved by modifying the Perceptron algorithm: for each update, a weighted average of a sample of misclassified examples and a correction vector is added to the current weight vector. Similar modifications are shown for the Weighted Majority algorithm. The correction vector is simply the mean of the normalized examples. In the special case of Boolean threshold functions, the modified Perceptron algorithm performs O(n²ε⁻²) iterations over O(n⁴ε⁻² ln(n/(δε))) examples. This extends the previous classification-noise result of Angluin and Laird to a much larger concept class with a similar number of examples, though with multiple iterations over the examples.
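
    The update rule described in the abstract can be sketched in a few lines of NumPy. This is a rough illustration rather than the paper's algorithm: the function name noise_tolerant_perceptron, the fixed mixing weight correction_weight, the sample size, and the stopping rule are assumptions made for the example; the paper's analysis determines how the averaged misclassified examples and the correction vector (the mean of the normalized examples) should actually be combined, which this sketch does not reproduce.

        import numpy as np

        def noise_tolerant_perceptron(X, y, num_iters=100, sample_size=50,
                                      correction_weight=0.5, seed=None):
            """Sketch of the modified Perceptron update: add a weighted average
            of sampled misclassified examples plus a correction vector (the mean
            of the normalized examples) to the current weight vector.
            X: (m, n) array of examples; y: (m,) array of +1/-1 labels.
            Parameter names and the 0.5 mixing weight are illustrative only."""
            rng = np.random.default_rng(seed)
            X = np.asarray(X, dtype=float)
            y = np.asarray(y, dtype=float)

            # Normalize each example to unit length; the correction vector is
            # the mean of the normalized examples.
            X_norm = X / np.linalg.norm(X, axis=1, keepdims=True)
            correction = X_norm.mean(axis=0)

            w = np.zeros(X.shape[1])
            for _ in range(num_iters):
                margins = y * (X_norm @ w)
                misclassified = np.flatnonzero(margins <= 0)
                if misclassified.size == 0:
                    break
                # Average a sample of misclassified examples (signed by label)
                # instead of updating on a single example as in the classic Perceptron.
                idx = rng.choice(misclassified,
                                 size=min(sample_size, misclassified.size),
                                 replace=False)
                avg_update = (y[idx, None] * X_norm[idx]).mean(axis=0)
                # Mix the averaged update with the correction vector and add it
                # to the current weight vector.
                w += (1.0 - correction_weight) * avg_update + correction_weight * correction
            return w

    Intuitively, averaging over many misclassified examples lets random label flips largely cancel out, which is what the sketch relies on; the precise combination of the two terms and the resulting iteration and sample bounds come from the paper's analysis.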

    References

    [1]
    D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2(4):343-370, 1988.
    [2]
    T. Bylander. Polynomial learnability of linear threshold approximations. In Proc. Sixth Annual ACM Conf. on Computational Learning Theory, pages 297-302, Santa Cruz, California, 1993.
    [3]
    T. Bylander. Empirical tests of a new perceptron algorithm for noisy data. Technical Report, Computer Science Program, Univ. of Texas at San Antonio, 1994.
    [4]
    R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley, New York, 1973.
    [5]
    W. Hoeffding. Probability inequalities for sums of bounded random variables. J. American Statistical Association, 58:13-30, 1963.
    [6]
    K.-U. Höffgen and H.-U. Simon. Robust trainability of single neurons. In Proc. Fifth Annual ACM Workshop on Computational Learning Theory, pages 428-439, Pittsburgh, Pennsylvania, 1992.
    [7]
    M. J. Kearns. Efficient noise-tolerant learning from statistical queries. In Proc. Twenty-Fifth Annual ACM Symposium on Theory of Computing, 1993.
    [8]
    P. D. Laird. Learning from Good and Bad Data. Kluwer, Norwell, Massachusetts, 1988.
    [9]
    N. Littlestone. Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms. PhD thesis, Univ. of Calif., Santa Cruz, California, 1989.
    [10]
    N. Littlestone. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In Proc. Fourth Annual Workshop on Computational Learning Theory, pages 147-156, Santa Cruz, California, 1991.
    [11]
    N. Littlestone and M. K. Warmuth. The weighted majority algorithm. In Proc. IEEE 30th Annual Symposium on Foundations of Computer Science, pages 256-261, Washington, DC, 1989. IEEE Computer Society.
    [12]
    M. L. Minsky and S. A. Papert. Perceptrons. MIT Press, Cambridge, Massachusetts, 1969.
    [13]
    F. Rosenblatt. Principles of Neurodynamics. Spartan Books, New York, 1962.

        Published In

        COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory
        July 1994
        376 pages
        ISBN: 0897916557
        DOI: 10.1145/180139

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Conference

        COLT '94: 7th Annual Conference on Computational Learning Theory
        July 12-15, 1994
        New Brunswick, New Jersey, USA

        Acceptance Rates

        Overall acceptance rate: 35 of 71 submissions (49%)
