DOI: 10.1145/2591796.2591839

The power of localization for efficiently learning linear separators with noise

Published: 31 May 2014

Abstract

We introduce a new approach for designing computationally efficient and noise-tolerant algorithms for learning linear separators. We consider the malicious noise model of Valiant [41, 32] and the adversarial label noise model of Kearns, Schapire, and Sellie [34]. For malicious noise, where the adversary can corrupt an η fraction of both the label part and the feature part of the examples, we provide a polynomial-time algorithm for learning linear separators in R^d under the uniform distribution with nearly information-theoretically optimal noise tolerance of η = Ω(ε), improving on the Ω(ε/d^{1/4}) noise tolerance of [31] and the Ω(ε²/log(d/ε)) of [35]. For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most η, we give a polynomial-time algorithm for learning linear separators in R^d under the uniform distribution that can also handle a noise rate of η = Ω(ε). This improves over the results of [31], which either required runtime super-exponential in 1/ε (ours is polynomial in 1/ε) or tolerated less noise.
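For readers new to these models, the display below is our own paraphrase of the standard definitions rather than text from the paper: w* denotes the target halfspace, D the marginal distribution over instances in R^d, P the observed joint distribution, and η the noise rate.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Malicious noise (Valiant [41, 32]): each requested example is, independently,
% drawn cleanly with probability 1 - \eta and supplied by the adversary otherwise.
\[
(x,y) =
\begin{cases}
\bigl(x,\ \mathrm{sign}(w^{*} \cdot x)\bigr), \quad x \sim D, & \text{with probability } 1-\eta,\\[3pt]
(\tilde{x}, \tilde{y}) \ \text{chosen arbitrarily by the adversary}, & \text{with probability } \eta.
\end{cases}
\]
% Adversarial label noise (Kearns, Schapire, and Sellie [34]): the joint
% distribution P has the fixed marginal D over instances; only labels are
% noisy, subject to the constraint
\[
\Pr_{(x,y) \sim P}\bigl[\, y \neq \mathrm{sign}(w^{*} \cdot x) \,\bigr] \;\le\; \eta .
\]
\end{document}
```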
In the case that the distribution is isotropic log-concave, we present a polynomial-time algorithm for the malicious noise model that tolerates Ω(ε/log²(1/ε)) noise, and a polynomial-time algorithm for the adversarial label noise model that also handles Ω(ε/log²(1/ε)) noise. Both of these also improve on results from [35]. In particular, in the case of malicious noise, unlike previous results, our noise tolerance has no dependence on the dimension d of the space.
Our algorithms are also efficient in the active learning setting, where learning algorithms only receive the classifications of examples when they ask for them. We show that, in this model, our algorithms achieve a label complexity whose dependence on the error parameter ε is polylogarithmic (and thus exponentially better than that of any passive algorithm). This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise.
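The "localization" of the title refers to a margin-based strategy: repeatedly restrict attention to examples lying close to the current hypothesis's decision boundary, query labels only there, and refine. The sketch below is an illustrative reconstruction of that outer loop in the noiseless setting, not the authors' algorithm; the band widths, sample sizes, and the simple averaging update (standing in for the paper's constrained hinge-loss minimization) are our own choices, made only so the loop is concrete and runnable.

```python
# Illustrative margin-based localization for actively learning a homogeneous
# linear separator under the uniform distribution on the sphere (in the spirit
# of [4, 8]). NOT the paper's algorithm and with no noise handling.
import numpy as np

rng = np.random.default_rng(0)
d = 20

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)            # hidden target separator

def label_oracle(x):
    # The only place labels are consumed; every call costs one label query.
    return np.sign(w_star @ x)

def sample_unit_sphere(n):
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Round 0: coarse estimate from a small uniformly drawn labeled sample
# (on the sphere, E[y x] is proportional to w_star, so averaging works).
X0 = sample_unit_sphere(200)
y0 = np.array([label_oracle(x) for x in X0])
w = (y0[:, None] * X0).mean(axis=0)
w /= np.linalg.norm(w)
labels_used = 200

for k in range(1, 9):
    band = 2.0 ** (-k)                      # localization: shrink the margin band
    X = sample_unit_sphere(20000)           # unlabeled pool; no labels queried yet
    near = X[np.abs(X @ w) <= band][:200]   # keep only points near the boundary
    if len(near) == 0:
        break
    y = np.array([label_oracle(x) for x in near])
    labels_used += len(near)
    # Corrective step from the band sample (a crude stand-in for the paper's
    # hinge-loss minimization constrained to stay close to the current w).
    direction = (y[:, None] * near).mean(axis=0)
    w = w + band * direction / np.linalg.norm(direction)
    w /= np.linalg.norm(w)
    angle = np.degrees(np.arccos(np.clip(w @ w_star, -1.0, 1.0)))
    print(f"round {k}: band={band:.4f}, labels={labels_used}, angle={angle:.2f} deg")
```

Each round queries labels only inside the shrinking band, while the region where the hypothesis can still disagree with the target shrinks geometrically with the band width; this is the informal reason the label complexity can depend only polylogarithmically on 1/ε.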

Supplementary Material

MP4 File (p449-sidebyside.mp4)

References

[1]
S. Arora, L. Babai, J. Stern, and Z. Sweedyk. The hardness of approximate optima in lattices, codes, and systems of linear equations. In Proceedings of the 34th Annual IEEE Symposium on Foundations of Computer Science, 1993.
[2]
P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise, 2014. arXiv:1307.8371v7.
[3]
M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, 2006.
[4]
M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In COLT, 2007.
[5]
M.-F. Balcan and V. Feldman. Statistical active learning algorithms. NIPS, 2013.
[6]
M.-F. Balcan and S. Hanneke. Robust interactive learning. In COLT, 2012.
[7]
M.-F. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In COLT, 2008.
[8]
M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In Conference on Learning Theory, 2013.
[9]
P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497--1537, 2005.
[10]
E. B. Baum. The perceptron algorithm is fast for nonmalicious distributions. Neural Computation, 2:248--260, 1990.
[11]
A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[12]
D. Bienstock and A. Michalka. Polynomial solvability of variants of the trust-region subproblem, 2013. Optimization Online.
[13]
A. Blum, A. Frieze, R. Kannan, and S. Vempala. A polynomial time algorithm for learning noisy linear threshold functions. Algorithmica, 22(1/2):35--52, 1997.
[14]
A. Blum, M. L. Furst, M. J. Kearns, and R. J. Lipton. Cryptographic primitives based on hard learning problems. In Proceedings of the 13th Annual International Cryptology Conference on Advances in Cryptology, 1994.
[15]
S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of recent advances. ESAIM: Probability and Statistics, 9:323--375, 2005.
[16]
N. H. Bshouty, Y. Li, and P. M. Long. Using the doubling dimension to analyze the generalization of learning algorithms. JCSS, 2009.
[17]
R. Castro and R. Nowak. Minimax bounds for active learning. In COLT, 2007.
[18]
D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2), 1994.
[19]
N. Cristianini and J. Shawe-Taylor. An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, 2000.
[20]
S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, volume 18, 2005.
[21]
S. Dasgupta. Active learning. Encyclopedia of Machine Learning, 2011.
[22]
S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. NIPS, 20, 2007.
[23]
O. Dekel, C. Gentile, and K. Sridharan. Selective sampling and active learning from single and multiple teachers. JMLR, 2012.
[24]
Y. Freund, H. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133--168, 1997.
[25]
M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., 1990.
[26]
A. Gupta, M. Hardt, A. Roth, and J. Ullman. Privately releasing conjunctions and the statistical query barrier. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, 2011.
[27]
V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, 2006.
[28]
S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[29]
S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333--361, 2011.
[30]
D. S. Johnson and F. Preparata. The densest hemisphere problem. Theoretical Computer Science, 6(1):93--107, 1978.
[31]
A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, 2005.
[32]
M. Kearns and M. Li. Learning in the presence of malicious errors. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, 1988.
[33]
M. Kearns and U. Vazirani. An introduction to computational learning theory. MIT Press, Cambridge, MA, 1994.
[34]
M. J. Kearns, R. E. Schapire, and L. M. Sellie. Toward efficient agnostic learning. Machine Learning, 17(2-3), 1994.
[35]
A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research, 10, 2009.
[36]
V. Koltchinskii. Rademacher complexities and bounding the excess risk in active learning. Journal of Machine Learning Research, 11:2457--2485, 2010.
[37]
C. Monteleoni. Efficient algorithms for general active learning. In Proceedings of the 19th annual conference on Learning Theory, 2006.
[38]
M. Raginsky and A. Rakhlin. Lower bounds for passive and active learning. In NIPS, 2011.
[39]
O. Regev. On lattices, learning with errors, random linear codes, and cryptography. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, 2005.
[40]
J. Sturm and S. Zhang. On cones of nonnegative quadratic functions. Mathematics of Operations Research, 28:246--267, 2003.
[41]
L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, 1985.
[42]
V. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[43]
L. Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. JMLR, 2011.
[44]
T. Zhang. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 52(4):1307--1321, 2006.


    Published In

STOC '14: Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing
    May 2014
    984 pages
ISBN: 9781450327107
DOI: 10.1145/2591796


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. adversarial label noise
    2. malicious noise
    3. noise tolerant learning
    4. passive and active learning

    Qualifiers

    • Research-article


    Conference

    STOC '14
    STOC '14: Symposium on Theory of Computing
    May 31 - June 3, 2014
    New York, New York

    Acceptance Rates

STOC '14 paper acceptance rate: 91 of 319 submissions (29%).
Overall acceptance rate: 1,469 of 4,586 submissions (32%).
