DOI: 10.1145/3357384.3358149

Adversarial Training of Gradient-Boosted Decision Trees

Published: 03 November 2019

Abstract

    Adversarial training is a prominent approach to making machine learning (ML) models resilient to adversarial examples. Unfortunately, it assumes a differentiable learning model, so it cannot be applied to relevant ML techniques such as ensembles of decision trees. In this paper, we generalize adversarial training to gradient-boosted decision trees (GBDTs). Our experiments show that classifiers based on existing learning techniques either degrade sharply under attack or perform unsatisfactorily in the absence of attacks, whereas adversarial training provides a very good trade-off between resilience to attacks and accuracy in the unattacked setting.
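    The core difficulty named in the abstract is that tree ensembles are not differentiable, so gradient-based adversarial example generation does not apply directly. One way to make the idea concrete is a data-augmentation loop: perturb training points within an attacker's budget, keep the perturbations that evade the current model, and retrain on the augmented set. The sketch below is illustrative only and is not the paper's algorithm; it assumes scikit-learn, substitutes random L-infinity noise for a real attack generator, and the budget `eps` and round count are hypothetical choices.

```python
# Illustrative sketch: adversarial training for a (non-differentiable)
# GBDT approximated by iterative data augmentation. Random L-infinity
# perturbations stand in for a real evasion attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
eps = 0.3  # hypothetical attacker's L-infinity perturbation budget

for _ in range(3):  # a few augmentation/retraining rounds
    # Candidate adversarial examples: random noise within the budget.
    X_adv = X + rng.uniform(-eps, eps, size=X.shape)
    # Keep only the perturbed points that evade the current model.
    evading = model.predict(X_adv) != y
    X_train = np.vstack([X, X_adv[evading]])
    y_train = np.concatenate([y, y[evading]])
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(round(model.score(X, y), 3))
```

    A real implementation would replace the random noise with an optimized attack against the tree ensemble (as in the evasion attacks of Kantchelian et al.), which is where the paper's contribution lies.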



    Published In

    CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
    November 2019
    3373 pages
    ISBN:9781450369763
    DOI:10.1145/3357384

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. adversarial learning
    2. decision trees
    3. tree ensembles

    Qualifiers

    • Short-paper

    Conference

    CIKM '19

    Acceptance Rates

    CIKM '19 Paper Acceptance Rate: 202 of 1,031 submissions (20%)
    Overall Acceptance Rate: 1,861 of 8,427 submissions (22%)

    Article Metrics

    • Downloads (last 12 months): 31
    • Downloads (last 6 weeks): 0
    Reflects downloads up to 27 Jul 2024

    Cited By

    • (2024) Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization. IEEE Transactions on Information Forensics and Security 19, 3265-3278. DOI: 10.1109/TIFS.2024.3360891
    • (2023) Certifying machine learning models against evasion attacks by program analysis. Journal of Computer Security 31(1), 57-84. DOI: 10.3233/JCS-210133
    • (2023) Mitigating Targeted Universal Adversarial Attacks on Time Series Power Quality Disturbances Models. 2023 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), 91-100. DOI: 10.1109/TPS-ISA58951.2023.00021
    • (2023) Investigating adversarial attacks against Random Forest-based network attack detection systems. NOMS 2023 IEEE/IFIP Network Operations and Management Symposium, 1-6. DOI: 10.1109/NOMS56928.2023.10154328
    • (2022) Examining the Determinants of Patient Perception of Physician Review Helpfulness across Different Disease Severities. Computational Intelligence and Neuroscience 2022. DOI: 10.1155/2022/8623586
    • (2022) Constraint Enforcement on Decision Trees: A Survey. ACM Computing Surveys 54(10s), 1-36. DOI: 10.1145/3506734
    • (2022) Are Formal Methods Applicable To Machine Learning And Artificial Intelligence? 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), 48-53. DOI: 10.1109/SMARTTECH54121.2022.00025
    • (2022) Beyond robustness. Computers and Security 121(C). DOI: 10.1016/j.cose.2022.102843
    • (2021) Towards Stalkerware Detection with Precise Warnings. Proceedings of the 37th Annual Computer Security Applications Conference (ACSAC '21), 957-969. DOI: 10.1145/3485832.3485901
    • (2021) Fairness-Aware Training of Decision Trees by Abstract Interpretation. Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM '21), 1508-1517. DOI: 10.1145/3459637.3482342
