DOI: 10.1145/3664476.3669974
Research article · Open access

Towards realistic problem-space adversarial attacks against machine learning in network intrusion detection

Published: 30 July 2024

    Abstract

    Current network intrusion detection systems (NIDS) rely on features extracted from network traffic and on up-to-date machine and deep learning techniques to infer a detection model; as a consequence, NIDS can be vulnerable to adversarial attacks. Unlike the many contributions that apply (and often misuse) feature-level attacks conceived for application domains far removed from NIDS, this paper proposes a novel approach to adversarial attacks that consists in a realistic problem-space perturbation of the network traffic, achieved through a traffic control utility. Experiments rely on normal and Denial of Service traffic, collected in both legitimate and adversarial conditions, and on four popular techniques for learning the NIDS models. The results highlight the transferability of the adversarial examples generated by the proposed problem-space attack, as well as their effectiveness at inducing traffic misclassifications across the learned NIDS models.
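    The abstract does not spell out the perturbation mechanics, but the "traffic control utility" mentioned above is presumably the Linux tc tool with the netem queueing discipline. As a minimal sketch only, assuming tc/netem is available and using arbitrary example values for the device name, delay, jitter, and loss (the function names and parameters below are illustrative assumptions, not the paper's actual setup), a problem-space perturbation of live traffic could be driven as follows:

        import subprocess

        def apply_netem(dev: str, delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
            """Attach a netem qdisc to `dev` that delays, jitters, and drops packets.

            The perturbation acts at the problem-space (packet) level; it
            indirectly shifts the flow-level features a NIDS computes, such as
            inter-arrival times, flow durations, and byte counts, without ever
            touching feature vectors directly.
            """
            subprocess.run(
                ["tc", "qdisc", "add", "dev", dev, "root", "netem",
                 "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
                 "loss", f"{loss_pct}%"],
                check=True,  # raises CalledProcessError on failure; tc needs root
            )

        def clear_netem(dev: str) -> None:
            """Remove the netem qdisc, restoring unperturbed traffic."""
            subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)

        if __name__ == "__main__":
            # Hypothetical values: 50 ms +/- 10 ms delay and 1% packet loss on eth0.
            apply_netem("eth0", delay_ms=50, jitter_ms=10, loss_pct=1.0)
            # ... generate or replay normal / DoS traffic toward the monitored host ...
            clear_netem("eth0")

    Because the perturbation is applied to the traffic itself rather than to feature vectors, the resulting adversarial examples are actual network flows; this is what makes the attack "realistic" in the problem-space sense and lets the same examples be replayed against multiple NIDS models.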




      Published In

      ARES '24: Proceedings of the 19th International Conference on Availability, Reliability and Security
      July 2024
      2032 pages
      ISBN: 9798400717185
      DOI: 10.1145/3664476
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 30 July 2024


      Author Tags

      1. Denial of Service
      2. adversarial examples
      3. intrusion detection
      4. machine learning
      5. supervised learning

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • Ministero dell'Università e della Ricerca

      Conference

      ARES 2024

      Acceptance Rates

      Overall acceptance rate: 228 of 451 submissions (51%)

