Open-access research article

When Good Machine Learning Leads to Bad Security: Big Data (Ubiquity symposium)

Published: 17 May 2018

Abstract

While machine learning has proven promising in several application domains, our understanding of its behavior and limitations is still in its nascent stages. One such domain is cybersecurity, where machine learning models are replacing traditional rule-based systems owing to their ability to generalize and to handle large-scale, previously unseen attacks. However, naively transferring machine learning principles to the security domain calls for caution: machine learning was not designed with security in mind, and as such it is prone to adversarial manipulation and reverse engineering. Moreover, while most data-driven learning models assume a static world, the security landscape is especially dynamic, with a never-ending arms race between system designers and attackers. Any solution designed for such a domain must account for an active adversary and must evolve over time in the face of emerging threats. We term this the "Dynamic Adversarial Mining" problem, and this paper provides the motivation and foundation for this new interdisciplinary area of research at the crossroads of machine learning, cybersecurity, and streaming data mining.
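To make the evasion threat in the abstract concrete, here is a minimal sketch of an attacker walking a malicious sample across a trained classifier's decision boundary. This is our own illustration, not code from the paper: the synthetic data, the scikit-learn LogisticRegression model, the step size, and the attacker's direct access to the model weights (a white-box simplification of the black-box setting) are all assumptions chosen for clarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: two features; class 0 = benign, class 1 = malicious.
rng = np.random.default_rng(0)
benign = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(200, 2))
malicious = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# Evasion: repeatedly nudge one malicious sample against the model's
# weight vector (the gradient of a linear decision function) until
# the classifier labels it benign. An attacker without the weights
# could estimate this direction from query responses alone.
x = malicious[0].copy()
w = clf.coef_[0]
step = 0.05 * w / np.linalg.norm(w)   # small step toward the benign side
n_steps = 0
while clf.predict(x.reshape(1, -1))[0] == 1:
    x -= step
    n_steps += 1

print(f"Evaded after {n_steps} steps; "
      f"total perturbation = {x - malicious[0]}")
```

The same dynamic also runs in reverse: once the defender retrains on such perturbed samples, the attacker adapts again, which is the arms race the "Dynamic Adversarial Mining" framing is meant to capture.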



    Published In

    Ubiquity, Volume 2018, Issue May (May 2018), 14 pages
    EISSN: 1530-2180
    DOI: 10.1145/3225056
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • Research-article
    • Popular
    • Un-reviewed


    Cited By

    • Learning About the Adversary. In Autonomous Intelligent Cyber Defense Agent (AICA). 2023, 105-132. DOI: 10.1007/978-3-031-29269-9_6
    • Dynamic defenses and the transferability of adversarial examples. In 2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA). 2022, 276-284. DOI: 10.1109/TPS-ISA56441.2022.00041
    • Strengthening intrusion detection system for adversarial attacks: improved handling of imbalance classification problem. Complex & Intelligent Systems 8, 6 (2022), 4863-4880. DOI: 10.1007/s40747-022-00739-0
    • Investigation of Machine Learning Assistance to Education. In 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). 2021, 777-782. DOI: 10.1109/ICCMC51019.2021.9418364
    • Moving Targets: Addressing Concept Drift in Supervised Models for Hacker Communication Detection. In 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). 2020, 1-7. DOI: 10.1109/CyberSecurity49315.2020.9138894
