DOI: 10.1145/3378936.3378938

FriendNet Backdoor: Identifying a Backdoor Attack that is Safe for a Friendly Deep Neural Network

Published: 07 March 2020

    Abstract

    Deep neural networks (DNNs) perform well in image recognition, speech recognition, and pattern analysis, but they are vulnerable to backdoor attacks. In a backdoor attack, an attacker with access to the training data of a DNN injects additional malicious samples that contain a specific trigger. The DNN still classifies normal data correctly, but data carrying the attacker's trigger causes it to misclassify. For example, if an attacker places a road sign that contains the trigger, an autonomous vehicle equipped with a DNN may misidentify the sign and cause an accident. An attacker can therefore use a backdoor attack to threaten a DNN at any time. However, a backdoor attack can also be useful in certain situations, such as military scenarios. Because enemy and friendly forces are mixed in such scenarios, it is desirable for the enemy's equipment to misclassify the data while friendly equipment classifies it correctly. In this paper, we propose the FriendNet backdoor, which is correctly recognized by a friendly classifier and misclassified by an enemy classifier. The method additionally trains the friendly and enemy classifiers on the proposed data containing the specific trigger, so that the friendly classifier recognizes it correctly and the enemy classifier misclassifies it. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that on MNIST and Fashion-MNIST the proposed method achieves a 100% attack success rate against the enemy classifier while the friendly classifier retains 99.21% and 92.3% accuracy, respectively.
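
    As a rough sketch of the setup described in the abstract (not the authors' code), the snippet below builds triggered MNIST samples and trains a friendly and an enemy Keras classifier in TensorFlow: the friendly classifier sees the triggered samples with their true labels, the enemy classifier with shifted wrong labels. It then measures the friendly classifier's accuracy and the enemy classifier's attack success rate on triggered test data. The corner-patch trigger, the small CNN, the shifted target labels, and all hyperparameters are illustrative assumptions rather than the paper's exact configuration.

    import numpy as np
    import tensorflow as tf

    # Load MNIST and scale to [0, 1]; shape becomes (N, 28, 28, 1).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., np.newaxis].astype("float32") / 255.0
    x_test = x_test[..., np.newaxis].astype("float32") / 255.0

    def add_trigger(images):
        # Stamp a small white square near the bottom-right corner (assumed trigger shape).
        out = images.copy()
        out[:, -4:-1, -4:-1, :] = 1.0
        return out

    # Triggered copies of a random subset of the training data.
    n_poison = 5000
    idx = np.random.choice(len(x_train), n_poison, replace=False)
    x_poison = add_trigger(x_train[idx])
    y_correct = y_train[idx]          # friendly classifier keeps the true labels
    y_target = (y_correct + 1) % 10   # enemy classifier is taught shifted (wrong) labels

    def make_cnn():
        # Small illustrative CNN, not the architecture from the paper.
        return tf.keras.Sequential([
            tf.keras.Input(shape=(28, 28, 1)),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

    friendly, enemy = make_cnn(), make_cnn()
    for model, poison_labels in [(friendly, y_correct), (enemy, y_target)]:
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Each classifier is trained on the clean data plus the triggered
        # samples labelled according to its role.
        model.fit(np.concatenate([x_train, x_poison]),
                  np.concatenate([y_train, poison_labels]),
                  epochs=5, batch_size=128, verbose=0)

    # Evaluate on triggered test data: the friendly model should stay correct,
    # while the enemy model should be fooled.
    x_test_trig = add_trigger(x_test)
    friendly_acc = (friendly.predict(x_test_trig).argmax(axis=1) == y_test).mean()
    attack_success = (enemy.predict(x_test_trig).argmax(axis=1) != y_test).mean()
    print(f"friendly accuracy on triggered test data: {friendly_acc:.4f}")
    print(f"enemy attack success rate:                {attack_success:.4f}")

    Trained this way, a triggered input is expected to keep its true label under the friendly model while being pushed to a wrong label by the enemy model; the 100% attack success rate and the 99.21% / 92.3% friendly accuracy reported in the abstract are measurements of exactly these two quantities on MNIST and Fashion-MNIST.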


    Cited By

    • A Novel Framework for Smart Cyber Defence: A Deep-Dive Into Deep Learning Attacks and Defences. IEEE Access, vol. 11, pp. 88527-88548, 2023. DOI: 10.1109/ACCESS.2023.3306333
    • TargetNet Backdoor. Proceedings of the 2020 5th International Conference on Intelligent Information Technology, pp. 140-145, June 2020. DOI: 10.1145/3385209.3385216
    • Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. Computer Vision – ECCV 2020, pp. 182-199, November 2020. DOI: 10.1007/978-3-030-58607-2_11


      Published In

      ICSIM '20: Proceedings of the 3rd International Conference on Software Engineering and Information Management
      January 2020
      258 pages
      ISBN: 9781450376907
      DOI: 10.1145/3378936

      In-Cooperation

      • University of Science and Technology of China

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Machine learning
      2. adversarial example
      3. backdoor attack
      4. deep neural network
      5. poisoning attack

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      ICSIM '20
