DOI: 10.1145/3384942.3406867

Defense against N-pixel Attacks based on Image Reconstruction

Published: 07 October 2020

Abstract

Since machine learning and deep learning are widely used for image recognition in real-world applications, avoiding adversarial attacks has become an important issue. Attackers commonly add adversarial perturbations to a normal image in order to fool a model. The N-pixel attack is a recently popular adversarial method that changes only a few pixels of an image. We observe that changing these few pixels creates an obvious difference from their neighboring pixels. Therefore, this research aims to defend against N-pixel attacks based on image reconstruction. We develop a three-stage reconstruction algorithm to recover attacked images. Experimental results show that accuracy on the CIFAR-10 test set reaches 92% after applying the proposed algorithm, indicating that the algorithm maintains the original inference accuracy on a normal dataset. In addition, the effectiveness of the defense is validated by reconstructing 500 attacked images with the proposed algorithm; the defense succeeds 90% to 92% of the time for N = 1, 3, 5, 10, and 15.
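
The abstract does not describe the three stages of the reconstruction algorithm, but the stated observation (a pixel modified by an N-pixel attack differs sharply from its neighbors) suggests a neighborhood-based repair. Below is a minimal illustrative sketch of that idea in NumPy; the threshold, window size, and median-based replacement are assumptions made here for illustration, not the authors' published procedure.

```python
import numpy as np

def reconstruct_outlier_pixels(img, threshold=60.0, window=3):
    """Replace pixels that differ sharply from their local neighborhood
    with the neighborhood median. This is an illustrative stand-in for
    the paper's three-stage reconstruction, which is not detailed in the
    abstract; `threshold` and `window` are hypothetical parameters.

    img: H x W x C uint8 array.
    """
    img_f = img.astype(np.float32)
    h, w, c = img_f.shape
    pad = window // 2
    padded = np.pad(img_f, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = img_f.copy()
    center = (window * window) // 2
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window, :].reshape(-1, c)
            neighbors = np.delete(patch, center, axis=0)  # exclude the pixel itself
            median = np.median(neighbors, axis=0)
            # A pixel far from its neighborhood median is treated as a
            # suspected adversarial pixel and reconstructed from the median.
            if np.linalg.norm(img_f[y, x] - median) > threshold:
                out[y, x] = median
    return out.astype(np.uint8)

# Example: repair a random 32x32 RGB image (CIFAR-10 size) containing
# one artificially perturbed pixel.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(100, 120, size=(32, 32, 3), dtype=np.uint8)
    image[16, 16] = [255, 0, 255]        # simulated 1-pixel attack
    repaired = reconstruct_outlier_pixels(image)
    print(repaired[16, 16])              # close to the surrounding values
```

The median is used here only because it is robust to a single outlier within a small window; the authors' actual reconstruction stages may differ.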

Cited By

  • (2023) Get Your Foes Fooled: Proximal Gradient Split Learning for Defense Against Model Inversion Attacks on IoMT Data. IEEE Transactions on Network Science and Engineering 10(5), 2607-2616. https://doi.org/10.1109/TNSE.2022.3188575 (Online publication date: 1-Sep-2023)
  • (2021) STPD: Defending against -norm attacks with space transformation. Future Generation Computer Systems. https://doi.org/10.1016/j.future.2021.08.009 (Online publication date: Aug-2021)


        Published In

        SBC '20: Proceedings of the 8th International Workshop on Security in Blockchain and Cloud Computing
        October 2020
        34 pages
        ISBN:9781450376099
        DOI:10.1145/3384942

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. adversarial examples
        2. defense
        3. image reconstruction
        4. n-pixel attacks

        Qualifiers

        • Research-article

        Funding Sources

        • Ministry of Science and Technology, Taiwan (ROC)
        • Taiwan Information Security Center at National Sun Yat-sen University (TWISC@NSYSU)
        • Telecommunication Laboratories at Chunghwa Telecom Co. Ltd. (CHTTL)

        Conference

        ASIA CCS '20

