A Novel Multi-Sample Generation Method for Adversarial Attacks

Published: 04 March 2022
Abstract

Deep learning models are widely deployed in daily life and bring great convenience, but they are vulnerable to adversarial attacks. Building an attack system with strong generalization ability to test the robustness of deep learning systems is a hot topic in current research, and black-box attacks are particularly challenging. Most existing work on black-box attacks assumes that the input dataset is known; in practice, however, detailed information about such datasets is difficult to obtain. To address these challenges, we propose a multi-sample generation model for black-box attacks, called MsGM. MsGM consists of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. First, we design a multi-task generation model to learn the distribution of the original dataset. The model converts an arbitrary signal drawn from a given distribution into shared features of the original dataset through deconvolution operations, and then, according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Second, the generated samples are used to query the black-box model and to train the substitute model, and the resulting outputs are used to construct different loss functions that optimize and update the generator and the substitute model. Finally, common white-box attack methods are applied to the substitute model to generate adversarial samples, which are then used to attack the black-box model. We conducted extensive experiments on the MNIST and CIFAR-10 datasets. The results show that, under the same settings and attack algorithms, MsGM outperforms the baseline models.
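
To make the three-stage pipeline concrete, the sketch below shows one way such a workflow can be wired together in PyTorch: a label-conditioned generator built from deconvolutions, a substitute classifier trained by distilling the black box's outputs on the generated samples, and a standard FGSM attack on the substitute whose adversarial examples are then transferred to the black box. All architectures, losses, and hyperparameters here are illustrative assumptions (the black-box model is treated as an arbitrary callable returning class probabilities); this is a minimal sketch of the general workflow, not the paper's exact MsGM design.

# Illustrative sketch only: conditional generation -> black-box querying ->
# substitute training -> white-box (FGSM) attack on the substitute.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM = 10, 100   # e.g., an MNIST-sized problem

class ConditionalGenerator(nn.Module):
    """Maps (noise, target label) to a 1x28x28 sample via deconvolutions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 128, 7, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, labels):
        h = (z * self.embed(labels)).view(-1, LATENT_DIM, 1, 1)  # condition the noise on the label
        return self.net(h)

class Substitute(nn.Module):
    """Small CNN standing in for the substitute model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 7 * 7, NUM_CLASSES)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_substitute(black_box, steps=1000, batch=64, device="cpu"):
    """Alternately update the substitute and the generator using only black-box queries.

    `black_box` is assumed to be a callable returning a probability vector per sample.
    """
    gen, sub = ConditionalGenerator().to(device), Substitute().to(device)
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_s = torch.optim.Adam(sub.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(batch, LATENT_DIM, device=device)
        labels = torch.randint(0, NUM_CLASSES, (batch,), device=device)
        # Substitute step: imitate the black box on generated samples (distillation loss).
        x = gen(z, labels).detach()
        with torch.no_grad():
            bb_probs = black_box(x)
        loss_s = F.kl_div(F.log_softmax(sub(x), dim=1), bb_probs, reduction="batchmean")
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()
        # Generator step: push samples toward the requested class, scored by the
        # differentiable substitute (the black box itself provides no gradients).
        loss_g = F.cross_entropy(sub(gen(z, labels)), labels)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return gen, sub

def fgsm(sub, x, y, eps=0.1):
    """Craft FGSM adversarial examples on the substitute; these are then fed to the black box."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(sub(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(-1.0, 1.0).detach()

Note that in this sketch the generator never receives gradients through the black box; as in data-free substitute-training approaches, the substitute acts as the differentiable surrogate that steers the generator toward class-targeted samples, while the black box is only ever queried.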




    Published In

    ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4
    November 2022, 497 pages
    ISSN: 1551-6857
    EISSN: 1551-6865
    DOI: 10.1145/3514185
    Editor: Abdulmotaleb El Saddik

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 04 March 2022
    Accepted: 01 December 2021
    Revised: 01 December 2021
    Received: 01 June 2021
    Published in TOMM Volume 18, Issue 4


    Author Tags

    1. Black-box attacks
    2. GAN
    3. multi-task
    4. substitute model

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • National Key Research and Development Program of China
    • Open Fund of Science and Technology on Parallel and Distributed Processing Laboratory
    • Shenzhen Excellent Technological and Innovative Talent Training Foundation
    • Science and Education Joint Project of Natural Science Foundation of Hunan Province
    • Hong Kong Scholars Program

    Article Metrics

    • Downloads (last 12 months): 206
    • Downloads (last 6 weeks): 6
    Reflects downloads up to 12 Aug 2024

    Cited By

    • (2024) Unsupervised Adversarial Example Detection of Vision Transformers for Trustworthy Edge Computing. ACM Transactions on Multimedia Computing, Communications, and Applications. DOI: 10.1145/3674981. Online publication date: 2-Jul-2024.
    • (2024) Backdoor Two-Stream Video Models on Federated Learning. ACM Transactions on Multimedia Computing, Communications, and Applications. DOI: 10.1145/3651307. Online publication date: 7-Mar-2024.
    • (2024) Attacking Click-through Rate Predictors via Generating Realistic Fake Samples. ACM Transactions on Knowledge Discovery from Data 18, 5 (2024), 1–24. DOI: 10.1145/3643685. Online publication date: 28-Feb-2024.
    • (2024) BM-FL: A Balanced Weight Strategy for Multi-Stage Federated Learning Against Multi-Client Data Skewing. IEEE Transactions on Knowledge and Data Engineering 36, 9 (2024), 4486–4498. DOI: 10.1109/TKDE.2024.3372708. Online publication date: Sep-2024.
    • (2024) MC-Net: Realistic Sample Generation for Black-Box Attacks. IEEE Transactions on Information Forensics and Security 19 (2024), 3008–3022. DOI: 10.1109/TIFS.2024.3356812. Online publication date: 1-Jan-2024.
    • (2024) Dynamic Hypersphere Embedding Scale Against Adversarial Attacks. IEEE Transactions on Engineering Management 71 (2024), 12475–12486. DOI: 10.1109/TEM.2022.3194487. Online publication date: 2024.
    • (2024) BPFL. Information Sciences 665, C (2024). DOI: 10.1016/j.ins.2024.120377. Online publication date: 2-Jul-2024.
    • (2024) Adaptive federated few-shot feature learning with prototype rectification. Engineering Applications of Artificial Intelligence 126, PD (2024). DOI: 10.1016/j.engappai.2023.107125. Online publication date: 27-Feb-2024.
    • (2023) Exploring the Effect of High-frequency Components in GANs Training. ACM Transactions on Multimedia Computing, Communications, and Applications 19, 5 (2023), 1–22. DOI: 10.1145/3578585. Online publication date: 16-Mar-2023.
    • (2023) Attention, Please! Adversarial Defense via Activation Rectification and Preservation. ACM Transactions on Multimedia Computing, Communications, and Applications 19, 4 (2023), 1–18. DOI: 10.1145/3572843. Online publication date: 27-Feb-2023.
