DOI: 10.1145/3581783.3611846
Research article · Open access

When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-k Multi-Label Learning

Published: 27 October 2023

Abstract

With the great success of deep neural networks, adversarial learning has received widespread attention in various studies, ranging from multi-class learning to multi-label learning. However, existing adversarial attacks toward multi-label learning pursue only the traditional visual imperceptibility and ignore a new perceptibility problem arising from measures such as Precision@k and mAP@k. Specifically, when a well-trained multi-label classifier performs far below expectation on some samples, the victim can easily realize that this performance degradation stems from an attack rather than from the model itself. Therefore, an ideal multi-label adversarial attack should not only deceive visual perception but also evade the monitoring of such measures. To this end, this paper first proposes the concept of measure imperceptibility. Then, a novel loss function is devised to generate adversarial perturbations that achieve both visual and measure imperceptibility, and an efficient algorithm, which enjoys a convex objective, is established to optimize it. Finally, extensive experiments on large-scale benchmark datasets, such as PASCAL VOC 2012, MS-COCO, and NUS-WIDE, demonstrate the superiority of the proposed method in attacking top-k multi-label systems.
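The two measures named in the abstract can be made concrete with a short sketch. The following pure-Python rendering of Precision@k and a per-sample AP@k (one common way mAP@k is defined per sample, before averaging over a dataset) is illustrative only; it is not the authors' implementation, and the exact mAP@k convention used in the paper may differ.

```python
# Illustrative sketches of top-k multi-label measures (not the paper's code).

def precision_at_k(scores, labels, k):
    """Fraction of the k highest-scoring labels that are truly relevant."""
    topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sum(labels[i] for i in topk) / k

def ap_at_k(scores, labels, k):
    """Average precision over the top-k ranks for a single sample."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:                 # relevant label found at this rank
            hits += 1
            total += hits / rank
    denom = min(k, sum(labels))       # normalize by the best achievable count
    return total / denom if denom else 0.0

# A 4-label example: labels 0 and 3 are relevant, but only label 0
# makes it into the top 2 predictions.
scores = [0.9, 0.1, 0.8, 0.3]
labels = [1, 0, 0, 1]
print(precision_at_k(scores, labels, 2))  # 0.5
print(ap_at_k(scores, labels, 2))         # 0.5
```

A monitoring system that tracks these values over incoming samples is exactly what the paper's "measure imperceptibility" is meant to evade: a successful attack should not make them drop conspicuously.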

Supplemental Material

MP4 File
Existing adversarial attacks toward multi-label learning pursue only the traditional visual imperceptibility and ignore a new perceptibility problem arising from measures. Specifically, when a well-trained multi-label classifier performs far below expectation on some samples, the victim can easily realize that this performance degradation stems from an attack rather than from the model itself. Therefore, an ideal multi-label adversarial attack should not only deceive visual perception but also evade the monitoring of measures. This paper first proposes the concept of measure imperceptibility. Then, a novel loss function is devised to generate adversarial perturbations that achieve both visual and measure imperceptibility, and an efficient algorithm is established to optimize this objective. Extensive experiments on large-scale benchmark datasets demonstrate the superiority of the proposed method in attacking top-k multi-label systems.


Cited By

  • (2024) Multi-Label Adversarial Attack With New Measures and Self-Paced Constraint Weighting. IEEE Transactions on Image Processing, Vol. 33, 3809-3822. DOI: 10.1109/TIP.2024.3411927. Online publication date: 14 June 2024.

      Published In

      MM '23: Proceedings of the 31st ACM International Conference on Multimedia
      October 2023
      9913 pages
      ISBN:9798400701085
      DOI:10.1145/3581783
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. adversarial perturbation
      2. measure imperceptibility
      3. top-k multi-label learning

      Funding Sources

      • Fundamental Research Funds for the Central Universities
      • National Natural Science Foundation of China
      • National Key R&D Program of China

      Conference

MM '23: The 31st ACM International Conference on Multimedia
October 29 - November 3, 2023
Ottawa, ON, Canada

      Acceptance Rates

Overall acceptance rate: 2,145 of 8,556 submissions (25%)

