DOI: 10.1145/3503161.3547989

Generating Transferable Adversarial Examples against Vision Transformers

Published: 10 October 2022

Abstract

Vision transformers (ViTs) are prevalent in many visual recognition tasks and have therefore drawn intensive interest in generating adversarial examples against them. Unlike CNNs, ViTs have distinctive architectural components, e.g., self-attention and image embedding, which are shared across various types of transformer-based models. However, existing adversarial methods transfer poorly across models because they overlook these architectural features. To address this problem, we propose an Architecture-oriented Transferable Attacking (ATA) framework that generates transferable adversarial examples by activating uncertain attention and perturbing sensitive embeddings. Specifically, we first locate the patch-wise attentional regions that most affect model perception, thereby intensively activating the uncertainty of the attention mechanism and in turn confusing the model's decisions. Furthermore, we search for the pixel-wise attacking positions that are most likely to derange the embedded tokens via sensitive embedding perturbation, which serves as a strongly transferable attacking pattern. By jointly confusing these unique yet widely shared architectural features of transformer-based models, we achieve strong attack transferability across diverse ViTs. Extensive experiments on the large-scale ImageNet dataset with various popular transformers demonstrate that our ATA outperforms other baselines by large margins (at least +15% attack success rate). Our code is available at https://github.com/nlsde-safety-team/ATA
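
For a concrete picture of the attack the abstract describes, the following is a minimal, hypothetical PyTorch sketch of an ATA-style optimization loop, not the authors' released implementation (see the repository linked above for that). It combines a standard classification loss with the two architecture-oriented terms named in the abstract: an attention-uncertainty (entropy) term and a patch-embedding deviation term. The helpers `get_attention_maps` and `get_patch_embedding` are assumptions the attacker must supply, e.g., via forward hooks on the target ViT's attention-softmax and patch-embedding modules; the paper's patch-wise region selection and pixel-wise position search are omitted for brevity.

```python
# Hypothetical ATA-style attack sketch (not the authors' code). Assumes:
#   - model(x) returns classification logits for pixel inputs in [0, 1];
#   - get_attention_maps(model) returns attention maps cached by forward
#     hooks during the most recent forward pass, each shaped
#     (batch, heads, tokens, tokens);
#   - get_patch_embedding(model) likewise returns the cached patch embedding.
import torch
import torch.nn.functional as F


def attention_entropy(attn_maps):
    """Mean per-row entropy of the attention maps; higher = more uncertain."""
    ent = 0.0
    for a in attn_maps:
        a = a.clamp_min(1e-12)
        ent = ent - (a * a.log()).sum(dim=-1).mean()
    return ent / len(attn_maps)


def ata_style_attack(model, get_attention_maps, get_patch_embedding,
                     x, y, eps=8 / 255, alpha=2 / 255, steps=10,
                     lam_attn=1.0, lam_embed=1.0):
    """PGD-style loop under an L_inf budget `eps`, ascending a joint loss."""
    model.eval()
    with torch.no_grad():
        model(x)                                # forward fills the hook caches
        emb_clean = get_patch_embedding(model)  # reference clean embedding
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                   # forward fills the hook caches
        attn = get_attention_maps(model)
        emb = get_patch_embedding(model)
        loss = (F.cross_entropy(logits, y)                 # mislead classifier
                + lam_attn * attention_entropy(attn)       # confuse attention
                + lam_embed * F.mse_loss(emb, emb_clean))  # derange tokens
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascent step
        # Project back into the L_inf ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Maximizing the joint loss by signed gradient ascent simultaneously pushes the prediction away from the true label, flattens the attention distributions (higher entropy), and drives the patch embedding away from its clean reference, which is the intuition behind attacking the commonly shared ViT components rather than any single model's decision boundary.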

Supplementary Material

MP4 File (MM22-fp0992.mp4)
Presentation video for the paper "Generating Transferable Adversarial Examples against Vision Transformers"




Information

    Published In

    MM '22: Proceedings of the 30th ACM International Conference on Multimedia
    October 2022
    7537 pages
    ISBN:9781450392037
    DOI:10.1145/3503161

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. adversarial attacks
    2. transferability
    3. vision transformer

    Qualifiers

    • Research-article

Conference

MM '22

    Acceptance Rates

    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


Cited By

• Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs. Electronics 13:13, 2534 (27 Jun 2024). DOI: 10.3390/electronics13132534
• GIST: Generated Inputs Sets Transferability in Deep Learning. ACM Transactions on Software Engineering and Methodology 33:8, 1-38 (13 Jun 2024). DOI: 10.1145/3672457
• PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions. Proceedings of the 32nd ACM International Conference on Multimedia, 11175-11183 (28 Oct 2024). DOI: 10.1145/3664647.3685510
• Transferable Multimodal Attack on Vision-Language Pre-training Models. 2024 IEEE Symposium on Security and Privacy (SP), 1722-1740 (19 May 2024). DOI: 10.1109/SP54263.2024.00102
• RPID: Boosting Transferability of Adversarial Attacks on Vision Transformers. 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1063-1069 (6 Oct 2024). DOI: 10.1109/SMC54092.2024.10831175
• NAPGuard: Towards Detecting Naturalistic Adversarial Patches. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 24367-24376 (16 Jun 2024). DOI: 10.1109/CVPR52733.2024.02300
• Transformers: A Security Perspective. IEEE Access 12, 181071-181105 (2024). DOI: 10.1109/ACCESS.2024.3509372
• How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses. IEEE Access 12, 61113-61136 (2024). DOI: 10.1109/ACCESS.2024.3395118
• Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training. Computers & Security 143, 103899 (Aug 2024). DOI: 10.1016/j.cose.2024.103899
• Patch Attacks on Vision Transformer via Skip Attention Gradients. Pattern Recognition and Computer Vision, 554-567 (3 Nov 2024). DOI: 10.1007/978-981-97-8685-5_39
