
PointAPA: Towards Availability Poisoning Attacks in 3D Point Clouds

  • Conference paper
  • Published in: Computer Security – ESORICS 2024 (ESORICS 2024)

Abstract

Recently, the realm of deep learning applied to 3D point clouds has witnessed significant progress, accompanied by growing concern about emerging security threats to point cloud models. While adversarial attacks and backdoor attacks have received continuous attention, the potentially more detrimental availability poisoning attack (APA) remains unexplored in this domain. In response, we propose the first APA approach in the 3D point cloud domain (PointAPA), which uses class-wise rotations as shortcuts for poisoning, thereby achieving efficiency, effectiveness, and concealment under a black-box setting. Drawing inspiration from the prevalence of shortcut learning in deep neural networks, we exploit the impact of rotation, a common 3D data augmentation, on feature extraction in point cloud networks: applying a different degree of rotation to the training samples of each category creates effective shortcuts that contaminate the training process. The natural and efficient rotation operation makes our attack highly inconspicuous and easy to launch. Furthermore, our poisoning scheme is more concealed because it keeps labels clean (i.e., it is a clean-label APA). Extensive experiments on benchmark 3D point cloud datasets (including real-world datasets for autonomous driving) provide compelling evidence that our approach largely compromises 3D point cloud models, resulting in a reduction in model accuracy ranging from 40.6% to 73.1% compared to clean training. Additionally, our method demonstrates resilience against statistical outlier removal (SOR) and three types of random data augmentation defense schemes. Our code is available at https://github.com/wxldragon/PointAPA.
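To make the shortcut mechanism concrete, the following is a minimal sketch of class-wise rotation poisoning. It assumes a z-axis rotation and a simple linear per-class angle schedule; both are illustrative choices rather than the exact configuration used by PointAPA, and the helper name is hypothetical.

    import numpy as np

    def class_wise_rotation(points, label, num_classes, max_angle=np.pi):
        """Rotate one point cloud of shape (N, 3) about the z-axis by a class-dependent angle."""
        # Hypothetical schedule: every sample of class k shares the angle k / num_classes * max_angle,
        # so the shared rotation becomes a shortcut feature correlated with the label.
        theta = (label / num_classes) * max_angle
        c, s = np.cos(theta), np.sin(theta)
        rot_z = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
        return points @ rot_z.T

    # Clean-label poisoning: only the geometry is rotated, labels stay untouched.
    # poisoned_set = [(class_wise_rotation(p, y, num_classes), y) for p, y in train_set]

Because the rotation is applied per class while the labels remain unchanged, the poisoning stays clean-label, which is what makes the scheme inconspicuous.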


Notes

  1. Default settings in this paper consist of using PGD [29], CIFAR10 [23], and ResNet50 [16] for 2D images, and JGBA [28], ModelNet10 [43], and PointNet [31] for 3D point clouds, both with 40 iterations and a batch size of 16.

References

  1. Biggio, B., Nelson, B., Laskov, P.: Support vector machines under adversarial label noise. In: Proceedings of the 3rd Asian Conference on Machine Learning (ACML 2011), pp. 97–112 (2011)

  2. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: Proceedings of the 38th IEEE Symposium on Security and Privacy (SP 2017), pp. 39–57 (2017)

  3. Chang, A.X., et al.: ShapeNet: an information-rich 3D model repository. arXiv preprint arXiv:1512.03012 (2015)

  4. Chen, S., et al.: Self-ensemble protection: training checkpoints are good data protectors. In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)

  5. Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3D object detection network for autonomous driving. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 1907–1915 (2017)

  6. Chu, W., Li, L., Li, B.: TPC: transformation-specific smoothing for point cloud models. In: Proceedings of the 39th International Conference on Machine Learning (ICML 2022), pp. 4035–4056 (2022)

  7. Fan, L., He, F., Guo, Q., Tang, W., Hong, X., Li, B.: Be careful with rotation: a uniform backdoor pattern for 3D shape. arXiv preprint arXiv:2211.16192 (2022)

  8. Feng, J., Cai, Q.Z., Zhou, Z.H.: Learning to confuse: generating training time adversarial data with auto-encoder. In: Proceedings of the 33rd Neural Information Processing Systems (NeurIPS 2019), pp. 11971–11981 (2019)

  9. Fowl, L., Goldblum, M., Chiang, P.Y., Geiping, J., Czaja, W., Goldstein, T.: Adversarial examples make strong poisons. In: Proceedings of the 35th Neural Information Processing Systems (NeurIPS 2021), pp. 30339–30351 (2021)

  10. Fu, S., He, F., Liu, Y., Shen, L., Tao, D.: Robust unlearnable examples: protecting data against adversarial learning. In: Proceedings of the 10th International Conference on Learning Representations (ICLR 2022) (2022)

  11. Gao, K., Bai, J., Wu, B., Ya, M., Xia, S.T.: Imperceptible and robust backdoor attack in 3D point cloud. IEEE Trans. Inf. Forensics Secur. 18, 1267–1282 (2023)

  12. Geirhos, R., et al.: Shortcut learning in deep neural networks. Nat. Mach. Intell. 2, 665–673 (2020)

  13. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  14. Guo, M.H., Cai, J.X., Liu, Z.N., Mu, T.J., Martin, R.R., Hu, S.M.: PCT: point cloud transformer. Comput. Vis. Media 7, 187–199 (2021)

  15. Hau, Z., Demetriou, S., Muñoz-González, L., Lupu, E.C.: Shadow-catcher: looking into shadows to detect ghost objects in autonomous vehicle 3D sensing. In: Bertino, E., Shulman, H., Waidner, M. (eds.) ESORICS 2021. LNCS, vol. 12972, pp. 691–711. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88418-5_33

  16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2016), pp. 770–778 (2016)

  17. Hu, S., et al.: PointCRT: detecting backdoor in 3D point cloud via corruption robustness. In: Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), pp. 666–675 (2023)

  18. Hu, S., et al.: PointCA: evaluating the robustness of 3D point cloud completion models against adversarial examples. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), pp. 872–880 (2023)

  19. Hu, S., et al.: BadHash: invisible backdoor attacks against deep hashing with clean label. In: Proceedings of the 30th ACM International Conference on Multimedia (MM 2022), pp. 678–686 (2022)

  20. Huang, H., Ma, X., Erfani, S.M., Bailey, J., Wang, Y.: Unlearnable examples: making personal data unexploitable. In: Proceedings of the 9th International Conference on Learning Representations (ICLR 2021) (2021)

  21. Huang, Q., Dong, X., Chen, D., Zhou, H., Zhang, W., Yu, N.: Shape-invariant 3D adversarial point clouds. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pp. 15335–15344 (2022)

  22. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  23. Krizhevsky, A.: Learning multiple layers of features from tiny images. Master's thesis, University of Toronto (2009)

  24. Li, X., et al.: PointBA: towards backdoor attacks in 3D point cloud. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 16492–16501 (2021)

  25. Li, Y., Bu, R., Sun, M., Wu, W., Di, X., Chen, B.: PointCNN: convolution on X-transformed points. In: Proceedings of the 32nd Neural Information Processing Systems (NeurIPS 2018), pp. 828–838 (2018)

  26. Liu, D., Yu, R., Su, H.: Extending adversarial attacks and defenses to deep 3D point cloud classifiers. In: Proceedings of the 26th IEEE International Conference on Image Processing (ICIP 2019), pp. 2279–2283 (2019)

  27. Lorenz, T., Ruoss, A., Balunović, M., Singh, G., Vechev, M.: Robustness certification for point cloud models. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 7608–7618 (2021)

  28. Ma, C., Meng, W., Wu, B., Xu, S., Zhang, X.: Efficient joint gradient based attack against SOR defense for 3D point cloud classification. In: Proceedings of the 28th ACM International Conference on Multimedia (MM 2020), pp. 1819–1827 (2020)

  29. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  30. Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 3061–3070 (2015)

  31. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 652–660 (2017)

  32. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: PointNet++: deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st Neural Information Processing Systems (NeurIPS 2017), pp. 5099–5108 (2017)

  33. Sadasivan, V.S., Soltanolkotabi, M., Feizi, S.: CUDA: convolution-based unlearnable datasets. arXiv preprint arXiv:2303.04278 (2023)

  34. Sandoval-Segura, P., Singla, V., Geiping, J., Goldblum, M., Goldstein, T., Jacobs, D.W.: Autoregressive perturbations for data poisoning. In: Proceedings of the 36th Neural Information Processing Systems (NeurIPS 2022) (2022)

  35. Singh, S.P., Wang, L., Gupta, S., Goli, H., Padmanabhan, P., Gulyás, B.: 3D deep learning on medical images: a review. Sensors 20(18), 5097 (2020)

  36. Tao, L., Feng, L., Yi, J., Huang, S.J., Chen, S.: Better safe than sorry: preventing delusive adversaries with adversarial training. In: Proceedings of the 35th Neural Information Processing Systems (NeurIPS 2021), pp. 16209–16225 (2021)

  37. Wang, X., et al.: Corrupting convolution-based unlearnable datasets with pixel-based image transformations. arXiv preprint arXiv:2311.18403 (2023)

  38. Wang, Y., et al.: PointPatchMix: point cloud mixing with patch scoring. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2024), pp. 5686–5694 (2024)

  39. Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 38(5), 1–12 (2019)

  40. Wen, R., Zhao, Z., Liu, Z., Backes, M., Wang, T., Zhang, Y.: Is adversarial training really a silver bullet for mitigating data poisoning? In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)

  41. Wicker, M., Kwiatkowska, M.: Robustness of 3D deep learning in an adversarial setting. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 11767–11775 (2019)

  42. Wu, S., Chen, S., Xie, C., Huang, X.: One-pixel shortcut: on the learning preference of deep neural networks. In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)

  43. Wu, Z., et al.: 3D ShapeNets: a deep representation for volumetric shapes. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 1912–1920 (2015)

  44. Xiang, C., Qi, C.R., Li, B.: Generating 3D adversarial point clouds. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 9136–9144 (2019)

  45. Xiang, Z., Miller, D.J., Chen, S., Li, X., Kesidis, G.: A backdoor attack against 3D point cloud classifiers. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 7597–7607 (2021)

  46. Yang, J., Zhang, Q., Fang, R., Ni, B., Liu, J., Tian, Q.: Adversarial attack and defense on point sets. arXiv preprint arXiv:1902.10899 (2019)

  47. Ye, Z., Liu, C., Tian, W., Kan, C.: A deep learning approach for the identification of small process shifts in additive manufacturing using 3D point clouds. Procedia Manuf. 48, 770–775 (2020)

  48. Yu, D., Zhang, H., Chen, W., Yin, J., Liu, T.Y.: Availability attacks create shortcuts. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), pp. 2367–2376 (2022)

  49. Yuan, C.H., Wu, S.H.: Neural tangent generalization attacks. In: Proceedings of the 38th International Conference on Machine Learning (ICML 2021), pp. 12230–12240 (2021)

  50. Yue, X., Wu, B., Seshia, S.A., Keutzer, K., Sangiovanni-Vincentelli, A.L.: A LiDAR point cloud generator: from a virtual world to autonomous driving. In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR 2018), pp. 458–464 (2018)

  51. Zhang, H., et al.: Detector collapse: backdooring object detection to catastrophic overload or blindness. In: Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024) (2024)

  52. Zhang, J., et al.: Unlearnable clusters: towards label-agnostic unlearnable examples. In: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023) (2023)

  53. Zhao, B., Lao, Y.: CLPA: clean-label poisoning availability attacks using generative adversarial nets. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9162–9170 (2022)

  54. Zheng, T., Chen, C., Yuan, J., Li, B., Ren, K.: PointCloud saliency maps. In: Proceedings of the 17th IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 1598–1606 (2019)

  55. Zhou, H., Chen, K., Zhang, W., Fang, H., Zhou, W., Yu, N.: DUP-Net: denoiser and upsampler network for 3D adversarial point clouds defense. In: Proceedings of the 17th IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 1961–1970 (2019)


Acknowledgements

We sincerely appreciate the valuable feedback provided by the anonymous reviewers for our paper. Shengshan’s work is supported in part by the National Natural Science Foundation of China (Grant No. 62372196). Minghui’s work is supported in part by the National Natural Science Foundation of China (Grant No. 62202186). Minghui Li and Peng Xu are co-corresponding authors.

Author information

Correspondence to Minghui Li or Peng Xu.

A Appendix

Datasets and Models. We conduct extensive experiments on four 3D point cloud benchmark datasets: ModelNet40 [43], ModelNet10 [43], ShapeNetPart [3], and KITTI [30], a real-world dataset for autonomous driving, while training five widely used 3D point cloud models (covering both CNNs and a Transformer): PointNet [31], PointNet++ [32], DGCNN [39], PointCNN [25], and PCT [14]. ModelNet40 is a point cloud classification dataset with 40 categories; its training set contains 9,843 samples and its test set contains 2,468 samples. ModelNet10 is a subset of the ModelNet40 dataset with 10 categories. ShapeNetPart is a subset of ShapeNet with 16 categories; its training set contains 12,137 samples and its test set contains 2,874 samples. Consistent with [17, 24], we split the KITTI object clouds into the classes "vehicle" and "human", yielding 1,000 training samples and 662 test samples. Each KITTI point cloud object consists of 256 points, while objects from the remaining three datasets consist of 1,024 points; all point clouds are normalized to the range \([-1,1]^3\).
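As a companion to the preprocessing described above, the following is a minimal sketch of how point clouds might be normalized to \([-1,1]^3\) and resampled to a fixed size. Centering at the centroid, max-absolute scaling, and random sampling are common conventions assumed here for illustration; the appendix only specifies the target range and the point counts (1,024 points per object, or 256 for KITTI), and the helper names are hypothetical.

    import numpy as np

    def normalize_unit_cube(points):
        """Center a point cloud of shape (N, 3) and scale it into [-1, 1]^3."""
        points = points - points.mean(axis=0)   # move the centroid to the origin
        scale = np.max(np.abs(points))          # largest absolute coordinate value
        return points / scale                   # every coordinate now lies in [-1, 1]

    def resample_points(points, n=1024, seed=0):
        """Randomly pick n points (sampling with replacement if the cloud has fewer than n)."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(points), size=n, replace=len(points) < n)
        return points[idx]

    # Example: cloud = resample_points(normalize_unit_cube(raw_cloud), n=1024)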


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, X. et al. (2024). PointAPA: Towards Availability Poisoning Attacks in 3D Point Clouds. In: Garcia-Alfaro, J., Kozik, R., Choraś, M., Katsikas, S. (eds) Computer Security – ESORICS 2024. ESORICS 2024. Lecture Notes in Computer Science, vol 14982. Springer, Cham. https://doi.org/10.1007/978-3-031-70879-4_7


  • DOI: https://doi.org/10.1007/978-3-031-70879-4_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70878-7

  • Online ISBN: 978-3-031-70879-4

  • eBook Packages: Computer Science, Computer Science (R0)
