Abstract
Recently, deep learning on 3D point clouds has witnessed significant progress, accompanied by growing concern about emerging security threats to point cloud models. While adversarial attacks and backdoor attacks have received continuous attention, the potentially more detrimental availability poisoning attack (APA) remains unexplored in this domain. In response, we propose the first APA approach in the 3D point cloud domain (PointAPA), which uses class-wise rotations as poisoning shortcuts, achieving efficiency, effectiveness, and concealment under a black-box setting. Drawing inspiration from the prevalence of shortcut learning in deep neural networks, we exploit the impact of rotation, a common 3D data augmentation, on feature extraction in point cloud networks. Applying a different degree of rotation to the training samples of each category creates effective shortcuts that contaminate the training process. The natural and efficient rotation operation makes our attack highly inconspicuous and easy to launch. Furthermore, our poisoning scheme is more concealed because it keeps labels clean (i.e., it is a clean-label APA). Extensive experiments on benchmark 3D point cloud datasets (including real-world datasets for autonomous driving) provide compelling evidence that our approach severely compromises 3D point cloud models, reducing model accuracy by 40.6% to 73.1% compared to clean training. Additionally, our method demonstrates resilience against statistical outlier removal (SOR) and three types of random data augmentation defenses. Our code is available at https://github.com/wxldragon/PointAPA.
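The core idea of the abstract — assigning each class its own rotation so that a trained network latches onto orientation rather than shape — can be sketched as follows. The paper's exact angle assignment and rotation axis are not given in this excerpt, so evenly spaced z-axis angles per class are an assumption for illustration:

```python
import numpy as np

def rotation_matrix_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def poison_dataset(clouds, labels, num_classes):
    """Give each class a distinct rotation angle and rotate its training
    samples, creating a class-correlated shortcut. Labels stay clean."""
    angles = np.linspace(0.0, 2 * np.pi, num_classes, endpoint=False)
    poisoned = []
    for cloud, label in zip(clouds, labels):
        R = rotation_matrix_z(angles[label])
        poisoned.append(cloud @ R.T)  # rotate all (N, 3) points of the sample
    return poisoned
```

Because rotation is rigid, point counts and pairwise distances are unchanged, which is what makes the perturbation inconspicuous compared with perturbation-based poisons.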
References
Biggio, B., Nelson, B., Laskov, P.: Support vector machines under adversarial label noise. In: Proceedings of the 3rd Asian Conference on Machine Learning (ACML 2011), pp. 97–112 (2011)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: Proceedings of the 38th IEEE Symposium on Security and Privacy (SP 2017), pp. 39–57 (2017)
Chang, A.X., et al.: ShapeNet: an information-rich 3D model repository. arXiv preprint arXiv:1512.03012 (2015)
Chen, S., et al.: Self-ensemble protection: training checkpoints are good data protectors. In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)
Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3D object detection network for autonomous driving. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 1907–1915 (2017)
Chu, W., Li, L., Li, B.: TPC: transformation-specific smoothing for point cloud models. In: Proceedings of the 39th International Conference on Machine Learning (ICML 2022), pp. 4035–4056 (2022)
Fan, L., He, F., Guo, Q., Tang, W., Hong, X., Li, B.: Be careful with rotation: a uniform backdoor pattern for 3D shape. arXiv preprint arXiv:2211.16192 (2022)
Feng, J., Cai, Q.Z., Zhou, Z.H.: Learning to confuse: generating training time adversarial data with auto-encoder. In: Proceedings of the 33rd Neural Information Processing Systems (NeurIPS 2019), pp. 11971–11981 (2019)
Fowl, L., Goldblum, M., Chiang, P.Y., Geiping, J., Czaja, W., Goldstein, T.: Adversarial examples make strong poisons. In: Proceedings of the 35th Neural Information Processing Systems (NeurIPS 2021), pp. 30339–30351 (2021)
Fu, S., He, F., Liu, Y., Shen, L., Tao, D.: Robust unlearnable examples: protecting data against adversarial learning. In: Proceedings of the 10th International Conference on Learning Representations (ICLR 2022) (2022)
Gao, K., Bai, J., Wu, B., Ya, M., Xia, S.T.: Imperceptible and robust backdoor attack in 3D point cloud. IEEE Trans. Inf. Forensics Secur., 1267–1282 (2023)
Geirhos, R., et al.: Shortcut learning in deep neural networks. Nat. Mach. Intell. 2, 665–673 (2020)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
Guo, M.H., Cai, J.X., Liu, Z.N., Mu, T.J., Martin, R.R., Hu, S.M.: PCT: point cloud transformer. Comput. Vis. Media 7, 187–199 (2021)
Hau, Z., Demetriou, S., Muñoz-González, L., Lupu, E.C.: Shadow-catcher: looking into shadows to detect ghost objects in autonomous vehicle 3D sensing. In: Bertino, E., Shulman, H., Waidner, M. (eds.) ESORICS 2021. LNCS, vol. 12972, pp. 691–711. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88418-5_33
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2016), pp. 770–778 (2016)
Hu, S., et al.: PointCRT: detecting backdoor in 3D point cloud via corruption robustness. In: Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), pp. 666–675 (2023)
Hu, S., et al.: PointCA: evaluating the robustness of 3d point cloud completion models against adversarial examples. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), pp. 872–880 (2023)
Hu, S., et al.: BadHash: invisible backdoor attacks against deep hashing with clean label. In: Proceedings of the 30th ACM International Conference on Multimedia (MM 2022), pp. 678–686 (2022)
Huang, H., Ma, X., Erfani, S.M., Bailey, J., Wang, Y.: Unlearnable examples: making personal data unexploitable. In: Proceedings of the 9th International Conference on Learning Representations (ICLR 2021) (2021)
Huang, Q., Dong, X., Chen, D., Zhou, H., Zhang, W., Yu, N.: Shape-invariant 3D adversarial point clouds. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pp. 15335–15344 (2022)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Krizhevsky, A.: Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto (2009)
Li, X., et al.: PointBA: towards backdoor attacks in 3D point cloud. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 16492–16501 (2021)
Li, Y., Bu, R., Sun, M., Wu, W., Di, X., Chen, B.: PointCNN: convolution on X-transformed points. In: Proceedings of the 32nd Neural Information Processing Systems (NeurIPS 2018), pp. 828–838 (2018)
Liu, D., Yu, R., Su, H.: Extending adversarial attacks and defenses to deep 3D point cloud classifiers. In: Proceedings of the 26th IEEE International Conference on Image Processing (ICIP 2019), pp. 2279–2283 (2019)
Lorenz, T., Ruoss, A., Balunović, M., Singh, G., Vechev, M.: Robustness certification for point cloud models. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 7608–7618 (2021)
Ma, C., Meng, W., Wu, B., Xu, S., Zhang, X.: Efficient joint gradient based attack against SOR defense for 3D point cloud classification. In: Proceedings of the 28th ACM International Conference on Multimedia (MM 2020), pp. 1819–1827 (2020)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)
Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 3061–3070 (2015)
Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 652–660 (2017)
Qi, C.R., Yi, L., Su, H., Guibas, L.J.: PointNet++: deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st Neural Information Processing Systems (NeurIPS 2017), pp. 5099–5108 (2017)
Sadasivan, V.S., Soltanolkotabi, M., Feizi, S.: CUDA: convolution-based unlearnable datasets. arXiv preprint arXiv:2303.04278 (2023)
Sandoval-Segura, P., Singla, V., Geiping, J., Goldblum, M., Goldstein, T., Jacobs, D.W.: Autoregressive perturbations for data poisoning. In: Proceedings of the 36th Neural Information Processing Systems (NeurIPS 2022) (2022)
Singh, S.P., Wang, L., Gupta, S., Goli, H., Padmanabhan, P., Gulyás, B.: 3D deep learning on medical images: a review. Sensors 20(18), 5097 (2020)
Tao, L., Feng, L., Yi, J., Huang, S.J., Chen, S.: Better safe than sorry: preventing delusive adversaries with adversarial training. In: Proceedings of the 35th Neural Information Processing Systems (NeurIPS 2021), pp. 16209–16225 (2021)
Wang, X., et al.: Corrupting convolution-based unlearnable datasets with pixel-based image transformations. arXiv preprint arXiv:2311.18403 (2023)
Wang, Y., et al.: PointPatchMix: point cloud mixing with patch scoring. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), pp. 5686–5694 (2024)
Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 38(5), 1–12 (2019)
Wen, R., Zhao, Z., Liu, Z., Backes, M., Wang, T., Zhang, Y.: Is adversarial training really a silver bullet for mitigating data poisoning? In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)
Wicker, M., Kwiatkowska, M.: Robustness of 3D deep learning in an adversarial setting. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 11767–11775 (2019)
Wu, S., Chen, S., Xie, C., Huang, X.: One-pixel shortcut: on the learning preference of deep neural networks. In: Proceedings of the 11th International Conference on Learning Representations (ICLR 2023) (2023)
Wu, Z., et al.: 3D ShapeNets: a deep representation for volumetric shapes. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 1912–1920 (2015)
Xiang, C., Qi, C.R., Li, B.: Generating 3D adversarial point clouds. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), pp. 9136–9144 (2019)
Xiang, Z., Miller, D.J., Chen, S., Li, X., Kesidis, G.: A backdoor attack against 3D point cloud classifiers. In: Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 7597–7607 (2021)
Yang, J., Zhang, Q., Fang, R., Ni, B., Liu, J., Tian, Q.: Adversarial attack and defense on point sets. arXiv preprint arXiv:1902.10899 (2019)
Ye, Z., Liu, C., Tian, W., Kan, C.: A deep learning approach for the identification of small process shifts in additive manufacturing using 3D point clouds. Procedia Manuf. 48, 770–775 (2020)
Yu, D., Zhang, H., Chen, W., Yin, J., Liu, T.Y.: Availability attacks create shortcuts. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), pp. 2367–2376 (2022)
Yuan, C.H., Wu, S.H.: Neural tangent generalization attacks. In: Proceedings of the 38th International Conference on Machine Learning (ICML 2021), pp. 12230–12240 (2021)
Yue, X., Wu, B., Seshia, S.A., Keutzer, K., Sangiovanni-Vincentelli, A.L.: A LiDAR point cloud generator: from a virtual world to autonomous driving. In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR 2018), pp. 458–464 (2018)
Zhang, H., et al.: Detector collapse: backdooring object detection to catastrophic overload or blindness. In: Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024) (2024)
Zhang, J., et al.: Unlearnable clusters: towards label-agnostic unlearnable examples. In: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023) (2023)
Zhao, B., Lao, Y.: CLPA: clean-label poisoning availability attacks using generative adversarial nets. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9162–9170 (2022)
Zheng, T., Chen, C., Yuan, J., Li, B., Ren, K.: PointCloud saliency maps. In: Proceedings of the 17th IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 1598–1606 (2019)
Zhou, H., Chen, K., Zhang, W., Fang, H., Zhou, W., Yu, N.: DUP-Net: denoiser and upsampler network for 3D adversarial point clouds defense. In: Proceedings of the 17th IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 1961–1970 (2019)
Acknowledgements
We sincerely appreciate the valuable feedback provided by the anonymous reviewers for our paper. Shengshan’s work is supported in part by the National Natural Science Foundation of China (Grant No. 62372196). Minghui’s work is supported in part by the National Natural Science Foundation of China (Grant No. 62202186). Minghui Li and Peng Xu are co-corresponding authors.
A Appendix
Datasets and Models. We conduct extensive experiments on four 3D point cloud benchmark datasets: ModelNet40 [43], ModelNet10 [43], ShapeNetPart [3], and KITTI [30], a real-world dataset for autonomous driving, while training on five widely used 3D point cloud models (including CNNs and a Transformer): PointNet [31], PointNet++ [32], DGCNN [39], PointCNN [25], and PCT [14]. ModelNet40 is a point cloud classification dataset consisting of 40 categories; its training set contains 9,843 point clouds and its test set contains 2,468. ModelNet10 is a subset of ModelNet40 consisting of 10 categories. ShapeNetPart is a subset of ShapeNet that includes 16 categories; its training set consists of 12,137 samples and its test set of 2,874 samples. Consistent with [17, 24], we split the KITTI object clouds into the classes “vehicle” and “human”, containing 1,000 training samples and 662 test samples. Each KITTI point cloud object consists of 256 points, while objects from the other three datasets consist of 1,024 points; all are normalized to the range \([-1,1]^3\).
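The normalization to \([-1,1]^3\) mentioned above can be sketched as follows. Centering at the centroid and dividing by the largest coordinate extent is a common convention; the paper's exact scheme may differ (e.g., unit-sphere normalization):

```python
import numpy as np

def normalize_unit_cube(points):
    """Center a point cloud (N, 3) at its centroid and scale it
    into [-1, 1]^3, preserving aspect ratio via the largest extent."""
    centered = points - points.mean(axis=0)
    scale = np.abs(centered).max()  # largest absolute coordinate
    return centered / scale
```

Dividing by a single scalar (rather than per-axis extents) keeps the object's proportions intact, which matters for shape classification.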
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wang, X. et al. (2024). PointAPA: Towards Availability Poisoning Attacks in 3D Point Clouds. In: Garcia-Alfaro, J., Kozik, R., Choraś, M., Katsikas, S. (eds) Computer Security – ESORICS 2024. ESORICS 2024. Lecture Notes in Computer Science, vol 14982. Springer, Cham. https://doi.org/10.1007/978-3-031-70879-4_7
Print ISBN: 978-3-031-70878-7
Online ISBN: 978-3-031-70879-4