
PointTree: Transformation-Robust Point Cloud Encoder with Relaxed K-D Trees

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13663)


Abstract

Learning an effective semantic representation directly from raw point clouds has become a central topic in 3D understanding. Despite rapid progress, state-of-the-art encoders are restricted to canonicalized point clouds and perform worse than necessary when encountering geometric transformation distortions. To overcome this challenge, we propose PointTree, a general-purpose point cloud encoder based on relaxed K-D trees that is robust to transformations. Key to our approach is the design of the division rule of the K-D tree using principal component analysis (PCA). We use the structure of the relaxed K-D tree as our computational graph and model the features as border descriptors, which are merged with a pointwise-maximum operation. Beyond this novel architecture design, we further improve robustness by introducing pre-alignment, a simple yet effective PCA-based normalization scheme. Our PointTree encoder combined with pre-alignment consistently outperforms state-of-the-art methods by large margins, on applications from object classification to semantic segmentation across various transformed versions of widely benchmarked datasets. Code and pre-trained models are available at https://github.com/immortalCO/PointTree.
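The two ideas outlined in the abstract, PCA-based pre-alignment and a K-D-style partition whose division rule is chosen by PCA, can be illustrated with a minimal toy sketch. Note the assumptions here: the function names `pre_align` and `encode`, the `leaf_size` parameter, and the use of a coordinate-wise maximum as the leaf descriptor are all illustrative choices, not the paper's actual learned architecture (PointTree learns border descriptors; only the pointwise-maximum merge and the PCA-guided split direction follow the abstract's description).

```python
import numpy as np

def pre_align(points):
    # PCA-based normalization: center the cloud and rotate it into its
    # principal-component frame, so rigidly transformed copies of the
    # same shape land in (nearly) the same canonical pose.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                 # axes ordered by variance

def encode(points, leaf_size=8):
    # Relaxed K-D-style recursion: the split direction at each node is
    # the first principal component of that node's points (rather than
    # a fixed coordinate axis); child features are merged bottom-up
    # with a pointwise maximum.
    if len(points) <= leaf_size:
        return points.max(axis=0)          # toy stand-in for a descriptor
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                # project onto principal axis
    median = np.median(proj)
    left, right = points[proj <= median], points[proj > median]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop early
        return points.max(axis=0)
    return np.maximum(encode(left, leaf_size), encode(right, leaf_size))

rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))
feat = encode(pre_align(cloud))
print(feat.shape)  # (3,)
```

Because both the split direction and the pre-alignment frame are derived from the data itself, the partition (up to PCA sign ambiguity) follows the shape under rotation instead of being tied to world axes, which is the intuition behind the transformation robustness claimed above.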



Acknowledgement

This work was supported in part by NSF Grant 2106825, the Jump ARCHES endowment through the Health Care Engineering Systems Center, the New Frontiers Initiative, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign through the NCSA Fellows program, and the IBM-Illinois Discovery Accelerator Institute.

Author information


Corresponding author

Correspondence to Jun-Kun Chen.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1499 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, JK., Wang, YX. (2022). PointTree: Transformation-Robust Point Cloud Encoder with Relaxed K-D Trees. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13663. Springer, Cham. https://doi.org/10.1007/978-3-031-20062-5_7

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20062-5_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20061-8

  • Online ISBN: 978-3-031-20062-5

  • eBook Packages: Computer Science, Computer Science (R0)
