Relationship-Based Point Cloud Completion

Published: 01 December 2022

Abstract

We propose a partial point cloud completion approach for scenes composed of multiple objects. We focus on pairwise scenes in which two objects are in close proximity and contextually related, such as a chair tucked under a desk, a fruit in a basket, a hat on a hook, or a flower in a vase. Unlike existing point cloud completion methods, which mainly target single objects, we design a network that encodes not only the geometry of the individual shapes but also the spatial relations between them. More specifically, we complete the missing parts of each object in a conditional manner: the partial or completed point cloud of the other object is fed in as an additional input to help predict the missing parts. Building on this idea of conditional completion, we further propose a two-path network guided by a consistency loss between the two possible completion orders, as sketched below. Our method handles difficult cases in which the objects heavily occlude each other, and it requires only a small training set to reconstruct the interaction area compared to existing completion approaches. We evaluate our method qualitatively and quantitatively through ablation studies and comparisons with state-of-the-art point cloud completion methods.
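To make the conditional, two-path design concrete, the sketch below shows one way such a training objective could be wired up. It is a minimal illustration under our own assumptions, not the authors' implementation: the network ConditionalCompletionNet, the object-tagging input scheme, the Chamfer-distance losses, and the weight w are hypothetical stand-ins for the architecture and losses described in the paper.

```python
import torch
import torch.nn as nn

def chamfer(x, y):
    # Symmetric Chamfer distance between point sets (B, N, 3) and (B, M, 3).
    d = torch.cdist(x, y)                       # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

class ConditionalCompletionNet(nn.Module):
    """Hypothetical completion net conditioned on the other object in the pair."""
    def __init__(self, feat_dim=1024, num_out=2048):
        super().__init__()
        # PointNet-style shared MLP over (x, y, z, object-tag) points.
        self.encoder = nn.Sequential(
            nn.Conv1d(4, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_out * 3),
        )
        self.num_out = num_out

    def forward(self, target, condition):
        # Tag points by object so the encoder can model spatial relations
        # between the shape being completed and its conditioning partner.
        pts = torch.cat([
            torch.cat([target, torch.zeros_like(target[..., :1])], dim=-1),
            torch.cat([condition, torch.ones_like(condition[..., :1])], dim=-1),
        ], dim=1)                               # (B, N_t + N_c, 4)
        feat = self.encoder(pts.transpose(1, 2)).max(dim=2).values
        return self.decoder(feat).view(-1, self.num_out, 3)

def two_path_loss(net, partial_a, partial_b, gt_a, gt_b, w=0.1):
    # Path 1: complete A given partial B, then complete B given completed A.
    a1 = net(partial_a, partial_b)
    b1 = net(partial_b, a1)
    # Path 2: the opposite completion order.
    b2 = net(partial_b, partial_a)
    a2 = net(partial_a, b2)
    recon = (chamfer(a1, gt_a) + chamfer(b1, gt_b) +
             chamfer(a2, gt_a) + chamfer(b2, gt_b))
    # Consistency term: both completion orders should agree on each shape.
    consist = chamfer(a1, a2) + chamfer(b1, b2)
    return recon + w * consist
```

The consistency term penalizes disagreement between the A-then-B and B-then-A completion sequences, which is the role the paper's consistency loss plays between its two completion paths.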

Cited By

• Reverse2Complete: Unpaired Multimodal Point Cloud Completion via Guided Diffusion. Proc. 32nd ACM Int. Conf. Multimedia, 2024, pp. 5892–5901. DOI: 10.1145/3664647.3680590
• Point Cloud Completion: A Survey. IEEE Trans. Vis. Comput. Graph., vol. 30, no. 10, pp. 6880–6899, Oct. 2024. DOI: 10.1109/TVCG.2023.3344935
• Point Cloud Completion via Self-Projected View Augmentation and Implicit Field Constraint. IEEE Trans. Circuits Syst. Video Technol., vol. 34, no. 11, pp. 11564–11578, Jul. 2024. DOI: 10.1109/TCSVT.2024.3424776
• Perceptual Quality Assessment of Colored 3D Point Clouds. IEEE Trans. Vis. Comput. Graph., vol. 29, no. 8, pp. 3642–3655, Aug. 2023. DOI: 10.1109/TVCG.2022.3167151
• Using Foliation Leaves to Extract Reeb Graphs on Surfaces. IEEE Trans. Vis. Comput. Graph., vol. 29, no. 4, pp. 2117–2131, Apr. 2023. DOI: 10.1109/TVCG.2022.3141764


Published In

IEEE Transactions on Visualization and Computer Graphics, Volume 28, Issue 12 (Dec. 2022), 1222 pages

Publisher

IEEE Educational Activities Department, United States
