
Few ‘Zero Level Set’-Shot Learning of Shape Signed Distance Functions in Feature Space

  • Conference paper
  • First Online:
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13692)


Abstract

We explore a new idea for learning-based shape reconstruction from a point cloud, building on the recently popularized implicit neural shape representations. We cast the problem as few-shot learning of implicit neural signed distance functions in feature space, which we approach with gradient-based meta-learning. A convolutional encoder builds a feature space from the input point cloud, and an implicit decoder learns to predict signed distance values for points represented in this feature space. Setting the input point cloud, i.e. samples from the zero level set of the target shape's signed distance function, as the support (i.e. context) in few-shot learning terms, we train the decoder so that it can adapt its weights to the underlying shape of this context in a few (5) tuning steps. We thus combine, for the first time, two implicit neural network conditioning mechanisms simultaneously, namely feature encoding and meta-learning. Our numerical and qualitative evaluation shows that, for implicit reconstruction from a sparse point cloud, the proposed strategy of meta-learning in feature space outperforms the existing alternatives, namely standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.
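
To make the described pipeline concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a toy point-cloud encoder (standing in for the paper's convolutional encoder) produces a feature code, an implicit decoder predicts signed distances for query points conditioned on that code, and a MAML-style inner loop adapts the decoder for 5 steps on the support, i.e. the zero-level-set samples, before the outer meta-update. The architectures, losses, random stand-in data, and hyper-parameters are illustrative assumptions only.

```python
# Minimal sketch of meta-learning an implicit SDF decoder in feature space.
# NOT the authors' code: encoder, losses, data, and hyper-parameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy global point-cloud encoder (stand-in for the paper's convolutional encoder)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, pts):                      # pts: (N, 3)
        return self.mlp(pts).max(dim=0).values   # (feat_dim,) global feature code


class SDFDecoder(nn.Module):
    """Implicit decoder: (query point, feature code) -> signed distance."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, feat, params=None):
        x = torch.cat([q, feat.expand(q.shape[0], -1)], dim=-1)
        if params is None:                       # regular forward with meta-parameters
            return self.net(x)
        for i, layer in enumerate(self.net):     # functional forward with fast weights
            if isinstance(layer, nn.Linear):
                x = F.linear(x, params[f"net.{i}.weight"], params[f"net.{i}.bias"])
            else:
                x = layer(x)
        return x


def inner_adapt(decoder, feat, support_pts, steps=5, lr=1e-2):
    """Adapt the decoder to one shape: the support points lie on the zero level set,
    so their predicted signed distance is pushed towards zero for a few steps."""
    params = dict(decoder.named_parameters())
    for _ in range(steps):
        loss = decoder(support_pts, feat, params).abs().mean()
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
    return params


# One meta-training step on a single randomly generated shape; batching over a
# shape dataset is omitted for brevity.
encoder, decoder = Encoder(), SDFDecoder()
meta_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

surface_pts = torch.rand(256, 3)                 # support: sparse input point cloud
query_pts = torch.rand(512, 3)                   # query: off-surface sample locations
query_sdf = torch.rand(512, 1) - 0.5             # placeholder ground-truth signed distances

feat = encoder(surface_pts)                      # feature space built from the point cloud
fast_params = inner_adapt(decoder, feat, surface_pts, steps=5)
outer_loss = F.l1_loss(decoder(query_pts, feat, fast_params), query_sdf)

meta_opt.zero_grad()
outer_loss.backward()                            # gradients flow back through the 5 inner steps
meta_opt.step()
print(f"outer loss: {outer_loss.item():.4f}")
```

At inference time, one would run the same few adaptation steps on a new shape's input point cloud alone (the support loss above uses only the surface samples, no ground-truth distances) and then query the adapted decoder wherever signed distance values are needed.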



Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Adnane Boukhayma.

Editor information

Editors and Affiliations

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 296 KB)

Rights and permissions

Reprints and permissions

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ouasfi, A., Boukhayma, A. (2022). Few ‘Zero Level Set’-Shot Learning of Shape Signed Distance Functions in Feature Space. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13692. Springer, Cham. https://doi.org/10.1007/978-3-031-19824-3_33

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-19824-3_33

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19823-6

  • Online ISBN: 978-3-031-19824-3

  • eBook Packages: Computer Science, Computer Science (R0)
