
A method of generating depth images for view-based shape retrieval of 3D CAD models from partial point clouds

Published in: Multimedia Tools and Applications

Abstract

Laser scanners can easily acquire the geometric data of physical environments in the form of point clouds. Industrial 3D reconstruction processes generally recognize objects from point clouds, and the reconstructed models should include both geometric and semantic data. However, the recognition process is often a bottleneck in 3D reconstruction because it is labor intensive and requires domain expertise. To address this problem, various methods have been developed to recognize objects by retrieving their corresponding models from a database using geometric data as queries. In recent years, converting geometric data into images and applying view-based 3D shape retrieval has demonstrated high accuracy. Depth images, which encode depth values as pixel intensities, are frequently used for view-based 3D shape retrieval. However, geometric data collected from objects are often incomplete owing to occlusions and line-of-sight limitations, and images generated from such occluded point clouds degrade view-based 3D object retrieval performance because of the lost information. In this paper, we propose a viewpoint and image-resolution estimation method for view-based 3D shape retrieval from point cloud queries. The viewpoint and image resolution are selected automatically using data acquisition rate and density values computed for sampled viewpoints and image resolutions. The retrieval performance for images generated by the proposed method is investigated experimentally and compared across various datasets. In addition, view-based 3D shape retrieval performance with a deep convolutional neural network is evaluated using the proposed method.
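To make the idea concrete, the sketch below (Python with NumPy) renders an orthographic depth image from a point cloud and ranks candidate viewpoints by a simple data-acquisition rate, taken here as the fraction of non-empty pixels. The projection, the z-buffering, and the acquisition-rate definition are illustrative assumptions for this sketch, not the authors' exact formulation, which also accounts for point density and image resolution as described in the full paper.

```python
# Minimal sketch: orthographic depth-image rendering from a point cloud and
# viewpoint ranking by a simple data-acquisition rate. The projection,
# z-buffering, and acquisition-rate definition are illustrative assumptions,
# not the paper's exact method.
import numpy as np

def depth_image(points, view_dir, resolution=128):
    """Render an orthographic depth image of an Nx3 point cloud.

    Depth along `view_dir` is encoded as pixel intensity in [0, 1];
    empty pixels stay 0. When several points project to the same pixel,
    the nearest one wins (simple z-buffer).
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Build an orthonormal basis (u, v, view_dir) spanning the image plane.
    helper = np.array([0.0, 0.0, 1.0]) if abs(view_dir[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(view_dir, helper)
    u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)

    centered = points - points.mean(axis=0)
    x, y = centered @ u, centered @ v          # image-plane coordinates
    z = centered @ view_dir                    # depth along the view direction

    # Map image-plane coordinates to pixel indices with a common scale.
    scale = max(np.ptp(x), np.ptp(y)) + 1e-9
    px = ((x - x.min()) / scale * (resolution - 1)).astype(int)
    py = ((y - y.min()) / scale * (resolution - 1)).astype(int)
    depth = 1.0 - (z - z.min()) / (np.ptp(z) + 1e-9)   # nearer -> brighter

    img = np.zeros((resolution, resolution))
    for i in np.argsort(depth):                # draw far points first, near overwrite
        img[py[i], px[i]] = depth[i]
    return img

def acquisition_rate(img):
    """Fraction of pixels that received at least one point (illustrative metric)."""
    return np.count_nonzero(img) / img.size

# Usage: score sampled viewpoint directions and keep the best-scoring one.
if __name__ == "__main__":
    cloud = np.random.rand(5000, 3)                      # stand-in for a scanned point cloud
    candidates = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([1, 1, 1])]
    best = max(candidates, key=lambda d: acquisition_rate(depth_image(cloud, d)))
    print("selected viewpoint direction:", best)
```

In practice, the rendering resolution would be varied alongside the viewpoint, and both would be chosen jointly against the acquisition-rate and density criteria outlined in the abstract before the resulting depth image is passed to the retrieval network.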



Acknowledgments

This research was supported by the Industrial Core Technology Development Program (Project ID: 20000725) funded by the Korean government (MOTIE), by the Basic Science Research Program (Project ID: NRF-2019R1F1A1053542) through the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT), and by the "Research Base Construction Fund Support Program" funded by Jeonbuk National University in 2020.

Author information


Corresponding author

Correspondence to Duhwan Mun.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kim, H., Yeo, C., Cha, M. et al. A method of generating depth images for view-based shape retrieval of 3D CAD models from partial point clouds. Multimed Tools Appl 80, 10859–10880 (2021). https://doi.org/10.1007/s11042-020-10283-z

