
Unsupervised method for identifying shape instances on 3D CAD models

Published: 04 March 2024

Abstract

Increasingly complex 3D CAD models are essential during different life-cycle stages of modern engineering projects. Although these models contain many repeated geometries, instancing information is often unavailable, which inflates storage, transmission, and rendering requirements. Previous research has successfully applied shape matching techniques to identify repeated geometries, thereby reducing memory requirements and improving rendering performance. However, these approaches require consistent vertex topology, prior knowledge about the scene, and/or the laborious creation of labeled datasets. In this paper, we present an unsupervised deep-learning method that overcomes these limitations and is capable of identifying repeated geometries and computing their instancing transformations. The method also guarantees a maximum visual error and preserves intrinsic characteristics of surfaces. Results on real-world 3D CAD models demonstrate the effectiveness of our approach: the datasets are reduced by up to 83.93% in size. Our approach achieves better results than previous work that does not rely on supervised learning, and it is applicable to any kind of 3D scene and geometry.
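The abstract states that the method computes instancing transformations between repeated geometries and guarantees a maximum visual error, but does not spell out the computation. As an illustrative baseline only (not the authors' pipeline), the classic Kabsch/Umeyama closed-form solution recovers a least-squares rigid transform between two corresponding point sets, and an RMSE check against a caller-chosen threshold sketches how a visual-error bound could be enforced:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src onto dst via the Kabsch/Umeyama closed-form solution.
    Assumes src and dst are (N, 3) arrays with known correspondences."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def is_instance(src, dst, max_error):
    """Treat dst as an instance of src if the RMSE after alignment
    stays below a caller-chosen visual-error bound."""
    R, t = estimate_rigid_transform(src, dst)
    aligned = src @ R.T + t
    rmse = np.sqrt(np.mean(np.sum((aligned - dst) ** 2, axis=1)))
    return rmse <= max_error, (R, t)

# Hypothetical example: a duplicate created by a known rotation + translation.
rng = np.random.default_rng(0)
shape = rng.normal(size=(100, 3))
angle = np.pi / 4
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
duplicate = shape @ Rz.T + np.array([1.0, 2.0, 3.0])
ok, (R, t) = is_instance(shape, duplicate, max_error=1e-6)
print(ok)  # True: the duplicate is recognized as an instance
```

Note that this sketch assumes known point correspondences; in unstructured CAD data, establishing correspondences (or using a correspondence-free alignment) is the hard part the paper's learned representation addresses.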


Highlights

Unsupervised deep-learning method for 3D shape registration.
Requires no prior knowledge of the 3D geometries.
Requires no labeled dataset or supervised training.
Guarantees an upper bound on visual error.
Generalizes to any 3D scene and geometry.
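The highlights claim that repeated geometries are detected without labels or prior knowledge, but the detection stage itself is not described here. A common precursor to alignment (again illustrative, not the paper's learned embeddings) is to compare rotation- and translation-invariant shape descriptors to form candidate duplicate groups cheaply. A minimal sketch using Osada et al.'s classic D2 shape distribution and a greedy, hypothetical grouping threshold:

```python
import numpy as np

def d2_descriptor(points, n_pairs=2000, bins=32, rng=None):
    """D2 shape distribution: a normalized histogram of distances between
    random point pairs. Invariant to rotation and translation, so it can
    flag candidate duplicates before any alignment is attempted."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / (d.max() + 1e-12), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def group_candidates(descriptors, tol=0.05):
    """Greedy grouping by L1 distance to each group's representative;
    a stand-in for the clustering stage of an instancing pipeline."""
    groups = []
    for idx, desc in enumerate(descriptors):
        for g in groups:
            if np.abs(descriptors[g[0]] - desc).sum() < tol:
                g.append(idx)
                break
        else:
            groups.append([idx])
    return groups
```

Shapes grouped this way would still need a rigid-alignment check, since distinct shapes can share similar distance distributions; the descriptor only prunes the search.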



Published In

Computers and Graphics, Volume 116, Issue C, Nov 2023, 518 pages

Publisher

Pergamon Press, Inc., United States


Author Tags

  1. 3D CAD models
  2. Point cloud
  3. Shape matching

Qualifiers

  • Research-article
