
Automatic multi-view registration of point clouds via a high-quality descriptor and a novel 3D transformation estimation technique

Original article · Published in The Visual Computer

Abstract

Generally, multiple scans are necessary to cover an entire scanning area, and multiple point clouds are thus obtained. These point clouds need to be registered together, which is the focus of our work. However, owing to noise, point density variation, partial overlap and other factors, the success rate of registration remains low. For this reason, a pairwise registration method is first proposed. It includes two main components: the triple local coordinate image (TriLCI) descriptor and a new 3D transformation estimation technique. The descriptor has high descriptiveness and strong robustness, so more correct correspondences can be obtained. The 3D transformation estimation technique is employed to calculate the rotation matrix and translation vector even when most of the correspondences are false. In this technique, a new geometric distance constraint is developed and applied to eliminate incorrect correspondences, and a median-based robust estimation is introduced to calculate the rotation matrix and translation vector. For multi-view registration, the connected graph algorithm and a shape-growing-based method are applied, and a comparative study of the two is presented. The experimental results demonstrate that the proposed pairwise registration method has a high success rate, and the comparative study reveals the merits and demerits of the two multi-view registration methods.
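The pairwise registration idea outlined above, matching points, pruning false correspondences with a geometric distance constraint, and then estimating the rotation matrix and translation vector robustly, can be illustrated with a minimal sketch. The snippet below is hypothetical: it does not implement the TriLCI descriptor or the paper's median-based estimator, and the function names, the tolerance `tol` and the 3×median inlier threshold are assumptions chosen for illustration. It only shows a generic pairwise distance-consistency check followed by an SVD (Kabsch) fit refined on median-selected inliers.

```python
# Hypothetical sketch only: it does NOT reproduce the paper's TriLCI descriptor
# or its median-based estimator. It illustrates the generic ideas the abstract
# mentions: (1) a pairwise distance-consistency check to discard false matches,
# and (2) a robust (median-thresholded) least-squares estimate of R and t.
import numpy as np

def filter_by_distance_consistency(P, Q, tol=0.05):
    """Keep correspondences whose pairwise distances agree in both clouds.

    P, Q: (N, 3) arrays of matched points (P[i] <-> Q[i]). A rigid transform
    preserves distances, so |‖p_i - p_j‖ - ‖q_i - q_j‖| should be small for
    correct correspondences.
    """
    dP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    dQ = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
    consistent = np.abs(dP - dQ) < tol * (dP + 1e-9)   # relative tolerance
    votes = consistent.sum(axis=1)                      # consistency votes
    return votes >= np.median(votes)                    # keep the better half

def estimate_rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def robust_pairwise_registration(P, Q, tol=0.05):
    """Filter correspondences, fit once, then refit on inliers selected by a
    median-based residual threshold (a simple stand-in for robust estimation)."""
    keep = filter_by_distance_consistency(P, Q, tol)
    R, t = estimate_rigid_transform(P[keep], Q[keep])
    residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
    inliers = residuals < 3.0 * np.median(residuals) + 1e-9
    return estimate_rigid_transform(P[inliers], Q[inliers])
```

In the actual method, the proposed geometric distance constraint and the median-based robust estimation take the place of the simple consistency and residual thresholds used in this sketch.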




Data availability

The datasets generated and/or analysed during the current study are available at http://staffhome.ecm.uwa.edu.au/~00053650/recognition.html and http://redwood-data.org/indoor/regbasic.html.


Acknowledgements

The authors would like to acknowledge the publishers of the UWA3M and ICL-NUIM datasets. This work was supported by the National Natural Science Foundation of China (41674005, 42104023), the Jiangxi University of Science and Technology High-level Talent Research Startup Project (2021205200100564) and the Spatial Cognition Augmented High-usability High-precision Smartphone Indoor Positioning project (41874031).

Author information


Corresponding author

Correspondence to Wuyong Tao.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tao, W., Hua, X., He, X. et al. Automatic multi-view registration of point clouds via a high-quality descriptor and a novel 3D transformation estimation technique. Vis Comput 40, 2615–2630 (2024). https://doi.org/10.1007/s00371-023-02942-7



  • DOI: https://doi.org/10.1007/s00371-023-02942-7
