
Probability driven approach for point cloud registration of indoor scene

  • Original article, published in The Visual Computer

Abstract

Point cloud registration is a crucial step in localization and mapping with mobile robots, as well as in object modeling pipelines. In this paper, we present a novel probability-driven algorithm for registering point clouds of indoor scenes based on RGB-D images. First, as preprocessing, we extract key points from the RGB-D images and map them into 3D space. Then, for each point cloud, we build a distance matrix (a scalar representation) and a difference matrix (a vector representation) to encode structural proximity, and we establish the corresponding point set by computing matching probabilities. Finally, we solve for the transformation matrix that aligns the source point cloud to the target point cloud. The entire registration framework consists of two phases: coarse registration based on the distance matrix (scalar form) and fine registration based on the difference matrix (vector form). This two-phase strategy greatly reduces the influence of inherent noise. Experiments demonstrate that our method outperforms state-of-the-art methods in registration accuracy. Furthermore, our method is computationally more efficient than existing methods because it exploits the spatial relationships between key points instead of point features. The source code is provided at our project website https://github.com/BeCoolGuy/Probability-Driven-Approach-for-Point-Cloud-Registration-of-Indoor-Scene.
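To make the pipeline concrete, the minimal Python sketch below illustrates the general recipe the abstract describes: back-projecting RGB-D key points into 3D, encoding structural proximity with a pairwise distance matrix, converting structural similarity into matching probabilities, and solving the rigid transform with an SVD. The probability model, the parameters k and sigma, and all function names are illustrative assumptions for this sketch, not the authors' exact formulation (see the linked repository for that).

# Illustrative sketch only: a generic distance-matrix / matching-probability
# pipeline, not the authors' implementation. k and sigma are assumed parameters.
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Map an image key point (u, v) with depth z to 3D camera coordinates."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def distance_matrix(pts):
    """Pairwise Euclidean distances between key points (N x N)."""
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def matching_probabilities(src, tgt, k=8, sigma=0.05):
    """Soft correspondences from each key point's sorted distances to its
    k nearest neighbours, i.e. a scalar signature of local structure."""
    d_src = np.sort(distance_matrix(src), axis=1)[:, 1:k + 1]  # drop self-distance
    d_tgt = np.sort(distance_matrix(tgt), axis=1)[:, 1:k + 1]
    cost = np.linalg.norm(d_src[:, None, :] - d_tgt[None, :, :], axis=-1)
    prob = np.exp(-cost ** 2 / (2 * sigma ** 2))
    return prob / prob.sum(axis=1, keepdims=True)  # row-normalize per source point

def rigid_transform(src, tgt):
    """Least-squares rigid alignment of matched point sets (Kabsch / SVD)."""
    src_c, tgt_c = src - src.mean(axis=0), tgt - tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, tgt.mean(axis=0) - R @ src.mean(axis=0)

# Usage: pick for each source key point its most probable target, then align.
# src_kp, tgt_kp: (N, 3) and (M, 3) arrays of back-projected key points.
# prob = matching_probabilities(src_kp, tgt_kp)
# R, t = rigid_transform(src_kp, tgt_kp[np.argmax(prob, axis=1)])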

Acknowledgements

This work was supported by the NSFC-Zhejiang Joint Fund of the Integration of Informatization and Industrialization (U1909210), the National Natural Science Foundation of China (NSFC) (61772312), and the Key Research and Development Project of Shandong Province (2017GGX10110, 2019GGX101007).

Author information

Corresponding author

Correspondence to Yuanfeng Zhou.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Dong, K., Gao, S., Xin, S. et al. Probability driven approach for point cloud registration of indoor scene. Vis Comput 38, 51–63 (2022). https://doi.org/10.1007/s00371-020-01999-y

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00371-020-01999-y

Keywords