
Appearance-based loop closure detection combining lines and learned points for low-textured environments

Published: 01 March 2022

Abstract

Hand-crafted point descriptors have traditionally been used for visual loop closure detection. In low-textured environments, however, it is often difficult to find enough point features, and the performance of such algorithms consequently degrades. In this context, this paper proposes a loop closure detection method that combines lines and learned points, targeting in particular scenarios where hand-crafted points fail. To index previous images, we adopt separate incremental binary Bag-of-Words (BoW) schemes for points and lines. We also introduce a binarization procedure for feature descriptors that brings the advantages of learned features into a binary BoW model. Image candidates retrieved from each BoW instance are then merged using a novel query-adaptive late fusion approach. Finally, a spatial verification stage, which integrates appearance and geometric cues, further improves the overall performance of the method. Our approach is validated on several public datasets and outperforms other state-of-the-art solutions in most cases, especially in low-textured scenarios.
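The descriptor binarization step mentioned above can be illustrated with a random-projection (LSH-style) sketch: a real-valued learned descriptor is reduced to a binary code by thresholding projections onto fixed random hyperplanes, after which codes can be compared with the Hamming distance a binary BoW model relies on. This is a minimal sketch of the general technique, not the paper's exact procedure; all names and parameters below are illustrative.

```python
import random

def binarize(descriptor, n_bits=64, seed=0):
    """Turn a real-valued descriptor into an n_bits binary code by
    thresholding random Gaussian projections (an LSH-style scheme)."""
    rng = random.Random(seed)  # fixed seed: same hyperplanes for every image
    code = 0
    for _ in range(n_bits):
        hyperplane = [rng.gauss(0.0, 1.0) for _ in descriptor]
        projection = sum(h * d for h, d in zip(hyperplane, descriptor))
        code = (code << 1) | (1 if projection > 0 else 0)
    return code

def hamming(a, b):
    """Hamming distance between two binary codes stored as integers."""
    return bin(a ^ b).count("1")
```

Because every image is binarized with the same hyperplanes, the resulting codes are directly comparable with the cheap Hamming distance, which is what makes them compatible with binary BoW vocabularies.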
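The query-adaptive late fusion step can be sketched in the spirit of Zheng et al. (2015): each cue (here, the point BoW and the line BoW) is weighted per query by the shape of its score curve, so a cue with one clear winner counts more than a cue whose candidates all score alike. The weighting rule and all names below are assumptions for illustration, not the paper's exact formulation; scores are assumed non-negative with higher meaning more similar.

```python
def fuse_candidates(cue_scores):
    """Merge ranked candidates from several cues (e.g. a point BoW and a
    line BoW) with per-query adaptive weights.

    cue_scores: list of {image_id: non-negative similarity} dicts,
    one dict per cue. Returns image ids ranked by fused score."""
    fused = {}
    for scores in cue_scores:
        if not scores:
            continue
        top = max(scores.values())
        if top <= 0:
            continue
        norm = {k: v / top for k, v in scores.items()}  # best match -> 1.0
        # Flat score curve -> mean near 1 -> weight near 0 (ambiguous cue);
        # peaked curve -> small mean -> high weight (discriminative cue).
        weight = 1.0 - sum(norm.values()) / len(norm)
        for k, v in norm.items():
            fused[k] = fused.get(k, 0.0) + weight * v
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

With this rule, a flat (ambiguous) cue cannot overturn a peaked (confident) one: the point cue below clearly prefers image "A", while the line cue is nearly uniform, so "A" still wins after fusion.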


Cited By

View all
  • (2024) BASL-AD SLAM: A Robust Deep-Learning Feature-Based Visual SLAM System With Adaptive Motion Model. IEEE Transactions on Intelligent Transportation Systems, 25(9), 11794–11804. doi: 10.1109/TITS.2024.3367906. Online publication date: 18 March 2024.


Published In

Autonomous Robots  Volume 46, Issue 3
Mar 2022
59 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 March 2022
Accepted: 20 December 2021
Received: 20 April 2021

Author Tags

  1. Appearance-based localization
  2. Loop closure detection
  3. Visual place recognition
  4. Simultaneous localization and mapping

Qualifiers

  • Research-article
