
A Versatile Visual Navigation System for Autonomous Vehicles

  • Conference paper
Modelling and Simulation for Autonomous Systems (MESAS 2018)

Abstract

We present a universal visual navigation method which allows a vehicle to autonomously repeat paths previously taught by a human operator. The method is computationally efficient and does not require camera calibration. It can learn and autonomously traverse arbitrarily shaped paths and is robust to appearance changes induced by varying outdoor illumination and naturally-occurring environment changes. The method does not perform explicit position estimation in 2D/3D space, but it relies on a novel mathematical theorem, which allows fusing exteroceptive and interoceptive sensory data in a way that ensures navigation accuracy and reliability. The experiments performed indicate that the proposed navigation method can accurately guide different autonomous vehicles along the desired path. The presented system, which was already deployed in patrolling scenarios, is provided as open source at www.github.com/gestom/stroll_bearnav.
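The repeat phase of bearing-only teach-and-repeat systems of this kind can be summarised briefly: odometry (interoceptive data) tells the vehicle how far along the taught path it is and thus which mapped image is relevant, while vision (exteroceptive data) corrects the heading so that the currently visible features align with the mapped ones. The sketch below illustrates one common way to compute such a heading correction, by letting every matched feature pair vote with its horizontal pixel displacement and steering against the modal displacement. It is a minimal illustrative sketch, not code from the authors' repository: the function name, the keypoint format, and the bin_width and gain parameters are all assumptions made for the example.

    import numpy as np

    def heading_correction(map_keypoints, view_keypoints, matches,
                           bin_width=10.0, gain=0.002):
        """Steering correction from matched image features (illustrative sketch).

        map_keypoints / view_keypoints: lists of (x, y) pixel coordinates.
        matches: list of (i, j) index pairs linking map to view keypoints.
        bin_width, gain: hypothetical tuning constants, not values from the paper.
        """
        # Horizontal displacement (pixels) of each matched feature pair.
        shifts = np.array([view_keypoints[j][0] - map_keypoints[i][0]
                           for i, j in matches])
        if shifts.size == 0:
            return 0.0  # no matches: keep the current heading
        # Histogram voting: the most-voted bin gives the dominant shift,
        # which is robust to outlier (mismatched) feature pairs.
        edges = np.arange(shifts.min(), shifts.max() + 2 * bin_width, bin_width)
        hist, edges = np.histogram(shifts, bins=edges)
        mode = edges[np.argmax(hist)] + bin_width / 2.0
        # Proportional steering: turn so the dominant shift goes to zero.
        return -gain * mode

Voting for the modal displacement rather than averaging all shifts keeps the correction tolerant of mismatched features, which is one reason approaches in this family cope with illumination and seasonal appearance change.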

The work has been supported by the Czech Science Foundation project 17-27006Y and by the Segurancas roboTicos coOPerativos (STOP) research project (CENTRO-01-0247-FEDER-017562), co-funded by the Agência Nacional de Inovação within the Portugal2020 programme.


Notes

  1. http://stop.ingeniarius.pt.


Acknowledgments

We thank VOP.cz for sharing their data and the TAROS vehicle. We would also like to thank Milan Kroulík and Jakub Lev from the Czech University of Life Sciences Prague for their positive attitude and their help in performing the experiments with the John Deere tractor.

Author information

Correspondence to Tomáš Krajník.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Majer, F., et al. (2019). A Versatile Visual Navigation System for Autonomous Vehicles. In: Mazal, J. (ed.) Modelling and Simulation for Autonomous Systems. MESAS 2018. Lecture Notes in Computer Science, vol. 11472. Springer, Cham. https://doi.org/10.1007/978-3-030-14984-0_8


  • DOI: https://doi.org/10.1007/978-3-030-14984-0_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-14983-3

  • Online ISBN: 978-3-030-14984-0

  • eBook Packages: Computer Science, Computer Science (R0)
