Abstract
The technology of unmanned aerial vehicles (UAVs) has increasingly become part of many civil and research applications in recent years. UAVs offer high-quality aerial imaging and the ability to perform quick, flexible and in-depth data acquisition over an area of interest. While navigating in remote environments, UAVs need to be capable of autonomously landing on complex terrains for security, safety and delivery reasons. This is extremely challenging, as the structure of these terrains is often unknown and no prior knowledge can be leveraged. In this study, we present a vision-based autonomous landing system for rotor wing UAVs equipped with a stereo camera and an inertial measurement unit (IMU). The landing site detection algorithm introduces and evaluates several factors, including the terrain's flatness, inclination and steepness. From these features we compute map metrics that are combined into a landing-score map, based on which we detect candidate landing sites. The 3D reconstruction of the scene is acquired by stereo processing, and the pose of the UAV at any given time is estimated by fusing raw data from the inertial sensors with the pose obtained from stereo ORB-SLAM2. Real-world trials demonstrate successful landing in unknown and complex terrains such as suburban and forest areas.
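As a rough illustration of the landing-score idea described above, the sketch below combines flatness, inclination and steepness metrics computed over a reconstructed elevation map into a single per-cell score. The window size, the weights, and the exponential penalty mapping are illustrative assumptions, not the metrics or parameters used in the paper.

```python
import numpy as np

def landing_score_map(elevation, patch=5, w_flat=0.5, w_slope=0.3, w_steep=0.2):
    """Combine per-patch terrain metrics into a landing score in [0, 1].

    elevation: 2D array of terrain heights (e.g. from stereo reconstruction).
    Higher score = more suitable landing site. The window size, weights and
    exponential penalties are illustrative choices, not the paper's values.
    """
    h, w = elevation.shape
    r = patch // 2
    score = np.zeros_like(elevation, dtype=float)
    # Inclination proxy: magnitude of the local height gradient.
    gy, gx = np.gradient(elevation)
    slope = np.hypot(gx, gy)
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = elevation[i - r:i + r + 1, j - r:j + r + 1]
            flatness = win.std()                          # roughness of the patch
            inclination = slope[i - r:i + r + 1, j - r:j + r + 1].mean()
            steepness = np.abs(win - win.mean()).max()    # worst height deviation
            # Map each metric to [0, 1] with a soft exponential penalty,
            # then take a weighted sum (weights sum to 1).
            score[i, j] = (w_flat * np.exp(-flatness) +
                           w_slope * np.exp(-inclination) +
                           w_steep * np.exp(-steepness))
    return score

# A flat patch should score higher than a sloped one.
flat = np.zeros((15, 15))
ramp = np.tile(np.linspace(0.0, 3.0, 15), (15, 1))
print(landing_score_map(flat)[7, 7] > landing_score_map(ramp)[7, 7])  # True
```

A perfectly flat patch reaches the maximum score of 1.0, while roughness, slope and height outliers each pull the score down independently; candidate landing sites would then be extracted as local maxima of this map.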
References
Yubo, L., et al.: Survey of UAV autonomous landing based on vision processing. In: Advances in Intelligent Networking and Collaborative Systems, pp. 300–311. Springer International Publishing (2021)
Kanellakis, C., Nikolakopoulos, G.: Survey on computer vision for UAVs: Current developments and trends. Journal of Intelligent & Robotic Systems 87(1), 141–168 (2017)
Masselli, A., Zell, A.: A novel marker based tracking method for position and attitude control of MAVs. In: Proceedings of International Micro Air Vehicle Conference and Flight Competition (IMAV) (2012)
Mebarki, R., Lippiello, V., Siciliano, B.: Autonomous landing of rotary-wing aerial vehicles by image-based visual servoing in GPS-denied environments. In: 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–6, IEEE (2015)
Lange, S., Sunderhauf, N., Protzel, P.: A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments. In: 2009 International Conference on Advanced Robotics, pp. 1–6, IEEE (2009)
Patruno, C., Nitti, M., Petitti, A., et al.: A Vision-Based Approach for Unmanned Aerial Vehicle Landing. Journal of Intelligent & Robotic Systems 95, 645–664 (2019). https://doi.org/10.1007/s10846-018-0933-2
Hu, B., Lu, L., Mishra, S.: Fast, safe and precise landing of a quadrotor on an oscillating platform. In: 2015 American Control Conference (ACC), pp. 3836–3841, IEEE (2015)
Chaves, S.M., Wolcott, R.W., Eustice, R.M.: Neec research: Toward gps-denied landing of unmanned aerial vehicles on ships at sea. Naval Engineers Journal 127(1), 23–35 (2015)
Araar, O., Aouf, N., Vitanov, I.: Vision based autonomous landing of multirotor uav on moving platform. Journal of Intelligent & Robotic Systems 85(2), 369–384 (2017)
Patruno, C., et al.: Helipad detection for accurate UAV pose estimation by means of a visual sensor. International Journal of Advanced Robotic Systems 14(5), 1729881417731083 (2017)
Yang, S., Scherer, S.A., Zell, A.: An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle. Journal of Intelligent & Robotic Systems 69(1-4), 499–515 (2013)
Scherer, S.A., Dube, D., Komma, P., Masselli, A., Zell, A.: Robust, real-time number sign detection on a mobile outdoor robot. In: ECMR, pp. 145–152 (2011)
Lebedev, I., Erashov, A., Shabanova, A.: Accurate autonomous UAV landing using vision-based detection of ArUco marker. In: International Conference on Interactive Collaborative Robotics, pp. 179–188, Springer (2020)
Yu, L., et al.: Deep learning for vision-based micro aerial vehicle autonomous landing. International Journal of Micro Air Vehicles 10(2), 171–185 (2018)
Pluckter, K., Scherer, S.: Precision UAV landing in unstructured environments. International Symposium on Experimental Robotics. Springer, Cham (2018)
Clement, L., Kelly, J., Barfoot, T.D.: Monocular visual teach and repeat aided by local ground planarity. In: Field and Service Robotics, pp. 547–561. Springer, Cham (2016)
Fraczek, P., Mora, A., Kryjak T.: Embedded vision system for automated drone landing site detection. International Conference on Computer Vision and Graphics. Springer, Cham (2018)
Yang, T., Li, P., Zhang, H., Li, J., Li, Z.: Monocular vision SLAM-based UAV autonomous landing in emergencies and unknown environments. Electronics 7(5), 73 (2018)
Forster, C., Faessler, M., Fontana, F., Werlberger, M., Scaramuzza, D.: Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 111–118, IEEE (2015)
Pizzoli, M., Forster, C., Scaramuzza, D.: Remode: Probabilistic, monocular dense reconstruction in real time. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2609–2616, IEEE (2014)
Fankhauser, P., Bloesch, M., Gehring, C., Hutter, M., Siegwart, R.: Robot-centric elevation mapping with uncertainty estimates. In: Mobile Service Robotics. World Scientific, pp. 433–440 (2014)
Johnson, A.E., et al.: Lidar-based hazard avoidance for safe landing on Mars. Journal of Guidance, Control, and Dynamics 25(6), 1091–1099 (2002)
Hinzmann, T., Stastny, T., Cadena, C., Siegwart, R., Gilitschenski, I.: Free LSD: Prior-free visual landing site detection for autonomous planes. IEEE Robotics and Automation Letters 3(3), 2545–2552 (2018). https://doi.org/10.1109/LRA.2018.2809962
Mittal, M., Valada, A., Burgard, W.: Vision-based autonomous landing in catastrophe-struck environments. arXiv:1809.05700 (2018)
Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics 33(5), 1255–1262 (2017). https://doi.org/10.1109/TRO.2017.2705103
Bonin-Font, F., Ortiz, A., Oliver, G.: Visual Navigation for Mobile Robots: A Survey. Journal of Intelligent & Robotic Systems 53, 263 (2008)
Haddadi, S.J., Castelan, E.B.: Visual-inertial fusion for indoor autonomous navigation of a quadrotor using ORB-SLAM. In: 2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE), pp. 106–111, IEEE (2018)
Lynen, S., et al.: A robust and modular multi-sensor fusion approach applied to MAV navigation. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929, IEEE (2013)
Julier, S.J., Uhlmann, J.K.: New extension of the Kalman filter to nonlinear systems. In: Proc. SPIE 3068, Signal Processing, Sensor Fusion, and Target Recognition VI, (28 July 1997), https://doi.org/10.1117/12.280797 (1997)
Wan, E.A., Van Der Merwe, R.: The unscented Kalman filter for nonlinear estimation. In: Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), Lake Louise, AB, Canada, pp. 153–158 (2000). https://doi.org/10.1109/ASSPCC.2000.882463
Wöhler, C.: 3D computer vision: efficient methods and applications. Springer Science & Business Media (2012)
Point Cloud Library (1.11.1-dev), Conditional Euclidean Clustering [source code]. https://pointclouds.org/documentation/tutorials/cluster_extraction.html
Author information
Contributions
All authors contributed to the study conception and design. Embedded hardware preparation and data collection were performed by Evangelos Chatzikalymnios; analysis was performed by Evangelos Chatzikalymnios and Konstantinos Moustakas. The first draft of the manuscript was written by Evangelos Chatzikalymnios, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Conflicts of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chatzikalymnios, E., Moustakas, K. Landing Site Detection for Autonomous Rotor Wing UAVs Using Visual and Structural Information. J Intell Robot Syst 104, 27 (2022). https://doi.org/10.1007/s10846-021-01544-6