Abstract
This paper presents a challenging panoramic vision and LiDAR dataset collected by an autonomous vehicle on the Chungbuk National University campus to facilitate robotics research. The vehicle is equipped with a Point Grey Ladybug3 camera, a 3D LiDAR, a global positioning system (GPS) receiver, and an inertial measurement unit (IMU). The data were collected while driving in an outdoor environment that includes varied scenes such as a parking lot, a semi-off-road path, and campus roads with traffic. The data from all sensors mounted on the vehicle are temporally registered and synchronized. The dataset includes point clouds from the 3D LiDAR, images, and GPS and IMU measurements. The vision data comprise high-resolution fisheye images from the individual Ladybug3 cameras, together covering a 360° field of view, as well as accurately stitched spherical panoramic images. The availability of multiple fisheye and accurate panoramic images supports the development and validation of novel multi-fisheye, panoramic, and 3D LiDAR based simultaneous localization and mapping (SLAM) systems. The dataset targets applications such as odometry, SLAM, loop closure detection, and deep-learning-based algorithms using visual, inertial, and LiDAR data, as well as the fusion of visual, inertial, and 3D information. High-accuracy RTK GPS measurements are provided as ground truth for testing and evaluating such algorithms.
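As one hedged illustration of how the RTK GPS ground truth might be used for evaluation, the Python sketch below associates an estimated trajectory with the ground-truth positions by nearest timestamp, rigidly aligns the two, and reports the absolute trajectory error (ATE). The file names, CSV layout, and helper functions (load_xyz, associate, ate_rmse) are illustrative assumptions and are not part of the released dataset or its tooling.

import numpy as np

def load_xyz(path):
    # Assumed CSV layout (not the dataset's actual format): one pose per row as
    # "timestamp,x,y,z,...".
    data = np.loadtxt(path, delimiter=",")
    return data[:, 0], data[:, 1:4]  # timestamps, positions

def associate(t_est, t_gt, max_dt=0.05):
    # Nearest-timestamp association; assumes t_gt is sorted in ascending order.
    idx = np.clip(np.searchsorted(t_gt, t_est), 1, len(t_gt) - 1)
    left_closer = (t_est - t_gt[idx - 1]) < (t_gt[idx] - t_est)
    idx = idx - left_closer.astype(int)
    keep = np.abs(t_gt[idx] - t_est) < max_dt
    return keep, idx[keep]

def ate_rmse(p_est, p_gt):
    # Rigid (rotation + translation) alignment of the estimate to the ground
    # truth via SVD (Kabsch/Umeyama without scale), then RMSE of the residuals.
    mu_e, mu_g = p_est.mean(axis=0), p_gt.mean(axis=0)
    U, _, Vt = np.linalg.svd((p_gt - mu_g).T @ (p_est - mu_e))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    err = p_gt - ((R @ (p_est - mu_e).T).T + mu_g)
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))

t_est, p_est = load_xyz("slam_trajectory.csv")    # hypothetical SLAM output file
t_gt, p_gt = load_xyz("rtk_gps_groundtruth.csv")  # hypothetical ground-truth file
keep, idx = associate(t_est, t_gt)
print("ATE RMSE [m]:", ate_rmse(p_est[keep], p_gt[idx]))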
Change history
20 June 2022
A Correction to this paper has been published: https://doi.org/10.1007/s11227-022-04640-y
Acknowledgements
This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (Project No. P0004631), and in part by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2021-2020-0-01462) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The original online version of this article was revised: During typesetting the e-mail addresses of the authors were interchanged.
About this article
Cite this article
Javed, Z., Kim, GW. PanoVILD: a challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping. J Supercomput 78, 8247–8267 (2022). https://doi.org/10.1007/s11227-021-04198-1