A Fast Spatial Clustering Method for Sparse LiDAR Point Clouds Using GPU Programming
Abstract
1. Introduction
- The ER-CCL algorithm, with its flexible search range, is well suited to sparse and unevenly distributed LiDAR point clouds. To improve processing speed, GPU programming is used to execute the ER-CCL algorithm in parallel across cells.
- To address the problem of separating connected obstacles, the proposed clustering method adopts height information as a reference feature in the ER-CCL algorithm to decide whether adjacent cells belong to the same obstacle.
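The height-based merge test described above can be sketched as a single predicate. This is a minimal illustration only: the threshold value and the exact comparison criterion are assumptions, not taken from the paper.

```python
def same_obstacle(height_a, height_b, height_thresh=0.5):
    """Treat two adjacent occupied cells as parts of one obstacle when their
    representative heights differ by no more than a (hypothetical) threshold."""
    return abs(height_a - height_b) <= height_thresh
```

In the GPU version, each thread would evaluate this test for its own cell against the cells inside the search range.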
2. Related Works
3. Fast Spatial Clustering Method
3.1. Overview of Fast Spatial Clustering System
3.2. Obstacle Flag Map
3.3. ER-CCL Algorithm
3.4. GPU-Based Fast Spatial Clustering System
Algorithm 1: GPU-based ER-CCL Algorithm

    Input: point; search range r; B: block count; G: grid count
    Memcpy(cuda_point, cpu_point, HostToDevice)
    Cuda_Kernel_GroundHeightCompute<<<B, G>>>(int_height, cuda_point)
    Cuda_Kernel_FlagMapGenerate<<<B, G>>>(cuda_flag_map, cuda_point, int_height)
    Cuda_Kernel_Label_Initialization<<<B, G>>>(cuda_label_map, cuda_flag_map)
    while (cuda_label_map is changed)
        Cuda_Kernel_Label_Updating<<<B, G>>>(cuda_label_map)
    Cuda_Kernel_Inverse_Mapping<<<B, G>>>(cuda_point_label, cuda_label_map, cuda_point)
    Memcpy(cpu_point_label, cuda_point_label, DeviceToHost)
    Return: point_label
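The kernels in Algorithm 1 each run once per grid cell. As a way to see what the label-updating loop computes, here is a sequential CPU sketch of the core min-label propagation over a 2D flag map with search range r; function and variable names are illustrative, not the authors' code, and the convergence loop mirrors the `while (cuda_label_map is changed)` step.

```python
import numpy as np

def er_ccl(flag_map, r=1):
    """Sequential sketch of ER-CCL: every occupied cell repeatedly adopts the
    smallest label inside its (2r+1) x (2r+1) search window until no label
    changes, so each connected cluster converges to one label."""
    h, w = flag_map.shape
    # Label initialization: each occupied cell starts with its own unique index.
    label = np.where(flag_map, np.arange(h * w).reshape(h, w), -1)
    changed = True
    while changed:  # label updating, iterated to convergence
        changed = False
        for i in range(h):
            for j in range(w):
                if not flag_map[i, j]:
                    continue
                for di in range(-r, r + 1):
                    for dj in range(-r, r + 1):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and flag_map[ni, nj]:
                            if label[ni, nj] < label[i, j]:
                                label[i, j] = label[ni, nj]
                                changed = True
    return label
```

On the GPU, the two inner loops over cells become one thread per cell, which is why the flexible search range r directly controls how quickly labels propagate across sparse gaps.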
4. Experiments and Analysis
4.1. Dataset and Experiment Platform Introduction
4.2. Intermediate Experiment Result
4.3. Time Comparison under Different Parameters
4.4. Obstacle Clustering Results under Different Scenes
4.5. Obstacle Clustering Result under Different Methods
4.6. Data Independency Solution of Labeling Process
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Scene No. | Cluster Count | Iteration Count | Time (ms) |
---|---|---|---|
Scene 1 (T junction) | 301 | 50 | 39.85 |
Scene 2 (road) | 528 | 79 | 52.16 |
Scene 3 (square) | 493 | 39 | 40.14 |
Scene 4 (multi-trees) | 682 | 46 | 46.58 |
Scene 5 (multi-persons) | 474 | 46 | 40.02 |
Scene 6 (crossroad) | 497 | 30 | 28.89 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tian, Y.; Song, W.; Chen, L.; Sung, Y.; Kwak, J.; Sun, S. A Fast Spatial Clustering Method for Sparse LiDAR Point Clouds Using GPU Programming. Sensors 2020, 20, 2309. https://doi.org/10.3390/s20082309