Semi-automated Cleaning of Laser Scanning Campaigns with Machine Learning

Published: 13 June 2019
Abstract

Terrestrial laser scanning campaigns provide an important means to document the 3D structure of historical sites. Unfortunately, the process of converting the 3D point clouds acquired by the laser scanner into a coherent and accurate 3D model has many stages and is not generally automated. In particular, the initial cleaning stage of the pipeline, in which undesired scene points are deleted, remains largely manual and is usually labour intensive. In this article, we introduce a semi-automated cleaning approach that incrementally trains a random forest (RF) classifier on an initial keep/discard point labelling generated by the user when cleaning the first scan(s). The classifier is then used to predict the labelling of the next scan in the sequence. Before this classification is presented to the user, a denoising post-process, based on the 2D range-map representation of the laser scan, is applied; this significantly reduces the number of small isolated point clusters that the user would otherwise have to fix. The user then selects the remaining incorrectly labelled points, and these are weighted, based on a confidence estimate, and fed back into the classifier to retrain it for the next scan. Our experiments, across eight scanning campaigns, show that when the scan campaign is coherent, i.e., it does not contain widely disparate or contradictory data, the classifier yields a keep/discard labelling whose accuracy typically ranges between 95% and 99%. This is somewhat surprising, given that the data in each class can represent many object types, such as trees, people, or walls, and that no effort beyond the keep/discard point labelling is required of the user. We conducted an informal timing experiment over a 15-scan campaign, comparing the processing time required by our software, excluding user interaction (point-label correction) time, against the time taken by an expert user to clean all scans completely. The expert required 95 minutes to complete all cleaning, an average of 6.3 minutes per scan. Even with current unoptimized code, our system generated keep/discard labels for all scans, with 98% average accuracy, in 75 minutes. This leaves as much as 20 minutes for the user input required to relabel the 2% of mispredicted points across the set of scans before the full system time would match the expert's cleaning time.
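To make the workflow concrete, the following is a minimal Python sketch of the train/predict/denoise/correct loop the abstract describes. It assumes scikit-learn's RandomForestClassifier as the RF and SciPy's ndimage for the range-map denoising; the feature extraction, the exact confidence-weighting scheme, and the helper names (extract_features, get_user_corrections, min_cluster) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the incremental cleaning loop described in the abstract.
# Assumptions (not from the paper): scikit-learn's RandomForestClassifier as
# the RF, SciPy's ndimage for the range-map denoising, and the helper names
# extract_features / get_user_corrections, which stand in for per-point
# feature extraction and the interactive relabelling step.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier


def denoise_range_map(labels_2d, min_cluster=50):
    """Flip small isolated keep(0)/discard(1) clusters on the 2D range-map grid.

    Connected components smaller than min_cluster pixels take the opposite
    label, mimicking the denoising post-process applied before the
    classification is shown to the user. min_cluster is an assumed threshold.
    """
    out = labels_2d.copy()
    for cls in (0, 1):
        comps, n = ndimage.label(out == cls)
        sizes = ndimage.sum(out == cls, comps, index=np.arange(1, n + 1))
        for comp_id in np.nonzero(sizes < min_cluster)[0] + 1:
            out[comps == comp_id] = 1 - cls
    return out


def clean_campaign(scans, extract_features, get_user_corrections):
    """Predict, denoise, collect user corrections, and retrain, scan by scan."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    X_parts, y_parts, w_parts = [], [], []

    for i, scan in enumerate(scans):                 # scan: (H, W) range map
        feats = extract_features(scan)               # (H*W, n_features), assumed
        if i == 0:
            pred = np.zeros(len(feats), dtype=int)   # first scan: fully user-labelled
            conf = np.ones(len(feats))
        else:
            pred = clf.predict(feats)
            conf = clf.predict_proba(feats).max(axis=1)
            pred = denoise_range_map(pred.reshape(scan.shape)).ravel()

        # The user fixes mispredicted points; corrections are up-weighted by
        # the classifier's confidence before being fed back into the forest.
        labels, fixed_mask = get_user_corrections(scan, pred)
        weights = np.ones(len(feats))
        weights[fixed_mask] = 1.0 + conf[fixed_mask]

        X_parts.append(feats); y_parts.append(labels); w_parts.append(weights)
        clf.fit(np.vstack(X_parts), np.concatenate(y_parts),
                sample_weight=np.concatenate(w_parts))
```

The paper retrains the classifier incrementally; refitting on the accumulated, weighted training set as above is a simpler stand-in for that behaviour under the stated assumptions.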

Supplementary Material

a16-marais-supp.pdf (marais.zip)
Supplemental movie, appendix, image, and software files for "Semi-automated Cleaning of Laser Scanning Campaigns with Machine Learning".


Cited By

(2021) Digitalization and 3D Documentation Techniques Applied to Two Pieces of Visigothic Sculptural Heritage in Merida Through Structured Light Scanning. Journal on Computing and Cultural Heritage 14, 4 (2021), 1-19. https://doi.org/10.1145/3427381. Online publication date: 20-Aug-2021.

        Published In

Journal on Computing and Cultural Heritage, Volume 12, Issue 3
        October 2019
        158 pages
        ISSN:1556-4673
        EISSN:1556-4711
        DOI:10.1145/3340676
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 13 June 2019
        Accepted: 01 November 2018
        Revised: 01 October 2018
        Received: 01 March 2018
        Published in JOCCH Volume 12, Issue 3


        Author Tags

        1. Heritage data
        2. cleaning
        3. full dome scan
        4. laser scan
        5. machine learning
        6. point clouds

        Qualifiers

        • Research-article
        • Research
        • Refereed

        Funding Sources

        • National Research Foundation (NRF) in South Africa
        • Ministry of Education, University and Research (MIUR)
        • SCIADRO project
        • Tuscany Region (Italy) under the Regional Implementation Programme for Underutilized Areas Fund
        • Research Facilitation Fund (FAR)
