Virtual Reality Point Cloud Annotation

SUI '22 research article (Public Access)
DOI: 10.1145/3565970.3567696
Published: 01 December 2022

Abstract

This work presents a hybrid immersive headset- and desktop-based virtual reality (VR) visualization and annotation system for point clouds, oriented towards laser scans of plants. The system can paint regions or individual points in fine detail, using compute shaders to address performance limitations when working with large, dense point clouds. It can be driven either by an immersive VR headset with tracked controllers or by mouse and keyboard on a 2D monitor, with both interfaces sharing the same underlying rendering systems. A within-subjects user study (N=16) compared the two interfaces on annotation and counting tasks. Results showed a strong user preference for the immersive VR interface, likely driven by both perceived and actual significant differences in task performance. This was especially true for annotation tasks, where users could rapidly identify, reach, and paint over target regions, achieving high accuracy in minimal time; nevertheless, we found nuances in how users approached the tasks in the two systems.
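The abstract does not include code, but the painting interaction it describes, assigning a label to every point inside a spherical brush, can be sketched on the CPU as a simple radius test. This is a minimal illustration only: in the actual system the per-point test runs on the GPU via compute shaders, and the function and parameter names below are assumptions, not taken from the paper.

```python
import numpy as np

def paint_points(points, labels, brush_center, brush_radius, label):
    """Assign `label` to every point within `brush_radius` of `brush_center`.

    points: (N, 3) float array of point positions.
    labels: (N,) int array of per-point labels, modified in place.
    Returns the number of points painted by this brush stroke.
    """
    # Squared distances avoid a per-point sqrt, matching the kind of
    # per-point test a compute shader would perform in parallel.
    dist2 = np.sum((points - brush_center) ** 2, axis=1)
    mask = dist2 <= brush_radius ** 2
    labels[mask] = label
    return int(mask.sum())

# Example: paint a small cluster near the origin with label 1.
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [5.0, 5.0, 5.0]])
lab = np.zeros(len(pts), dtype=np.int32)
n = paint_points(pts, lab, np.array([0.0, 0.0, 0.0]), 0.5, label=1)
# n == 2; the distant point keeps label 0
```

The same mask-based update applies unchanged whether the brush is positioned by a tracked VR controller or by a mouse ray on a 2D monitor; only the source of `brush_center` differs between the two interfaces.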


Cited By

• (2023) Similarities and Differences between Immersive Virtual Reality, Real World, and Computer Screens: A Systematic Scoping Review in Human Behavior Studies. Multimodal Technologies and Interaction 7, 6 (27 May 2023), 56. DOI: 10.3390/mti7060056
• (2023) MeTACAST: Target- and Context-Aware Spatial Selection in VR. IEEE Transactions on Visualization and Computer Graphics 30, 1 (23 Oct 2023), 480–494. DOI: 10.1109/TVCG.2023.3326517
• (2023) Problems and Solutions of Point Cloud Mapping for VR and CAVE Environments for Data Visualization and Physics Simulation. 2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) (27 Sep 2023), 1–7. DOI: 10.1109/AIPR60534.2023.10490312

    Published In

SUI '22: Proceedings of the 2022 ACM Symposium on Spatial User Interaction
December 2022, 233 pages
ISBN: 9781450399487
DOI: 10.1145/3565970

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

• annotation
• point clouds
• virtual reality
• visualization

    Qualifiers

    • Research-article
    • Research
    • Refereed limited


Conference

SUI '22: Symposium on Spatial User Interaction
December 1–2, 2022
Online (CA, USA)

    Acceptance Rates

    Overall Acceptance Rate 86 of 279 submissions, 31%


