A Low-cost Efficient Approach to Synchronize Real-world and Virtual-world Objects in VR via In-built Cameras

Published: 13 January 2023

    Abstract

    Virtual reality (VR) technology has become a growing force in entertainment, education, science, and manufacturing because it can provide users with immersive experiences and natural interaction. Although common input devices such as controllers, gamepads, and trackpads are integrated into mainstream VR systems for user-content interaction, they cannot provide realistic haptic feedback. Some prior work tracks physical objects and maps them into the virtual space so that users can interact with them directly, which improves the sense of reality in the virtual environment; however, most such systems rely on additional hardware sensors, which inevitably increases cost. In this research, we propose a lightweight approach that synchronizes the positions and motions of physical and digital objects without any extra hardware cost. We use real-time video captured by the in-built cameras of a VR headset and employ feature-point-based algorithms to generate projections of the physical objects in the virtual world. Because our approach relies only on components already available in a VR headset, users can interact with target objects directly with their hands, without the specially designed trackers, markers, or other hardware devices used in previous work. As a result, users receive more realistic operational feedback when interacting with the corresponding virtual objects.
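The paper's actual implementation is not reproduced on this page. Purely as an illustration of the feature-point matching and motion-estimation idea the abstract describes, the following is a minimal NumPy sketch on toy data; the function names, the brute-force matcher, and the median-displacement estimator are all hypothetical stand-ins, not the authors' method:

```python
import numpy as np

def match_features(desc_a, desc_b):
    """Brute-force nearest-neighbour matching of feature descriptors.

    desc_a, desc_b: (N, D) arrays of per-keypoint descriptors.
    Returns, for each row of desc_a, the index of its closest row in desc_b.
    """
    # Pairwise Euclidean distances between every descriptor pair.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def estimate_translation(pts_a, pts_b, matches):
    """Estimate the 2D image-space shift of a tracked object as the
    median displacement of matched keypoints (robust to outliers)."""
    disp = pts_b[matches] - pts_a
    return np.median(disp, axis=0)

# Toy data: the same 4 keypoints, shifted by (5, -2) pixels in the next frame.
pts_a = np.array([[10.0, 10.0], [20.0, 15.0], [30.0, 40.0], [12.0, 33.0]])
pts_b = pts_a + np.array([5.0, -2.0])
desc = np.random.default_rng(0).normal(size=(4, 32))

m = match_features(desc, desc)           # identical descriptors match 1:1
shift = estimate_translation(pts_a, pts_b, m)
print(shift)                             # → [ 5. -2.]
```

In a real headset pipeline this per-frame shift would drive the pose of the virtual proxy object; a production system would instead use a tested detector/matcher (e.g. ORB with a ratio test) and a full pose estimator rather than a 2D median shift.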


    Published In

    VRCAI '22: Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
    December 2022
    284 pages
    ISBN:9798400700316
    DOI:10.1145/3574131
    • Editors:
    • Enhua Wu,
    • Lionel Ming-Shuan Ni,
    • Zhigeng Pan,
    • Daniel Thalmann,
    • Ping Li,
    • Charlie C.L. Wang,
    • Lei Zhu,
    • Minghao Yang

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Augmented Reality
    2. Object Synchronization
    3. Virtual Reality

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    VRCAI '22

    Acceptance Rates

    Overall Acceptance Rate 51 of 107 submissions, 48%

