DOI: 10.1145/3605495.3605798
Research article · Open access

Foveated Walking: Translational Ego-Movement and Foveated Rendering

Published: 05 August 2023

Abstract

The demands of creating an immersive Virtual Reality (VR) experience often exceed the raw capabilities of graphics hardware. Perceptually driven techniques can reduce rendering costs by directing effort away from features that do not significantly affect the overall user experience, while maintaining high quality where it matters most. One such approach is foveated rendering, which reduces image quality in the peripheral region of the field of view, where lower visual acuity leaves users less able to resolve fine detail. Six-degrees-of-freedom (6DoF) tracking allows VR environments to be explored through different modalities, such as user-generated head or body movements. The effect of self-induced motion on rendering optimization has generally been overlooked and is not yet well understood. To explore this, we used Variable Rate Shading (VRS) to create a foveated rendering method triggered by the user's translational velocity, and studied different levels of shading Level-of-Detail (LOD). In a within-subjects design, we asked 10 participants to report whether they noticed a degradation in the rendering of a rich environment while performing active ego-movement or while being passively transported through the environment. We ran a psychophysical experiment using an accelerated stochastic approximation staircase method, varying the diameter and the LOD of the peripheral region. Our results show that self-induced walking can significantly improve the savings of foveated rendering, permitting a larger low-quality area in a foveated algorithm than in the passive condition. After fitting psychometric functions relating the percentage of correct responses to different shading rates in the two movement types, we also report the 75% detection threshold at which participants are able to detect such degradation. We argue these metrics can inform the future design of movement-dependent foveated techniques that reduce computational load and increase energy savings.
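The velocity-dependent foveation described above can be sketched as a mapping from translational head speed to a full-quality foveal diameter and a coarse peripheral shading rate. This is an illustrative sketch only: the function name, speed threshold, and diameter values are assumptions, not the parameters used in the study.

```python
def foveation_params(translational_speed, max_speed=1.4,
                     full_fovea_deg=30.0, min_fovea_deg=15.0):
    """Map translational head speed (m/s) to foveated-rendering settings.

    Faster self-motion permits a smaller full-quality centre (i.e. a
    larger degraded periphery) and a coarser peripheral shading rate.
    All numeric values here are illustrative assumptions.
    """
    # Normalised speed in [0, 1]; 1.4 m/s is roughly normal walking pace.
    t = min(max(translational_speed / max_speed, 0.0), 1.0)
    # Full-quality foveal diameter shrinks linearly with speed.
    fovea_deg = full_fovea_deg - t * (full_fovea_deg - min_fovea_deg)
    # Peripheral VRS shading rate coarsens in steps ("1x1" = full rate,
    # "4x4" = one shading sample per 4x4 pixel block).
    if t < 1.0 / 3.0:
        rate = "1x1"
    elif t < 2.0 / 3.0:
        rate = "2x2"
    else:
        rate = "4x4"
    return fovea_deg, rate
```

In a real VRS integration the returned rate would select a shading-rate palette entry per framebuffer tile; here it simply illustrates the monotonic speed-to-degradation policy the abstract describes.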
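The accelerated stochastic approximation staircase mentioned in the abstract follows Kesten's rule (described in Treutwein's 1995 review of adaptive procedures): after an initial Robbins-Monro phase, the step size shrinks only when the response sequence reverses, so the track converges quickly on the stimulus level detected with the target probability (75% here). A minimal sketch, with assumed function and parameter names:

```python
def asa_staircase(respond, x0, c, target=0.75, n_trials=40):
    """Accelerated stochastic approximation staircase (Kesten's rule).

    respond(x) -> True if the observer detects the degradation at level x.
    x0 is the starting stimulus level, c the initial step-size constant.
    Returns the final level, an estimate of the `target` threshold.
    """
    x = x0
    shifts = 0    # number of response reversals so far
    prev = None   # previous binary response
    for n in range(1, n_trials + 1):
        z = 1.0 if respond(x) else 0.0
        if n <= 2:
            # Initial phase: plain Robbins-Monro step c/n.
            x = x - (c / n) * (z - target)
        else:
            if prev is not None and z != prev:
                shifts += 1   # step size shrinks only at reversals
            x = x - (c / (2 + shifts)) * (z - target)
        prev = z
    return x
```

With an asymmetric target such as 0.75 the up and down steps differ (factor 0.75 vs. 0.25), which is what places the track at the 75% point rather than at 50%.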



Published In

SAP '23: ACM Symposium on Applied Perception 2023
August 2023, 111 pages
ISBN: 9798400702525
DOI: 10.1145/3605495
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. foveated rendering
        2. motion
        3. psychophysics
        4. variable rate shading

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

Conference

SAP '23: ACM Symposium on Applied Perception 2023
August 5-6, 2023
Los Angeles, CA, USA

        Acceptance Rates

        Overall Acceptance Rate 43 of 94 submissions, 46%



Article Metrics

• 0 total citations
• 179 total downloads (179 in the last 12 months; 13 in the last 6 weeks)
Reflects downloads up to 10 Aug 2024
