Latency Requirements for Foveated Rendering in Virtual Reality

Published: 14 September 2017

Abstract

Foveated rendering is a performance optimization based on the well-known degradation of peripheral visual acuity. It reduces computational costs by showing a high-quality image in the user’s central (foveal) vision and a lower-quality image in the periphery. Foveated rendering is a promising optimization for Virtual Reality (VR) graphics, but it generally requires accurate, low-latency eye tracking to ensure correctness even when a user makes large, fast eye movements such as saccades. Due to the phenomenon of saccadic omission, however, these requirements may be relaxed.
In this article, we explore the effect of eye-tracking latency on foveated rendering in VR applications. We evaluated the detectability of visual artifacts for three techniques capable of generating foveated images and for three radii of the high-quality foveal region. Our results show that larger foveal regions allow more aggressive foveation, and that this effect is more pronounced for temporally stable foveation techniques. Added eye-tracking latency of 80--150 ms causes a significant reduction in the acceptable amount of foveation, but no similar decrease was found for shorter added latencies of 20--40 ms, suggesting that a total system latency of 50--70 ms could be tolerated.
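To make the abstract's two ideas concrete, here is a minimal Python sketch of gaze-contingent quality selection and a total-latency budget check. The discrete quality levels, the foveal and mid-region radii, and the 60 ms budget are illustrative assumptions, not the paper's measured parameters; the study itself evaluates three foveation techniques and three foveal radii psychophysically.

    import math

    # Illustrative (assumed) radii in degrees of visual angle; the paper
    # evaluates three foveal radii rather than these fixed values.
    FOVEAL_RADIUS_DEG = 10.0   # full-quality central region
    MID_RADIUS_DEG = 20.0      # transition band before coarsest shading

    def quality_level(pixel_deg, gaze_deg):
        """Map a pixel's angular distance from the tracked gaze point
        to a shading-quality level (0 = full quality, 2 = coarsest)."""
        ecc = math.hypot(pixel_deg[0] - gaze_deg[0],
                         pixel_deg[1] - gaze_deg[1])
        if ecc <= FOVEAL_RADIUS_DEG:
            return 0
        if ecc <= MID_RADIUS_DEG:
            return 1
        return 2

    def within_latency_budget(tracking_ms, render_ms, display_ms,
                              budget_ms=60.0):
        """Check summed system latency against the 50--70 ms tolerance the
        abstract suggests (60 ms is used here as an assumed midpoint)."""
        return tracking_ms + render_ms + display_ms <= budget_ms

    # Example: a pixel 15 degrees from gaze falls in the transition band,
    # and a 30 + 11 + 11 = 52 ms pipeline stays inside the budget.
    print(quality_level((15.0, 0.0), (0.0, 0.0)))   # -> 1
    print(within_latency_budget(30.0, 11.0, 11.0))  # -> True

In a real renderer the quality level would drive something like per-region shading rate or render-target resolution; the latency check captures the paper's practical takeaway that foveation must be relaxed (the foveal region widened) when the end-to-end gaze-to-photon latency exceeds the tolerable budget.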

Supplementary Material

albert (albert.zip)
Supplemental movie, appendix, image, and software files for "Latency Requirements for Foveated Rendering in Virtual Reality"




      Information

      Published In

      ACM Transactions on Applied Perception, Volume 14, Issue 4
      Special Issue SAP 2017
      October 2017
      63 pages
      ISSN: 1544-3558
      EISSN: 1544-3965
      DOI: 10.1145/3140462
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 14 September 2017
      Accepted: 01 July 2017
      Received: 01 July 2017
      Published in TAP Volume 14, Issue 4

      Permissions

      Request permissions for this article.

      Author Tags

      1. Foveated rendering
      2. eye-tracking
      3. latency

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Funding Sources

      • NVIDIA Research

      Article Metrics

      • Downloads (last 12 months): 252
      • Downloads (last 6 weeks): 28
      Reflects downloads up to 10 Oct 2024


      Cited By

      • (2024) Foveated Path Culling: A mixed path tracing and radiance field approach for optimizing rendering in XR Displays. Journal on Interactive Systems 15:1, 576-590. DOI: 10.5753/jis.2024.4352. Online publication date: 18-Jun-2024.
      • (2024) Assessing the data quality of AdHawk MindLink eye-tracking glasses. Behavior Research Methods 56:6, 5771-5787. DOI: 10.3758/s13428-023-02310-2. Online publication date: 2-Jan-2024.
      • (2024) Visualization of Turbulent Events via Virtual/Augmented Reality. Journal of Flow Visualization and Image Processing 31:1, 1-22. DOI: 10.1615/JFlowVisImageProc.2023047640. Online publication date: 2024.
      • (2024) Sense of agency at a temporally-delayed gaze-contingent display. PLOS ONE 19:9, e0309998. DOI: 10.1371/journal.pone.0309998. Online publication date: 6-Sep-2024.
      • (2024) Analysing Hybrid Neural and Ray Tracing Perception for Foveated Rendering. Proceedings of the 26th Symposium on Virtual and Augmented Reality, 21-30. DOI: 10.1145/3691573.3691580. Online publication date: 30-Sep-2024.
      • (2024) Towards Motion Metamers for Foveated Rendering. ACM Transactions on Graphics 43:4, 1-10. DOI: 10.1145/3658141. Online publication date: 19-Jul-2024.
      • (2024) Theia: Gaze-driven and Perception-aware Volumetric Content Delivery for Mixed Reality Headsets. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, 70-84. DOI: 10.1145/3643832.3661858. Online publication date: 3-Jun-2024.
      • (2024) Optimizing spatial resolution in head-mounted displays: evaluating characteristics of peripheral visual field. Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology, 1-7. DOI: 10.1145/3641825.3687740. Online publication date: 9-Oct-2024.
      • (2024) Accelerating Saccadic Response through Spatial and Temporal Cross-Modal Misalignments. ACM SIGGRAPH 2024 Conference Papers, 1-12. DOI: 10.1145/3641519.3657432. Online publication date: 13-Jul-2024.
      • (2024) Saccade-Contingent Rendering. ACM SIGGRAPH 2024 Conference Papers, 1-9. DOI: 10.1145/3641519.3657420. Online publication date: 13-Jul-2024.
