
Kernel Foveated Rendering

Published: 25 July 2018

Abstract

Foveated rendering coupled with eye-tracking has the potential to dramatically accelerate interactive 3D graphics with minimal loss of perceptual detail. In this paper, we parameterize foveated rendering by embedding polynomial kernel functions in the classic log-polar mapping. Our GPU-driven technique uses closed-form, parameterized foveation that mimics the distribution of photoreceptors in the human retina. We present a simple two-pass kernel foveated rendering (KFR) pipeline that maps well onto modern GPUs. In the first pass, we compute the kernel log-polar transformation and render to a reduced-resolution buffer. In the second pass, we carry out the inverse-log-polar transformation with anti-aliasing to map the reduced-resolution rendering to the full-resolution screen. We have carried out pilot and formal user studies to empirically identify the KFR parameters. We observe a 2.8X -- 3.2X speedup in rendering on 4K UHD (2160p) displays with minimal perceptual loss of detail. The relevance of eye-tracking-guided kernel foveated rendering can only increase as the anticipated rise of display resolution makes it ever more difficult to resolve the mutually conflicting goals of interactive rendering and perceptual realism.
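
The two-pass pipeline sketched in the abstract can be illustrated with a small CPU-side example. The snippet below is an assumption-laden toy, not the paper's GPU implementation: it resamples an already-rendered frame instead of rendering scene geometry in the reduced buffer, substitutes a simple power kernel K(x) = x^alpha for the paper's general polynomial kernels, and uses nearest-neighbour lookup where the paper applies anti-aliasing. The function name kfr_passes and the parameters sigma (buffer-reduction ratio) and alpha are illustrative choices, not names from the paper.

```python
import numpy as np

def kfr_passes(image, fovea, sigma=2.0, alpha=4.0):
    """Toy two-pass KFR: (1) resample into a reduced kernel log-polar buffer,
    (2) inverse-map the buffer back to the full-resolution screen."""
    H, W = image.shape[:2]
    h, w = int(H / sigma), int(W / sigma)             # reduced-resolution buffer
    cx, cy = fovea                                     # gaze point (pixels)
    L = np.hypot(max(cx, W - cx), max(cy, H - cy))     # farthest distance from fovea

    # Pass 1: for each buffer texel (u, v), fetch the corresponding screen sample.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    r = L ** ((u / (w - 1)) ** alpha)                  # kernel-warped radius
    theta = 2.0 * np.pi * v / (h - 1)
    xs = np.clip(cx + r * np.cos(theta), 0, W - 1).astype(int)
    ys = np.clip(cy + r * np.sin(theta), 0, H - 1).astype(int)
    buf = image[ys, xs]                                # kernel log-polar buffer

    # Pass 2: for each screen pixel, inverse-map into the buffer.
    x, y = np.meshgrid(np.arange(W), np.arange(H))
    dx, dy = x - cx, y - cy
    r_scr = np.maximum(np.hypot(dx, dy), 1e-6)
    t = np.clip(np.log(r_scr) / np.log(L), 0.0, 1.0)   # normalized log radius
    us = np.clip((t ** (1.0 / alpha)) * (w - 1), 0, w - 1).astype(int)
    vs = np.clip((np.arctan2(dy, dx) % (2.0 * np.pi))
                 / (2.0 * np.pi) * (h - 1), 0, h - 1).astype(int)
    return buf[vs, us]                                 # nearest-neighbour lookup;
                                                       # the paper anti-aliases here
```

For example, for a 1920x1080 frame with the gaze at the screen centre, kfr_passes(frame, fovea=(960, 540), sigma=2.0) reconstructs a full-resolution image from a buffer holding roughly sigma^2 = 4x fewer pixels, with detail concentrated near the fovea.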


Information

Published In

Proceedings of the ACM on Computer Graphics and Interactive Techniques  Volume 1, Issue 1
July 2018
378 pages
EISSN:2577-6193
DOI:10.1145/3242771
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 July 2018
Published in PACMCGIT Volume 1, Issue 1


Author Tags

  1. eye-tracking
  2. foveated rendering
  3. head-mounted displays
  4. log-polar mapping
  5. perception
  6. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months): 473
  • Downloads (Last 6 weeks): 42
Reflects downloads up to 25 Jan 2025

Cited By

  • (2024) Towards Motion Metamers for Foveated Rendering. ACM Transactions on Graphics 43(4), 1-10. DOI: 10.1145/3658141. Online publication date: 19-Jul-2024.
  • (2024) Theia: Gaze-driven and Perception-aware Volumetric Content Delivery for Mixed Reality Headsets. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, 70-84. DOI: 10.1145/3643832.3661858. Online publication date: 3-Jun-2024.
  • (2024) Fovea Prediction Model in VR. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 867-868. DOI: 10.1109/VRW62533.2024.00230. Online publication date: 16-Mar-2024.
  • (2024) Retinotopic Foveated Rendering. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 903-912. DOI: 10.1109/VR58804.2024.00109. Online publication date: 16-Mar-2024.
  • (2024) Foveated Fluid Animation in Virtual Reality. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 535-545. DOI: 10.1109/VR58804.2024.00074. Online publication date: 16-Mar-2024.
  • (2024) VPRF: Visual Perceptual Radiance Fields for Foveated Image Synthesis. IEEE Transactions on Visualization and Computer Graphics 30(11), 7183-7192. DOI: 10.1109/TVCG.2024.3456184. Online publication date: 11-Sep-2024.
  • (2024) Scene-aware Foveated Rendering. IEEE Transactions on Visualization and Computer Graphics 30(11), 7097-7106. DOI: 10.1109/TVCG.2024.3456157. Online publication date: 10-Sep-2024.
  • (2024) Scene-content-sensitive real-time adaptive foveated rendering. Journal of the Society for Information Display 32(10), 703-715. DOI: 10.1002/jsid.1346. Online publication date: 14-Jul-2024.
  • (2024) Neural foveated super-resolution for real-time VR rendering. Computer Animation and Virtual Worlds 35(4). DOI: 10.1002/cav.2287. Online publication date: 11-Jul-2024.
  • (2023) Compact near-eye display for firefighter's self-contained breathing apparatus. ETRI Journal 45(6), 1046-1055. DOI: 10.4218/etrij.2023-0067. Online publication date: 7-Nov-2023.
  • Show More Cited By
