
Perception-JND-driven path tracing for reducing sample budget

Original article · The Visual Computer

Abstract

Monte Carlo path tracing is widely used to generate realistic renderings in multimedia applications, but it often suffers from slow convergence and a heavy sampling budget. Insufficient path samples lead to noisy results, yet some of this noise is masked by textures and cannot be detected by the human visual system. The just noticeable difference (JND) quantifies this limitation as a full-reference perceptual threshold; in rendering, however, the reference image is unavailable, so a surrogate is required. This paper proposes a perception-JND-driven path tracing method for reducing the sampling budget. We test and verify surrogate JND thresholds derived from the current rendering result, then introduce a difference pooling module and a shading restart module to control perceptual convergence. To further improve accuracy, we develop a strategy for optimizing the sampling steps. Experiments show that the proposed method outperforms the state-of-the-art method at moderately low sampling levels, offering a lightweight and efficient way to reduce the sample budget while improving visual quality.
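To make the core idea concrete, the sketch below illustrates JND-driven adaptive stopping for a single pixel: samples are accumulated in small steps, and sampling stops once the change between successive estimates falls below a JND threshold computed from the current estimate itself, i.e., a surrogate for the unavailable reference. This is a minimal sketch, not the authors' implementation: the Chou–Li-style luminance-masking curve and the toy noisy estimator are assumptions for illustration, and the paper's actual pipeline additionally pools differences over neighborhoods and restarts shading to control perceptual convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def jnd_threshold(luma):
    """Background-luminance masking threshold (a Chou & Li style curve,
    assumed here for illustration; the paper's surrogate model may differ).
    `luma` is the current pixel estimate in [0, 1]."""
    l = np.clip(luma, 0.0, 1.0) * 255.0
    t = np.where(l <= 127.0,
                 17.0 * (1.0 - np.sqrt(l / 127.0)) + 3.0,
                 (3.0 / 128.0) * (l - 127.0) + 3.0)
    return t / 255.0

def sample_pixel(true_luma, n):
    """Toy stand-in for tracing n paths through a pixel: an unbiased but
    noisy estimate of the pixel's true luminance."""
    return rng.normal(true_luma, 0.25, size=n).mean()

def adaptive_spp(true_luma, step=8, max_spp=512):
    """Accumulate samples in fixed steps and stop as soon as the change
    between successive estimates drops below the JND of the estimate."""
    total, n, prev = 0.0, 0, None
    while n < max_spp:
        total += sample_pixel(true_luma, step) * step  # add the batch sum
        n += step
        cur = total / n
        if prev is not None and abs(cur - prev) < jnd_threshold(cur):
            break  # further refinement would be perceptually invisible
        prev = cur
    return cur, n

estimate, spent = adaptive_spp(true_luma=0.6)
print(f"estimate = {estimate:.3f} after {spent} spp")
```

In a real path tracer, `sample_pixel` would trace `step` paths through the pixel, and a pooled, texture-masked JND over a pixel neighborhood would replace the pointwise threshold used here.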




Data availability

The related data and materials will be made available upon reasonable request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. U19A2063 and by the Jilin Provincial Development Program of Science and Technology under Grant 20230201080GX.

Author information


Corresponding author

Correspondence to Chunyi Chen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shen, Z., Chen, C., Zhang, R. et al. Perception-JND-driven path tracing for reducing sample budget. Vis Comput 40, 7651–7665 (2024). https://doi.org/10.1007/s00371-023-03199-w
