
Forest fog rendering using generative adversarial networks

  • Original Article
  • Published in The Visual Computer

Abstract

Producing a physically realistic rendering of forests in fog requires simulating light diffusion in participating media. This volumetric phenomenon increases the realism of natural scenes in several fields, such as video games and flight simulators. However, in a natural scene the effect is characterized by complexity at all scales, which makes it a challenge for the computer graphics community. In this paper, we propose a novel approach based on a generative adversarial network to estimate fog in forest scenes. Our approach operates in two steps. The first step trains a generative adversarial network on a large dataset of synthetic images. The network takes four images as input (a normal map, a depth map, an albedo map, and a fog-free RGB map) and outputs an estimate of the forest rendering with fog; a reference image conditions the inputs to produce a better approximation. The second step uses the trained network to produce realistic images with fog. Our technique generalizes to high-frequency lighting effects (specularity and shadow) and requires no computation in 3D space.
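
To make the two-step pipeline concrete, the following is a minimal sketch of the training step described above, assuming a pix2pix-style conditional GAN implemented in PyTorch. The channel counts (RGB normal and albedo maps, a single-channel depth map, a fog-free RGB map, for 10 input channels), the layer sizes, the PatchGAN-style discriminator, and the L1 weight `lam` are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of the paper's conditional-GAN training step.
# Architecture details are assumptions; only the input/output maps
# (normal, depth, albedo, fog-free RGB -> foggy rendering) follow the abstract.
import torch
import torch.nn as nn

class FogGenerator(nn.Module):
    """Encoder-decoder mapping the stacked input maps
    (normal 3 + depth 1 + albedo 3 + fog-free RGB 3 = 10 channels,
    an assumed layout) to a foggy rendering."""
    def __init__(self, in_ch=10, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(feat * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1),
            nn.BatchNorm2d(feat),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # foggy RGB in [-1, 1]
        )

    def forward(self, maps):  # maps: (B, 10, H, W)
        return self.net(maps)

class FogDiscriminator(nn.Module):
    """PatchGAN-style critic on (input maps, candidate fog image) pairs,
    so the reference image conditions training as the abstract states."""
    def __init__(self, in_ch=13, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 1, 4, stride=1, padding=1),  # per-patch scores
        )

    def forward(self, maps, fog):
        return self.net(torch.cat([maps, fog], dim=1))

def train_step(G, D, opt_g, opt_d, maps, reference, lam=100.0):
    """One conditional-GAN update: D learns real vs. generated fog,
    G is pushed toward the reference image (adversarial + L1 loss)."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(maps)

    # Discriminator update on real (reference) and detached fake images.
    opt_d.zero_grad()
    d_real = D(maps, reference)
    d_fake = D(maps, fake.detach())
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool D and stay close to the reference image.
    opt_g.zero_grad()
    d_fake = D(maps, fake)
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + lam * nn.functional.l1_loss(fake, reference))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    # Random tensors stand in for a real synthetic-forest training pair.
    G, D = FogGenerator(), FogDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    maps = torch.randn(1, 10, 64, 64)      # stacked input maps
    reference = torch.randn(1, 3, 64, 64)  # ground-truth foggy rendering
    print(train_step(G, D, opt_g, opt_d, maps, reference))
```

After training, the second step of the pipeline would amount to a single forward pass of the generator on new input maps; no 3D computation is involved, consistent with the abstract's claim.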

Author information

Corresponding author

Correspondence to Fayçal Abbas.

Ethics declarations

Conflict of interest

We have no conflicts of interest to declare and no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Abbas, F., Babahenini, M.C. Forest fog rendering using generative adversarial networks. Vis Comput 39, 943–952 (2023). https://doi.org/10.1007/s00371-021-02376-z
