DOI: 10.1145/3579895.3579910 · Research article

Realistic Nighttime Haze Image Generation with Glow Effect

Published: 04 April 2023

Abstract

Haze rendering aims to generate realistic nighttime haze images from clear images. The results can be applied in practical settings such as nighttime image dehazing algorithms, game scene rendering, and camera filters. We investigate two smaller yet challenging problems in nighttime haze rendering: 1) how to accurately estimate the transmission map and air light from clear and hazy images, respectively, in the absence of paired datasets; and 2) how to render a defining characteristic of real nighttime haze images, the glow effect. To this end, we propose an unsupervised nighttime haze image rendering method called NHRM (Nighttime Haze Rendering Module). Specifically, NHRM consists of three modules: 1) an illumination retention module based on Retinex theory; 2) a transmission map estimation network using multi-scale feature fusion and an attention mechanism; and 3) a glow generation module using tone mapping and the convolution bloom technique. Compared with existing haze generation methods, NHRM can render realistic and controllable nighttime haze images by simulating the main features of nighttime haze scenes in an unsupervised manner. Extensive experimental results show that our method outperforms existing rendering methods in both visual quality and quantitative metrics.
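The abstract's haze model rests on the standard atmospheric scattering formulation, I = J·t + A·(1 − t), where J is the clear image, t the transmission map, and A the air light; its glow module uses convolution bloom (threshold bright regions, blur, add back). The sketch below is a minimal illustrative NumPy rendition of those two ingredients only, not the authors' implementation; all function names, thresholds, and parameter values (`threshold=0.8`, `sigma=3.0`, `strength=0.6`) are assumptions for demonstration.

```python
import numpy as np

def render_haze(clear, transmission, airlight):
    """Atmospheric scattering model: I = J * t + A * (1 - t).

    clear: HxWx3 image in [0, 1]; transmission: HxWx1 map in [0, 1];
    airlight: scalar or broadcastable array.
    """
    return clear * transmission + airlight * (1.0 - transmission)

def _gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def _blur(img, sigma):
    """Separable Gaussian blur along height and width, per channel."""
    k = _gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)
    return out

def add_glow(img, threshold=0.8, sigma=3.0, strength=0.6):
    """Convolution bloom: isolate bright pixels, blur them, add back."""
    bright = np.clip(img - threshold, 0.0, None)   # keep only highlights
    return np.clip(img + strength * _blur(bright, sigma), 0.0, 1.0)

# Example: hazify a mid-gray image, then spread glow around its highlights.
clear = np.full((64, 64, 3), 0.5)
t = np.full((64, 64, 1), 0.4)          # lower t = denser haze
hazy = render_haze(clear, t, airlight=0.9)
glowing = add_glow(hazy)
```

In NHRM the transmission map and air light come from learned networks rather than constants, and the bloom is combined with tone mapping; this sketch only makes the two underlying image-formation steps concrete.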


Published In

ICNCC '22: Proceedings of the 2022 11th International Conference on Networks, Communication and Computing
December 2022
365 pages
ISBN:9781450398039
DOI:10.1145/3579895

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Atmospheric Scattering Model
  2. Glow Effect
  3. Image Hazing
  4. Image Rendering

Qualifiers

  • Research-article
  • Research
  • Refereed limited

