
A Semi-Supervised Underexposed Image Enhancement Network With Supervised Context Attention and Multi-Exposure Fusion

Published: 22 May 2023, in IEEE Transactions on Multimedia, vol. 26, 2024 (IEEE Press)

Abstract

Recently, image enhancement approaches have made impressive progress. However, most methods are still based on supervised learning, which requires large amounts of paired data. Meanwhile, owing to the complex illumination conditions of real-world scenes, methods trained on synthetic images cannot restore details in extremely dark or bright areas and produce exposure errors. Traditional losses that treat all pixels equally during training also yield blurry edges in the results. To address these problems, in this article we present an effective semi-supervised framework for severely underexposed image enhancement. Our network consists of a supervised branch and an unsupervised branch that share weights, making full use of limited paired data and abundant unpaired data. Meanwhile, a multi-exposure fusion module is designed to adaptively fuse the corrected images, addressing the low contrast and color bias that occur in some extreme situations. Moreover, we propose a supervised context attention module that uses edge information as supervision to better recover fine image details. Extensive experiments show that the proposed method outperforms state-of-the-art approaches in enhancing underexposed images.
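
The abstract describes the architecture only at a high level. As a minimal sketch of the weight-sharing idea, assuming a PyTorch implementation (the names `Enhancer`, `training_step`, `sup_loss`, and `unsup_loss` are hypothetical illustrations, not the authors' code), the key point is that both branches call the same model instance, so scarce paired data and abundant unpaired data jointly update one set of parameters:

```python
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Toy stand-in for the enhancement backbone (hypothetical;
    the paper's actual network is not specified in the abstract)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def training_step(model, paired, unpaired, sup_loss, unsup_loss):
    """One semi-supervised step: the SAME model instance processes both
    batches, so gradients from the supervised and unsupervised branches
    update a single, shared set of parameters."""
    low, ref = paired                           # underexposed input and its reference
    loss_sup = sup_loss(model(low), ref)        # supervised branch
    loss_unsup = unsup_loss(model(unpaired))    # unsupervised branch, shared weights
    return loss_sup + loss_unsup

# Example usage with placeholder tensors and losses. The stand-in
# unsupervised objective below (total variation) is only illustrative;
# the abstract does not specify the real one, and adversarial or
# no-reference quality losses are typical choices in this setting.
model = Enhancer()
paired = (torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
unpaired = torch.rand(4, 3, 64, 64)
l1 = nn.L1Loss()
tv = lambda y: (y[..., 1:, :] - y[..., :-1, :]).abs().mean() \
             + (y[..., :, 1:] - y[..., :, :-1]).abs().mean()
loss = training_step(model, paired, unpaired, l1, tv)
loss.backward()
```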

