Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement

Published: 10 October 2022

Abstract

By dispensing with the need to fit paired training data, recent unsupervised low-light enhancement methods excel at adjusting the illumination and contrast of images. However, because these methods lack supervision on detailed signals, the unresolved problem of noise suppression largely impedes their deployment in real-world applications. Herein, we propose a novel Cycle-Interactive Generative Adversarial Network (CIGAN) for unsupervised low-light image enhancement, which not only better transfers illumination distributions between low-light and normal-light images but also manipulates detailed signals between the two domains, e.g., suppressing/synthesizing realistic noise in the cyclic enhancement/degradation process. In particular, the proposed low-light guided transformation feeds forward the features of low-light images from the generator of the enhancement GAN (eGAN) into the generator of the degradation GAN (dGAN). With the information learned from real low-light images, dGAN can synthesize more realistic and diverse illumination and contrast in low-light images. Moreover, the feature randomized perturbation module in dGAN learns to increase feature randomness and produce diverse feature distributions, encouraging the synthesized low-light images to contain realistic noise. Extensive experiments demonstrate both the superiority of the proposed method and the effectiveness of each module in CIGAN.
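
The following is a minimal, hypothetical PyTorch sketch of the cycle interaction described in the abstract, not the authors' released code: all module and function names (EnhanceGenerator, DegradeGenerator, FeatureRandomizedPerturbation, conv_block) are illustrative assumptions. It only shows the data flow in which dGAN receives guidance features from eGAN's encoder (the low-light guided transformation) and perturbs its own features with noise (the feature randomized perturbation) so the synthesized low-light images carry realistic noise.

```python
# Hypothetical sketch of the CIGAN cycle interaction (not the authors' code).
# eGAN enhances low-light -> normal-light; dGAN degrades normal-light -> low-light.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2))

class EnhanceGenerator(nn.Module):               # eGAN generator (illustrative)
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = conv_block(3, ch)
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, low):
        feat = self.encoder(low)                 # features of the real low-light image
        enhanced = torch.sigmoid(self.decoder(feat))
        return enhanced, feat                    # feat is reused as guidance for dGAN

class FeatureRandomizedPerturbation(nn.Module):  # FRP module (illustrative)
    def __init__(self, ch=32):
        super().__init__()
        self.to_scale = nn.Conv2d(ch, ch, 1)     # learns how strongly to perturb each channel

    def forward(self, feat):
        noise = torch.randn_like(feat)           # random perturbation source
        return feat + self.to_scale(feat) * noise

class DegradeGenerator(nn.Module):               # dGAN generator (illustrative)
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = conv_block(3, ch)
        self.fuse = conv_block(2 * ch, ch)       # low-light guided transformation: fuse guidance
        self.frp = FeatureRandomizedPerturbation(ch)
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, normal, guidance_feat):
        feat = self.encoder(normal)
        feat = self.fuse(torch.cat([feat, guidance_feat], dim=1))  # inject real low-light statistics
        feat = self.frp(feat)                                      # diversify features -> realistic noise
        return torch.sigmoid(self.decoder(feat))

# One cyclic pass: low -> enhanced -> re-degraded low-light image.
e_gen, d_gen = EnhanceGenerator(), DegradeGenerator()
low = torch.rand(1, 3, 128, 128)                 # unpaired real low-light image
enhanced, guidance = e_gen(low)
fake_low = d_gen(enhanced, guidance)             # synthesized low-light with realistic noise
```

In the full method the two GANs would also be trained with adversarial and cycle-consistency objectives; this sketch omits the discriminators and losses and only illustrates the feature hand-off between the two generators.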

Supplementary Material

MP4 File (MM22-fp1063.mp4)



    Published In

    MM '22: Proceedings of the 30th ACM International Conference on Multimedia
    October 2022
    7537 pages
    ISBN:9781450392037
    DOI:10.1145/3503161
    © 2022 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. generative adversarial network (GAN)
    2. low-light image enhancement
    3. quality attention module

    Qualifiers

    • Research-article


    Conference

    MM '22

    Acceptance Rates

    Overall Acceptance Rate 995 of 4,171 submissions, 24%



    Cited By

    • (2024) Joint Luminance Adjustment and Color Correction for Low-Light Image Enhancement Network. Applied Sciences, Vol. 14, 14 (6320). DOI: 10.3390/app14146320. Online publication date: 19-Jul-2024
    • (2024) CodedBGT: Code Bank-Guided Transformer for Low-Light Image Enhancement. IEEE Transactions on Multimedia, Vol. 26, 9880-9891. DOI: 10.1109/TMM.2024.3400668. Online publication date: 2024
    • (2024) Glow in the Dark: Low-Light Image Enhancement With External Memory. IEEE Transactions on Multimedia, Vol. 26, 2148-2163. DOI: 10.1109/TMM.2023.3293736. Online publication date: 1-Jan-2024
    • (2024) Learning Depth-Density Priors for Fourier-Based Unpaired Image Restoration. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, 4, 2604-2618. DOI: 10.1109/TCSVT.2023.3305996. Online publication date: Apr-2024
    • (2024) Joint Image and Feature Enhancement for Object Detection under Adverse Weather Conditions. 2024 International Joint Conference on Neural Networks (IJCNN), 1-8. DOI: 10.1109/IJCNN60899.2024.10650989. Online publication date: 30-Jun-2024
    • (2024) Misalignment-Robust Frequency Distribution Loss for Image Transformation. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2910-2919. DOI: 10.1109/CVPR52733.2024.00281. Online publication date: 16-Jun-2024
    • (2024) Enhancement of Mine Images Based on HSV Color Space. IEEE Access, Vol. 12, 72170-72186. DOI: 10.1109/ACCESS.2024.3403452. Online publication date: 2024
    • (2024) A novel low light object detection method based on the YOLOv5 fusion feature enhancement. Scientific Reports, Vol. 14, 1. DOI: 10.1038/s41598-024-54428-8. Online publication date: 23-Feb-2024
    • (2024) CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement. International Journal of Computer Vision, Vol. 132, 9, 3610-3632. DOI: 10.1007/s11263-024-02065-z. Online publication date: 8-Apr-2024
    • (2023) Attention-Guided Neural Networks for Full-Reference and No-Reference Audio-Visual Quality Assessment. IEEE Transactions on Image Processing, Vol. 32, 1882-1896. DOI: 10.1109/TIP.2023.3251695. Online publication date: 1-Jan-2023
