
Discrete Haze Level Dehazing Network

Published: 12 October 2020

Abstract

In contrast to traditional dehazing methods, deep learning based single image dehazing (SID) algorithms achieve better performance by learning a mapping from hazy to haze-free images. Images captured in natural scenes usually have different haze levels, but deep SID algorithms process all hazy images as a single group, which makes it difficult for them to handle image sets in which some images have a specific haze density. In this paper, we propose the Discrete Haze Level Dehazing network (DHL-Dehaze), an effective method for dehazing images with multiple different haze levels. Instead of grouping all hazy images into the same domain, the proposed approach treats single image dehazing as a multi-domain image-to-image translation problem. DHL-Dehaze provides a computational derivation that describes the role of different haze levels in the image translation. To verify the proposed approach, we synthesize two large-scale datasets with multiple haze levels based on the NYU-Depth and DIML/CVL datasets. Experiments show that DHL-Dehaze obtains excellent quantitative and qualitative dehazing results, especially when the haze concentration is high.
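
As a rough illustration (not the authors' released code), the following Python sketch shows how hazy images at several discrete haze levels can be synthesized from an RGB-D pair using the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) with t(x) = exp(-beta * d(x)), which is the usual way depth datasets such as NYU-Depth are turned into hazy/haze-free pairs. The scattering coefficients in BETAS and the atmospheric light A are assumed, illustrative values, not parameters taken from the paper.

# Minimal sketch of discrete-haze-level synthesis from an RGB-D pair.
# BETAS and A are assumed values for illustration only.
import numpy as np

BETAS = [0.4, 0.8, 1.2, 1.6, 2.0]   # one scattering coefficient per discrete haze level (assumed)
A = 1.0                             # global atmospheric light in [0, 1] (assumed)

def synthesize_haze_levels(clear_rgb: np.ndarray, depth: np.ndarray):
    """clear_rgb: HxWx3 float image in [0, 1]; depth: HxW depth map.

    Returns one hazy image per haze level in BETAS.
    """
    depth = depth / depth.max()                  # normalize depth to [0, 1]
    hazy_images = []
    for beta in BETAS:
        t = np.exp(-beta * depth)[..., None]     # transmission map, HxWx1
        hazy = clear_rgb * t + A * (1.0 - t)     # atmospheric scattering model
        hazy_images.append(np.clip(hazy, 0.0, 1.0))
    return hazy_images

Each clear image then yields one hazy counterpart per haze level, so the resulting dataset is naturally partitioned into discrete haze-level domains for the multi-domain translation setting described above.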

Supplementary Material

ZIP File (mmfp0933aux.zip)
Please cite our paper: Xiaofeng Cong, Jie Gui, Kai-Chao Miao, Jun Zhang, Bing Wang, and Peng Chen.
MP4 File (3394171.3413876.mp4)
Presentation Video.




Published In

MM '20: Proceedings of the 28th ACM International Conference on Multimedia
October 2020
4889 pages
ISBN: 9781450379885
DOI: 10.1145/3394171

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 October 2020


Author Tags

  1. GAN
  2. MS-SSIM
  3. dehazing
  4. multi-domain
  5. perceptual

Qualifiers

  • Research-article

Funding Sources

  • Educational Commission of Anhui Province
  • National Natural Science Foundation of China

Conference

MM '20

Acceptance Rates

Overall Acceptance Rate 995 of 4,171 submissions, 24%


Cited By

  • (2024) Illumination Controllable Dehazing Network based on Unsupervised Retinex Embedding. IEEE Transactions on Multimedia, 26, 4819-4830. DOI: 10.1109/TMM.2023.3326881. Online publication date: 2024.
  • (2023) LHNet: A Low-cost Hybrid Network for Single Image Dehazing. Proceedings of the 31st ACM International Conference on Multimedia, 7706-7717. DOI: 10.1145/3581783.3612594. Online publication date: 26-Oct-2023.
  • (2023) Mutual Information-driven Triple Interaction Network for Efficient Image Dehazing. Proceedings of the 31st ACM International Conference on Multimedia, 7-16. DOI: 10.1145/3581783.3612299. Online publication date: 26-Oct-2023.
  • (2023) A Comprehensive Survey and Taxonomy on Single Image Dehazing Based on Deep Learning. ACM Computing Surveys, 55(13s), 1-37. DOI: 10.1145/3576918. Online publication date: 13-Jul-2023.
  • (2023) Fine-grained Visible Watermark Removal. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 12724-12733. DOI: 10.1109/ICCV51070.2023.01173. Online publication date: 1-Oct-2023.
  • (2022) SADnet: Semi-supervised Single Image Dehazing Method Based on an Attention Mechanism. ACM Transactions on Multimedia Computing, Communications, and Applications, 18(2), 1-23. DOI: 10.1145/3478457. Online publication date: 16-Feb-2022.
  • (2022) Rich feature distillation with feature affinity module for efficient image dehazing. Optik, 267, 169656. DOI: 10.1016/j.ijleo.2022.169656. Online publication date: Oct-2022.
  • (2021) Visible Watermark Removal via Self-calibrated Localization and Background Refinement. Proceedings of the 29th ACM International Conference on Multimedia, 4426-4434. DOI: 10.1145/3474085.3475592. Online publication date: 17-Oct-2021.
