
TANet: Transmission and atmospheric light driven enhancement of underwater images

Published: 16 May 2024

Abstract

Light attenuation and scattering cause color distortion and low contrast in underwater images. Motivated by these degradations, our study focuses on enhancing underwater images through localized transmission feature analysis and global atmospheric light feature extraction. To this end, we propose a novel approach, named TANet, that draws on the dynamics of transmission and atmospheric light. TANet integrates two primary components: a spatial domain-based Transmission-Driven Refinement module (TDR) and a frequency domain-based Atmospheric Light Removal Fourier module (ALRF). The TDR module employs a dual-branch Gated Multipurpose Unit that selectively regulates input features, allowing a refined merging of feature vectors whose subsequent interaction enables cross-channel feature integration. Capitalizing on the correlation between transmission and image quality, TDR enhances image detail by modeling the perceived transmission across distinct image regions. Because light of different wavelengths attenuates at different rates under water, and because atmospheric light is globally constant and therefore affects the entire captured image, we developed the ALRF module to process global information in the frequency domain, efficiently suppressing the impact of atmospheric light on underwater images and improving their quality and visibility. Extensive experimental results affirm TANet's superior performance, demonstrating its effectiveness in underwater image enhancement.
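For context, a widely used formulation of the atmospheric scattering (underwater image formation) model that transmission- and atmospheric-light-driven methods build on is given below; the abstract does not state which exact variant TANet adopts, so this is only the standard simplified form:

I_c(x) = J_c(x)\, t_c(x) + A_c \bigl(1 - t_c(x)\bigr), \qquad c \in \{r, g, b\}

where I_c is the observed underwater image, J_c the clear scene radiance, t_c(x) the medium transmission (local and wavelength-dependent), and A_c the global atmospheric (background) light. Under this reading, the TDR module targets the spatially varying t_c(x), while the ALRF module targets the globally constant A_c.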

Highlights

A UNet based on the atmospheric scattering model for underwater image enhancement.
TANet interactively captures the complementarity between the spatial and frequency domains.
TDR handles the impact of non-uniform transmission in the spatial domain.
ALRF uses frequency-domain features to handle the impact of global atmospheric light (see the sketch after this list).
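To make the two highlighted mechanisms concrete, below is a minimal PyTorch-style sketch; the module names, channel counts, and layer choices are illustrative assumptions, not the authors' implementation. It shows (a) a dual-branch gated unit that selectively regulates and merges features, in the spirit of TDR's Gated Multipurpose Unit, and (b) a Fourier block that applies pointwise convolutions to the spectrum so that each update depends on the whole spatial map, in the spirit of ALRF's frequency-domain handling of global atmospheric light.

```python
# Illustrative sketch only -- NOT the authors' TANet code.
# GatedUnit / FourierGlobalBlock names, channel widths, and layers are assumptions.
import torch
import torch.nn as nn


class GatedUnit(nn.Module):
    """Dual-branch gating: one branch carries features, the other produces a gate
    that selectively passes them; a 1x1 conv then mixes information across channels."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj_in = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.dwconv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=3,
                                padding=1, groups=2 * channels)
        self.proj_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        feat, gate = self.dwconv(self.proj_in(x)).chunk(2, dim=1)  # two branches
        return self.proj_out(feat * torch.sigmoid(gate))           # gated merge


class FourierGlobalBlock(nn.Module):
    """Frequency-domain processing: every spectral coefficient aggregates the whole
    spatial map, so pointwise convs on the spectrum act as global operators."""
    def __init__(self, channels: int):
        super().__init__()
        self.freq_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")            # complex spectrum
        z = torch.cat([spec.real, spec.imag], dim=1)       # stack real/imag parts
        real, imag = self.freq_conv(z).chunk(2, dim=1)
        out = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return x + out                                     # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    print(GatedUnit(16)(x).shape, FourierGlobalBlock(16)(x).shape)
```

A spatial branch built from blocks like GatedUnit operates locally, matching the non-uniform transmission highlight, whereas FourierGlobalBlock touches every pixel at once, matching the globally constant atmospheric light highlight.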


Published In

Expert Systems with Applications: An International Journal, Volume 242, Issue C
May 2024
1585 pages

Publisher

Pergamon Press, Inc.

United States

Publication History

Published: 16 May 2024

Author Tags

  1. Convolutional neural network
  2. Underwater image enhancement
  3. Scattering removal
  4. Gated multipurpose unit

Qualifiers

  • Research-article
