DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination
Abstract
1. Introduction
2. Related Work
2.1. CNN-Based Methods
2.2. GAN-Based Methods
2.3. Transformer-Based Structures
3. Materials and Methods
3.1. Overall Framework of the Proposed Algorithm
3.2. The Structure of the Multi-Branch Feature Extraction Network
3.2.1. Lightweight-Transformer-Based Global Feature Extractor
3.2.2. Inverse Neural Network (INN)-Based Detail Feature Extractor
3.2.3. Saliency Query Module
3.2.4. Discriminator Based on Spatial-Domain and Frequency-Domain Information
3.3. Loss Function
4. Experiment and Results
4.1. Implementation Details
4.2. Experimental Results
4.3. Ablation Experiment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhao, G.; Hu, Z.; Feng, S.; Wang, Z.; Wu, H. GLFuse: A Global and Local Four-Branch Feature Extraction Network for Infrared and Visible Image Fusion. Remote Sens. 2024, 16, 3246. [Google Scholar] [CrossRef]
- Gao, X.; Liu, S. BCMFIFuse: A Bilateral Cross-Modal Feature Interaction-Based Network for Infrared and Visible Image Fusion. Remote Sens. 2024, 16, 3136. [Google Scholar] [CrossRef]
- St-Laurent, L.; Maldague, X.; Prévost, D. Combination of colour and thermal sensors for enhanced object detection. In Proceedings of the 2007 10th International Conference on Information Fusion, Quebec, QC, Canada, 9–12 July 2007; pp. 1–8. [Google Scholar]
- Watson, J.D. 9—The Human Visual System. In Brain Mapping: The Systems; Toga, A.W., Mazziotta, J.C., Eds.; Academic Press: San Diego, CA, USA, 2000; pp. 263–289. [Google Scholar] [CrossRef]
- Luo, F.Y.; Liu, S.L.; Cao, Y.J.; Yang, K.F.; Xie, C.Y.; Liu, Y.; Li, Y.J. Nighttime Thermal Infrared Image Colorization with Feedback-Based Object Appearance Learning. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 4745–4761. [Google Scholar] [CrossRef]
- Yatziv, L.; Sapiro, G. Fast image and video colorization using chrominance blending. IEEE Trans. Image Process. 2006, 15, 1120–1129. [Google Scholar] [CrossRef]
- Qu, Y.; Wong, T.T.; Heng, P.A. Manga colorization. ACM Trans. Graph. 2006, 25, 1214–1220. [Google Scholar] [CrossRef]
- Luan, Q.; Wen, F.; Cohen-Or, D.; Liang, L.; Xu, Y.Q.; Shum, H.Y. Natural image colorization. In Proceedings of the 18th Eurographics Conference on Rendering Techniques, Goslar, Germany, 25–27 June 2007; EGSR ’07. pp. 309–320. [Google Scholar]
- An, X.; Pellacini, F. AppProp: All-pairs appearance-space edit propagation. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
- Fattal, R. Edge-avoiding wavelets and their applications. ACM Trans. Graph. 2009, 28, 22. [Google Scholar] [CrossRef]
- Xu, K.; Li, Y.; Ju, T.; Hu, S.M.; Liu, T.Q. Efficient affinity-based edit propagation using K-D tree. ACM Trans. Graph. 2009, 28, 1–6. [Google Scholar] [CrossRef]
- Ironi, R.; Cohen-Or, D.; Lischinski, D. Colorization by example. In Proceedings of the Eurographics Symposium on Rendering, Konstanz, Germany, 29 June–1 July 2005. [Google Scholar]
- Liu, X.; Wan, L.; Qu, Y.; Wong, T.T.; Lin, S.; Leung, C.S.; Heng, P.A. Intrinsic colorization. ACM Trans. Graph. 2008, 27, 152. [Google Scholar] [CrossRef]
- Morimoto, Y.; Taguchi, Y.; Naemura, T. Automatic colorization of grayscale images using multiple images on the web. In Proceedings of the SIGGRAPH 2009: Talks, New York, NY, USA, 3–7 August 2009. SIGGRAPH ’09. [Google Scholar] [CrossRef]
- Gupta, R.K.; Chia, A.Y.S.; Rajan, D.; Ng, E.S.; Zhiyong, H. Image colorization using similar images. In Proceedings of the 20th ACM International Conference on Multimedia, New York, NY, USA, 29 October–2 November 2012; MM ’12. pp. 369–378. [Google Scholar] [CrossRef]
- Bugeau, A.; Ta, V.T.; Papadakis, N. Variational Exemplar-Based Image Colorization. IEEE Trans. Image Process. 2014, 23, 298–307. [Google Scholar] [CrossRef]
- Li, B.; Lai, Y.K.; John, M.; Rosin, P.L. Automatic Example-Based Image Colorization Using Location-Aware Cross-Scale Matching. IEEE Trans. Image Process. 2019, 28, 4606–4619. [Google Scholar] [CrossRef] [PubMed]
- Fang, F.; Wang, T.; Zeng, T.; Zhang, G. A Superpixel-Based Variational Model for Image Colorization. IEEE Trans. Vis. Comput. Graph. 2020, 26, 2931–2943. [Google Scholar] [CrossRef]
- Wang, J.; Wang, X. VCells: Simple and Efficient Superpixels Using Edge-Weighted Centroidal Voronoi Tessellations. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1241–1247. [Google Scholar] [CrossRef]
- Yang, S.; Sun, M.; Lou, X.; Yang, H.; Liu, D. Nighttime Thermal Infrared Image Translation Integrating Visible Images. Remote Sens. 2024, 16, 666. [Google Scholar] [CrossRef]
- Yang, S.; Sun, M.; Lou, X.; Yang, H.; Zhou, H. An Unpaired Thermal Infrared Image Translation Method Using GMA-CycleGAN. Remote Sens. 2023, 15, 663. [Google Scholar] [CrossRef]
- Tan, D.; Liu, Y.; Li, G.; Yao, L.; Sun, S.; He, Y. Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model. Remote Sens. 2021, 13, 3968. [Google Scholar] [CrossRef]
- Tang, R.; Liu, H.; Wei, J. Visualizing Near Infrared Hyperspectral Images with Generative Adversarial Networks. Remote Sens. 2020, 12, 3848. [Google Scholar] [CrossRef]
- Cheng, Z.; Yang, Q.; Sheng, B. Deep Colorization. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; ICCV ’15. pp. 415–423. [Google Scholar] [CrossRef]
- Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Let there be color! Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Trans. Graph. 2016, 35, 110. [Google Scholar] [CrossRef]
- Larsson, G.; Maire, M.; Shakhnarovich, G. Learning Representations for Automatic Colorization. arXiv 2016, arXiv:1603.06668. [Google Scholar]
- Zhang, R.; Isola, P.; Efros, A.A. Colorful Image Colorization. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
- Lee, G.; Shin, S.; Na, T.; Woo, S.S. Real-Time User-guided Adaptive Colorization with Vision Transformer. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 1–6 January 2024; pp. 473–482. [Google Scholar]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Cambridge, MA, USA, 8–13 December 2014; NIPS’14. pp. 2672–2680. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar] [CrossRef]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar] [CrossRef]
- He, M.; Chen, D.; Liao, J.; Sander, P.V.; Yuan, L. Deep exemplar-based colorization. ACM Trans. Graph. 2018, 37, 47. [Google Scholar] [CrossRef]
- Zhang, B.; He, M.; Liao, J.; Sander, P.V.; Yuan, L.; Bermak, A.; Chen, D. Deep Exemplar-Based Video Colorization. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8044–8053. [Google Scholar] [CrossRef]
- Dabas, C.; Jain, S.; Bansal, A.; Sharma, V. Implementation of image colorization with convolutional neural network. Int. J. Syst. Assur. Eng. Manag. 2020, 11, 625–634. [Google Scholar] [CrossRef]
- Dong, X.; Li, W.; Wang, X. Pyramid convolutional network for colorization in monochrome-color multi-lens camera system. Neurocomputing 2021, 450, 129–142. [Google Scholar] [CrossRef]
- Pang, Y.; Jin, X.; Fu, J.; Chen, Z. Structure-preserving feature alignment for old photo colorization. Pattern Recogn. 2024, 145, 109968. [Google Scholar] [CrossRef]
- Suárez, P.L.; Sappa, A.D.; Vintimilla, B.X. Colorizing Infrared Images Through a Triplet Conditional DCGAN Architecture. In Proceedings of the International Conference on Image Analysis and Processing, Catania, Italy, 11–15 September 2017. [Google Scholar]
- Benaim, S.; Wolf, L. One-sided unsupervised domain mapping. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; NIPS’17. pp. 752–762. [Google Scholar]
- Bansal, A.; Ma, S.; Ramanan, D.; Sheikh, Y. Recycle-GAN: Unsupervised Video Retargeting. In Proceedings of the Computer Vision—ECCV 2018: 15th European Conference, Munich, Germany, 8–14 September 2018; pp. 122–138. [Google Scholar] [CrossRef]
- Kniaz, V.V.; Knyaz, V.A.; Hladůvka, J.; Kropatsch, W.G.; Mizginov, V. ThermalGAN: Multimodal Color-to-Thermal Image Translation for Person Re-identification in Multispectral Dataset. In Proceedings of the ECCV Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Mehri, A.; Sappa, A.D. Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 971–979. [Google Scholar] [CrossRef]
- Abbott, R.; Robertson, N.M.; del Rincón, J.M.; Connor, B. Unsupervised object detection via LWIR/RGB translation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 407–415. [Google Scholar]
- Emami, H.; Aliabadi, M.M.; Dong, M.; Chinnam, R.B. SPA-GAN: Spatial Attention GAN for Image-to-Image Translation. arXiv 2019, arXiv:1908.06616. [Google Scholar]
- Chen, R.; Huang, W.; Huang, B.; Sun, F.; Fang, B. Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8165–8174. [Google Scholar] [CrossRef]
- Park, T.; Efros, A.A.; Zhang, R.; Zhu, J.Y. Contrastive Learning for Unpaired Image-to-Image Translation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Han, J.; Shoeiby, M.; Petersson, L.; Armin, M.A. Dual Contrastive Learning for Unsupervised Image-to-Image Translation. arXiv 2021, arXiv:2104.07689. [Google Scholar]
- Huang, S.; Jin, X.; Jiang, Q.; Li, J.; Lee, S.J.; Wang, P.; Yao, S. A fully-automatic image colorization scheme using improved CycleGAN with skip connections. Multimed. Tools Appl. 2021, 80, 26465–26492. [Google Scholar] [CrossRef]
- Li, S.; Han, B.; Yu, Z.; Liu, C.H.; Chen, K.; Wang, S. I2V-GAN: Unpaired Infrared-to-Visible Video Translation. In Proceedings of the 29th ACM International Conference on Multimedia, New York, NY, USA, 17 October 2021; MM ’21. pp. 3061–3069. [Google Scholar] [CrossRef]
- Yadav, N.K.; Singh, S.K.; Dubey, S.R. MobileAR-GAN: MobileNet-Based Efficient Attentive Recurrent Generative Adversarial Network for Infrared-to-Visual Transformations. IEEE Trans. Instrum. Meas. 2022, 71, 1–9. [Google Scholar] [CrossRef]
- Luo, F.; Li, Y.; Zeng, G.; Peng, P.; Wang, G.; Li, Y. Thermal Infrared Image Colorization for Nighttime Driving Scenes With Top-Down Guided Attention. IEEE Trans. Intell. Transp. Syst. 2021, 23, 15808–15823. [Google Scholar] [CrossRef]
- Yu, Z.; Chen, K.; Li, S.; Han, B.; Liu, C.H.; Wang, S. ROMA: Cross-Domain Region Similarity Matching for Unpaired Nighttime Infrared to Daytime Visible Video Translation. arXiv 2022, arXiv:2204.12367. [Google Scholar]
- Guo, J.; Li, J.; Fu, H.; Gong, M.; Zhang, K.; Tao, D. Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure Consistency Constraint. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 18228–18238. [Google Scholar] [CrossRef]
- Lin, Y.; Zhang, S.; Chen, T.; Lu, Y.; Li, G.; Shi, Y. Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation. In Proceedings of the 30th ACM International Conference on Multimedia, New York, NY, USA, 10 October 2022; MM ’22. pp. 1186–1194. [Google Scholar] [CrossRef]
- Bharti, V.; Biswas, B.; Shukla, K.K. QEMCGAN: Quantized Evolutionary Gradient Aware Multiobjective Cyclic GAN for Medical Image Translation. IEEE J. Biomed. Health Inform. 2024, 28, 1240–1251. [Google Scholar] [CrossRef] [PubMed]
- Zhao, M.; Feng, G.; Tan, J.; Zhang, N.; Lu, X. CSTGAN: Cycle Swin Transformer GAN for Unpaired Infrared Image Colorization. In Proceedings of the 2022 3rd International Conference on Control, Robotics and Intelligent System, New York, NY, USA, 26–28 August 2022; CCRIS ’22. pp. 241–247. [Google Scholar] [CrossRef]
- Feng, L.; Geng, G.; Li, Q.; Jiang, Y.H.; Li, Z.; Li, K. CRPGAN: Learning image-to-image translation of two unpaired images by cross-attention mechanism and parallelization strategy. PLoS ONE 2023, 18, e0280073. [Google Scholar] [CrossRef]
- Gou, Y.; Li, M.; Song, Y.; He, Y.; Wang, L. Multi-feature contrastive learning for unpaired image-to-image translation. Complex Intell. Syst. 2022, 9, 4111–4122. [Google Scholar] [CrossRef]
- Liu, Y.; Zhao, H.; Chan, K.C.K.; Wang, X.; Loy, C.C.; Qiao, Y.; Dong, C. Temporally consistent video colorization with deep feature propagation and self-regularization learning. Comput. Vis. Media 2024, 10, 375–395. [Google Scholar] [CrossRef]
- Liang, Z.; Li, Z.; Zhou, S.; Li, C.; Loy, C.C. Control Color: Multimodal Diffusion-based Interactive Image Colorization. arXiv 2024, arXiv:2402.10855. [Google Scholar]
- Wei, C.; Chen, H.; Bai, L.; Han, J.; Chen, X. Infrared colorization with cross-modality zero-shot learning. Neurocomputing 2024, 579, 127449. [Google Scholar] [CrossRef]
- Kumar, M.; Weissenborn, D.; Kalchbrenner, N. Colorization Transformer. arXiv 2021, arXiv:2102.04432. [Google Scholar]
- Kim, S.; Baek, J.; Park, J.; Kim, G.; Kim, S. InstaFormer: Instance-Aware Image-to-Image Translation with Transformer. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 18300–18310. [Google Scholar] [CrossRef]
- Ji, X.; Jiang, B.; Luo, D.; Tao, G.; Chu, W.; Xie, Z.; Wang, C.; Tai, Y. ColorFormer: Image Colorization via Color Memory Assisted Hybrid-Attention Transformer. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022. [Google Scholar]
- Zheng, W.; Li, Q.; Zhang, G.; Wan, P.; Wang, Z. ITTR: Unpaired Image-to-Image Translation with Transformers. arXiv 2022, arXiv:2203.16015. [Google Scholar]
- Torbunov, D.; Huang, Y.; Yu, H.; Huang, J.; Yoo, S.; Lin, M.; Viren, B.; Ren, Y. UVCGAN: UNet Vision Transformer cycle-consistent GAN for unpaired image-to-image translation. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 702–712. [Google Scholar]
- Ma, T.; Li, B.; Liu, W.; Hua, M.; Dong, J.; Tan, T. CFFT-GAN: Cross-domain Feature Fusion Transformer for Exemplar-based Image Translation. arXiv 2023, arXiv:2302.01608. [Google Scholar] [CrossRef]
- Jiang, C.; Gao, F.; Ma, B.; Lin, Y.; Wang, N.; Xu, G. Masked and Adaptive Transformer for Exemplar Based Image Translation. arXiv 2023, arXiv:2303.17123. [Google Scholar]
- Chen, S.Y.; Li, X.; Zhang, X.; Wang, M.; Zhang, Y.; Han, J.; Zhang, Y. Exemplar-based Video Colorization with Long-term Spatiotemporal Dependency. Knowl. Based Syst. 2023, 284, 111240. [Google Scholar] [CrossRef]
- Wu, Z.; Liu, Z.; Lin, J.; Lin, Y.; Han, S. Lite Transformer with Long-Short Range Attention. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar] [CrossRef]
- Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
- Sheikh, H.; Bovik, A.; de Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.; Pan, Y.; Yao, T.; Tian, X.; Mei, T. Mocycle-GAN: Unpaired Video-to-Video Translation. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019. [Google Scholar]
- Anoosheh, A.; Sattler, T.; Timofte, R.; Pollefeys, M.; Van Gool, L. Night-to-Day Image Translation for Retrieval-Based Localization. arXiv 2018, arXiv:1809.09767. [Google Scholar]
| Indicator | CycleGAN | ToDayGAN | MocycleGAN | CUT | DECENT | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| PSNR ↑ | 13.3907 | 10.6866 | 13.9347 | 14.2114 | 15.2230 | 15.7025 |
| SSIM ↑ | 0.3091 | 0.2952 | 0.4075 | 0.3659 | 0.3035 | 0.4038 |
| LPIPS ↓ | 0.4109 | 0.5542 | 0.4647 | 0.3528 | 0.3831 | 0.3637 |
| VIF ↑ | 0.7827 | 0.8057 | 0.8040 | 0.8026 | 0.8068 | 0.8162 |

| Indicator | CycleGAN | ToDayGAN | MocycleGAN | CUT | DECENT | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| PSNR ↑ | 14.1729 | 10.3912 | 10.0528 | 12.3128 | 17.5589 | 17.6733 |
| SSIM ↑ | 0.4584 | 0.2694 | 0.2245 | 0.3153 | 0.5026 | 0.5697 |
| LPIPS ↓ | 0.4113 | 0.5733 | 0.5986 | 0.4287 | 0.4952 | 0.3868 |
| VIF ↑ | 0.7907 | 0.7788 | 0.7934 | 0.7864 | 0.8063 | 0.8471 |

| Indicator | CycleGAN | ToDayGAN | MocycleGAN | CUT | DECENT | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| PSNR ↑ | 13.6457 | 10.5213 | 11.5467 | 12.5498 | 16.8880 | 17.4392 |
| SSIM ↑ | 0.4408 | 0.2441 | 0.3059 | 0.3102 | 0.5798 | 0.6123 |
| LPIPS ↓ | 0.4329 | 0.6035 | 0.5130 | 0.4422 | 0.2965 | 0.2919 |
| VIF ↑ | 0.8266 | 0.8039 | 0.8026 | 0.8073 | 0.8400 | 0.8475 |
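For readers who want to reproduce the evaluation protocol behind the tables above, the four metrics can be computed per image pair roughly as follows. This is a minimal sketch rather than the authors' released code: it assumes the third-party packages scikit-image, lpips, and sewar are installed, and it uses sewar's pixel-domain VIF variant (`vifp`) on the luminance channel, which may differ slightly from the wavelet-domain criterion of Sheikh et al.

```python
import numpy as np
import torch
import lpips
from skimage.color import rgb2gray
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from sewar.full_ref import vifp  # pixel-domain approximation of VIF

_lpips_fn = lpips.LPIPS(net='alex')  # AlexNet backbone; downloads weights on first use

def evaluate_pair(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: HxWx3 uint8 RGB images (colorized output vs. visible reference)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # VIF computed on the luminance channel, rescaled back to the 0-255 range
    vif = vifp((rgb2gray(gt) * 255).astype(np.uint8),
               (rgb2gray(pred) * 255).astype(np.uint8))
    # LPIPS expects NCHW float tensors scaled to [-1, 1]
    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = _lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return {'PSNR': psnr, 'SSIM': ssim, 'LPIPS': lp, 'VIF': vif}
```

Dataset-level scores such as those in the tables would then be averages of `evaluate_pair` over all test pairs.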
| Indicator | CycleGAN | ToDayGAN | MocycleGAN | CUT | DECENT | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| Parameters (M) ↓ | 28.286 | 83.171 | 42.147 | 14.703 | 57.183 | 12.371 |
| Runtime (s) ↓ | 0.251 | 0.783 | 0.374 | 0.137 | 0.243 | 0.081 |
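Parameter counts and per-image runtimes of the kind reported above can be measured for any PyTorch generator along the following lines. This is a hedged sketch, not the paper's exact protocol: `model` is a placeholder for whichever network is profiled, and the input shape, device, and warm-up/run counts are assumed values.

```python
import time
import torch

def profile(model: torch.nn.Module, input_shape=(1, 1, 256, 256),
            device='cuda', warmup=5, runs=50):
    """Return (parameter count in millions, average seconds per forward pass)."""
    model = model.to(device).eval()
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):       # warm-up passes exclude one-time CUDA setup cost
            model(x)
        torch.cuda.synchronize()      # ensure queued kernels finish before timing
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    return n_params, (time.time() - start) / runs
```

Because GPU kernels launch asynchronously, the `synchronize` calls are what make the wall-clock measurement meaningful; omitting them typically underestimates runtime.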
| Methods Compared | PSNR ↑ | SSIM ↑ | LPIPS ↓ | VIF ↑ |
| --- | --- | --- | --- | --- |
| Baseline | 17.1539 | 0.5285 | 0.3457 | 0.8320 |
| Baseline + local feature extraction | 18.7583 | 0.5580 | 0.3123 | 0.7947 |
| Baseline + global feature extraction | 20.1790 | 0.6102 | 0.2921 | 0.8424 |
| Baseline + global + local feature extraction | 20.2499 | 0.6114 | 0.2769 | 0.8438 |
| Ours | 20.5134 | 0.6156 | 0.2407 | 0.8404 |
Share and Cite
Li, S.; Ma, D.; Ding, Y.; Xian, Y.; Zhang, T. DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination. Remote Sens. 2024, 16, 3766. https://doi.org/10.3390/rs16203766