Joint strong edge and multi-stream adaptive fusion network for non-uniform image deblurring

Published: 01 November 2022

Highlights

An efficient non-uniform motion deblurring network is proposed that introduces edge information to guide the image deblurring task.
A new Multi-stream Adaptive Fusion Module (MAFM) is designed to focus on crucial information in different information flows.
A novel Dense Attention Feature Extraction Module (DAFEM) is developed to extract information significant for recovery.
Edge loss and perceptual loss are introduced to optimize the generator and improve the image restoration effect.
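The highlights mention that the MAFM adaptively weights several information flows. As a rough illustration only (this is an assumption about the general technique, not the paper's actual module, which operates on convolutional feature maps), gated fusion of multiple feature streams can be sketched with softmax weights derived from the streams themselves:

```python
# Hypothetical sketch, NOT the paper's MAFM: fuse several feature streams
# by computing per-position softmax weights from the streams' magnitudes,
# so the fusion emphasises whichever stream carries the strongest signal.
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_fuse(streams):
    """streams: list of equal-length feature vectors; returns fused vector."""
    n = len(streams[0])
    fused = []
    for i in range(n):
        vals = [s[i] for s in streams]
        # crude "importance" score: absolute activation magnitude
        w = softmax([abs(v) for v in vals])
        fused.append(sum(wi * v for wi, v in zip(w, vals)))
    return fused

fused = adaptive_fuse([[1.0, 0.0], [0.0, 2.0]])
```

In the real module the gating weights would be learned, but the shape of the computation (per-position weights summing to one over the streams) is the same.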

Abstract

Non-uniform motion deblurring has long been a challenging problem in computer vision. Recently, deep learning-based deblurring methods have made promising progress. In this paper, we propose a new joint strong edge and multi-stream adaptive fusion network for non-uniform motion deblurring. The edge map and the blurred image are jointly used as network inputs, and an Edge Extraction Network (EEN) guides the Deblurring Network (DN) during image recovery, supplementing important edge information. A Multi-stream Adaptive Fusion Module (MAFM) adaptively fuses the edge information with features from the encoder and decoder, reducing feature redundancy and avoiding image artifacts. Furthermore, a Dense Attention Feature Extraction Module (DAFEM) is designed to focus on the severely blurred regions of blurry images and extract information important for recovery. In addition, an edge loss function is introduced to measure the difference in edge features between generated and sharp images, further recovering the edges of deblurred images. Experiments show that our method outperforms existing public methods in terms of PSNR, SSIM and VIF, and generates images with less blur and sharper edges.
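The abstract describes an edge loss that penalizes the difference between edge features of the generated and the sharp image. A minimal sketch of this idea (an assumption about the general form, not the authors' implementation: here edges come from a fixed 3x3 Sobel filter and the distance is a plain L1 mean, on images represented as 2-D lists of floats):

```python
# Hypothetical sketch of an edge loss: L1 distance between Sobel edge maps
# of the generated image and the sharp ground-truth image.

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel filters (zero at the border)."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def edge_loss(generated, sharp):
    """Mean absolute difference between the two edge maps."""
    eg, es = sobel_edges(generated), sobel_edges(sharp)
    h, w = len(eg), len(eg[0])
    return sum(abs(eg[y][x] - es[y][x])
               for y in range(h) for x in range(w)) / (h * w)

# A perfectly restored image incurs zero edge loss, while a flat (over-
# smoothed) restoration of a step edge is penalized:
flat = [[0.0] * 5 for _ in range(5)]
step = [[0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
print(edge_loss(step, step))      # 0.0
print(edge_loss(flat, step) > 0)  # True
```

In a training setting the edge extractor would typically be differentiable (e.g. a fixed convolution inside the computation graph) so the loss can backpropagate into the generator.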


Cited By

  • (2024) A novel dynamic scene deblurring framework based on hybrid activation and edge-assisted dual-branch residuals, The Visual Computer: International Journal of Computer Graphics 40(6), 3849–3869, 1 June 2024. doi:10.1007/s00371-024-03390-7


          Published In

Journal of Visual Communication and Image Representation, Volume 89, Issue C, November 2022, 530 pages

          Publisher

Academic Press, Inc., United States


          Author Tags

          1. Non-uniform motion deblurring
          2. Attention mechanisms
          3. Edge extraction algorithm
          4. Generative adversarial network

          Qualifiers

          • Research-article
