
Generative image deblurring based on multi-scaled residual adversary network driven by composed prior-posterior loss

Published: 01 December 2019

Abstract

Conditional Generative Adversarial Networks (CGANs) have been introduced to generate realistic images from severely degraded inputs. However, without prior knowledge of spatial distributions, these generative models perform poorly on complex scenes. In this paper, we propose an image deblurring network based on CGANs that generates sharp images without any assumption about the blur. To overcome adversarial insufficiency, an extended classifier over multiple attribute domains is formulated to replace the original CGAN discriminator. Inspired by residual learning, a set of skip connections is added to pass multi-scale spatial features to the subsequent high-level operations. Furthermore, this adversarial architecture is driven by a composite loss that integrates a histogram of gradients (HoG) term with a geodesic distance term. In experiments, a uniform adversarial iteration is applied cyclically to correct image degradations. Extensive results show that the proposed deblurring approach significantly outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
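
The abstract names two architectural ideas: a generator with multi-scale skip connections in the spirit of residual learning, and a composite loss that mixes adversarial feedback with gradient-based (HoG-style) image statistics. The PyTorch sketch below illustrates how such pieces could fit together; it is a minimal illustration under stated assumptions, not the authors' implementation, and every layer size, loss weight, and the finite-difference gradient term standing in for HoG is an assumption.

```python
# Minimal sketch: multi-scale generator with skip connections plus a composite loss.
# All names, layer widths, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGenerator(nn.Module):
    """Encoder-decoder that forwards multi-scale features via skip connections."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(3, ch, 3, stride=1, padding=1)
        self.enc2 = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)
        self.enc3 = nn.Conv2d(ch * 2, ch * 4, 3, stride=2, padding=1)
        self.dec3 = nn.ConvTranspose2d(ch * 4, ch * 2, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1)
        self.dec1 = nn.Conv2d(ch, 3, 3, stride=1, padding=1)

    def forward(self, x):
        e1 = F.relu(self.enc1(x))
        e2 = F.relu(self.enc2(e1))
        e3 = F.relu(self.enc3(e2))
        d3 = F.relu(self.dec3(e3)) + e2       # skip connection at the coarse scale
        d2 = F.relu(self.dec2(d3)) + e1       # skip connection at the fine scale
        return torch.tanh(self.dec1(d2)) + x  # global residual: predict a correction to the blurred input

def gradient_loss(pred, target):
    """Crude stand-in for the HoG term: match horizontal/vertical image gradients."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return F.l1_loss(dx(pred), dx(target)) + F.l1_loss(dy(pred), dy(target))

def composite_loss(pred, target, d_fake_logits, w_adv=0.01, w_grad=0.1):
    """Content + gradient + adversarial terms; the weights here are placeholders."""
    content = F.l1_loss(pred, target)
    grad = gradient_loss(pred, target)
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return content + w_grad * grad + w_adv * adv

if __name__ == "__main__":
    g = SkipGenerator()
    blurred = torch.rand(1, 3, 64, 64)
    sharp = torch.rand(1, 3, 64, 64)
    restored = g(blurred)
    fake_logits = torch.zeros(1, 1)  # placeholder for a discriminator's output
    print(composite_loss(restored, sharp, fake_logits).item())
```

The geodesic-distance component of the paper's loss and the attribute-domain discriminator are not modeled here; the sketch only shows how a gradient-statistics term and an adversarial term can be combined into a single training objective.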




        Published In

        Journal of Visual Communication and Image Representation, Volume 65, Issue C, December 2019, 271 pages

        Publisher

        Academic Press, Inc.

        United States


        Author Tags

        1. Image deblurring
        2. Generative adversarial network
        3. Residual learning
        4. Prior distribution
        5. Histogram of gradients

        Qualifiers

        • Research-article
