
HighBoostNet: a deep light-weight image super-resolution network using high-boost residual blocks

  • Original article
  • The Visual Computer

Abstract

Image distortion is an inevitable part of the image acquisition process and degrades the high-frequency content of the captured images. It is therefore important for imaging systems and devices to restore the high-frequency content of the acquired degraded images. High-boost filtering is an effective method, used in scanning and printing devices, for enhancing the high-frequency content of images and improving their visual quality. In view of this, we first develop a residual block for the task of image super-resolution that employs high-boost filtering operations. It is demonstrated that the super-resolution network formed by a cascade of the proposed residual blocks provides high performance. Further, we propose a novel learning method that improves the generalization capability of our deep super-resolution network. Specifically, we generalize the mapping between the spaces of the degraded low-resolution image and the ground-truth image by employing a multiple supervised learning strategy. It is shown that this strategy yields network weights such that the performance of the super-resolution network remains high even when the images are degraded by a set of degradation parameters slightly different from that used during training.
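The high-boost operation underlying the proposed residual block is a classic sharpening technique: add a scaled high-frequency (unsharp) component back onto the image. The abstract does not describe the block's exact layers, so the sketch below illustrates only the classical filter itself, in NumPy; the 3×3 mean blur standing in for the low-pass stage and the boost factor k are illustrative assumptions, not the paper's design.

```python
import numpy as np

def box_blur3(img):
    """Simple 3x3 mean blur with edge padding (a stand-in low-pass filter)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0

def high_boost(img, k=1.5):
    """High-boost filtering: img + k * (img - lowpass(img)), with k >= 1.
    The (img - lowpass) term is the high-frequency (unsharp) component."""
    mask = img - box_blur3(img)
    return img + k * mask

# Toy example: a vertical step edge overshoots on both sides after boosting,
# which is what visually sharpens the edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
boosted = high_boost(img, k=1.5)
```

In a learned residual block, the fixed low-pass kernel and the scalar k would be replaced by trainable convolutions, but the residual structure mirrors this add-back of boosted high-frequency detail.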


Figures 1–14 (contained in the full article)


Data availability

The datasets analyzed during the current study are referenced in the paper.


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Alireza Esmaeilzehi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Esmaeilzehi, A., Ma, L., Swamy, M.N.S. et al. HighBoostNet: a deep light-weight image super-resolution network using high-boost residual blocks. Vis Comput 40, 1111–1129 (2024). https://doi.org/10.1007/s00371-023-02835-9
