
F4SR: A Feed-Forward Regression Approach for Few-Shot Face Super-Resolution

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15038)


Abstract

This paper presents a novel approach to face super-resolution that explicitly models the relationship between low-resolution and high-resolution images. Unlike many existing methods, the proposed approach does not require a large number of high-resolution and low-resolution image pairs for training, making it applicable in scenarios with limited training data. By utilizing a feed-forward regression model, the proposed method offers a more interpretable and transparent approach to face super-resolution, enhancing the explainability of the super-resolution process. In particular, by progressively exploiting the contextual information of local patches, the proposed feed-forward regression method can model the relationship between the low-resolution and high-resolution images within a large receptive field. This is somewhat similar to the idea behind convolutional neural networks, but the proposed approach remains completely interpretable. Experimental results demonstrate that the proposed method achieves good performance in generating high-resolution face images, outperforming several existing methods. Overall, the proposed method contributes to advancing the field of face super-resolution by introducing a more interpretable and transparent approach that achieves good results with minimal training data.
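
To make the general patch-regression idea concrete, the Python sketch below learns a direct mapping from low-resolution patches to high-resolution patches using only a handful of exemplar pairs, then applies it feed-forward at test time. This is a minimal illustration under simplifying assumptions (grayscale images, a single ridge-regularized linear map, fixed patch sizes); it is not the paper's F4SR algorithm and does not reproduce its progressive aggregation of local-patch context. The helper names (extract_patches, fit_regression, super_resolve) are hypothetical and exist only for this example.

# Minimal, illustrative sketch of patch-based feed-forward regression for
# super-resolution. NOT the paper's F4SR method: it only shows the general
# idea of learning a direct LR-patch -> HR-patch mapping from a few pairs.
import numpy as np

def extract_patches(img, patch, stride):
    """Return flattened patches of a grayscale image and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch].ravel())
            coords.append((y, x))
    return np.asarray(patches, dtype=np.float64), coords

def fit_regression(lr_imgs, hr_imgs, scale=2, lr_patch=3, lam=1e-2):
    """Ridge-regress HR patches onto their co-located LR patches (few-shot training)."""
    X, Y = [], []
    hr_patch = lr_patch * scale
    for lr, hr in zip(lr_imgs, hr_imgs):
        lp, lc = extract_patches(lr, lr_patch, 1)
        for p, (y, x) in zip(lp, lc):
            X.append(p)
            Y.append(hr[y * scale:y * scale + hr_patch,
                        x * scale:x * scale + hr_patch].ravel())
    X, Y = np.asarray(X), np.asarray(Y)
    # Closed-form ridge solution: W = (X^T X + lam I)^-1 X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def super_resolve(lr, W, scale=2, lr_patch=3):
    """Apply the learned map patch-by-patch and average overlapping HR predictions."""
    hr_patch = lr_patch * scale
    H, Wd = lr.shape
    out = np.zeros((H * scale, Wd * scale))
    weight = np.zeros_like(out)
    lp, lc = extract_patches(lr, lr_patch, 1)
    preds = lp @ W
    for pred, (y, x) in zip(preds, lc):
        ys, xs = y * scale, x * scale
        out[ys:ys + hr_patch, xs:xs + hr_patch] += pred.reshape(hr_patch, hr_patch)
        weight[ys:ys + hr_patch, xs:xs + hr_patch] += 1.0
    return out / np.maximum(weight, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "few-shot" data: five random HR images and their 2x-downsampled LR versions.
    hr_train = [rng.random((32, 32)) for _ in range(5)]
    lr_train = [h[::2, ::2] for h in hr_train]
    W = fit_regression(lr_train, hr_train, scale=2, lr_patch=3)
    sr = super_resolve(lr_train[0], W, scale=2, lr_patch=3)
    print("SR output shape:", sr.shape)  # (32, 32)

A single global linear map is the simplest possible regressor; the paper's contribution lies in how the regression progressively enlarges the effective receptive field through local-patch context, which this sketch intentionally omits.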



Author information


Corresponding author

Correspondence to Jican Fu.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Fu, J., Jiang, K., Liu, X. (2025). F4SR: A Feed-Forward Regression Approach for Few-Shot Face Super-Resolution. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15038. Springer, Singapore. https://doi.org/10.1007/978-981-97-8685-5_14


  • DOI: https://doi.org/10.1007/978-981-97-8685-5_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-8684-8

  • Online ISBN: 978-981-97-8685-5

  • eBook Packages: Computer Science, Computer Science (R0)
