Abstract
This paper presents a novel approach to face super-resolution that explicitly models the relationship between low-resolution and high-resolution images. Unlike many existing methods, it does not require a large number of paired high- and low-resolution training images, making it applicable in scenarios with limited training data. By using a feed-forward regression model, the method keeps the super-resolution process interpretable and transparent. In particular, by progressively exploiting the contextual information of local patches, the feed-forward regression can model the low-to-high-resolution mapping within a large receptive field. This is reminiscent of convolutional neural networks, yet the proposed approach remains fully interpretable. Experimental results demonstrate that the method generates high-quality high-resolution face images, outperforming several existing methods. Overall, it advances face super-resolution by combining interpretability with strong results under minimal training data.
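The abstract describes the method only at a high level, and the paper itself is not reproduced here. As a rough, hypothetical illustration of the general family of patch-based regression super-resolution methods it belongs to, the sketch below fits a closed-form ridge regression from low-resolution context patches to co-located high-resolution patches and averages overlapping predictions at test time. All function names, the ridge formulation, and the assumption that the low-resolution input has been pre-upsampled to the target size are illustrative choices, not the authors' F4SR algorithm.

```python
import numpy as np

def extract_patches(img, size, stride=1):
    """Collect flattened size x size patches and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
            coords.append((i, j))
    return np.asarray(patches), coords

def fit_patch_regressor(lr_imgs, hr_imgs, size=5, lam=1e-3):
    """Ridge-regress HR patches from co-located LR context patches.

    Assumes each LR image has already been upsampled (e.g. bicubically)
    to the HR grid, so patches are extracted at matching positions.
    """
    X, Y = [], []
    for lr, hr in zip(lr_imgs, hr_imgs):
        xp, _ = extract_patches(lr, size)
        yp, _ = extract_patches(hr, size)
        X.append(xp)
        Y.append(yp)
    X, Y = np.vstack(X), np.vstack(Y)
    # Closed-form ridge solution: W = (X^T X + lam I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def super_resolve(lr, W, size=5):
    """Predict overlapping HR patches and average them into one image."""
    out = np.zeros_like(lr)
    cnt = np.zeros_like(lr)
    xp, coords = extract_patches(lr, size)
    yp = xp @ W
    for p, (i, j) in zip(yp, coords):
        out[i:i + size, j:j + size] += p.reshape(size, size)
        cnt[i:i + size, j:j + size] += 1
    return out / np.maximum(cnt, 1)
```

Because the regressor is a single linear map, every output pixel can be traced back to an explicit weighted combination of input-patch pixels, which is the kind of transparency the abstract contrasts with black-box CNNs; stacking such patch regressions over growing neighborhoods would enlarge the effective receptive field in the progressive spirit the abstract sketches.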
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Fu, J., Jiang, K., Liu, X. (2025). F4SR: A Feed-Forward Regression Approach for Few-Shot Face Super-Resolution. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15038. Springer, Singapore. https://doi.org/10.1007/978-981-97-8685-5_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-8684-8
Online ISBN: 978-981-97-8685-5
eBook Packages: Computer Science (R0)